CloudCasa is a cloud native Backup-as-a-Service (BaaS) solution for backup of Kubernetes clusters and cloud databases. CloudCasa offers a free service tier that allows you to back up your Kubernetes metadata and resource data to our secure storage, and to orchestrate snapshots of CSI volumes and (for EKS) AWS EBS persistent volumes (PVs) on local storage. It also offers creation and management of AWS RDS database snapshots and snapshot copies.
Our paid service tier offers fair, capacity-based pricing for backup of Kubernetes PVs to our secure storage. We do not charge for worker nodes or clusters, so our pricing is simple and transparent.
Just create an account at home.cloudcasa.io/signup and log in to the service. You will not need to set up any infrastructure or install any software other than our Kubernetes agent. After creating an account, you can register your Kubernetes clusters and RDS databases. To protect Kubernetes, CloudCasa will direct you to run a kubectl command on each cluster to install the agent and initiate a connection to the CloudCasa service. To protect RDS databases, you will need to configure your AWS account(s). After this setup is done, you will be able to protect your clusters and databases using our flexible scheduling policies.
CloudCasa is designed to protect any Kubernetes distribution, installed on-premises or in the cloud. It works with popular K8s distributions such as Red Hat OpenShift, SUSE Rancher, and VMware Tanzu, and also with cloud-based managed Kubernetes services such as EKS, AKS, GKE, Oracle, and DigitalOcean. Almost all flavors and recent versions of Kubernetes are supported. A full list of tested platforms is available in the Requirements and Support section of this FAQ. Additionally, you may reach out to us through our CloudCasa Forum if you have any questions about your environment.
No. Currently, there are no limits on worker nodes per cluster. Our free service allows registering unlimited clusters with any number of worker nodes.
No. Currently, there are no limits on the number of clusters per user. Our free service allows registering any number of clusters with any number of worker nodes.
There is no cost to use the Free Service tier for PV and RDS snapshot management and Kubernetes resource backup. No payment information is required to sign up.
We offer premium service tiers which allow backup of PV data to our cloud storage or to user-provided object storage, along with many additional features. See https://cloudcasa.io/pricing/ for details.
Not for our free service plan! No payment information is even required to sign up.
For our premium service plans, we offer simple and transparent capacity-based pricing tiers. Please see: https://cloudcasa.io/pricing/
Standard support, which includes 8:30 am - 7:00 pm Monday-Friday phone support, is included with the paid service plans. We also provide a support chat function in the CloudCasa UI, or you can contact us via email at firstname.lastname@example.org.
For Free service plan users, we actively monitor the CloudCasa Forum and we are happy to hear from you via the support chat feature as well!
A 24/7 Enterprise support plan is included with the Enterprise service, and is available as a separate option for other service plans.
The default is once an hour, but there are ways to trigger backups more frequently if necessary. For more information, reach out to us through the CloudCasa Forum or contact support.
Our free service limits data retention to 30 days. If this precludes you from benefiting from our service, let us know by contacting support.
Paid service plans provide unlimited retention.
While use of CloudCasa is free, we do have a Fair Use Policy that reserves our right to prevent abuse of our complimentary services. We hope that a good experience with CloudCasa will encourage customers to look at our other services and product offerings that address their more complex needs.
No, it is not, but the free service does support backup of Kubernetes resource data to CloudCasa.
If you want to back up your PV data to CloudCasa, we offer inexpensive, fair, capacity-based pricing plans where you only pay for the data you are protecting.
Requirements and Support Questions
- The registered cluster must be version 1.13 or higher for protecting resource data. In order to leverage CSI snapshots, the registered cluster must be version 1.17 or higher. The CSI driver must support volume snapshots at the v1beta1 API level. For a list of vendors that support CSI snapshots, please see: https://kubernetes-csi.github.io/docs/drivers.html.
- kubectl must be installed and configured.
- You will need cluster administrative access to install CloudCasa's lightweight agent on your cluster. While registering your cluster in the user interface (UI), each cluster will be given a unique YAML file to be applied to your cluster.
- Network access from your cluster outgoing to the CloudCasa service (agent.cloudcasa.io) on port 443.
- CloudCasa relies on Cloud Native Computing Foundation (CNCF)'s gRPC for its communication. Your cluster should allow an outgoing TCP connection to the CloudCasa service (agent.cloudcasa.io) on port 443. This typically doesn't require any changes to your network firewall and there is no need to open network ports for incoming connections.
- To access the CloudCasa UI, https://home.cloudcasa.io must be reachable from your web browser.
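A quick way to check the outbound connectivity requirement from a node is to attempt a TLS handshake against the agent endpoint (a sketch; output will vary by environment):

```shell
# If the TCP connection and TLS handshake succeed, openssl prints the
# server certificate details; a timeout indicates blocked egress.
openssl s_client -connect agent.cloudcasa.io:443 -brief </dev/null
```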
The cluster-admin role is required.
To verify that your CSI snapshot configuration is correct, run kubectl -n <namespace> get pvc to observe the StorageClass used by the PVC of interest.
$ kubectl -n mongo get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mongo-vol-mongo-0   Bound    pvc-bbc599d1-f988-4178-99a3-545bce997b58   4Gi        RWO            csi-hostpath-sc   49m
You can also find more details about a PV by running kubectl get pv <pv-name> -o yaml. For example:
$ kubectl get pv pvc-bbc599d1-f988-4178-99a3-545bce997b58 -o yaml
The CSI section in the YAML file will appear similar to this:
volumeAttributes:
  storage.kubernetes.io/csiProvisionerIdentity: 1604419926974-8081-hostpath.csi.k8s.io
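In addition to the StorageClass, CSI snapshots require the snapshot CRDs and a VolumeSnapshotClass matching your driver to be installed. A quick sanity check (standard Kubernetes commands; output depends on your cluster):

```shell
# A VolumeSnapshotClass for your CSI driver must exist for snapshots to work.
kubectl get volumesnapshotclass
# The snapshot CRDs must also be installed.
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
```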
Backup and Restore Questions
With the default Kubernetes cluster backup settings of "Full Cluster" and "Include persistent volumes", all namespaces and all cluster-scoped resources will be included in the backup. All resource data is automatically backed up to CloudCasa's secure cloud storage. By default, all persistent volumes (PVs) that support the CSI snapshot API and AWS EBS PVs will be snapshotted on local storage, but not transferred to the cloud. If "Snapshot and copy to CloudCasa" is selected, the PV data will be transferred to the cloud.
Yes! Your data is compressed.
Yes! Your data is encrypted both over the network and at rest.
Yes! Both free and paid plans support protecting persistent volumes (PVs) using local snapshots. Paid plans also allow "snapshot and copy" of PVs to external object storage. This option can be selected in the user interface when defining a backup.
PVs using any CSI drivers that support the CSI snapshot interface are supported. CloudCasa also supports some popular "in-tree" (non-CSI) volume types, including awsElasticBlockStore, azureDisk, and gcePersistentDisk. Further, it fully supports Azure File CSI PVs even though Azure Files does not support the full CSI snapshot interface.
Yes. You can simply select an alternate cluster in the Restore dialog. However, you must take into account the following restrictions:
1. The target cluster must be registered with CloudCasa and in the ACTIVE state before a restore can be done.
2. For PVCs of type CSI:
Ensure that the CSI drivers that were deployed in the source cluster are deployed with the same configuration on the target cluster. This can be confirmed by listing the CSI drivers using the command:
$ kubectl get csidriver
Ensure that the CSI drivers in the target cluster have access to the storage snapshots that were taken during the backup of the source cluster. For example, CSI drivers for EBS volumes require the associated IAM policies to be created, and the configmap with access credentials to be created in the cluster.
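To compare the two clusters, you can list the drivers in each using kubectl contexts (the context names source-cluster and target-cluster below are placeholders for your own):

```shell
# The set of CSI drivers on the target should match those on the source.
kubectl --context source-cluster get csidriver
kubectl --context target-cluster get csidriver
```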
Yes. Just select a different destination cluster when you set up your restore. If you are restoring an EKS cluster and have linked your AWS account, you can even have CloudCasa automatically re-create the cluster for you using saved metadata.
Several CloudCasa features can assist you when restoring to a different cluster, such as storage class remapping and namespace renaming.
See the Kubernetes Restore Guide and the CloudCasa online help for more info.
The default setting when you first enter the “Add New Backup” wizard is to protect EVERYTHING. However, CloudCasa offers a variety of selection options to let you specify exactly what you want backed up, so it is important to understand how they work and interact.
If the "Full Cluster" option is chosen, CloudCasa will back up all resources in the cluster, including all namespace-scoped and cluster-scoped resources.
If the “Select Namespaces” option is chosen, only the selected namespaces will be backed up. Cluster-scoped resources will only be backed up if they are associated with selected namespace-scoped resources.
If the “Select Labels” option is selected, only resources with the specified key-value labels will be backed up. If multiple key-value labels are specified, the relationship between them is logical AND, meaning ALL listed labels need to be present and their values need to match. For example, if “owner:bob” and “env:production” are specified, then only resources with BOTH the labels “owner:bob” AND “env:production” will be backed up. This filtering applies to both namespace-scoped and cluster-scoped resources.
If “Select Namespaces” and “Select Labels” are BOTH selected, the filtering has a cumulative effect. Only resources that are both in one of the selected namespaces (or are associated with a resource that is, in the case of cluster-scoped resources) AND have the specified labels will be backed up.
Remember that the filtering applies to persistent volume snapshots as well!
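You can preview which resources a given label selection would match, since kubectl's comma-separated label selectors use the same AND semantics (the labels below are example values):

```shell
# Lists only resources carrying BOTH owner=bob AND env=production,
# mirroring the "Select Labels" filtering described above.
kubectl get all --all-namespaces -l owner=bob,env=production
```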
Tags are used as resource identifiers within CloudCasa's database. Labels are typically used as identifiers in your Kubernetes clusters.
If you are restoring an EKS cluster and CloudCasa has saved your EKS configuration metadata (it does this automatically if you have added your AWS account under Configuration/Cloud Accounts), you can choose to have CloudCasa automatically re-create your cluster for you based on the saved metadata.
Otherwise, you will first need to create a new Kubernetes cluster. Next, apply the agent YAML file of your original cluster, which you can obtain from the Setup tab in the UI. Finally, run a restore. If you will be restoring PVs from snapshots, any CSI drivers must be installed with the same names as on the source cluster.
See the CloudCasa Kubernetes Restore Guide for more details.
Yes! CloudCasa supports adding a prefix and/or suffix to the restored namespace.
If it is known that a job will fail, its execution will be skipped. Common reasons for skipping a job include the cluster having no active connection, and another instance of the same job already running.
A job will also be marked as skipped if scheduling of it has been paused through the UI.
You can set backup destinations in three different ways.
- The default backup destination for your entire account can be set under the Preferences tab.
- A per-cluster default backup destination can be set under Advanced options in the Edit cluster window.
- The backup destination for an individual backup job can be set under advanced settings in the Edit backup window.
The per-cluster setting will supersede the account-wide default, and the backup job setting will supersede both.
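This precedence can be sketched as a small shell function (illustrative only; the argument names are not CloudCasa settings):

```shell
# Return the effective backup destination: a job-level setting overrides
# the per-cluster default, which overrides the account-wide default.
resolve_destination() {
  local job="$1" cluster="$2" account="$3"
  echo "${job:-${cluster:-$account}}"
}

# With no job- or cluster-level override, the account-wide default applies.
resolve_destination "" "" "account-default-storage"
```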
Yes! We do block-level de-duplication on PV data, so only new or changed unique blocks will be backed up. So every backup looks like a full, but only changed data is actually copied.
Yes! To delete a policy, find it in the list under Configuration/Policies and click on the Remove icon for it. However, the system won't allow you to delete a policy if it is currently referenced by a backup definition. You would need to edit your backup definitions and select alternate policies for them prior to deleting the policy. You can quickly see which backups use a given policy by clicking the down arrow on the policy.
Amazon AWS and RDS Questions
All Amazon RDS databases are supported, including Aurora.
Yes. Multiple AWS accounts can be associated with a single CloudCasa account, and RDS databases or clusters from multiple accounts and regions can be managed together.
Yes. When defining the backup, simply enable the “Copy to another region” option and select a region to copy to. You can choose to retain the initial snapshot or to remove it when the copy completes. The schedule and retention period for the copy will be determined by the copy policy you select.
Remember that copying to other regions can take several hours to complete, and that RDS backup snapshots will be disabled until the copy is completed. This limitation is imposed by AWS, not by CloudCasa.
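For reference, the cross-region copy that CloudCasa orchestrates corresponds to AWS's own snapshot-copy operation (identifiers and regions below are example values):

```shell
# Copy an RDS snapshot from us-east-1 to us-west-2; cross-region copies
# require the source snapshot to be given as a full ARN.
aws rds copy-db-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mydb-snap \
  --target-db-snapshot-identifier mydb-snap-copy
```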
Yes. However, right now you must have configured the backup to copy to the region you wish to restore to. You can also perform an ad-hoc copy to the region you want by defining an RDS backup with a remote copy, using the “run now” option, and selecting “copy only”.
Yes. Policies defined in CloudCasa can be applied to both Kubernetes clusters and RDS databases, so you can have common policies for all parts of a given application or workload. Note, however, that policies used for RDS copies cannot have an hourly component. This is to prevent problems caused by AWS preventing new snapshots while a copy is in progress.
Currently CloudCasa only queries AWS for updates related to RDS every 12 hours, so databases that you have recently added or automatic snapshots that were recently created may not appear in the UI. You can trigger a refresh manually by going to Configuration/Cloud Accounts and clicking the "Run Inventory" button for each account that you are interested in.
A cross-account role is used to grant the CloudCasa service permission to manage your RDS backups, manage snapshots of EKS EBS PVs, perform EKS backup and restore, and perform AWS security scanning. For convenience, this is created using a CloudFormation stack and can be removed by removing the stack. The stack lets the user choose whether or not to include permissions for each function. A user with administrator permission is required in order to create the CloudFormation stack when initially linking your AWS account to your CloudCasa account, but the cross-account role that is used requires a much smaller set of permissions. The exact permissions used are the following:
For core functions:
For EBS volume snapshotting for EKS:
For RDS backup/restore:
For EKS backup/restore:
For AWS Security Scanning:
- arn:aws:iam::aws:policy/SecurityAudit (AWS managed policy)
When a cluster is deleted from the CloudCasa user interface, existing backups (snapshots or copies) are not automatically deleted. Backups are only deleted when their assigned retention period expires. However, you will not be able to access existing backups once you delete a cluster without contacting CloudCasa support. The UI will warn you if you attempt to delete a cluster that has existing backups.
A cluster cannot be deleted if a backup job is pointing to it. You will need to delete the backup jobs referring to the cluster before deleting the cluster.
If you no longer intend to use CloudCasa on a cluster, run the following commands in your cluster.
kubectl delete namespace/cloudcasa-io clusterrolebinding/cloudcasa-io
kubectl delete crds -l component=kubeagent-backup-helper
Note that if the cluster is also deleted in the user interface (UI), the recovery points are no longer usable.
After a cluster is unregistered, its recovery points are currently unusable even if the same cluster name is reused.
In such cases, the last cluster that you register assumes the identity. Any cluster previously registered using the same registration YAML file can no longer communicate with CloudCasa service. If the current cluster is not the intended cluster, rerun the command on the intended cluster.
If you installed your agent by using the kubectl command provided in the CloudCasa UI, you can upgrade in the following way:
1. Run "kubectl delete namespace cloudcasa-io" to remove the existing agent.
2. Go to the Protection/Clusters tab in the UI and select your cluster. The kubectl command to install the new agent will be displayed. It will look something like "kubectl apply -f https://api.cloudcasa.io/kubeclusteragents..." but will be unique for each cluster. Just run this command and you're done!
If you installed the agent using a Helm chart or through a marketplace such as Rancher or Digital Ocean, you will need to follow the update instructions specific to your install method. See the appropriate item below.
The following Helm upgrade instructions are also available in the marketplace:
- Log in to CloudCasa, go to Protection/Clusters, and click on the cluster in the list to obtain the cluster ID.
- Execute the following commands to update the agent:
$ helm repo update
$ helm upgrade cloudcasa.io cloudcasa-repo/cloudcasa-helmchart --set cluster_id=<Cluster ID>
Note that agent version releases in the Rancher marketplace can lag slightly behind CloudCasa releases, so you should check that the agent version you want is available before upgrading.
Just re-run the 1-Click app installation as described in the DO marketplace.
Note that agent version releases on the DO marketplace can lag slightly behind CloudCasa releases, so you should check that the agent version you want is available before upgrading.
Sometimes new features we introduce require new AWS permissions, which in turn requires launching a new version of the CloudFormation stack.
Go to Configuration/Cloud Accounts. There will be a "!" icon next to each account that needs a CloudFormation stack update. Click the Edit icon for each flagged account. At the top of the Edit Account pane you will see the message "Stack update available" with a "Re-launch stack" link under it. Clicking on the link will take you to the AWS CloudFormation console where the stack will launch once you have logged in. This is the same process you used when you initially configured your account in CloudCasa.
Our AWS CloudFormation stack creates a cross-account role that grants CloudCasa the permissions it needs, and only the permissions it needs, to perform its functions. Currently these permissions can be broken down into four sets: permissions required to backup and restore RDS databases, permissions required to snapshot EKS EBS PVs, permissions required to backup EKS cluster parameters and automatically create clusters on restore, and permissions required for cloud security scanning.
Since you may not wish to grant CloudCasa permissions required for a feature you aren't using, the CloudFormation stack launch page lets you choose which permission sets to include: RDS, EBS Volume Snapshot, EKS, Security Scanning, or all of the above. All will be enabled by default.
If you wish to change your selection later, you can do so by re-launching the CloudFormation stack.