How to Develop an Effective Multicloud Backup Approach
Organizations are reaping the benefits of taking a multicloud approach to backing up their data, apps and other resources, but there are things to consider before making that jump.
Most businesses today back up some, if not all, of their resources in the cloud, citing convenience, high availability and cost-effectiveness. While many are still backing up data to one cloud, more and more are choosing a multicloud backup approach—one that relies on multiple clouds to back up critical data, applications and other resources.
Businesses that jumped on the cloud bandwagon a decade ago have since realized that putting all of their eggs in one basket may not be the right move. Not every cloud provider offers the features that today's backup needs demand, and it's never a good idea to risk vendor lock-in. The multicloud approach to backup is an increasingly common option. Here are some tips for doing it right:
Plan first. Even if you are a cloud veteran, managing storage in multiple clouds is a new challenge. Start by thinking about what your company’s applications, data and infrastructure are likely to look like five years from now. Then think about what exactly needs to be backed up, how fast you’ll need to recover, and what your auditing and compliance needs are likely to be. This will help you determine the most important features and functions for your specific requirements. For example, you may choose very differently if you are dealing with on-premises backup to the cloud versus cloud-to-cloud backup.
Consider a platform approach. Each of the major cloud vendors has its own methods and technologies for backing up to the cloud, but they don’t communicate with one another. That means companies have to manage everything separately. It also makes it difficult to centrally aggregate your resources. As Anthony Cusimano, solutions evangelist at Veritas, puts it, “You can’t drop an Amazon EC2 Instance backup into Azure Compute and flip it on as if it were a virtual machine going from Atlanta to Los Angeles.”
A platform approach ensures that everything is in sync and searchable because it typically relies on a centralized catalog for all data, applications and infrastructure being managed. For example, if the platform is running in Google Cloud, Amazon Web Services and Microsoft Azure, it would have a shared catalog and share the same knowledge, although all three may not always manage the same data. If a disaster takes the AWS implementation down, the data stored in AWS can easily be converted and restored in Azure. The platform approach also helps with automated retention. A protection policy on the platform can be applied universally to different workloads, and that single managed policy ensures that everything is being treated equally. And because everything is run on the platform, it's easy to create role-based access control (RBAC) for different administrators.
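The single-catalog, single-policy idea can be sketched in a few lines. This is a minimal illustration, not a real vendor API: the `BackupPlatform`, `Workload` and `ProtectionPolicy` names are assumptions made up for the example. The point is that one managed policy, applied once, reaches every registered workload regardless of which cloud it lives in.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProtectionPolicy:
    name: str
    retention_days: int
    backup_frequency_hours: int

@dataclass
class Workload:
    name: str
    cloud: str  # e.g. "aws", "azure", "gcp"
    policy: Optional[ProtectionPolicy] = None

class BackupPlatform:
    """Hypothetical central catalog: every workload, in every cloud, in one place."""
    def __init__(self):
        self.catalog: List[Workload] = []

    def register(self, workload: Workload):
        self.catalog.append(workload)

    def apply_policy(self, policy: ProtectionPolicy):
        # One managed policy ensures every workload is treated equally.
        for workload in self.catalog:
            workload.policy = policy

platform = BackupPlatform()
platform.register(Workload("orders-db", "aws"))
platform.register(Workload("web-vm", "azure"))
platform.register(Workload("analytics", "gcp"))
platform.apply_policy(
    ProtectionPolicy("standard-90d", retention_days=90, backup_frequency_hours=24)
)
```

With separate per-cloud tools, the same change would mean editing three policies in three consoles and hoping they stay in agreement.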
Look for solutions that are as efficient, granular and flexible as possible. An ideal solution will have the ability to restore resources in a granular way without requiring any rebuilding. It should also provide the ability to use different storage, tiers and technologies from multiple cloud providers; ensure that you’re using the right storage capacity or storage tier; and have the ability to burst during times of disaster. “You have to ensure that you have enough compute resources to allow you to spin back workloads natively in the cloud without causing bottlenecks,” explains Don Foster, global vice president of sales engineering at Commvault. The ability to auto-detect new workloads in a way that ensures that nothing is missed also is critical, he adds.
To ensure compliance, it’s also important to look for a solution with built-in classification policies, along with ways for backup admins to classify data and keep personally identifiable information (PII) out of backups.
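What such a classification pass looks like can be shown with a deliberately simplistic sketch. Real solutions ship far richer built-in classification policies; the two regex patterns below are assumptions made for illustration only, and would miss many real PII formats.

```python
import re
from typing import List, Set, Tuple

# Illustrative patterns only; real classification policies are far more thorough.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(record: str) -> Set[str]:
    """Return the set of PII categories detected in a record."""
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(record)}

def filter_for_backup(records: List[str]) -> Tuple[List[str], List[str]]:
    """Keep only records with no detected PII; flag the rest for admin review."""
    clean, flagged = [], []
    for record in records:
        (flagged if classify(record) else clean).append(record)
    return clean, flagged

clean, flagged = filter_for_backup([
    "order 1234 shipped",
    "customer email jane@example.com",
])
# The shipping record passes; the record containing an email address is flagged.
```

The key design choice is that classification runs before data reaches the backup target, so PII never has to be scrubbed out of retained copies later.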
Cost is a big deal, but there are ways to keep it under control. The more cloud storage instances you have, the more you’ll pay. But the last thing you want is a multicloud backup solution that requires compute and storage constantly running in the cloud. Instead, look for a solution that can spin resources up and down when backups and restores occur.
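The spin-up-on-demand pattern is essentially a provision/tear-down lifecycle around each job. The sketch below assumes a hypothetical `ephemeral_compute` helper (the provisioning calls are stand-ins, recorded to a log for illustration); the structure, not the names, is the point: compute exists only while a backup or restore is actually running.

```python
import contextlib

LOG = []  # stand-in for real provisioning calls, for illustration

@contextlib.contextmanager
def ephemeral_compute(name):
    """Hypothetical helper: provision compute only for the duration of a job."""
    LOG.append(f"up:{name}")        # spin up just in time
    try:
        yield name
    finally:
        LOG.append(f"down:{name}")  # always tear down, so nothing idles (or bills)

def run_backup(job):
    with ephemeral_compute(f"worker-{job}") as worker:
        LOG.append(f"backup:{job} on {worker}")

run_backup("orders-db")
```

Because the teardown sits in a `finally` block, the worker is released even if the backup job fails, which is exactly the "nothing constantly running" property the article recommends.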
One way to keep costs down is by using the tools the solution provides for deduplication and compression. “If you’re sending raw blocks of backup from Microsoft Azure blob storage to AWS S3, you’re going to be paying out the nose for leaving Microsoft and paying to bring it in to S3,” Cusimano explains. Instead, identify what’s important, and what can be compressed and deduplicated. Use the solution’s dashboard to track budget and resource utilization, such as how many instances are currently running and how much storage you currently have in each cloud. Once you determine what’s normal, you can create alerts for anything outside the norm.
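The "determine what's normal, then alert on deviations" advice can be sketched as a simple baseline comparison. The function and the numbers below are illustrative assumptions, not any vendor's dashboard logic.

```python
from typing import Dict, List

def usage_alerts(baseline_gb: Dict[str, float],
                 current_gb: Dict[str, float],
                 threshold: float = 0.25) -> List[str]:
    """Flag any cloud whose storage usage deviates more than `threshold`
    (as a fraction) from its established baseline."""
    alerts = []
    for cloud, normal in baseline_gb.items():
        current = current_gb.get(cloud, 0.0)
        deviation = abs(current - normal) / normal
        if deviation > threshold:
            alerts.append(f"{cloud}: {current:.0f} GB vs normal {normal:.0f} GB")
    return alerts

baseline = {"aws": 500.0, "azure": 200.0, "gcp": 100.0}
current = {"aws": 520.0, "azure": 340.0, "gcp": 95.0}
print(usage_alerts(baseline, current))  # only azure deviates enough to be flagged
```

The same shape works for instance counts or egress volume: establish the norm from the dashboard's history, then alert on anything outside it.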
You’re in control. Really. You have clout at every part of the process, says Pat Hurley, a vice president at Acronis. Before settling on vendors, take advantage of the competitive marketplace and be aggressive in negotiating price. Even after you’ve signed up, you still have the upper hand. “If they aren’t meeting their SLAs [service-level agreements], be vocal about it. You’re not limited to a single cloud provider, and you can move your data whenever you want.”
Accept that multicloud backup is here to stay. Even if your company hasn’t yet made the leap, it’s only a matter of time. “If the solutions you’re considering don’t inherently have a multicloud capability, you’re probably looking in the wrong direction,” Foster says. “The last thing you want is to have fragmented silos for what a recovery strategy looks like. That’s why multicloud is so important.”