Data Protection Is Critical in the Multi-Cloud
This session reveals risks to consider when designing a data protection strategy and the importance of remote recovery for catastrophic incidents.
October 31, 2024
In multi-cloud, designing a protection strategy for your data is as critical as collecting it. Limited data loss, infrastructure failures, and high-level data threats are only a few risks to consider when planning.
Having backups at different physical locations is important to avoid data loss from events like fires or natural disasters.
In this archived keynote session, Hector Diaz, Member, Board of Directors - Data Center Institute, explores methods to achieve agile and affordable data protection in multi-cloud.
This segment was part of our live virtual event titled, “The Essential Guide to Cloud Management.” The event was presented by ITPro Today and InformationWeek on October 17, 2024.
A transcript of the video follows below. Minor edits have been made for clarity.
Transcript:
Hector Diaz: Let's examine the risks we think about when designing a protection strategy. So, I tend to think about these in three major buckets.
The first one is what I'll call limited data loss, in many cases, human error. In some cases, you have a server meltdown, or maybe some storage system gets corrupted. In the case of human error, it might be intentional, because someone got laid off and they're unhappy. In some cases, it might just be, "Oops, I didn't mean to," but that file is gone. We can protect against those data losses through the regular backups we all do daily. And if you lose a file, you're going to restore that file from your backup system.
The next level of data loss is infrastructure failures. These involve losing all or part of a data center. I saw one case where corrosion in a sprinkler system caused a leak in part of the data center. Water was sprayed on top of server racks. That prompted the shutdown of power to that section.
When that happens, the best way to ensure you have any applications critical to your business is through a high-availability setup. Essentially, a high-availability setup is two systems that mirror each other. There are different ways of achieving that.
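The mirrored setup Diaz describes can be sketched in a few lines. This is an illustrative toy model, not a real product API: the `Node` and `HAPair` names are assumptions, and synchronous mirroring stands in for whichever replication mechanism an actual deployment would use.

```python
# Minimal sketch of a high-availability pair: two nodes mirror every
# write, and reads fail over to the secondary if the primary is down.
# All names here (Node, HAPair) are illustrative, not a real product API.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.data = {}       # stands in for the node's storage

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

class HAPair:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, key, value):
        # Synchronous mirroring: a write lands on both healthy nodes
        # before it is acknowledged, so either node can serve it.
        for node in (self.primary, self.secondary):
            if node.healthy:
                node.write(key, value)

    def read(self, key):
        # Fail over automatically: if the primary is down,
        # serve from the mirror instead.
        node = self.primary if self.primary.healthy else self.secondary
        return node.read(key)

pair = HAPair(Node("dc-east"), Node("dc-west"))
pair.write("orders/1001", "shipped")
pair.primary.healthy = False          # simulate losing a data center
print(pair.read("orders/1001"))       # served from the mirror: "shipped"
```

The key property is that the application keeps running through the failure; as the transcript notes next, this protects against infrastructure loss but not against ransomware, which would be mirrored to both nodes.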
I tend to talk about those setups today by calling them high-availability systems, not disaster recovery, because, in my mind, the real disaster is when you get hit and someone is encrypting or exfiltrating your data. Once they encrypt it, they demand a ransom to give you recovery keys.
And in that case, high availability is not going to help you. You'd better have what I'm calling a “backups on steroids” kind of system in place for your disaster recovery. And I'll give you more details on that. So, let's talk a little bit about recovery locations.
If you have one of these limited data losses and delete a file by mistake, more than likely you're going to recover that file in the same location where you lost it. You're going to recover that file from either a local backup or a secondary copy of that backup.
If you're doing things right, it should be at a different physical location than where you're keeping your first backup. Again, be careful about cases where you have a data center fire. You may have a chemical tanker truck spilling some nasty stuff close to your site and get told you need to evacuate.
If you're asked by the fire department to shut down, you'd better have backups somewhere else. If you have a larger event with infrastructure failure, we tend to call this “the smoking hole in the ground” problem, as in you had a meteorite strike, right? So, you must recover to a remote site using your failover mechanisms.
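The layered recovery order Diaz walks through, local copy first, off-site copy when the site is gone, can be sketched like this. The `BackupCopy` type and the location names are hypothetical, chosen only to illustrate keeping copies at different physical locations.

```python
# Sketch of the recovery-location logic described above: restore from the
# local backup if the site is intact, otherwise from the off-site copy.
# BackupCopy and the location names are illustrative assumptions.

class BackupCopy:
    def __init__(self, location):
        self.location = location
        self.available = True
        self.files = {}

    def store(self, path, contents):
        self.files[path] = contents

def restore(path, copies):
    """Try each backup copy in order of preference (local first)."""
    for copy in copies:
        if copy.available and path in copy.files:
            return copy.files[path], copy.location
    raise RuntimeError("no surviving backup copy holds " + path)

local = BackupCopy("onsite")
offsite = BackupCopy("remote-region")
for copy in (local, offsite):       # every backup job writes both copies
    copy.store("report.xlsx", b"q3 numbers")

local.available = False             # e.g., the site had to be evacuated
data, source = restore("report.xlsx", [local, offsite])
print(source)                       # recovered from "remote-region"
```

The design choice mirrored here is the one in the transcript: the off-site copy is written at backup time, not fetched after the fact, so it already exists when the "smoking hole in the ground" event happens.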
Watch the archived “The Essential Guide to Cloud Management” live virtual event on-demand today.