By Mike Crest
It’s hard to believe, but there was a time when backup was essentially a one-to-one exercise. In smaller shops, an administrator would load a tape into the drive and the server data would get copied – usually overnight. In large data centers, servers and endpoints would back up to tape libraries with essentially the same architecture, just with far more automation and efficiency.
Today’s IT infrastructure is about virtualization. If it could be virtualized, chances are it was. In fact, analyst firm 451 Research reports that 51 percent of all x86 servers were virtualized in 2012, and server virtualization continues growing at an annual rate of 13 percent. In practical terms, businesses are getting far greater server utilization rates than with conventional servers, and they’re using fewer physical machines because they’re able to get greater virtual machine (VM) densities on their host machines. But, each VM looks and acts like a standalone server and therefore requires coordinated backup.
Of course, virtual densities are one issue. Virtual sprawl is another, driven by the sheer volume of VMs being created: Where are they? What host do they reside on? Are the VMs still critical and in need of backup?
But let’s not get comfortable with the notion that the new paradigm is simply one of virtualization. Further complicating the backup challenges is the growing use of cloud-based resources for applications and data.
Regardless of the proliferation of the cloud and virtualization, businesses will likely retain a certain level of conventional, physical infrastructure that falls into the traditional backup solution set. So we’re faced with layers of complexity that show no signs of disappearing. As a result, we often have to cobble together point solutions that fit the unique requirements of virtual, physical and cloud environments rather than relying on legacy solutions to cover them all.
There is a better way.
The key to protecting virtualized environments is tight integration with the virtualization system. You have to think differently, because there are different types of files, each posing its own challenges. It isn’t simply a matter of copying the files.
The punch line: If you have a virtual infrastructure, which I’m betting you do, there’s a compelling reason to invest in a virtualization-aware backup and recovery solution because traditional physical environment backup solutions are limited in terms of their ability to protect virtual environments.
Yet as virtualization continues to grow in importance, the demands of the business continue to increase as well – and these demands don’t stop at a backup and recovery solution for virtualized environments. Organizations require physical backup and recovery as well as protection of their virtual environments – all while evolving to the cloud. Backup solutions need to span the physical, virtual and cloud resources where data resides.
Finally, recovery-point objectives (RPOs) and recovery-time objectives (RTOs) are under pressure as the business pushes to minimize downtime. Delivering on the promise of true continuous data protection and high availability is critical across physical, virtual and cloud environments. Meeting that promise while de-layering and rationalizing the portfolio of backup and recovery solutions is equally critical to an organization’s ability to scale.
Solution providers can continue to piece together disparate systems with duct tape and baling wire if they like. Chances are such systems will become less efficient and more costly. Worse, they will have a higher probability of failure in the event of a disaster.
Instead, solution providers should look for integrated, reliable, easy-to-use and scalable solutions, so that when disasters happen, they aren’t navigating multiple solutions from multiple vendors with varying administrative requirements and expertise. For the sake of your data-dependent customers, solution providers should look deeper, demand more and expect a greater level of capability from their data protection solution.
Mike Crest is general manager of the Data Management business at CA Technologies, responsible for the team that develops, markets, sells and supports the company’s CA ARCserve and CA ERwin product lines. He has more than 20 years of IT industry experience specializing in direct and indirect sales and marketing. Mike was named a Top 25 Innovator of 2012 and a Top 25 Channel Sales Leader of 2010 by CRN Magazine, as well as a CRN Channel Chief for the past two consecutive years.