Apr 15 2010

High-Availability Benefits from Virtualization

Colleges and universities combine virtualization with fault-tolerant hardware for ultimate resiliency – and to ease server management.

It's been said that some organizations are too big to fail. The same is true of your critical applications and data. A server crash, a power failure, even user error can cause your systems to become unavailable precisely when you need them most. That's why more colleges and universities are turning to virtualization to ensure high availability of their most critical IT assets.

“The benefit to using virtualization for high availability is that it's much simpler for IT managers,” says Dan Kusnetzky, vice president of research operations for the 451 Group. “You don't have to change applications manually if they're running inside encapsulated servers or clients using motion technology. Virtualization offers simplicity, in that you have multiple machines running on a single server and the workload can move back and forth as needed.”

Of course, high availability means different things to different people. For some, it's having a virtualized system where, if a critical app or even an entire server fails, a new virtual machine automatically takes over within minutes or possibly seconds. For others, it's using fault-tolerant servers that provide full hardware redundancy, allowing for real-time replication of processing and data storage and assuring uptime that approaches 99.999 percent.

Keep It Moving

The Medical College of Georgia relies on a virtualized server architecture to ensure that its more than 5,000 faculty, staff and resident physicians always have access to the services they need.

“Virtualization brings a lot of high-availability options to the table,” says Lawrence Kearney, system support specialist at the Augusta college. “You can find ways to do out-of-band high availability that might otherwise not be available to you.”

Using Citrix Systems Xen or VMware virtualization hosts and clustering software, says Kearney, you can create a script that allows a virtual server to fail over instantly to another node in the same cluster, then quickly spin up a new instance of the server – allowing for continuous operation without affecting the user experience.
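Kearney doesn't spell out the script itself, but a minimal sketch of the idea, assuming libvirt-managed Xen hosts with shared storage, might look like the following. The host URIs, VM name and polling interval are hypothetical, and a real cluster would fence the failed node before restarting anything:

import time
import libvirt  # libvirt Python bindings (python-libvirt)

PRIMARY_URI = "xen+ssh://primary.example.edu/"  # hypothetical hosts
STANDBY_URI = "xen+ssh://standby.example.edu/"
VM_NAME = "mail-server"                         # hypothetical VM name

def vm_is_running(uri, name):
    """Return True if the named domain is active on the given host."""
    try:
        conn = libvirt.open(uri)
        try:
            return bool(conn.lookupByName(name).isActive())
        finally:
            conn.close()
    except libvirt.libvirtError:
        return False  # host unreachable or domain undefined: treat as down

def start_on_standby(uri, name):
    """Boot the pre-defined standby copy of the domain (shared storage assumed)."""
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(name)
        if not dom.isActive():
            dom.create()  # spin up a new instance of the server
    finally:
        conn.close()

while True:
    if not vm_is_running(PRIMARY_URI, VM_NAME):
        start_on_standby(STANDBY_URI, VM_NAME)
    time.sleep(10)  # hypothetical polling interval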

The key, he adds, is to determine what high availability means to your business model, prioritize which apps and services really need to be highly available, and work out how all of that fits into your budget.

“For example, e-mail, file services and web server access to applications are three things that can never be down,” Kearney says. Apps that are needed by large numbers of users concurrently also get high priority. But some departmental apps that may be used by only 15 or 20 people are generally not made highly available, he says.

At Temple University in Philadelphia, which also includes a teaching hospital with nearly 750 beds, ensuring high availability can literally be a matter of life and death.

“A server failure is not just a case of people being unable to get their e-mail,” says Adam Ferrero, executive director of network services at Temple. “Anything that affects hospital operations needs to be as available as possible. If our network goes down, the mortality rate goes up.”

About 100 of the university's more than 500 servers run VMware, which allows the university to use vMotion to move running virtual machines from one physical host to another in case of hardware failure. Another key to ensuring uptime is to build in redundancy, says Ferrero.
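The placement logic inside vMotion is VMware's own, but the underlying decision, finding a surviving host with enough headroom to absorb a displaced workload, can be sketched in a few lines of Python. The host names and capacity figures here are invented, not Temple's configuration:

def pick_target_host(spare_capacity, vm_load):
    """spare_capacity maps host name to free resources (say, GB of RAM).
    Return the host with the most headroom that can absorb vm_load,
    or None if no host has room (which is why N+1 capacity matters)."""
    candidates = {h: free for h, free in spare_capacity.items() if free >= vm_load}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Example: three surviving hosts, one displaced VM needing 8 GB.
print(pick_target_host({"esx01": 4.0, "esx02": 12.0, "esx03": 9.5}, 8.0))
# -> esx02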

“We just built a new joint data center for the university health system, and everything in it is completely redundant – all our servers have multiple switches, power supplies and network cards,” he says.

If the university's primary data center fails, a secondary one takes over, he adds. And the university relies on two high-availability Crossbeam X-Series chassis to virtualize its security infrastructure and deliver firewall and intrusion detection software.

Virtualization alone, however, won't guarantee continuous operation. The most reliable approach is to create a virtualized environment on fault-tolerant hardware, which synchronizes data processing across multiple virtual machines.

“The lowest level of high-availability requirements can be met by virtual machine software combined with motion technology, but the highest levels of availability cannot be achieved by virtualization because the transition time is too long,” says Kusnetzky.

For environments that cannot tolerate even a few seconds of downtime a month – such as electronic funds transfers, where even small delays in transaction time can cost millions – you still need fault-tolerant, continuously available computers.

“Put in boxes designed for continuous availability, have virtualization software running on them,” Kusnetzky says, “and you'll never see a failure.”

Three Questions to Answer

Which apps require high availability? Enabling an app for high availability typically costs more because of the need for redundant hardware and software. For that reason, an organization must decide which systems really are critical and need to have 24x7 availability.

What's the required uptime? An organization must also decide how much downtime is acceptable. Will going offline a few minutes a month affect your operations? How about a few seconds? Apps that need to run continuously require more planning and an investment in fault-tolerant hardware. (The short calculation after these questions translates uptime targets into concrete downtime budgets.)

Is there a continuity strategy? Even the best failover strategy will falter if a natural disaster wipes out an organization's regional infrastructure. If you need five-nines uptime, be prepared to replicate critical systems and data at a second location – ideally, in a different time zone.
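As a back-of-the-envelope check when answering the uptime question above, each availability target translates directly into a downtime budget:

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960

for uptime_pct in (99.9, 99.99, 99.999):
    per_year = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows {per_year:.1f} min/year "
          f"({per_year / 12:.2f} min/month)")

# 99.9%   allows ~526 min/year  (~43.8 min/month)
# 99.99%  allows ~53 min/year   (~4.4 min/month)
# 99.999% allows ~5.3 min/year  (~26 seconds/month)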

Availability by the Numbers

27%  Data centers that have experienced an outage in the past year
 
56%  Data center managers who identify availability as their No. 1 priority

1st  Rank of human error among the causes of data center outages

50 minutes  Average enterprise IT downtime per week

3.6%  Annual enterprise revenue lost to downtime, on average

Sources: Aperture Research Institute, Emerson Network Power, EMA Research, Infonetics Research

