High-Availability Benefits from Virtualization

Schools combine virtualization with fault-tolerant hardware for ultimate resiliency – and to ease server management.

May 2010 E-newsletter

It's been said that some organizations are too big to fail. The same is true of your critical applications and data. A server crash, a power failure, even user error can cause your systems to become unavailable precisely when you need them most. That's why more school districts are turning to virtualization to ensure high availability of their most critical IT assets.

“The benefit to using virtualization for high availability is that it's much simpler for IT managers,” says Dan Kusnetzky, vice president of research operations for the 451 Group. “You don't have to change applications manually if they're running inside encapsulated servers or clients using motion technology. Virtualization offers simplicity, in that you have multiple machines running on a single server and the workload can move back and forth as needed.”

Of course, high availability means different things to different people. For some, it's having a virtualized system where, if a critical app or even an entire server fails, a new virtual machine automatically takes over within minutes or possibly seconds. For others, it's using fault-tolerant servers that provide full hardware redundancy, allowing for real-time replication of processing and data storage and assuring uptime that approaches 99.999 percent. 
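
In practical terms, those availability targets translate directly into downtime budgets. The short Python calculation below (an illustrative aside, not from the article) converts a few common targets and shows why "five nines" leaves only about five minutes of slack per year:

    # Convert availability targets into allowed downtime per year and per month.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for availability in (99.9, 99.99, 99.999):
        downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% uptime allows {downtime_minutes:.1f} min/year "
              f"({downtime_minutes / 12:.2f} min/month)")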

Keep It Moving

The Dougherty County School System uses a two-pronged approach to ensure availability, employing both virtualization and thin-client computing.

At the administrative level, Dougherty is migrating physical servers to virtual ones using VMware vCenter, says Bill Dorminy, network administrator for the county school system in Albany, Ga. Though the main benefits are lower costs and reduced power consumption, Dorminy says virtualization also makes it easier to manage security and data recovery.

“We can complete local backups of entire servers and have those replicated to an offsite data center in a matter of seconds,” he says. “Through the use of our iSCSI storage area network, we are able to take snapshots of all of our virtualized systems and replicate those to a secure offsite location. In case of emergency, we can easily bring the latest snapshot online and get these systems functional in minutes.”
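
Dorminy doesn't spell out the tooling behind those snapshots, and the replication to the offsite data center is handled at the array level by the iSCSI SAN itself. As a rough, hypothetical sketch of the virtual-machine side of such a routine, a nightly pass over every VM registered in vCenter might look like the following Python/pyVmomi snippet (the host name, service account, and snapshot policy are invented for illustration):

    # Hypothetical example: quiesced snapshots of all powered-on VMs in vCenter.
    # Host, credentials, and naming are placeholders; the district's actual
    # backup tooling is not described in the article.
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="vcenter.example.edu", user="backup-svc",
                      pwd="********", sslContext=context)
    try:
        content = si.RetrieveContent()
        vm_view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in vm_view.view:
            if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
                continue
            # Memoryless, quiesced snapshots keep the deltas small before the
            # SAN replicates the underlying volumes offsite.
            WaitForTask(vm.CreateSnapshot_Task(name="nightly",
                                               description="pre-replication snapshot",
                                               memory=False, quiesce=True))
    finally:
        Disconnect(si)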

In classrooms, the county employs NComputing's L-series thin clients to ensure uptime. Each classroom has one host Windows PC and five thin clients, says Educational Technology Coordinator Les Barnett. If a host fails, the system automatically rolls over to a backup host in a neighboring classroom or the school's media center.

“So the worst thing that can happen to me is one PC in a classroom goes out,” Barnett says. “There's no real downtime. The five thin clients continue to work while we get the host back up and running.”

The Lake Elsinore Unified School District in Riverside County, Calif., began virtualizing its environment with VMware in January 2007. The district keeps critical apps available through a combination of Double-Take software, which provides real-time data replication and failover, and HP LeftHand SANs, which house critical district data, including student attendance records and personnel files.

The result is a system almost free of downtime, says Systems Administrator Jeff McCullough. “Teachers don't complain about e-mail being down anymore, because it's never down.”

Virtualization alone, however, won't guarantee continuous operation. The most reliable approach is to create a virtualized environment using fault-tolerant hardware to synchronize data processing across multiple virtual machines.

“The lowest level of high-availability requirements can be met by virtual machine software combined with motion technology, but the highest levels of availability cannot be achieved by virtualization because the transition time is too long,” Kusnetzky says.

For environments that cannot tolerate even a few seconds a month of downtime – such as electronic funds transfers, where even small delays can cost millions – you still need nonstop computing muscle.

“Put in boxes designed for continuous availability, have virtualization software running on them, and they'll never see a failure,” Kusnetzky says.

Three Questions to Answer

Which apps require high availability? Enabling an app for high availability typically costs more because of the need for redundant hardware and software. For that reason, an organization must decide which systems really are critical and need to have 24x7 availability.

What's the required uptime? An organization must also decide how much downtime is acceptable. Will going offline a few minutes a month affect your operations? How about a few seconds? Apps that need to run continuously require more planning and an investment in fault-tolerant hardware.

Is there a continuity strategy? Even the best failover strategy will falter if a natural disaster wipes out an organization's regional infrastructure. If you need five-nines uptime, be prepared to replicate critical systems and data at a second location – ideally, in a different time zone.

Availability by the Numbers

27%  Data centers that have experienced an outage in the past year

56%  Data center managers who identify availability as their No. 1 priority

1st  Rank of human error among causes of data center outages

50 minutes  Average enterprise IT downtime per week

3.6%  Annual enterprise revenue lost to downtime, on average

Sources: Aperture Research Institute, Emerson Network Power, EMA Research, Infonetics Research