This is an excerpt from our white paper Taking I.T. Agile With Server Virtualization.
How Server Virtualization Works
Not so long ago in the vast majority of data centers and server rooms, every application or service resided on its own dedicated physical server. All the capacity of each of these servers — including CPU cycles, memory and input/output (I/O) — was allocated exclusively to its resident application or service. If an app didn't use all of its host's capacity, the excess simply went to waste.
What’s more, if an app or service in such an environment started to max out some aspect of a physical server’s capacity, the only option was to buy another server and then allocate it to that application or service — even if only a fraction of its capacity was actually needed. Server virtualization radically alters this scenario.
With virtualization, a layer of intelligence and automation called a hypervisor is placed between the physical server’s hardware resources and its operating system. This hypervisor allows multiple virtual servers (more commonly referred to as virtual machines, or VMs) to run on a single physical server and share its CPU, memory and I/O.
These VMs operate independently from one another. Therefore, their capacities — and even their operating systems — can vary. For example, a single physical server can run one Windows VM that consumes up to 50 percent of its physical resources and half a dozen Linux VMs, each of which only requires a small percentage of them.
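To make the sharing arrangement concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the percentage figures and the simple admission check are assumptions for this toy model, not any real hypervisor's interface.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    os: str
    cpu_pct: float   # share of the host's CPU this VM may consume
    mem_pct: float   # share of the host's memory this VM may consume

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

    def cpu_used(self) -> float:
        return sum(vm.cpu_pct for vm in self.vms)

    def mem_used(self) -> float:
        return sum(vm.mem_pct for vm in self.vms)

    def allocate(self, vm: VM) -> bool:
        # Admit the VM only if the host still has room for it.
        if self.cpu_used() + vm.cpu_pct <= 100 and self.mem_used() + vm.mem_pct <= 100:
            self.vms.append(vm)
            return True
        return False

# The scenario from the text: one Windows VM taking half the host's
# resources, plus half a dozen small Linux VMs.
host = Host("server-01")
host.allocate(VM("win-app", "Windows", cpu_pct=50, mem_pct=50))
for i in range(6):
    host.allocate(VM(f"linux-{i}", "Linux", cpu_pct=5, mem_pct=5))

print(host.cpu_used())  # → 80: the remaining capacity stays available
```

In a real deployment the hypervisor enforces these shares dynamically rather than as fixed reservations, but the bookkeeping idea is the same.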
In addition to allowing a physical server to be shared among multiple VMs, virtualization also makes it easy to move virtual servers between hosts. A VM is, in essence, just software encapsulated in a set of files, so it can run as easily on one machine as another.
As a result, a group of virtualization-enabled physical servers in a data center can be treated as a single pool of CPU, memory and I/O capacity that can be allocated flexibly to whatever applications or services need use of them at any given time.
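The pooling idea above can be sketched with the same kind of toy model: a simple first-fit scheduler places each VM on the first host with spare capacity, and "moving" a VM between hosts is just bookkeeping. The host names, VM sizes and first-fit policy here are illustrative assumptions, not a real scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu_pct: float  # share of one host's CPU this VM needs

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

    def load(self) -> float:
        return sum(vm.cpu_pct for vm in self.vms)

def place(pool, vm):
    """First-fit: put the VM on the first host with room for it."""
    for host in pool:
        if host.load() + vm.cpu_pct <= 100:
            host.vms.append(vm)
            return host
    return None  # the whole pool is full — time to add hardware

def migrate(vm, src, dst):
    """In this model, moving a VM is just moving it between lists."""
    src.vms.remove(vm)
    dst.vms.append(vm)

pool = [Host("server-01"), Host("server-02")]
place(pool, VM("app-a", 60))   # lands on server-01
place(pool, VM("app-b", 60))   # won't fit there, lands on server-02
tools = VM("tools", 30)
place(pool, tools)             # squeezes onto server-01 (90% loaded)

# Rebalance: shift the small VM so server-01 has headroom again.
migrate(tools, pool[0], pool[1])
print(pool[0].load(), pool[1].load())  # → 60 90
```

A production scheduler also checks memory and I/O, and live migration copies the VM's state while it keeps running, but the capacity arithmetic is the same.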
When to Consider It
The broad benefits of server virtualization are rapidly making it a standard feature in the infrastructure of just about every large organization, as well as many small and midsize enterprises. But its adoption typically is triggered by one or more of the following conditions, each of which threatens the IT staff's ability to meet the organization's needs effectively and efficiently:
Underutilization of server capacity: Many organizations find that a large percentage of the servers in their data centers operate at as little as 20 percent to 35 percent of actual capacity. Low utilization rates mean that these enterprises won’t get nearly as much value from their server investments as they could.
In many cases, organizations also have limited budgets for the purchase of new servers. Utilizing the idle capacity on those already in use is a prime driver for most virtualization initiatives.
Fluctuating workloads: Many organizations have workloads that vary widely. Sometimes peak processing demands are periodic and predictable, such as the approach of the April tax filing deadline or the beginning of a semester for colleges and universities.
At other times, fluctuations are less predictable. Retailers can’t always predict sudden surges of consumer interest. Government agencies can’t always anticipate a crisis that affects local constituents. Either incident could suddenly place an unexpected burden on IT capacity.
Regardless of what causes a spike, IT shops recognize that they must be able to maintain acceptable service levels for critical applications, even in the event of sudden demand surges.
Server provisioning as a process bottleneck: Organizations of all kinds find themselves increasingly dependent on technology to expand their services, bring new products to market and otherwise evolve to meet changes taking place in the world around them. Unfortunately, the time it takes to procure, install and configure a physical server can slow down the process of developing and rolling out critical IT capabilities.
As this process bottleneck becomes more problematic, organizations become interested in fast, easy and less expensive ways of provisioning server capacity to support new capabilities.
Service outages and inadequate resiliency: Hardware failures, sluggish operating systems and other technical problems also can precipitate a move to virtualization. This is particularly true if technical hiccups interfere with delivery of essential services to internal or external users. Outages spur organizations to look for ways to make their IT infrastructures more resilient without large capital investments in redundant hardware.
Of course, many organizations decide to adopt server virtualization not because of any particular crisis in IT operations, but simply because they want to keep optimizing the resource efficiency and agility of their data centers. Either way, any organization contemplating a move to server virtualization should first audit its data center to get a handle on the scope of the initial implementation, determine which applications and services most urgently need to be virtualized, and craft a long-term migration strategy.