
Jun 14 2012
Hardware

When It Comes to Server Lifecycle Management, Start with a Good Strategy

IT managers at colleges can learn from how a Chicago produce distributor developed a continuous refresh strategy.

Server and client virtualization, plus a host of other emerging technology trends, are making server refresh strategies more important than ever. As enterprises become more dependent on high-speed servers, there's almost no wiggle room for downtime or performance dips. So how are IT managers adjusting to this new reality? 

Christopher Nowak sums it up in two words: perpetual motion.

Nowak is the chief technology officer for Anthony Marano, a distributor of fresh produce based in Chicago. Although he and his staff try to always be on the move when it comes to maintaining the highest levels of server performance, there's nothing chaotic about their approach to new technologies. 

The Anthony Marano IT team has established a clear three-phase strategy that falls under the heading of server lifecycle management. The approach consists of reserving state-of-the-art servers for the organization's production systems (or prepping the latest and greatest hardware to take over production duties), shifting the previous generation of hardware to testing and emergency-backup duties, and decommissioning the oldest server resources.
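
To make the rotation concrete, here is a minimal sketch of that three-phase cycle in Python. The class names, tier labels and cluster names are illustrative assumptions, not Anthony Marano's actual tooling:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Cluster:
        name: str
        tier: str = "production"   # newest hardware starts on production duty

    @dataclass
    class Fleet:
        clusters: List[Cluster] = field(default_factory=list)

        def refresh(self, incoming: Cluster) -> None:
            # Each refresh pushes every existing cluster down one tier:
            # production -> test/backup -> decommissioned.
            for c in self.clusters:
                if c.tier == "production":
                    c.tier = "test/backup"
                elif c.tier == "test/backup":
                    c.tier = "decommissioned"
            incoming.tier = "production"
            self.clusters.append(incoming)

    fleet = Fleet()
    fleet.refresh(Cluster("blades-2010"))   # hypothetical cluster names
    fleet.refresh(Cluster("blades-2012"))
    for c in fleet.clusters:
        print(c.name, "->", c.tier)
    # blades-2010 -> test/backup
    # blades-2012 -> production

The point of the model is that nothing sits still: every arrival of new hardware demotes everything already in the fleet by one tier, which is exactly the "perpetual motion" Nowak describes.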

"This means our most critical production loads are on relatively new equipment all the time," Nowak says. "And our secondary applications — the areas that we could restore from a backup and are not going to be a catastrophe if they go down — are on our older equipment. This means we increase capacity as we need to, and we are always planning our next move." 

Nowak contrasts his current server refresh approach with past strategies, where the motion was anything but perpetual. "It used to be, 'OK, we've got new servers. We are good for at least two-and-a-half years. We don't even have to look at them,'" he recalls.   

But those days are long gone. A move to client virtualization and a migration to Microsoft Windows 7 are two recent initiatives that helped make the three-phase strategy the de facto refresh standard for the produce supplier. "We are always ramping up our horsepower and implementing more software packages, which means we are always in a state of server migration at one of those three levels," Nowak says.

Ad Hoc Refresh

When it comes to staying current with the latest server technologies, not all organizations are as diligent as Anthony Marano. This is partially because of the down economy, says Mark Bowker, senior analyst with the Enterprise Strategy Group.

 


"Although many organizations have a formal server refresh strategy in place, some enterprises, especially smaller ones with strained IT budgets, may still try to squeeze as much use out of their existing servers as possible — no matter how long they've been in service," he says. When these organizations do purchase new equipment, it is often on a project-by-project basis meant to address an emerging need, he adds. 

Andrew Jeffries, Lenovo's worldwide ThinkServer product marketing manager, concurs, saying server refreshes remain important, but they're not necessarily automatic for some organizations. 

"Many large enterprises are not ready to make the jump to new servers unless they see new technology that is compelling and has a clear return on investment," Jeffries says. "The good news is that even during the tough economic period that we've been through, leading manufacturers like Intel and others kept up their R&D efforts for next-generation memory technologies, new processor choices and new micro-architectures. The latest generation Xeon platform has a powerful story to tell."

But even given the financial constraints of today's shaky economy, delaying server upgrades may not be a prudent policy. Newer, higher-performance server architectures can let enterprises run existing IT workloads on fewer physical servers, which immediately lets a business recoup savings on management, maintenance and utility costs.
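
The consolidation math behind that argument is easy to rough out. The figures in this sketch are hypothetical placeholders, not numbers from the article:

    # Back-of-the-envelope server-consolidation savings.
    # All inputs are assumed values for illustration only.
    old_servers = 12               # aging physical servers in service
    consolidation_ratio = 3        # workloads one new server can absorb
    cost_per_server_year = 1800.0  # power, maintenance, management per box

    new_servers = -(-old_servers // consolidation_ratio)  # ceiling division -> 4
    annual_savings = (old_servers - new_servers) * cost_per_server_year

    print(f"{old_servers} old boxes -> {new_servers} new boxes, "
          f"saving ${annual_savings:,.0f}/year in run costs")
    # 12 old boxes -> 4 new boxes, saving $14,400/year in run costs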

"If I can better manage my server infrastructure with the same IT staff, that is an important benefit," ESG's Bowker says.

Lenovo's Jeffries advises IT managers to also consider power-management innovations that can bring down costs for individual servers, for server racks — even for entire server farms.

Incorporating technological innovations into day-to-day operations is another plus. With each refresh "there is an opportunity to see what else is out there on the market," Bowker notes. 

But IT managers need to take two factors into consideration before they make a move to a new server platform: the skill set of their staff and their relationship with their current equipment suppliers. 

"It really comes down to service, support and what ultimately will make the IT professional's job easier," Bowker adds.

Commitment to Blades

The servers that power Anthony Marano's operations are actually clusters of blade servers, which, along with associated storage area networks (SANs), have anchored the organization's infrastructure since it moved to server virtualization on a large scale.


In line with Nowak's phased approach, the newest and fastest blade clusters run production applications for about two years before the IT staff transitions them to less critical duties. "The lifespan we figure on for our blades is about three years, but some of that time is devoted to the commissioning and decommissioning processes," he explains.
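
That cadence can be sketched as month offsets for a single cluster. The commissioning and test/backup durations here are assumptions chosen to be consistent with Nowak's rough figures:

    # Sketch of the ~3-year blade lifecycle Nowak describes; durations assumed.
    COMMISSION_MONTHS = 3     # provisioning and software load (assumption)
    PRODUCTION_MONTHS = 24    # "about two years" on production duty
    TEST_BACKUP_MONTHS = 9    # remainder of the ~3-year lifespan (assumption)

    def lifecycle(start_month: int) -> dict:
        """Return the month offset at which each phase begins."""
        prod = start_month + COMMISSION_MONTHS
        test = prod + PRODUCTION_MONTHS
        retire = test + TEST_BACKUP_MONTHS
        return {"production": prod, "test/backup": test, "decommission": retire}

    print(lifecycle(0))
    # {'production': 3, 'test/backup': 27, 'decommission': 36}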

Nowak says provisioning new blade clusters can take several months because the organization also takes advantage of the time by deploying new software. For example, the organization plans to load the next set of blade servers with VMware vSphere 5, virtualization software released last year. 

Business Benefits

The incentives for Nowak's perpetual refresh strategy aren't surprising. He sees it as the best way to maintain the highest possible levels of performance and reliability, and he likens the demands placed on IT departments to those placed on Sisyphus, the mythological king condemned to roll a huge boulder up a hill for eternity, only to watch it tumble back down and begin the chore anew.

"None of our applications are shrinking in terms of their demands on the servers," he says. "Either all of our applications need more resources to run new features, or we are becoming more demanding on the server clusters because of higher usage volumes by our staff."

The move to client virtualization has only increased these server demands. "Having all these virtualized desktops sitting inside virtual server clusters is great. It means that backups and managing the safety of the data is all in the computer room. But this has definitely made the server refresh cycles tighter and changed the dependency that we have on the hardware," Nowak says. "If the server infrastructure isn't spot on with Six Sigma reliability, we don't even have local desktops; we don't have anything running." 

A New Partnership

The intensified pressure for server refreshes is also altering the organization's relationship with its hardware suppliers. Nowak relies on them to keep him abreast of innovations coming to market.

"Before, I would look at technology that's off the shelf and quick to implement," he says. "With our current refresh model, we are looking out a few months ahead of time to see how we can take advantage of a new product introduction and roll that into our schedule. 

"We are more forward-thinking than we were in the past," he adds. "And that's a good thing. But we have to ask more provocative questions and have a better vendor relationship to ask 'What's on your product roadmap for the next six to nine months?'"