For years, Utah State University kept more capacity in its data center than it needed. This approach made sense because the data needs of the research teams and the colleges would change over time, so the IT staff erred on the side of having excess capacity to accommodate spikes, especially for the researchers.
Today, with virtualization and more energy-efficient power and cooling technology, IT managers have found that modular data centers give them the flexibility they need as research, educational and administrative demands change.
A modular data center consists of scalable, pre-engineered modular components that, at a minimum, include power distribution, cooling, uninterruptible power supply systems and backup generation, but that can also include monitoring and control systems, access control and physical security.
“The requirements on our data center grow and shrink, and we needed to think more broadly than just moving to blades,” says David Tidwell, the university’s physical infrastructure coordinator. “Although blades definitely help with flexibility because they are scalable, cooling and electricity are also important. Cooling, generators, backup power — everything needs to ramp up and down with your needs.”
Tidwell and his staff began taking a more modular approach in 2007 when a series of old, inefficient under-floor cooling units needed to be replaced.
The new data center, which opened four years ago, uses only about 50 percent of the power of the previous facility. It features efficient in-row cooling technology from APC, which places cooling units between server racks in a row so that equipment with specific temperature requirements stays at optimum levels. The coolers are housed in APC NetShelter AR3150 rack cabinets. The data center also uses three APC 80-kilowatt UPS systems that can easily be upgraded to 250-kilowatt units. All of this equipment is monitored with APC's InfraStruxure management system, which includes sensors and software.
In fact, everything Tidwell bought for the new data center is as modular as possible. Even the cable tray under the raised floor is modular and can be expanded in 2-foot-by-2-foot sections as needed.
“We can add to everything as we need it,” Tidwell says. “We may have 10 racks in a row, but if we need more space, it’s just a matter of rolling in another in-row cooler and a couple of racks. The UPS is expandable, and we have sized our chillers so they are built for the future.”
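The arithmetic behind this incremental approach is straightforward. As a rough sketch (the per-rack load and module sizes below are illustrative assumptions, not USU's actual figures), the number of modular UPS units needed simply tracks the rack count:

```python
import math

def ups_modules_needed(racks, kw_per_rack=5.0, module_kw=80.0, spare=1):
    """Estimate how many modular UPS units cover a given rack count.

    kw_per_rack, module_kw and spare are hypothetical values chosen
    for illustration; a real sizing exercise would use measured loads.
    """
    load_kw = racks * kw_per_rack          # total IT load to protect
    return math.ceil(load_kw / module_kw) + spare  # round up, keep one spare

# Growing from 10 racks to 30 adds UPS modules one at a time,
# rather than forcing an up-front build-out for peak capacity.
for racks in (10, 20, 30):
    print(racks, ups_modules_needed(racks))
```

The point of the sketch is that capacity is added in small increments as racks roll in, which is exactly the flexibility Tidwell describes.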
The decision-making process Tidwell and his staff undertook that led to a modular data center is fairly common, says Jason Schafer, a research manager at Tier1 Research in Bethesda, Md. Schafer says modular data centers are part of what Tier1 describes as “Datacenter 2.0” — a fundamental shift in the way data centers are designed, built and commissioned.
“Modular technology is much faster to get up and running, and it can save money over time,” Schafer says. “And when you adopt a modular approach to the data center, you can take advantage of advances in efficiency and technology as it happens, without replacing your entire infrastructure.”
[Sidebar stat: the time it takes to fully deploy a modular data center. SOURCE: Cisco Systems]
If anyone knows the benefits of modular data centers, it’s Greg Hidley, technical director for the California Institute for Telecommunications and Information Technology (Calit2), based at the University of California, San Diego. Calit2 started building a modular data center in 2008, the year it won (with UCSD) a grant from the National Science Foundation for Project GreenLight, an effort designed to develop more energy-efficient computing.
As part of the project, the team at UCSD installed a Sun Microsystems modular data center (Sun has since been bought by Oracle) to test its theories. In addition to a 40 percent energy reduction right off the bat, the deployment reaped many other benefits.
“Using a modular data center greatly improved our ability to add sensors and monitor environmental characteristics close to the computers, without undue interference by other building-level power, thermal and air-flow activities that you would have in a typical data center,” Hidley says.
Hidley and his team placed different types of computers inside the modular environment and equipped them with monitoring instruments so that at any given moment, they knew exactly how much power each computer and device was drawing, how much heat was dissipating and how efficiently the air handlers, chilled water systems and power distribution systems were operating.
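From readings like these, a team can compute standard efficiency figures such as PUE (Power Usage Effectiveness, the industry metric defined by The Green Grid): total facility power divided by the power that actually reaches the IT equipment. The sample readings below are illustrative, not GreenLight's published measurements:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power.

    Lower is better; 1.0 would mean every watt drawn from the
    utility goes to computing rather than cooling or distribution.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 120 kW at the utility meter,
# 100 kW reaching the servers and storage.
print(round(pue(120.0, 100.0), 2))  # prints 1.2
```

Instrumenting each device, as Hidley's team did, lets this ratio be tracked continuously instead of estimated once at commissioning.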
“The goal was to show what modular data centers could provide in terms of cost containment, green IT, saving money and power,” Hidley says. “We’ve done that, and our research is ongoing. Every day people are running processes in the modular data center and looking at what they can do.”
Modular Products Come on Strong
Modular data center technology has come a long way in a short time. When the first products, offered mainly by server manufacturers, came on the market three years ago, they were not what users were looking for. Not only were they based on proprietary technology, but they also didn’t solve the biggest problem data center managers were facing: how to quickly add more capacity.
What a difference a few years makes. Today’s modular data center products, including IBM’s Portable Modular Data Center, HP’s EcoPOD, Cisco Systems and NetApp’s FlexPod, and VCE’s Vblock, are targeted at organizations dealing with capacity, scalability and cost issues.
“Everybody talks about capacity planning, but there really is no such thing. At best, it’s ‘capacity guessing,’ ” Schafer says. “Modular data center technologies take away some of the need for exact capacity planning, because they can keep pace with where the organization is at any given time.”
Schafer expects the use of modular data center components to grow significantly over the next year.
“If organizations don’t consider modular components at the very least as part of their build strategies, they will be starting off at a disadvantage both financially and in terms of flexibility,” he adds.