4 Tips For Your Next Data Center Overhaul
Use these tips the next time you plan a data center refresh.
A data center's physical infrastructure is the foundation for its success or failure. A solid physical infrastructure is an effective base on which to build services, while deficiencies in physical infrastructure will plague even the most diligent data center operators.
At the University of Wisconsin–Whitewater, we recently completed a technology refresh of our data center. The upgrade has let us move more effectively into contemporary technologies, such as blade-based computing and large-scale virtualization. Physical infrastructure is now far less of a barrier than it was before, and we are in a much better position to provide the flexible and cost-effective services that our institution demands. Here's what we learned in the process:
Provide for adequate floor space.
There are several factors to consider when it comes to physical space. First, adequate floor space for current and anticipated future demand is essential. To create this estimate, we projected our growth using average-case and worst-case figures and found an appropriate target that fit within the maximum space, power and cooling constraints available to us without building a new facility.
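As a rough illustration of that kind of projection (not the university's actual figures), the short sketch below compounds a current rack count under hypothetical average-case and worst-case growth rates and compares the result with a fixed space ceiling; every number in it is a placeholder you would replace with your own data.

    # Capacity-projection sketch with hypothetical figures: compound the
    # current rack count under two growth scenarios and check the result
    # against a cap set by available space, power and cooling.

    def project_demand(current_racks, annual_growth, years):
        """Compound the current rack count by a fixed annual growth rate."""
        return current_racks * (1 + annual_growth) ** years

    current_racks = 20    # racks in use today (assumed)
    max_racks = 40        # ceiling set by space, power and cooling (assumed)
    horizon_years = 10

    for label, growth in [("average case", 0.05), ("worst case", 0.12)]:
        projected = project_demand(current_racks, growth, horizon_years)
        verdict = "fits" if projected <= max_racks else "exceeds the cap"
        print(f"{label}: {projected:.1f} racks after {horizon_years} years ({verdict})")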
Once we had determined how much floor space we needed, we had to look at the space both above and beneath the floor. Because we were renovating an existing data center space, we had a raised floor of about 24 inches that we could work with. Unfortunately, this space was used for both power and data distribution, without much thought given to routing or airflow. As a result, the plenum was not distributing air effectively, and we were experiencing heat problems even though we had a large surplus of cooling available.
Almost 80 percent of data centers were built before the dot-com era and are technically obsolete.
SOURCE: Gartner
To clean up the mess under the floor and prevent it from recurring, we decided to move data cabling overhead. Doing so necessitated raising our suspended ceiling by about 2 feet to maintain the 8.5-foot minimum clearance below ceiling obstructions required by the Telecommunications Industry Association's TIA-942 data center standard. Installing a ladder rack and fiber duct below the suspended ceiling creates a visually striking cable plant, one that our engineers are motivated to keep clean and orderly because it's in plain view throughout the data center.
Keep under-floor cabling out of the airflow.
We were careful to route the cabling that remained under the floor mostly perpendicular to the airflow and completely out of the cold aisles so that it would not impede air distribution. Because hot aisles have no bearing on airflow under the floor, they are a good location for cabling. While we found that we did not need to install ducts under the floor, doing so would further increase cooling efficiency.
Also following TIA-942, we used a tiered distribution concept for cabling our equipment. Rather than cabling all data center components directly to our core network equipment, we established a distribution area in each aisle and cabled individual racks to this distribution area. In TIA parlance, this is referred to as a horizontal distribution area and can be thought of as a “wiring closet” for the aisle. This allows changes to be made between a server and its horizontal distribution area without disturbing unrelated cabling. Minimizing the scope of change minimizes the potential impact as well.
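To make the tiered concept concrete, here is a minimal model (the aisle, rack and core names are invented for illustration) showing that a change at any one rack touches only the run back to its own horizontal distribution area, never the core cabling.

    # Illustrative model of tiered cabling per TIA-942: each rack patches to
    # its aisle's horizontal distribution area (HDA); only the HDAs uplink to
    # the main distribution area at the core. All names are hypothetical.

    cabling = {
        "HDA-aisle-1": ["rack-1A", "rack-1B", "rack-1C"],
        "HDA-aisle-2": ["rack-2A", "rack-2B"],
    }
    core_uplinks = {"MDA-core": list(cabling)}  # the core only sees the HDAs

    def affected_by_change(rack, topology):
        """A move/add/change at a rack touches only the run to its own HDA."""
        return [hda for hda, racks in topology.items() if rack in racks]

    print(affected_by_change("rack-1B", cabling))  # ['HDA-aisle-1']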
Alongside our cable distribution network (and using the same cable distribution channels), we also installed a common bonding network that was tied to our building grounding system. This meant each individual enclosure that was added to the data center environment would have easy access to a grounding point so that the enclosure could be grounded properly.
Calculate your cooling requirements over five to 10 years.
An interesting note about modern computing equipment: For planning purposes, at least, it is almost completely efficient at converting electrical power to heat. As a result, we can generally assume that every watt of electrical energy that we put into our data center will need to be extracted in the form of heat energy. Because one kilowatt of electrical load is roughly equivalent to 3,412 British thermal units (BTUs) per hour, or 0.28 tons of cooling capacity, we can use our anticipated electrical load to drive the calculations for our cooling load.
Doing so assumes that we're using our cooling effectively. Our own data center provided an excellent counterpoint. We started our renovation project with about 60 tons of cooling available to support a 50kW data center. That cooling capacity should have been sufficient to support more than 200kW, but because of poor distribution we still experienced heat-related failures in several areas.
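As a back-of-the-envelope check on those numbers (a sketch for illustration, not part of the original design work), the snippet below applies the standard conversion factors of roughly 3,412 BTUs per hour per kilowatt of load and 12,000 BTUs per hour per ton of cooling.

    # Convert between electrical load and cooling load using standard factors.

    BTU_PER_HR_PER_KW = 3412      # ~3,412 BTU/hr of heat per kilowatt of IT load
    BTU_PER_HR_PER_TON = 12000    # 12,000 BTU/hr per ton of cooling

    def kw_to_tons(kw):
        """Tons of cooling needed to remove the heat from a given electrical load."""
        return kw * BTU_PER_HR_PER_KW / BTU_PER_HR_PER_TON

    def tons_to_kw(tons):
        """Electrical load (kW) a given amount of cooling can absorb."""
        return tons * BTU_PER_HR_PER_TON / BTU_PER_HR_PER_KW

    print(f"50 kW of load needs about {kw_to_tons(50):.1f} tons of cooling")  # ~14.2
    print(f"60 tons can absorb about {tons_to_kw(60):.0f} kW of load")        # ~211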
To remedy this situation, we did several things. First, because our air-conditioning system needed to be replaced, we purchased units that better matched our cooling requirements. Cooling is an area where oversizing can really cost, so we scaled the system to match our five-to-10-year projections and left room in the plan for additional growth. We sized our forced-air cooling system to support power densities of up to 8kW per enclosure.
Because almost all of our enclosures had power densities lower than this, we matched our forced-air cooling to the most common need and plan to install additional capacity as point solutions where we need it. For enclosures with higher heat densities, we plan to use a ducted in-row cooling system. By better matching our forced-air cooling strategy to our true needs, we were able to reduce our cooling costs by almost 40 percent. Per TIA-942, cooling targets were set to maintain 68 to 72 degrees Fahrenheit with a relative humidity of 40 to 55 percent.
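One simple way to apply that 8kW threshold during planning is to compare each enclosure's expected draw against the forced-air limit and flag anything above it as a candidate for in-row cooling; the per-enclosure figures below are made-up examples, with only the 8kW limit taken from the text.

    # Flag enclosures whose planned draw exceeds the forced-air design limit;
    # those become candidates for ducted in-row cooling.

    FORCED_AIR_LIMIT_KW = 8.0   # per-enclosure forced-air limit (from the text)

    enclosures = {              # enclosure -> planned draw in kW (assumed values)
        "rack-A1": 3.5,
        "rack-A2": 5.0,
        "rack-B1": 11.0,        # e.g. a dense blade chassis
        "rack-B2": 6.5,
    }

    in_row_candidates = [name for name, kw in enclosures.items()
                         if kw > FORCED_AIR_LIMIT_KW]
    print("Candidates for ducted in-row cooling:", in_row_candidates)  # ['rack-B1']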
We also changed the room orientation to match the hot aisle/cold aisle configuration recommended by TIA-942. This arrangement makes the best use of cold air by positioning racks so that aisles in the data center alternate between cold (inlet) and hot (exhaust). By arranging racks this way and using filler panels, we limited the mixing of hot and cold air and therefore made the most of the cool air we were generating. Through the simple changes of reorienting racks and clearing our underfloor plenum, we were able to reduce our inlet and exhaust temperatures by almost 10 degrees Fahrenheit with no change to our A/C set points.
Protect the data center from fire.
We implemented a dual-interlock pre-action sprinkler system paired with a very early smoke detection apparatus (VESDA) air-sampling system. The dual interlock releases water only when both the detection system and a sprinkler head have tripped, which greatly reduces the risk of an accidental discharge onto equipment. The VESDA system continuously samples the air in the data center looking for signs of a fire; if smoke is detected, an alert is triggered and staff can review and correct the situation even before a fire begins.
Project Metrics
Here are some direct results from the data center refresh at the University of Wisconsin–Whitewater:
Space Utilization:
Space utilization dropped from 80 percent to about 40 percent, even though the school reduced the size of the data center by 50 percent.
Energy Efficiency:
Cooling demand dropped by about 50 percent thanks to fewer, more efficient air-conditioning units deployed more effectively.
Cycle Time:
Cycle time for complex maintenance decreased from about 4 hours to 1 hour or less because of better organization.