Cool Power

A growing number of college and university data centers employ pragmatic technologies to keep things green.

Data centers everywhere use a tremendous amount of energy. Nationwide, they used 61 billion kilowatt-hours in 2006, according to the latest numbers issued by the U.S. Environmental Protection Agency. Indirectly, they contribute a considerable share of greenhouse gases to the environment through inefficient design and those huge electrical appetites.

Colleges and universities have found that practical application of advanced environmentally friendly technologies in their data centers puts a dent in their biggest environmental problem: energy consumption. Each of these solutions is innovative, but hardly on the fringe, employing technology that is becoming widely available.

Denser Processors

All Paul Henderson has to do is run his hands over the servers to know he’s saving money and the environment. “They run so cool. It’s amazing,” says Henderson, head of systems and network engineering at Princeton University’s Plasma Physics Laboratory. What has Henderson so excited? Server processors — high-density quad-core processors that do four times as much work and require less energy than single-core chips.

The super-efficient processors save energy in two ways: first, they simply need fewer amps to run; second, less electricity means less heat. Because the servers produce less heat, they require less cooling, which cuts electricity use even further.

Princeton’s Plasma Physics Lab knew it had a big power consumption issue because its computers do a considerable amount of the heavy data processing for research at the school. Before 2006, the lab used a high-performance computing grid of servers with AMD Athlon processors to analyze and distribute experimental data. Given the intensity of the work — calculations on average take 240 hours, with processors running at 100 percent for the duration — servers would overheat and crash. After evaluating several options, the lab replaced its grid of 190 systems with 180 Sun Fire X2100 servers with dual-core AMD Opteron processors. “One thing we are also thrilled with is the reliability of the servers,” says Henderson. “Job completion rates are now 99.9 percent, versus the old cluster where job completion rates were about 50 percent due to the poor design of those systems and the consequent overheating.”

In August, Henderson took things a step further: He upgraded servers in one of the lab’s five clusters to quad-core AMD Opteron processors. Altogether, the 72 systems now pack 576 CPUs, with each system drawing 2 amps, compared with 0.9 amps for the dual-core servers and 2.9 amps for the old single-core machines. “We’ve quadrupled the density and only doubled the power consumption. It’s a phenomenal amount of power you get,” says Henderson.

Not counting the latest quad-core enhancements, the lab estimates performance improvements of 300 to 400 percent. Based on electricity costs of 12 cents per kilowatt-hour to run the servers and cool the data center, the Sun Fire servers save the lab $80,000 per year.
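
For readers who want to reproduce this kind of estimate, the sketch below shows one way to turn per-server amperage and an electricity rate into annual dollars. The amp draws, the 72-system count and the 12-cent rate come from the figures above; the 208-volt distribution and the 1.5x cooling overhead are illustrative assumptions, not numbers reported by the lab.

    # Back-of-the-envelope model for annual server electricity cost.
    # Minimal sketch: amp draws, server count and the $0.12/kWh rate come
    # from the article; the 208 V distribution voltage and the 1.5x
    # cooling overhead factor are assumptions for illustration only.

    VOLTS = 208              # assumed rack power distribution voltage
    RATE_PER_KWH = 0.12      # electricity cost cited above, $/kWh
    HOURS_PER_YEAR = 8760
    COOLING_OVERHEAD = 1.5   # assumed: ~0.5 W of cooling per watt of IT load


    def annual_cost(num_servers: int, amps_per_server: float) -> float:
        """Estimate yearly electricity cost for the servers plus their cooling."""
        it_kw = num_servers * amps_per_server * VOLTS / 1000.0
        total_kwh = it_kw * COOLING_OVERHEAD * HOURS_PER_YEAR
        return total_kwh * RATE_PER_KWH


    quad_core = annual_cost(72, 2.0)     # quad-core systems, 2 A each
    single_core = annual_cost(72, 2.9)   # single-core systems, 2.9 A each
    print(f"Quad-core cluster:   ${quad_core:,.0f} per year")
    print(f"Single-core cluster: ${single_core:,.0f} per year")
    print(f"Estimated savings:   ${single_core - quad_core:,.0f} per year")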

Consolidate Servers

In Lynchburg, Va., Randolph College is turning to technology to protect the environment cherished by its most famous alumna, writer Pearl S. Buck. “To us, using technology to reduce consumption and to function in a more environmentally responsible manner is both a mandate and a marriage,” says Victor Gosnell, the school’s newly appointed chief technology officer. “Not only do we look to function greener as a department, but we also look for ways that technology can be used to help other areas to operate greener.” That includes using e-mail and the Internet whenever possible, rather than printing, to reduce the consumption of paper. It also means more telecommuting by the IT staff.

Over the past year, the data center has saved 26,280 kilowatt-hours by reducing its servers from 52 to 48. Gosnell now hopes to consolidate more servers and, in turn, reduce the demand for energy-sapping air conditioning. “Going green is not a process that we will ever complete, but rather an ongoing quest.”
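
Those numbers imply a substantial per-machine draw. A quick sanity check, using only the figures above and the hours in a year, shows what each retired server and its share of cooling must have been consuming:

    # Sanity check on the consolidation figure above: 26,280 kWh saved in
    # a year by retiring 4 of 52 servers implies roughly 0.75 kW of
    # continuous draw per retired machine (including its cooling share).
    SAVED_KWH = 26_280
    SERVERS_RETIRED = 52 - 48
    HOURS_PER_YEAR = 8_760

    kw_per_server = SAVED_KWH / (SERVERS_RETIRED * HOURS_PER_YEAR)
    print(f"Implied continuous draw per retired server: {kw_per_server:.2f} kW")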

Sensors Manage Cooling Systems

Bryant University, in Smithfield, R.I., last year consolidated three disparate systems rooms into a modern data center. At the heart of the data center is a scalable BladeCenter solution from IBM and a power-and-cooling solution from American Power Conversion, called InfraStruXure.

Bryant consolidated 84 servers into 40 virtualized server blades. IBM’s BladeCenter H chassis with 14 bays reduces the amount of floor space used, while boosting processor density. Instead of scores of single-core processors scattered across three rooms, the school can pack 112 quad-core processors into one rack to handle its ever-growing data processing needs.

Higher-density processing, though, requires more cooling. To offset that, Bryant’s data center uses IBM Systems Director Active Energy Manager software. The tool tracks actual power usage, temperatures and heat emitted over time so data center operators can actively manage power and cooling. At the same time, blower fan modules in the chassis adjust to compensate for changing temperatures; at lower speeds they draw less power. Also, sensors on the servers automate the water-cooling units.
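
The control idea is simple even if the products are sophisticated: read the temperature sensors, then run the fans only as hard as the heat load requires. The sketch below illustrates that loop; the sensor and blower functions are hypothetical stand-ins, not part of IBM’s Active Energy Manager or the BladeCenter firmware.

    # Illustrative fan-control loop: speed the blowers up when intake air
    # runs warm, slow them down (and save power) when it runs cool.
    # read_inlet_temp_f() and set_blower_duty() are hypothetical hooks.
    import random
    import time

    TARGET_F = 75.0                    # assumed target intake temperature
    MIN_DUTY, MAX_DUTY = 0.30, 1.00    # blower duty-cycle limits
    GAIN = 0.05                        # duty-cycle change per degree of error

    def read_inlet_temp_f() -> float:
        """Stand-in for a chassis temperature sensor reading."""
        return 72.0 + random.uniform(-3.0, 8.0)

    def set_blower_duty(duty: float) -> None:
        print(f"blower duty cycle set to {duty:.0%}")

    def control_loop(cycles: int = 5) -> None:
        duty = 0.5
        for _ in range(cycles):
            error = read_inlet_temp_f() - TARGET_F
            duty = min(MAX_DUTY, max(MIN_DUTY, duty + GAIN * error))
            set_blower_duty(duty)
            time.sleep(1)

    control_loop()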

Meanwhile, the school’s data center operators want to measure productivity on a per-kilowatt basis, which they consider key to long-term improvements in data center energy efficiency. The goal is to measure everything from processors in the data center to the energy used by 3,500 student notebook computers.
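
However the metric is finally defined, the arithmetic is a simple ratio of useful work to metered energy. A minimal sketch, with made-up job counts and meter readings standing in for Bryant’s real data:

    # "Productivity per kilowatt-hour": useful work divided by metered energy.
    # The job count and the meter reading below are hypothetical examples.
    def productivity_per_kwh(units_of_work: float, kwh_consumed: float) -> float:
        return units_of_work / kwh_consumed

    jobs_completed = 12_400   # hypothetical: batch jobs finished this month
    facility_kwh = 38_000     # hypothetical: metered data center consumption

    print(f"{productivity_per_kwh(jobs_completed, facility_kwh):.2f} jobs per kWh")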

Bryant’s greening may go even further. “We’re looking at how to connect the whole campus, including energy management and HVAC [heating, ventilation and air conditioning],” says Art Gloster, vice president of information services at Bryant. The move would allow better balancing of heating and cooling across buildings, automatically shutting off both in unoccupied rooms. “We want to take it to the next level.”

Green Architecture

The University of California, San Diego, takes a similarly holistic approach. This fall, the university will open an 80,000-square-foot expansion of its San Diego Supercomputer Center (SDSC), designated as Leadership in Energy and Environmental Design (LEED) Silver by the nonprofit U.S. Green Building Council. UC San Diego was recently commended by the California Public Utilities Commission for operating at 53 percent above state standards for efficiency.

The new construction also employs a hybrid displacement ventilation system instead of conventional air conditioning. This allows the building to thermodynamically breathe, according to the university. Hybrid systems use a mix of mechanical and natural air vents to move cool and warm air throughout the building. Vents can be opened and closed depending on weather conditions and the time of year.

The expanded SDSC includes a new 5,000-square-foot data center that features several cooling innovations. High-efficiency air-handling units in the center blow from below (rather than above) the computer room floor. The center also uses tepid (rather than chilled) water for cooling, integrating sensors to match cooling with real-time loads. The data center expansion also includes an aisle containment system called CoolFlex, manufactured by Knürr for Emerson Network Power, that completely separates the cold air used to cool servers from the gear’s hot-air exhaust.

Coupled with enclosed server racks, the separation saves a great deal of fan energy. “The typical data center delivers 65-degree [Fahrenheit] air from the raised floor. Unfortunately, you can still see 85-degree or higher air at the intake of equipment in the tops of the racks due to hot-air mixing,” says Dallas Thornton, division director for cyberinfrastructure services at SDSC. Most IT equipment needs air in the 72- to 78-degree Fahrenheit range, according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers, an Atlanta-based trade group.

Glycol-Based Cooling

Tufts University began experimenting with different cooling methods for its data center in 2005, turning to glycol, a heat-transfer fluid commonly used in engines, including the microturbines known as combined heat and power (CHP) engines. In cold climates, the glycol within the fluid cooler can be chilled to below 50 degrees Fahrenheit and stored in a heat-exchange coil.

If the outside air is cold enough, the refrigeration cycle is turned off and the air that flows through the glycol-filled coil cools the IT environment.
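
The switchover logic amounts to comparing the outdoor temperature against the point at which the glycol loop can handle the load on its own. A minimal sketch of that decision, assuming the 50-degree figure above as the cutoff (any hysteresis band and the sensor plumbing would be site-specific):

    # Free-cooling ("economizer") decision: if outdoor air can chill the
    # glycol loop enough, shut off the refrigeration cycle and let the
    # coil cool the room. The 50 F cutoff comes from the article; a real
    # controller would add hysteresis so the modes don't toggle rapidly.
    FREE_COOLING_MAX_F = 50.0

    def choose_cooling_mode(outdoor_temp_f: float) -> str:
        if outdoor_temp_f <= FREE_COOLING_MAX_F:
            return "free-cooling"     # glycol coil alone handles the load
        return "refrigeration"        # fall back to the mechanical chiller

    for temp_f in (28, 45, 52, 68):
        print(f"{temp_f} F outside -> {choose_cooling_mode(temp_f)}")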

Tufts also rearranged the data center to create hot and cold aisles, aligning data racks in rows with their fronts facing each other and an empty aisle in between. Use of hot and cold aisles, a layout that is quickly becoming commonplace in data centers, has reduced energy costs at the Tufts facility by almost 20 percent, and it has also reduced carbon emissions.

Not All Greenhouse Gas Is Bad

There is a greenhouse gas worth keeping. The University of Notre Dame’s Center for Research Computing relocated some of its high-performance computing servers to the city of South Bend’s Potawatomi Greenhouses so the hot air produced in cooling the data center can be easily captured and piped into the greenhouse.

The city had been paying more than $100,000 a year to heat the 26,000-square-foot greenhouse. Now waste heat from the computer cluster warms the greenhouses at a far lower cost than the natural gas the city would otherwise burn. The projected reduction in heating costs offers hope that the city will be able to keep the greenhouses open.

The technique of reusing heat in this manner, known as grid heating, allows the research center to efficiently vent heat while adding more processing power. “Grid heating parallels other works to make data centers more sustainable while growing the computational capacity,” says Paul Brenner, high-performance computing engineer at the center. “My primary field is high-performance scientific computing, and we need as many processors as humanly possible.”
