Jan. 29, 2013

Perfect Harmony: How The Juilliard School Keeps Its Data Center Cool and Cost-Effective

Colleges can lower temperatures and their power expenses with smart planning.

The Juilliard School CTO Tunde Giwa says data center planning should look beyond three to five years to ensure infrastructure keeps pace as IT service offerings expand. Credit: Matt Furman

It was late July 2011, and New York City was hotter than a sauna. The record-breaking heat caused cascading power failures across the city — including one on the campus of The Juilliard School, the renowned conservatory at the Lincoln Center for the Performing Arts.

That was bad news for the school's data center, where internal temperatures hit 110 degrees, says Juilliard Chief Technology Officer Tunde Giwa. After the campus's central cooling unit went down, the data center's email servers and enterprise resource planning system both failed. Although Giwa was able to bring in a portable cooling unit that lowered temperatures and let the servers power back on, critical systems were offline for half a day.

"That particular incident made us realize we had to up our game and get a backup cooling unit," says Giwa, whose IT team supports more than 500 desktop computers along with about 60 servers, about half of them virtual.

Last summer, Juilliard replaced a half-dozen individual uninterruptible power supply systems in its data center with a whole-room, 20-kilovolt-amp APC Symmetra UPS. Now even if the school's backup generator fails, the IT team will have up to 40 minutes of battery power to perform an orderly shutdown of the servers. And Juilliard added a 60,000-Btu backup cooling unit that will automatically kick in if the primary cooling system fails again.

"Relying on the campus's central cooling plant created a single point of failure for the data center," Giwa says. "That was a big warning sign that we needed some backup."

Phantom Problems

Sudden or inexplicable equipment failures are the biggest signal that a data center cooling system isn't what it ought to be, says Rich Siedzik, director of computer and telecommunications services for Bryant University in Smithfield, R.I.

"For us, the first signs were that we started to have problems with equipment, and we couldn't identify what was causing them," Siedzik says. "It turned out to be heat. We just weren't pushing enough air through the cabinets."

When the 3,400-student university built a dedicated data center six years ago, it restructured its cooling system to increase airflow across components.

While virtualization has reduced the number of physical servers — thereby reducing the amount of heat they produce — the increase in networking equipment to support a bring-your-own-device program and wireless networking has more than made up the difference.

"These days everyone expects you to be up 24-7."

"Until about two years ago, we could bring the chiller down for 20 minutes to do maintenance without having to shut down other equipment in the data center," Siedzik says. "Today, we can go maybe six to 10 minutes with the chiller off before we have to start shutting things down. These days everyone expects you to be up 24-7. A redundant chiller is no longer optional."

Before installing new equipment in the data center, Siedzik and his team also use APC StruxureWare Central data center monitoring software to model how much heat any additional gear will produce, and where to locate new equipment to maximize airflow. "The software gives us a heat map of the data center, shows us where the hot and cold spots are and how hard each fan is working," he says. "We discovered that if we moved a box to a different rack, some in-row coolers wouldn't have to work as hard."
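StruxureWare handles that modeling for Bryant. Purely to illustrate the underlying arithmetic, and not the product's API, the toy check below sums each rack's projected heat output (watts converted to BTU/hr) and keeps only the racks whose in-row cooler would still have headroom after the new gear is added. The rack names, loads and capacities are invented:

```python
# Toy version of the placement check described above: convert IT load to heat,
# add the new gear, and keep racks whose cooler still has headroom.
# This is NOT the StruxureWare API, just the arithmetic behind the idea.

WATTS_TO_BTU_HR = 3.412  # 1 watt of IT load produces about 3.412 BTU/hr of heat

# Hypothetical racks: current IT load (watts) and in-row cooler capacity (BTU/hr).
racks = {
    "rack-A1": {"load_w": 4500, "cooling_btu_hr": 20_000},
    "rack-A2": {"load_w": 2000, "cooling_btu_hr": 20_000},
    "rack-B1": {"load_w": 5200, "cooling_btu_hr": 20_000},
}

def placement_options(new_gear_w: float) -> list[str]:
    """Return racks that could absorb the new gear's heat with 10% headroom."""
    fits = []
    for name, rack in racks.items():
        projected_btu = (rack["load_w"] + new_gear_w) * WATTS_TO_BTU_HR
        if projected_btu <= 0.9 * rack["cooling_btu_hr"]:
            fits.append(name)
    return fits

# Where could a hypothetical 1.2 kW appliance go without overworking a cooler?
print(placement_options(1200))  # -> ['rack-A2']
```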

Power Hungry

52%: the percentage of data centers that reported uptime problems due to power or cooling constraints

SOURCE: "The Datacenter's Role in Delivering Business Innovation: Using DCIM to Provide a Common Management Approach" (IDC Global, November 2012)

Virtualization has brought many efficiencies, but it's also made data centers denser and more power hungry, says Gartner Research Director Nik Simpson.

"Ten years ago, a typical rack load would be from 3 to 5 kilowatt-hours," he says. "Now the average is around 8 kilowatt-hours, with peaks from 15 to 20. That's way more than a typical house burns, and that's just one 6-foot rack. Universities that have to support high-performance computing environments will have higher power and cooling requirements than many corporations."

That's why smart design plays a huge role in the efficiency of data center cooling and power systems.

When the 12,000-student University of Wisconsin–Whitewater refreshed its data center a few years ago, efficiency was top of mind, Network Operations Center Manager Tom Jordan says. The university started by organizing its equipment into hot and cold aisles. Rack equipment pulls in air from the cold aisles and vents it into the hot ones. Now 15 to 20 degrees hotter, the air is sucked up into the chiller and sent back to the cold aisles.

The team also ran power cables below the raised floor of the data center, raised the ceiling 2 feet to run data and other low-voltage cabling overhead, and cut the floor space in half. Together, the changes drove the temperature down 10 degrees without adding cooling costs, he says.
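That 15- to 20-degree rise across the racks also sets how much air the cooling system has to move. A common planning rule of thumb (a rough figure for sea-level air, not a substitute for proper airflow modeling) ties airflow to heat load and temperature rise; the 40 kW room load below is hypothetical:

```python
# Rule-of-thumb airflow needed to carry away a heat load at a given temperature
# rise across the racks: CFM ~= 3.16 * watts / delta_T_F (sea-level air).
# A rough planning figure, not a computational fluid dynamics result.

def required_cfm(heat_load_w: float, delta_t_f: float) -> float:
    """Cubic feet per minute of airflow to remove heat_load_w at a delta_t_f rise."""
    return 3.16 * heat_load_w / delta_t_f

# A hypothetical 40 kW room load with the 15- to 20-degree rise described above:
for delta_t in (15, 20):
    print(f"delta-T {delta_t} F -> ~{required_cfm(40_000, delta_t):,.0f} CFM")
```

The larger the temperature difference the aisle layout allows, the less air has to move for the same heat load, which is part of why hot/cold-aisle separation pays off.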

Still, the best way to avoid being caught in a hot spot is to think strategically about what the data center will look like in three to five years, Juilliard's Giwa says.

"Even if you're in a tough budgetary environment like we are, you need to plan ahead to ensure that, as the IT services you offer continue to expand, your infrastructure keeps pace with those demands," Giwa says. "It's difficult to predict precisely what the future might look like, but it's always a useful exercise."

4 Signs Your Data Center's Too Hot (or Too Cold)

1. Equipment is failing: Frequent drive or memory failures are often the first clue that the data center is running too hot.

2. You're glowing in the dark: If infrared cameras or an instant-read thermometer reveal hot spots, the airflow isn't efficient.

3. The cold aisle is too cold: When cold aisles feel as though they're trapped in a blizzard, the AC and airflow are too high, Gartner Research Director Nik Simpson says. The ideal temperature should be close to 80 degrees Fahrenheit. If it's much lower, "you might as well be in the basement shoveling dollar bills into the furnace." (A simple inlet-temperature check is sketched after this list.)

4. Your fear factor is on the rise: If plugging in a new device trips a breaker, causing racks of servers to crash, the data center isn't ready for high-density computing, Simpson says. It's time for a major upgrade.
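The cold-aisle guidance in item 3 lends itself to a trivial automated check. The band below roughly tracks the commonly cited ASHRAE recommended inlet range; the sensor names and readings are invented for illustration:

```python
# Flag cold-aisle inlets that run hot (risking equipment) or needlessly cold
# (wasting cooling cost). The band roughly tracks the commonly cited ASHRAE
# recommended inlet range; sensors and readings below are hypothetical.

RECOMMENDED_RANGE_F = (64.0, 80.0)

inlet_temps_f = {"cold-aisle-1": 62.5, "cold-aisle-2": 71.0, "cold-aisle-3": 83.4}

low, high = RECOMMENDED_RANGE_F
for sensor, temp in inlet_temps_f.items():
    if temp > high:
        print(f"{sensor}: {temp} F, too hot: check airflow and cooling capacity")
    elif temp < low:
        print(f"{sensor}: {temp} F, overcooled: likely paying for unneeded AC")
    else:
        print(f"{sensor}: {temp} F, within range")
```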
