Power and Cooling Strategies for Every Organization
A few years ago, the city of Roseville, Calif., sought to optimize energy efficiency and cooling in its data center. We had outgrown our aging uninterruptible power supply system and also wanted to improve the cooling layout and configuration. These infrastructure improvements were needed to accommodate spiraling storage demand and to replace antiquated equipment.
After we deployed three 10-ton Liebert CRV cooling units, the data center was once again able to operate within its design temperature and humidity ranges while gaining cooling capacity. The data center's energy bill has decreased, saving the city approximately $14,000 per year.
Teaming with our partners, we achieved a creative and effective solution to our power and cooling challenges. But the deployment was not without its obstacles. What follows are some lessons learned that others embarking on power and cooling initiatives may draw from.
Begin tracking energy expenses now.
If your organization is going to embark on a project such as this, graph your current energy costs so you can demonstrate return on investment and concrete results after implementation.
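For illustration only, here is a minimal sketch of that calculation. The monthly figures and project cost are made-up placeholders, not the city's actual utility data; the point is simply how before-and-after costs translate into annual savings and a simple payback period.

```python
# Minimal sketch: turn monthly energy costs before and after a project into
# an annual savings estimate and a simple payback period. All figures are
# hypothetical placeholders, not the city's actual utility data.

monthly_cost_before = [3100, 3050, 3200, 3150, 3300, 3400,
                       3550, 3600, 3450, 3250, 3150, 3100]  # dollars
monthly_cost_after  = [1950, 1900, 2000, 1980, 2100, 2200,
                       2300, 2350, 2250, 2050, 1980, 1940]  # dollars

project_cost = 120_000  # hypothetical installed cost of the new equipment

annual_savings = sum(monthly_cost_before) - sum(monthly_cost_after)
payback_years = project_cost / annual_savings

print(f"Annual savings: ${annual_savings:,}")
print(f"Simple payback: {payback_years:.1f} years")
```

Having a baseline like this before the project starts is what makes the after-the-fact savings claim credible.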
Select the right solution.
Our old cooling system comprised two Liebert Vertical Deluxe System 3 units and four Mitsubishi mini-split systems. Hot air from the rear of the server cabinets would migrate back to the tops of the Liebert DS3s placed around the perimeter of the room. Along the way, the return air tended to mix with the supply air, which lowered return air temperatures. That design is not the most efficient way to cool data center racks.
Instead, we deployed the Liebert CRV self-contained row-based solution. Placing the units in the row of server racks provides cooling close to the server heat source, allowing more efficient airflow. The rear air migrates directly from the back of the server rack to the return side of the Liebert CRV units.
We had to relocate a cooling unit after installation because one area of the data center wasn’t cooling adequately. To avoid this, drill down in the design by charting the desired temperatures in all areas of the data center, and make sure your design accommodates all areas.
For power protection, we selected two Liebert NX systems that are scalable from 40 kilovolt-amperes (kVA) to 80 kVA to accommodate future growth. The systems are installed in a “2N” or “A/B” configuration so that each is independent of the other. Our computing loads are configured with dual-input power supplies, with each input fed from a different UPS, providing electrical isolation and redundancy.
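As a rough sketch of the sizing logic behind a 2N design, the check is simply that either UPS alone can carry the full projected load. The 40 kVA frame size comes from our deployment; the load and growth figures below are hypothetical, not our actual numbers.

```python
# Minimal sketch of the capacity check behind a 2N (A/B) UPS design: either
# UPS alone must be able to carry the entire projected load. The 40 kVA frame
# size comes from the article; the load and growth figures are hypothetical.

ups_capacity_kva = 40        # each Liebert NX frame as installed (scalable to 80)
current_load_kva = 22        # hypothetical measured IT load
growth_allowance = 1.25      # hypothetical planning margin for future growth

projected_load_kva = current_load_kva * growth_allowance
utilization = projected_load_kva / ups_capacity_kva

print(f"Projected load: {projected_load_kva:.1f} kVA "
      f"({utilization:.0%} of one UPS)")
print("2N check:", "OK, either UPS can carry the full load"
      if utilization <= 1.0 else "undersized for a 2N design")
```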
See the products in action.
When you’re evaluating power and cooling solutions, visit a site that’s operating the devices. Only when the units are running can you hear how loud they really are; the cooling units we chose turned out to be a lot noisier than our old ones.
Build in redundancy.
The cooling units have built-in intelligence that allows them to work together to provide optimal cooling. As a fail-safe, if one unit fails, the others can pick up the slack. Should more than one in-row cooling unit fail, one of our old cooling systems serves as a backup.
Opt for proactive monitoring.
To remotely monitor the Liebert CRVs and NX UPSs, we installed IntelliSlot Web Cards. A web-based administration page offers many useful system details. For example, we can remotely monitor the temperature at the CRV units, as well as each individual rack’s temperature sensor. We also can see real-time system load and detailed battery information for the UPSs. Finally, we can use the SMTP notification capabilities to contact service personnel via e-mail when necessary.
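The card’s own SNMP and SMTP features handle notification for us, but for readers building their own alerting, here is a generic sketch of the pattern: check a reading against a threshold and e-mail staff when it is exceeded. The sensor-reading function, host names and addresses are hypothetical placeholders, not the IntelliSlot card’s actual interface.

```python
# Generic sketch of threshold-plus-e-mail alerting. The sensor-reading function,
# host names and addresses are hypothetical placeholders; they are not the
# IntelliSlot card's actual interface, which provides SNMP and SMTP
# notification on its own.

import smtplib
from email.message import EmailMessage

TEMP_THRESHOLD_F = 90.0

def read_rack_temperature_f(sensor_id: str) -> float:
    """Stand-in for however readings are actually collected."""
    return 87.5  # hypothetical reading

def send_alert(sensor_id: str, temp_f: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Data center alert: {sensor_id} at {temp_f:.1f} F"
    msg["From"] = "dc-monitor@example.gov"        # placeholder addresses
    msg["To"] = "oncall@example.gov"
    msg.set_content(f"Sensor {sensor_id} reported {temp_f:.1f} F, above "
                    f"the {TEMP_THRESHOLD_F:.0f} F alert threshold.")
    with smtplib.SMTP("mail.example.gov") as smtp:  # placeholder mail relay
        smtp.send_message(msg)

temperature = read_rack_temperature_f("rack-07")
if temperature > TEMP_THRESHOLD_F:
    send_alert("rack-07", temperature)
```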
Coordinate construction and cutover.
The server room had to remain operational throughout construction, which presented challenges. For one, the project required installers to create access and routing for new refrigerant piping through heavy concrete walls and driveways. Before the equipment pad was installed, the contractor excavated below the first floor and core-drilled ten 2-inch holes through 24 inches of concrete. After the refrigerant lines and the conduits for electrical and controls were installed, workers took extra care to seal and waterproof the pipes and conduits.

All units had to be powered off in order to migrate to the new UPSs. We managed this by scheduling the cutover date well in advance and coordinating with customers, partners and system administrators. On the cutover date, the IT department was nearly fully staffed to execute the switch and make sure all systems came back online smoothly.
Learn how to accommodate small spaces.
Our team had to work with very confined floor space, under-floor restrictions and minimal space above the ceiling tiles. This meant that the different crafts (electricians, sheet metal workers, pipefitters, floor installers, cable installers and startup personnel) had to coordinate access and schedules to allow for safe and effective work while keeping the data center running.
Despite the obstacles, our data center power and cooling project was successful. We achieved our goals of reliability, energy efficiency and scalability. We hope that sharing what we’ve learned helps others to achieve a successful implementation in their own data centers.
Temperature Is Rising
The city of Roseville’s previous cooling units generated return air temperatures of about 72 degrees Fahrenheit. Once we deployed the new Liebert CRVs, their close proximity to the load produced return air temperatures of about 85 degrees. Higher return air temperatures mean greater heat exchange and higher overall capacity from each unit. Supply air temperatures feeding the server cabinets are controlled by sensors at the racks linked to a controller. Cool, 65-degree air reaches the supply side of the server cabinets, while warm, 85-degree air returns to the units without the two airstreams mixing, as they did in the previous design.
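To see why the higher return temperature matters, here is a minimal sketch using the common rule of thumb for sensible cooling of air at standard conditions (BTU/hr ≈ 1.08 × CFM × ΔT in degrees Fahrenheit). The airflow figure and the old design’s supply temperature are assumptions for illustration, not measured values from our site.

```python
# Minimal sketch of why a higher return air temperature raises capacity. Uses
# the common rule of thumb for sensible cooling of air at standard conditions:
# BTU/hr ~= 1.08 * CFM * delta_T (deg F). The airflow and the old design's
# supply temperature are assumptions for illustration.

AIRFLOW_CFM = 4000  # hypothetical airflow through one cooling unit

def sensible_capacity_btuh(return_f: float, supply_f: float, cfm: float) -> float:
    return 1.08 * cfm * (return_f - supply_f)

# Old perimeter layout: mixing pulled return air down to about 72 F
# (supply assumed at 65 F for comparison).
old = sensible_capacity_btuh(72, 65, AIRFLOW_CFM)

# New in-row layout: 85 F return, 65 F supply, with no mixing.
new = sensible_capacity_btuh(85, 65, AIRFLOW_CFM)

print(f"Old delta-T of  7 F: {old:>9,.0f} BTU/hr")
print(f"New delta-T of 20 F: {new:>9,.0f} BTU/hr")
```

With the same airflow, nearly tripling the temperature difference across the coil is what lets the in-row units deliver so much more usable cooling.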