
May 13, 2019
Data Center

4 Key Considerations to Scale GPU-Ready Data Centers

Higher education institutions looking to adopt high-performance computing should plan for specific design and power requirements.

Universities interested in high-performance computing are investing in graphics processing units, which deliver faster processing times and greater memory bandwidth than traditional central processing units.

At the University of Illinois at Chicago, GPU clusters significantly reduce the time researchers need to complete complex computational tasks.

“One of the best ways you can understand mechanics of diseases such as cancer and develop more effective therapies is by simulating the behavior of molecules in our body with a supercomputer, to see how proteins and a drug would interact,” Himanshu Sharma, who directs UIC’s Advanced Cyberinfrastructure for Education and Research (ACER), told EdTech. “You can speed up molecular dynamics research tremendously with applications optimized for GPUs versus CPUs only. Your computation time can go down by a factor of eight to 10.”


4 Areas to Consider When Adopting GPUs

While GPU systems can provide higher performance than typical CPU-only systems, they require a more advanced approach to design and operation. In a 2018 technical overview, Nvidia advises organizations to consider the following points:

  1. Power Density: For the highest efficiency, admins should plan for power densities of 30 to 50 kilowatts per rack, as well as controlled-temperature airflow into the systems. Component-level cooling provides improved performance per watt. (A worked rack-power example follows this list.)

  2. Throughput: Because of the performance increases from GPUs, admins should re-evaluate all system subcomponents to minimize bottlenecks at other points in the computing environment.

  3. Networking and Storage: GPU systems drive large data transfers and require a robust, low-contention network to achieve good scaling. Admins should also ensure that storage systems can keep up with the needs of a given workload. For example, running parallel HPC applications may require storage technology that allows multiple processes to access the same files at the same time (see the shared-file sketch after this list).

  4. Monitoring and Management: With dense GPU systems, monitoring and management become even more important, as admins must track the health and utilization of every node to ensure consistent performance (see the monitoring sketch after this list).
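
To make the power-density point concrete, the short Python sketch below checks a hypothetical rack configuration against the 30 kW to 50 kW range cited above. The server count and per-server wattage are illustrative assumptions, not figures from Nvidia or from this article.

```python
# Back-of-the-envelope rack power-density check.
# The per-server draw and rack size below are illustrative assumptions,
# not vendor specifications.

def rack_power_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total electrical load of one rack, in kilowatts."""
    return servers_per_rack * watts_per_server / 1000.0

# Hypothetical example: four dense GPU servers drawing roughly 10 kW each.
load_kw = rack_power_kw(servers_per_rack=4, watts_per_server=10_000)

# Nvidia's guidance cited above targets roughly 30-50 kW per rack.
if 30 <= load_kw <= 50:
    print(f"{load_kw:.1f} kW per rack is within the 30-50 kW design range")
else:
    print(f"{load_kw:.1f} kW per rack falls outside the 30-50 kW design range")
```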
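
On the storage point, the sketch below shows the access pattern a parallel file system must support: several processes writing disjoint regions of one shared file at the same time. It assumes mpi4py and an MPI launcher are available (for example, mpirun -n 4 python shared_file_demo.py); the file name and record size are arbitrary.

```python
# Minimal sketch of several MPI processes writing non-overlapping regions of a
# single shared file, the access pattern parallel HPC storage must handle well.
# Assumes mpi4py is installed and the script is started with an MPI launcher,
# e.g.: mpirun -n 4 python shared_file_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

record = np.full(1024, rank, dtype=np.int32)   # each rank's payload
offset = rank * record.nbytes                  # non-overlapping byte offset

fh = MPI.File.Open(comm, "shared_output.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(offset, record)                # collective write to the shared file
fh.Close()
```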
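
For monitoring, Nvidia's nvidia-smi utility exposes per-GPU utilization, memory, temperature and power in a scriptable CSV format. The Python sketch below polls those fields once; the 85-degree alert threshold is an arbitrary illustrative value, not a recommendation from the article.

```python
# Poll per-GPU health metrics once via nvidia-smi's CSV query mode.
# Requires the Nvidia driver; the temperature threshold is illustrative only.
import subprocess

FIELDS = "index,utilization.gpu,memory.used,memory.total,temperature.gpu,power.draw"

output = subprocess.run(
    ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.strip().splitlines():
    idx, util, mem_used, mem_total, temp, power = [v.strip() for v in line.split(",")]
    print(f"GPU {idx}: {util}% util, {mem_used}/{mem_total} MiB, {temp} C, {power} W")
    if float(temp) > 85:                       # arbitrary alert threshold
        print(f"  warning: GPU {idx} is running hot")
```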

To learn more about how universities are using high-performance computing to improve research, education opportunities and funding, check out “Universities Leverage High-Performance Computing for Multiple Returns on Investment.”
