On the plains of Canyon, Texas, West Texas A&M University plans to deploy a software-defined data center (SDDC), recognizing that it will take a few years to reach that goal.
“We’ve built a virtualized private cloud to support the growth of the campus,” Webb says. “It has given us the ability to scale and be more agile, providing IT resources and the ability to spin up academic services, such as online courses.”
The university also recently deployed storage virtualization. The combination of server and storage virtualization enables Webb to manage IT resources more efficiently across the main campus, a disaster recovery facility in nearby Amarillo and a colocation site. “Now, I can replicate the data center to the Amarillo facility and do it all in software,” he says.
Webb says software-defined networking has the potential to reduce the number of switches required, cutting hardware costs. “We also hope that by virtualizing the networking gear, we’ll be able to add logical networking segments without having to bring down the network, thus reducing downtime and giving students, faculty and staff the uptime they expect today,” he explains. The university will take a closer look at the technology when its hardware is due for an upgrade.
Richard Villars, vice president of data center and cloud for IDC, says the move to SDDCs is an evolutionary process in which servers, storage and networking come to be managed as a single IT resource.
“Organizations have had great success with server and storage virtualization, and network virtualization holds great promise,” Villars says. “As they approach their refresh cycles, many organizations will look for ways to make them interdependent.”
Managing a Wise Testbed
The University of Wisconsin-Madison (UW-Madison) has joined a National Science Foundation program called CloudLab to create three testbeds for SDDCs.
Aditya Akella, associate professor in the department of computer sciences, says the CloudLab testbeds are being developed at UW-Madison, the University of Utah and Clemson University. “The idea is to create a testbed where academic researchers can build and experiment with disruptive architectures for SDDCs,” he says. “They can also deploy novel cloud applications and understand how they perform on current and future SDDCs.”
Akella cites, as an example, climate scientists spinning up real-time weather analysis in a cloud designed for low-latency analytics over massive data streams. “We also plan to open up this capability to students where they can create virtual machines in CloudLab and design and test applications as part of their coursework,” he says.
UW-Madison’s CloudLab equipment arrived in January and includes 240 Cisco Systems UCS servers, 12 Nexus 3172 top-of-rack switches, six Nexus 3132 aggregation switches and a core Nexus 3172 switch. With support for the OpenFlow 1.0 protocol, the cluster can be used for software-defined networking and network virtualization experiments.
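To give a sense of what OpenFlow 1.0 experimentation on such a cluster involves, the sketch below builds the protocol's 8-byte message header in Python. This is an illustrative example based on the public OpenFlow 1.0.0 specification, not CloudLab's actual tooling; the function name `ofp_message` is our own.

```python
import struct

# OpenFlow 1.0 constants, per the OpenFlow 1.0.0 specification
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0  # symmetric hello, exchanged when a switch and controller connect

def ofp_message(msg_type, payload=b"", xid=0):
    """Build an OpenFlow 1.0 message: an 8-byte header plus payload.

    Header layout (network byte order): version (1 byte), type (1 byte),
    total message length (2 bytes), transaction id (4 bytes).
    """
    length = 8 + len(payload)
    return struct.pack("!BBHI", OFP_VERSION_1_0, msg_type, length, xid) + payload

# A HELLO message is just the bare header with no payload
hello = ofp_message(OFPT_HELLO, xid=1)
```

Every OpenFlow 1.0 message a controller sends to one of the cluster's switches, including the flow-table modifications at the heart of SDN experiments, begins with this same header.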
Each server boasts 128GB of DRAM, one or two hard drives of 1TB to 1.2TB capacity, and a 480GB solid-state drive. In addition, Seagate has donated 120 3TB hard drives.
“This storage capability will let experimenters test storage virtualization technologies that combine main memory, solid-state storage and hard drives in novel ways,” Akella concludes.
3 Musts for a Software-Defined Data Center
Richard Villars, vice president of data center and cloud for IDC, outlines three goals IT managers should set for developing a software-defined data center (SDDC).
- Eliminate overprovisioning. When IT organizations embraced server virtualization, they were able to dramatically reduce the number of physical servers required. SDDCs enable IT organizations to extend this concept to storage and network appliances.
- Streamline operational costs. As organizations virtualize, they can consolidate staff or shift them to other tasks, which reduces operational costs over time.
- Reduce the cost of future migrations. In the past, migrating to a new server, storage or network system could take six months and more than $500,000. Because virtual hardware is easier to provision, organizations can migrate more efficiently and spread the costs out over three to five years.