There's nothing like foresight.
A few years ago, the IT team at the University of Arizona in Tucson realized that, with the deployment of more data-intensive applications and the growing popularity of server virtualization, demand for connectivity in the university's data center would soon outpace what the existing infrastructure could deliver.
So in 2009, the university began a five-year project to replace all of its administrative systems and became an early adopter of Nexus switches from Cisco Systems.
"We looked at the network architecture and physical layout, every aspect of the data center, and determined that what we had in place at the time was not going to scale to meet the needs of this project," says Derek Masseth, senior director of infrastructure services at the university.
Masseth and his team recognized that the old system could not meet the needs of incoming compute density, particularly with the university's virtualization requirements. That's when Masseth started investigating emerging switch technologies.
"I knew at the time it was risky, and that Nexus was first-generation hardware in late 2009," he says. "But we saw it as something we had to do."
Masseth is glad he and his team took the plunge. Today, 10 Gigabit Ethernet top-of-rack switches are practically a necessity for colleges and universities that want to deliver the bandwidth levels that can support server virtualization, emerging video applications and overall performance improvements.
A top-of-rack switch is a 24- to 48-port fixed-port switch typically deployed at or near the top of a server rack in a data center. The switch normally delivers server connectivity for IP data or converged IP and storage traffic. Networking manufacturers call them top-of-rack switches to distinguish them from smaller, low-cost, lower-port-count switches.
Matthias Machowinski, directing analyst for enterprise networks and video at Infonetics Research, says top-of-rack switches offer IT departments the ability to converge multiple networks or use fewer network adapters. Higher education institutions are obvious candidates for these switches, he adds, because they have large user populations, are technologically advanced and run many apps with high bandwidth requirements.
"Virtualization is also driving higher utilization levels in servers and server refreshes," Machowinski says. "If you're updating your servers, you're more likely to move to 10 Gig-E connectivity."
Ready for Anything
Because Dickinson College in Carlisle, Pa., already has a virtualized environment, it chose 10 Gig-E top-of-rack switches primarily for backup purposes and to establish bandwidth between the college's two data centers, says Kevin Truman, director of infrastructure systems.
Dickinson's IT staff built an on-campus disaster recovery center, putting their development systems in one data center and production systems in another, at which point it became clear that they needed to increase bandwidth between the two centers, Truman says.
The college is using several models from HP's ProCurve line. Of Dickinson's 40 servers, 10 serve as virtualization hosts, together running 79 virtual servers. As the other 30 servers are replaced or phased out, the applications they house will move to the virtual server environment.
"The 10 Gig-E link between the data centers has provided the college with viable redundant facilities and enabled fast and reliable backup capabilities," Truman says.
Think Like One
When the College of DuPage built a second data center for disaster recovery two years ago, the goal was to make the two data centers look like one, with VMware servers on each side, "so if one goes down, the other kicks in," says Rich Kulig, manager of network services at COD, a community college in Glen Ellyn, Ill.
Virtualization has let the IT department consolidate servers and spread them across both data centers, Kulig says. When the project is completed, if one data center is lost, the college will be able to run off the other data center.
COD also wanted a seamless Ethernet network, but the challenge was figuring out how to extend the Ethernet segment across both data centers to avoid issues with IP addressing.
The answer, Kulig discovered, was 10 Gig-E top-of-rack switches. The college deployed the A5800 switch series from HP.
"We're trying to build true redundant systems. Both have a SAN, so if we lose one side, everything would move over to the other data center," Kulig says.
COD chose HP because of its Intelligent Resilient Framework (IRF) grouping feature, Kulig says, which allows the college to virtualize the Ethernet switches across both data centers. That let the IT staff create the same IP segments on both campuses, making it easy to move systems as they transition back and forth.
"If I had a disaster in one of the data centers, I would have a problem with IP addresses and DNS resolution in the other data center," Kulig explains. "With virtualization, it's like one big switch, and I don't have to mess with that if I were to lose half a data center."
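Kulig's "one big switch" setup can be sketched in broad strokes. The fragment below is illustrative only — the command names follow HP's published Comware IRF conventions, but the exact syntax, interface numbering and port choices are assumptions that vary by model and firmware — and it shows the general idea: two A5800s, one in each data center, bound into a single logical switch over a dedicated 10 Gig-E link.

```
# Illustrative IRF sketch (assumed Comware-style syntax; verify against HP docs).
# Switch A (data center 1) keeps member ID 1 and gets the highest priority,
# so it is elected the IRF master.
irf member 1 priority 32

# Dedicate a 10 Gig-E port to the inter-switch link and bind it to IRF port 1/1.
interface ten-gigabitethernet 1/0/51
 shutdown
irf-port 1/1
 port group interface ten-gigabitethernet 1/0/51
irf-port-configuration active

# Switch B (data center 2) is renumbered to member ID 2, configured the same
# way, and cabled to Switch A. After both reboot, the two chassis present one
# management address, one configuration and one set of VLANs.
```

Because the stacked pair behaves as a single switch, a VLAN defined once spans both data centers — which is what lets COD's systems keep their IP addresses and DNS entries as they move between sites.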
113 Degrees Fahrenheit
The maximum operating temperature of an HP A5800 10 Gigabit Ethernet switch
Performance has been excellent, he says, although his team hasn't yet taken full advantage of all the 10 Gig-E ports. The switches were purchased with an eye toward virtualizing the college's primary student registration system, whose five-year-old hardware is about to be replaced.
COD now has about 60 virtualized servers split across campus on 10 VMware hosts. Kulig says another 10 to 15 stand-alone servers that run student systems for registration, admissions, payment and financials will move to VMware by the end of 2011.
Kulig estimates the cost of the switches at around $60,000, and says the team didn't experience any issues during the upgrade. "Everything went unbelievably well, quite honestly," he says. "It was way easier than we anticipated. It was up in one afternoon … I can't think of any reason not to upgrade, especially if you're doing a lot of virtualization."
Behind the Racks
The University of Arizona is using Cisco Nexus 5010 and 5020 switches that support converged Fibre Channel over Ethernet (FCoE). The top-of-rack switches connect to the IP network through a Cisco Nexus 7010 switch and to a storage area network through a Cisco MDS 9509 Multilayer Director.
Arizona's Derek Masseth says converged FCoE, with IP and storage traffic driven across the same physical wire, frees up considerable space inside the rack and simplifies cooling and cable management.
Masseth says the move cut the university's capital expenditures by 50 percent from the start: Maintaining the old architecture would have cost $1.2 million, while migrating to FCoE, along with 10 Gig-E top-of-rack switches, cost only $600,000.
Since the deployment, throughput has improved significantly, which drives up the ratio of virtual to physical servers the university can deploy.
One of the most limiting factors of the previous architecture wasn't the amount of computer memory in a server, but rather the amount of connectivity to each machine. "With 10 Gig-E and the ability to run Fibre Channel and IP over the same physical wire, we're able to drive those ratios very, very high," Masseth says. The 10 Gig-E switches and converged network mean there are fewer cables to manage.
"The convergence had a pretty dramatic impact on our staffing, and we've paid very close attention to the human aspect of the architecture. That's worked out well for us," Masseth says.