Imagine the effect on a university's network when thousands of students rush to register online for classes the moment the registration period opens. Even with plenty of storage and network bandwidth, such a surge is bound to tax resources, placing enormous stress on the core infrastructure all at once.
That was the scenario the IT staff at Loyola University in Chicago faced a few years ago. Although the staff had taken many proactive steps over the years, application and network storage performance continued to suffer during peak times.
The staff knew the infrastructure was sound because it had progressively added capacity and units to its IBM storage area network, starting several years ago with an IBM System Storage DS4000 series SAN and eventually adding three model DS4800 units to the mix. The IBM gear, which mixes Serial ATA and Fibre Channel disk storage on demand, is critical to the environment, interfacing with almost everything in the data center.
But it wasn't enough. As the environment grew – two SANs reside at the campus's main data center and two at a disaster recovery center in downtown Chicago – it became harder and harder to manage the SANs, and performance suffered.
"Because the production system was online all the time, it was difficult to add more storage or change where volumes were pointing," explains Dan Vonder Heide, the university's director of infrastructure services. It became increasingly difficult to administer databases that resided on older, slower disks or on partitions not configured for the application's performance demands, he adds.
To solve the problem, an IBM System Storage SAN Volume Controller (SVC) was added at each location last year. The units cache data traveling between the servers and the SAN, boosting performance. And because the SVC virtualizes the layer between hosts and storage, the IT staff can move data to different locations in real time without outages.
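The nondisruptive migration the SVC enables comes from indirection: hosts address virtual volumes, and a mapping layer decides which physical array actually holds the data. A minimal toy model (not IBM's implementation; all names and the in-memory "arrays" are illustrative) sketches the idea:

```python
# Toy model of an SVC-style virtualization layer: hosts read and write
# virtual volumes, while a mapping table decides which physical array
# holds the data. Migrating a volume just updates the mapping; the
# host-facing volume name never changes, so no outage is required.

class VirtualizedStorage:
    def __init__(self):
        self.mapping = {}   # virtual volume name -> physical array name
        self.arrays = {}    # physical array name -> {volume: data}

    def add_array(self, array):
        self.arrays[array] = {}

    def create_volume(self, vol, array):
        self.mapping[vol] = array
        self.arrays[array][vol] = b""

    def write(self, vol, data):
        self.arrays[self.mapping[vol]][vol] = data

    def read(self, vol):
        # Hosts never see which array serves the request.
        return self.arrays[self.mapping[vol]][vol]

    def migrate(self, vol, new_array):
        # Copy the data, then flip the mapping. The host keeps using
        # the same virtual volume name throughout the move.
        old_array = self.mapping[vol]
        self.arrays[new_array][vol] = self.arrays[old_array].pop(vol)
        self.mapping[vol] = new_array

san = VirtualizedStorage()
san.add_array("slow-tier")
san.add_array("fast-tier")
san.create_volume("student-db", "slow-tier")
san.write("student-db", b"registration records")
san.migrate("student-db", "fast-tier")  # move to faster disks, no remount
assert san.read("student-db") == b"registration records"
```

A real controller would copy blocks in the background while serving I/O, but the payoff is the same: the volume's identity is decoupled from its physical location.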
Immediately after implementing the SVCs, the staff noticed improvements in both performance and management. For example, the first registration period with the SVCs in place was the smoothest in memory, with no performance pain points during peak registration periods, says CIO Susan Malisch.
In addition, the database team saw an immediate 30 percent performance boost on its database servers – a huge improvement that, when combined with an enterprise virtualization strategy, allows the organization to extend the lifecycle of its servers, Malisch notes.
There were other benefits as well. The time it takes to complete a backup has dropped between 20 and 50 percent, depending on the type of files being backed up. For example, a database backup that used to take one hour can now be completed in roughly 40 minutes. The system now has 62 terabytes of storage managed by the SVC, and Vonder Heide expects that to increase to roughly 80TB by the end of the fiscal year. Finally, the team has seen boot times for its blade server infrastructure (about 60 servers) decrease from four minutes to one minute.
"Overall performance improvements tied to easier hardware management have translated into increased student satisfaction at our campuses, making the purchase of SVC a solid strategic investment," Malisch says.
The Path to SANs
For Loyola, the path toward a SAN deployment started with a need to provide more scalable and responsive storage as a new student information system was implemented. But for many other universities, the need for SANs or other networked storage often follows a move to server virtualization.
"Once you get into server virtualization, you absolutely have to have some type of networked storage because you need visibility from the virtual machines down to the arrays," says Bob Laliberte, a senior analyst at Enterprise Strategy Group.
That was exactly the path taken by the network administrators at the Geographical Information Center at California State University, Chico. Because the center serves as both a GIS repository for students and a community and government resource, it must meet strict requirements for availability and downtime.
After solving part of the problem with a move to a VMware virtual server environment, the center then chose to implement iSCSI SANs with exceptionally high availability and scalability. It found the right performance combination in a pair of Cybernetics SAN D-series units. Scalability is important to the university because of the sheer size of the geographic images the center stores: One high-resolution aerial photograph can take up as much as 2 gigabytes of space.
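At 2 gigabytes per high-resolution aerial photograph, capacity planning becomes simple but sobering arithmetic. A quick back-of-the-envelope sketch (the usable capacity and ingest rate here are hypothetical, not the center's actual figures):

```python
# Rough capacity math for a GIS image archive, assuming up to 2 GB per
# high-resolution aerial photograph. Capacity and ingest figures are
# illustrative, not from the article.

GB_PER_IMAGE = 2
usable_tb = 10                                  # hypothetical usable capacity
images_that_fit = (usable_tb * 1024) // GB_PER_IMAGE

images_added_per_month = 200                    # hypothetical ingest rate
months_until_full = images_that_fit // images_added_per_month

print(images_that_fit)    # 5120 images in 10 TB at 2 GB apiece
print(months_until_full)  # ~25 months at 200 new images per month
```

Arithmetic this simple makes the case for scalable arrays: doubling image resolution roughly quadruples the footprint, so headroom disappears faster than head counts suggest.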
High availability also was critical because so many groups rely on the data, says GIS network administrator Randy Needham.
"A lot of the iSCSI SANs we looked at didn't have high availability and automatic failover at a reasonable price, but that was critical to us," he says. "A lot of them required migration of the data, and we would have to remap the location of the data if one of them went down. That wasn't viable for us."
For the Isenberg School of Management at the University of Massachusetts Amherst, the move to storage area networking has been increasingly aggressive. The first implementation came about three years ago, when the IT team realized that to manage its VMware server farm with appropriate efficiency and resiliency, it was time to implement a SAN.
The team installed two D-Link DSN-3400 iSCSI SAN arrays, one at each of the school's two data centers. This let the staff reprovision storage more quickly without affecting server performance, respond more quickly to disasters, and service equipment without impacting the network.
Sidebar stat: the percentage of organizations that plan to buy new SAN storage systems, among those that have budgeted for data storage purchases (Source: Enterprise Strategy Group, based on a survey of 515 IT managers).
As the group's infrastructure and storage requirements grew, the D-Link SANs were repurposed to handle long-term storage of classroom video materials. In their place, the group recently installed two NetApp SANs – a Fibre Channel FAS3140 at the main data center and an FAS2040 at the data center across campus.
"We had grown to the point where we needed more functionality, and these units gave us replication, deduplication and the ability to upgrade as we grow," explains Dale Starr, manager of systems and operations.
The strategic communications department at Riverside Community College District in Riverside, Calif., has taken a similar path. For years, the department, which is separate from the college's main IT department, relied on the storage in its six aging servers plus some borrowed capacity on the IT department's Fibre Channel SAN to handle its storage needs.
But as the department grew, it needed to become self-reliant. In preparation for a move to a virtualized environment based on blade servers early next year, the group has installed the iSCSI-based Overland Storage SnapServer SAN S2000.
"It was really time to stand on our own, and getting the SAN is the first step toward creating a storage system that will help us reprovision faster and handle our growing storage needs," says Darren Dong, director of communications and web development.
Virtualization with a Side of SAN
When it's time to virtualize your servers, you're going to need networked storage.
The reason? If you want to take full advantage of the high availability and mobility features that server virtualization offers, says Bob Laliberte, a senior analyst at Enterprise Strategy Group, the virtual machines need to be connected to shared storage.
With that type of dynamic environment, it's critical that an organization have end-to-end visibility from the virtual machine through the network and down to the storage arrays. That's important, Laliberte says, because many of the problems that occur with virtual machines aren't tied to the hypervisors or even the physical server environment, but to network congestion and conflicts in the storage arrays.
A SAN is one networked storage choice that fits the bill. Both iSCSI and Fibre Channel SANs are options; iSCSI tends to be more cost effective, but Fibre Channel can offer better performance and lower latency.
If you take the SAN route, also look into vendors that offer advanced capabilities, such as ensuring that quality-of-service settings and access control lists dynamically move with the virtual machine. And remember to carefully map out the workloads across the virtual machines and the shared storage environment; this will deliver more balanced performance.
"When you have different workloads on your virtual machines, for example, make sure they aren't all trying to get to the same disk in the shared disk array," Laliberte says.
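The balancing Laliberte describes can be sketched as a simple placement problem: assign each VM workload to the least-loaded disk group rather than letting them pile onto one. The greedy heuristic below is a minimal illustration (the VM names, IOPS figures, and LUN names are made up, and real arrays weigh far more than raw IOPS):

```python
# Minimal sketch of spreading VM workloads across shared disk groups so
# they aren't all hitting the same disks. Greedy heuristic: place the
# heaviest workloads first, each onto the currently least-loaded LUN.
# All names and IOPS figures are illustrative.

import heapq

def place_workloads(workloads, luns):
    """Assign each (name, iops) workload to the least-loaded LUN."""
    heap = [(0, lun) for lun in luns]   # (current load, LUN name)
    heapq.heapify(heap)
    placement = {}
    for name, iops in sorted(workloads, key=lambda w: -w[1]):
        load, lun = heapq.heappop(heap)
        placement[name] = lun
        heapq.heappush(heap, (load + iops, lun))
    return placement

vms = [("email", 900), ("web", 300), ("db", 1200), ("backup", 500)]
print(place_workloads(vms, ["lun-a", "lun-b"]))
```

Run on the sample figures, the two heaviest workloads ("db" and "email") land on different LUNs, which is exactly the outcome the advice aims for.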