[Photo: Fitchburg State CIO Charlie Maner]
Aug. 20, 2007

Speed Bump

iSCSI SAN performance on campuses will get a boost from 10-Gigabit Ethernet ... eventually.

Higher education institutions that implement Internet Small Computer System Interface (iSCSI) storage area networks instead of Fibre Channel SANs usually do so for two reasons: Because iSCSI works by sending Internet Protocol packets over Ethernet, institutions can save money by reusing their existing Ethernet networking equipment, and they don’t have to hire information technology staff with specialized Fibre Channel expertise.
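The appeal is structural: iSCSI wraps ordinary SCSI commands in TCP/IP, so storage traffic travels as standard Ethernet frames that commodity switches can carry. The toy Python sketch below illustrates that layering; the header layouts are simplified stand-ins, not the wire-accurate RFC 3720 encodings.

```python
import struct

# Toy illustration of the layering that lets iSCSI ride ordinary
# Ethernet gear: a SCSI command wrapped in an iSCSI PDU, inside TCP,
# inside IP, inside an Ethernet frame. Header layouts are simplified
# stand-ins, not wire-accurate RFC 3720 encodings.

def encapsulate(scsi_cdb: bytes) -> bytes:
    # A real iSCSI Basic Header Segment is 48 bytes and carries the
    # CDB itself; here we sketch only the opcode (0x01 = SCSI Command)
    # and zero-fill the rest.
    iscsi_pdu = struct.pack("!B47x", 0x01)
    tcp = struct.pack("!HH", 49152, 3260)           # src port, iSCSI's registered port 3260
    ip = struct.pack("!BB2x", 0x45, 0)              # IPv4 version/IHL; other fields elided
    eth = b"\x00" * 12 + struct.pack("!H", 0x0800)  # MACs elided; EtherType = IPv4
    return eth + ip + tcp + iscsi_pdu + scsi_cdb

frame = encapsulate(bytes([0x28]) + b"\x00" * 9)    # 0x28 = SCSI READ(10)
print(f"frame is {len(frame)} bytes riding plain Ethernet")
```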

But iSCSI still has one big failing compared with Fibre Channel: performance. iSCSI SANs have chugged along at Gigabit Ethernet speed, while Fibre Channel SANs support 4-gigabit-per-second connectivity. That will change soon, as iSCSI SAN vendors are preparing to make 10 Gig-E technology available.

As with any new technology, there is a divergence of opinion on how large the impact will be and how soon 10 Gig-E will boost iSCSI deployment on university campuses. For now, 10 Gig-E is cost prohibitive for most SANs and will remain that way until 10 Gig-E copper ports are widely available in place of more expensive 10 Gig-E optical gear.

Even iSCSI SAN vendors don’t agree on when that will be. A handful, including LeftHand Networks, support 10 Gig-E now. Others, such as EMC and EqualLogic, don’t expect 10 Gig-E iSCSI to take off until 2008.

Greg Schulz, founder of the StorageIO Group, a Stillwater, Minn.-based storage consultancy, forecasts that 10 Gig-E will be embraced by universities with extraordinary data loads for applications such as backup and cross-campus replication, but there will be no mass migration to 10 Gig-E until the costs come down. “You have those who need the speed that will make the switch over, but for the general mass movement, now you’re in opposition to the whole value proposition of iSCSI, which is low cost and ease of use,” Schulz says. “Ten Gig-E will help iSCSI grow. It’s going to help IP backup and other IP-based storage as an enabler, but until that cost comes down, that puts it at odds with where iSCSI has been having success up until now.”

10 Gig’s Role in Storage

Still, 10 Gig-E already plays a role in pumping up iSCSI performance at Fitchburg (Mass.) State College and Stanford (Calif.) University. Fitchburg State switched from direct-attached, server-based storage two years ago to iSCSI storage arrays from LeftHand Networks after evaluating both iSCSI and Fibre Channel SAN alternatives. Charlie Maner, CIO at Fitchburg State, says his decision came down to functionality and value, as well as the need for adequate, though not excessive, application performance.

“The only thing, quite frankly, that the Fibre Channel vendors could offer is that they were faster than iSCSI, but iSCSI met my performance requirements,” Maner says. “It’s kind of like if you really need to buy a minivan, just because a Porsche is faster you’re not going to buy it.”

Maner is now taking advantage of a campuswide Ethernet infrastructure upgrade project to set the stage for 10 Gig-E migration for the iSCSI SAN. The school is installing a new Enterasys Ethernet backbone with 10-Gig uplinks. Maner manages two data centers on opposite sides of the campus with all data mirrored between the two locations. The 10-Gig connectivity will dramatically speed up that process when the upgrade is completed in the first quarter of 2008.
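Rough arithmetic shows what the faster uplinks buy for that cross-campus mirroring. The 2-terabyte dataset and 70 percent link efficiency in the sketch below are illustrative assumptions, not Fitchburg State figures:

```python
# Back-of-the-envelope mirroring window for a cross-campus link.
# The 2 TB dataset and 70% usable-link-efficiency figures are
# illustrative assumptions, not numbers from Fitchburg State.

def replication_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    bits = data_tb * 1e12 * 8                  # decimal terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # haircut for protocol/TCP overhead
    return bits / usable_bps / 3600

for gbps in (1, 10):
    print(f"{gbps:>2}-Gig uplink: {replication_hours(2.0, gbps):.1f} hours to mirror 2 TB")
# ~6.3 hours at 1 Gig shrinks to ~0.6 hours at 10 Gig.
```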

At Stanford University, the Office of Research Administration operates an iSCSI SAN while most other groups around the school have Fibre Channel storage. For Lee Merrick, IT manager for the department, the decision to deploy iSCSI eliminated the need to install and manage separate Fibre Channel and IP networks, along with the cost of the Fibre Channel gear itself.

Merrick’s department uses a 10-Gig backbone to connect locations geographically dispersed around the campus, with 5 terabytes of storage capacity on two EqualLogic PS100e iSCSI disk arrays. Each EqualLogic array is equipped with three Gigabit Ethernet connections, so the arrays can drive data to the SAN at roughly 300 megabytes per second.

Merrick says his department stuck with Ethernet when switching from direct-attached storage (DAS) to a SAN last year as part of a $100,000 infrastructure upgrade. That included two EqualLogic SANs, a 10 Gig network with HP ProCurve 3500yl switches, VMware server virtualization software, and servers.

“We wanted to go iSCSI,” he says. “We couldn’t afford the parallel infrastructure — Fibre Channel and IP equipment.”

Transitioning from direct-attached storage to an iSCSI SAN has enabled Merrick to provide a much more reliable and resilient storage infrastructure for Office of Research Administration users. For now, the 10 Gig connection is only on the server side but helps keep the storage network redundant. “We use 10 Gig primarily as a backbone pipe,” he says. “We have multiple servers and iSCSI traffic over a single link. Every switch is connected to two other switches for redundancy. We could lose three of our four sites and still recover our data.”
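Merrick’s description implies a ring: four sites, each switch linked to two neighbors. The sketch below infers that topology from his account (it is not a documented Stanford diagram) and checks that the surviving switches still reach one another when any single site fails; riding out multiple site losses, as he notes, rests on the data being replicated rather than on the fabric alone.

```python
from itertools import combinations

# Four sites in a ring, each switch linked to two others, as Merrick
# describes. The exact topology is inferred from his account.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}

def connected(alive: set) -> bool:
    """True if the surviving switches still form one reachable fabric."""
    if not alive:
        return False
    seen, stack = set(), [next(iter(alive))]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(ring[node] & alive)
    return seen == alive

for failed in combinations(ring, 1):
    survivors = set(ring) - set(failed)
    print(f"lose site {failed[0]}: fabric intact = {connected(survivors)}")
# Any one site can fail and the remaining three still reach each other;
# losing three sites leaves data recoverable only because it is replicated.
```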

Is It Overkill?

However, not everyone is sold on the benefits of 10 Gig-E for iSCSI SANs. Shaun Black, director of network services at Le Moyne College in Syracuse, N.Y., is not ready to jump on the 10-Gig bandwagon any time soon. Black is confident that his current iSCSI SAN environment, built around EqualLogic PS300e arrays, will satisfy his application requirements for at least the next two years.

“My general sense is that the whole 10-Gig Ethernet requirement for iSCSI is really a red herring,” Black says.

“I’m sure that there are certain applications that have very high I/O transaction rates where iSCSI may not be appropriate and Fibre Channel might be the only solution, but I think the percentage of applications that require that is increasingly small, particularly with companies like EqualLogic with dedicated iSCSI products in the marketplace.”

At the same time Le Moyne College moved from DAS to an iSCSI SAN, it also switched to a VMware ESX environment. Using VMware enabled the school to consolidate 35 physical servers onto six virtualization hosts.

Black had some initial concerns that the number of virtual servers could overwhelm the available bandwidth of the iSCSI SAN, but performance has proven better than expected.

“We went into the project expecting that the overhead associated with virtualization, and then combining multiple servers hitting a single back-end storage array, might result in somewhat of a performance penalty, but we’ve seen just the opposite,” says Black. “We saw a halving of our backup windows, compared with when we had direct-attached disk, which was very surprising.”

Black doesn’t envision any performance bottlenecks based on the Gig-E speed of the iSCSI SAN. Part of the reason is the modular, scalable design of the EqualLogic arrays. As he adds new arrays, each one contributes a performance boost balanced across all members of the SAN, as well as more Gig-E ports, with bandwidth aggregated across all Ethernet ports.
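The scaling Black describes can be modeled roughly. In the sketch below, three Gig-E ports per array matches the EqualLogic configuration described earlier; the usable throughput per port and the even spread of load are simplifying assumptions, and EqualLogic’s actual balancing is more sophisticated.

```python
# Rough model of scale-out growth: each added array brings its own
# controllers and Gig-E ports, so aggregate bandwidth grows alongside
# capacity. The ~100 MB/s usable per Gig-E port is an assumed rule of
# thumb, and the even load spread is a simplification.

PORTS_PER_ARRAY = 3
MB_S_PER_PORT = 100  # assumed usable throughput per Gig-E port

def san_profile(arrays: int, workload_mb_s: float) -> str:
    aggregate = arrays * PORTS_PER_ARRAY * MB_S_PER_PORT
    per_array = workload_mb_s / arrays  # load rebalanced across members
    return (f"{arrays} array(s): {aggregate} MB/s aggregate ceiling, "
            f"{per_array:.0f} MB/s carried by each member")

for n in (1, 2, 3):
    print(san_profile(n, workload_mb_s=240))
```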

“We expect that in the next two years we’ll need to add another array so we can add more capacity and more I/O capacity with the way EqualLogic does clustering,” he says.

The Need for Speed: Fibre Channel vs. iSCSI

The great debate between Fibre Channel and iSCSI for storage area networks centers on bandwidth. The current mainstream implementations give a big edge to Fibre Channel, with 4-Gbps arrays, switches and host bus adapters (HBAs), compared with the 1-Gigabit Ethernet standard for iSCSI. But the leap to 10 Gig-E will move the raw speed advantage over to the iSCSI side.
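The raw numbers behind that comparison follow from each link’s published line rate and encoding overhead. The sketch below computes payload bandwidth before protocol overhead, which trims real-world throughput further:

```python
# Payload bandwidth implied by each link's line rate and encoding.
# These are physical-layer figures; framing and protocol overhead
# reduce real-world throughput further.

links = [
    # name,                  line rate (Gbaud),  encoding efficiency
    ("1-Gig Ethernet",       1.25,               8 / 10),   # 8b/10b
    ("4-Gbps Fibre Channel", 4.25,               8 / 10),   # 8b/10b
    ("10-Gig Ethernet",      10.3125,            64 / 66),  # 64b/66b
]

for name, gbaud, efficiency in links:
    mb_s = gbaud * efficiency * 1e9 / 8 / 1e6
    print(f"{name:<22} ~{mb_s:>6.0f} MB/s payload")
# ~125 MB/s for Gigabit Ethernet, ~425 MB/s for 4-Gbps Fibre Channel,
# ~1,250 MB/s for 10 Gig-E: the jump flips the raw-speed edge to iSCSI.
```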

Additionally, Ethernet backbones have the ability to handle data traffic over distances beyond the 5-kilometer limitation of Fibre Channel. Fibre Channel customers can leverage these capabilities by using one of the protocols that allow Fibre Channel data to be sent across Ethernet networks: Fibre Channel over IP, Internet Fibre Channel Protocol, and Fibre Channel over Ethernet.

Both technology groups are now working to define road maps through the end of the decade. The Fibre Channel industry is committed to extending its historical upgrade path by doubling the speed every three to four years. The current standards efforts are aimed at delivering 8-Gbps Fibre Channel products next year and 16-Gbps equipment in 2011.

Things are not as clear for the Ethernet community, which has historically seen speeds increase by a factor of 10, evolving from 10 megabits to 100 megabits, 1 gigabit and now 10-Gig technology. However, beyond 10 Gig-E, the next Ethernet bandwidth standard remains open to debate. The next step will likely be another 10x jump to 100-Gig Ethernet, but there might be an interim 40-Gig standard that would be scalable to 100 Gig.
