How to Set Up Temporary Networks on Campus

UC San Francisco overcomes technical and logistical challenges as it sets up a traffic-intensive temporary network for a conference center event.


Adding 350 hard-wired network connections for a daylong event would be a tall order for any IT team. But when the customer is one of the most visible and successful technology companies in the world, the pressure to perform certainly amps up.

Providing a slew of Gigabit Ethernet nodes for developers attending Google's I/O BootCamp – the one-day event prior to the annual Google I/O conference – was only one of the challenges faced by the staff of the Mission Bay Conference Center (MBCC) on the campus of the University of California, San Francisco.

"We pay a lot of attention to every customer, but this was Google. They wanted everything to be perfect," says Clifford Sacks, assistant manager for information services at UCSF's Campus Life Services.

Held in San Francisco each spring, Google I/O (Innovation in the Open) brings developers together to exchange ideas and learn how to build on the company's platforms. In May, more than 5,000 attended the conference held at the city's Moscone Center.

The preconference BootCamp was exactly the kind of event that UCSF hoped to attract when it built MBCC in 2005. The center, designed to serve the campus community and the public, aims to generate revenue for the cash-strapped university, Sacks says.

"We have an excellent network and technical depth to offer, so we're a good location for it," he says. "But we knew we had to plan for technical complications because of the size and Google's requirements."

Deterring Downtime

The UCSF team set up BootCamp in nine MBCC rooms, ranging from an auditorium to small spaces that would provide so-called office hours, where attendees could meet individually with Google staff or participate in small-group tutorials.

Every room had to provide high-speed connectivity; hence the 350 new Gig-E lines. In addition, preparations were made to provide access to UCSF's Aruba 802.11g wireless network for up to 1,200 additional devices (two wireless devices per user).

"The great advantage we have is that our wired network infrastructure is really good," Sacks says. "The conference center is a newer building with all CAT 6 wiring, and it's gigabit speed everywhere."

Still, the 600 developers signed up for BootCamp were eager to cram as much hands-on training as possible into a single day, so network downtime was not an option. Besides offering steady throughput, the wired connections were intended to handle the bursts of network traffic inherent in the BootCamp sessions, Sacks says.

"It was the kind of event where the person leading the session tells everyone to push a button to view a demonstration or a piece of code, so hundreds of people click at once," he says.
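The simultaneous-click pattern Sacks describes is easy to size with back-of-the-envelope arithmetic. The sketch below is illustrative only; the attendee count, payload size, and time window are assumptions, not figures from the event.

```python
# Rough burst estimate: if every attendee in a session fetches the same
# demo at once, aggregate demand is roughly payload * clients / window.
# All figures below are hypothetical.

def burst_mbps(clients: int, payload_mb: float, window_s: float) -> float:
    """Aggregate demand in megabits per second for a simultaneous fetch."""
    return clients * payload_mb * 8 / window_s

# e.g. 300 people in a session each pulling a 5 MB demo within 10 seconds
demand = burst_mbps(clients=300, payload_mb=5, window_s=10)
print(f"{demand:.0f} Mb/s aggregate demand")  # 1200 Mb/s
```

At that scale, a burst comfortably saturates a single gigabit uplink, which is why distributing sessions across gigabit switch ports matters more than steady-state throughput.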

The first prep step for Sacks and the UCSF technology team was to inventory existing networking equipment. They then carefully assessed the hardware needed to meet Google's requirements and moved to fill the gap, mostly with $10,000 worth of switches and cables for the hundreds of additional Ethernet nodes.

Sacks and Computer Resource Specialist Desmond Chargualaf selected D-Link DGS 1210-48 Web Smart and D-Link DGS 1016D switches for the project. "For the bigger switches especially, the D-Link was best," Sacks says. "Most of the switches on the market with the number of ports we needed are at the 100-megabit speed, rather than gigabit speed."

Sorting the Spaghetti

Choosing the right switches was key, but perhaps the toughest aspect of the project was figuring out where to deploy them and how many were needed, and then determining the cabling requirements.

"Before we could order any equipment, we had to diagram where all the wires would be and what types of switches we needed for each room," Sacks says. "Once we had a plan in place for the switches, we worked out the lengths of cabling we needed. We didn't want to have to wind 10 extra feet of cable around the base of a table."

Along with the switches, UCSF purchased 300 Belkin RJ45 CAT 6 patch cables – 188 that were 25 feet long, and 112 that were 50 feet. Because of other space and staff commitments at MBCC, Sacks, Chargualaf and two other IT team members had only a day to set up the 350 hard-wired lines before BootCamp participants arrived – and then had just one hour on the morning of the event to restring 100 lines to accommodate a last-minute room change.
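The planning step Sacks describes, matching each drop to the shortest cable that reaches, can be sketched in a few lines. The room layouts and run distances below are hypothetical; only the 25-foot and 50-foot stock lengths come from the article.

```python
# Cable-planning sketch: for each network drop, pick the shortest stock
# patch-cable length that covers the run, so no excess slack gets coiled
# under tables. Room figures are hypothetical.

STOCK_LENGTHS_FT = (25, 50)

def pick_length(run_ft: float) -> int:
    """Shortest stock cable that covers the run."""
    for length in STOCK_LENGTHS_FT:
        if length >= run_ft:
            return length
    raise ValueError(f"no stock cable reaches {run_ft} ft")

rooms = {  # hypothetical room -> list of drop distances in feet
    "auditorium": [40, 45, 48, 22],
    "office-hours-a": [15, 18, 20],
}

order = {length: 0 for length in STOCK_LENGTHS_FT}
for drops in rooms.values():
    for run in drops:
        order[pick_length(run)] += 1
print(order)  # {25: 4, 50: 3}
```

Tallying an order this way, room by room, is what lets a team arrive at a split like UCSF's 188 short and 112 long cables rather than over-buying one length.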

Ready to Host

Beyond sheer volume, security must also be a major concern for any institution staging an event that requires a temporary network, such as the one UCSF installed for BootCamp, says Tim Zimmerman, an analyst for Gartner.

Separating the additional network connections from the host network by creating a virtual LAN, using web authentication protocols and applying specific application access control policies are all steps that can help prevent a serious breach.

Because the network segment used for BootCamp was isolated from the main UCSF network, the event raised no new security concerns for the UCSF staff, Sacks says. The team did, however, have to adjust the network's Dynamic Host Configuration Protocol scope release time to provide valid IP addresses for the swell of BootCamp participants.

"The normal release time would have been too long for the setup and rotations of the conference sessions, causing users to fill up the DHCP scope, and thus not allowing future users the ability to get a valid IP address," Sacks says.
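The exhaustion problem Sacks describes can be modeled roughly: every time a device rejoins after a session rotation it may claim a fresh address, and old addresses stay reserved until their leases expire. The scope size, rotation count, and lease times below are illustrative assumptions, not UCSF's actual settings.

```python
# Rough model of DHCP scope pressure: outstanding leases scale with how
# often devices rejoin and how long stale leases linger. Figures are
# hypothetical.

def peak_leases(devices: int, joins_per_device: int,
                lease_time_min: float, event_hours: float) -> float:
    """Approximate outstanding leases if every join grabs a new address
    and old leases are held until they expire."""
    joins_per_min = devices * joins_per_device / (event_hours * 60)
    return joins_per_min * lease_time_min

POOL = 2048  # hypothetical scope size
# 1,200 devices, 4 session rotations, over an 8-hour day
long_lease = peak_leases(1200, 4, lease_time_min=480, event_hours=8)
short_lease = peak_leases(1200, 4, lease_time_min=60, event_hours=8)
print(long_lease, long_lease > POOL)    # 4800.0 True  -> scope exhausted
print(short_lease, short_lease > POOL)  # 600.0 False -> scope fits
```

Shortening the lease (release) time is the knob the UCSF team turned: stale reservations expire before the pool runs dry, at the cost of slightly more renewal chatter.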

Once BootCamp began, technical problems with the wired connections were few and quickly solved, even when a participant accidentally kicked one switch's power supply, briefly knocking out part of the temporary network, Chargualaf says.

The only system problem that arose was a slowing of the 802.11g wireless network because many participants skipped the wired Ethernet lines in favor of wireless convenience, Sacks says.

"There were a few more users on the wireless network than Google had estimated," he says, adding, "The good thing is that our slow is better than fast at a hotel, so we noticed the slowdown, but the participants really didn't."

Moving Up to N

Wireless bottlenecks will soon be a thing of the past at the Mission Bay Conference Center. By year-end, the University of California, San Francisco, will join the growing number of colleges that have migrated their networks to 802.11n.

"The university is upgrading the whole campus and trying to get wireless everywhere," says Clifford Sacks, assistant manager for information services at UCSF's Campus Life Services.

With theoretical speeds of 600 megabits per second, wireless-N is up to 10 times faster than the 802.11g technology in place at UCSF and provides twice the range of any previous 802.11 protocol. In addition to the switch to N, the conference center will increase the number of wireless access points in each of its rooms. Both the upgrade and the boost in AP density will help prevent bottlenecks on the wireless network during future events such as Google's BootCamp, Sacks says.
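The combined effect of the two changes, faster radios plus denser access points, is multiplicative on per-user capacity. The AP counts below are hypothetical, and real-world throughput runs well below the theoretical PHY rates; the sketch only shows the shape of the improvement.

```python
# Illustrative per-user capacity before and after the upgrade.
# AP counts are hypothetical; 54 and 600 Mb/s are the theoretical
# 802.11g and 802.11n PHY rates, not achievable throughput.
users = 1200
g_aps, n_aps = 10, 20          # hypothetical AP counts before/after
g_rate, n_rate = 54, 600       # theoretical PHY rates, Mb/s

per_user_g = g_aps * g_rate / users
per_user_n = n_aps * n_rate / users
print(f"{per_user_g:.2f} vs {per_user_n:.2f} Mb/s per user")  # 0.45 vs 10.00
```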

High-availability wireless infrastructures have become a necessity on college campuses, says Aberdeen Group analyst Andrew Borg.

"For the faculty, the need for collaboration means you have to address bandwidth and latency issues," Borg says. "For the students, it's all about ubiquity. They all walk around with multiple devices these days, which creates crushing density that the network has to accommodate."

UCSF will continue to use Aruba technology for the 802.11n APs and other hardware, along with the vendor's AirWave Wireless Management Suite, with a version update to support wireless-N features.

"We will be running a parallel wireless system for a while, making sure everything works," Sacks says.

Paul S. Howell
Sep 24 2011