Sweating the small stuff

Pushing the Speed Limit:
For Researchers, the Internet Just Got Faster
(8.6 gigabits/second)

By Tariq Malik

It's only a matter of time before the Internet's speed limit on transmitting information takes a huge leap forward, at least for scientists.

A group of computer-savvy researchers at the California Institute of Technology (Caltech) has developed a new software protocol capable of transporting an entire DVD's worth of data in about five seconds. (Think of that as a two-hour movie, including the added features.) The method could soon be the mainstay of high-volume data transmission for astronomers and physicists who routinely deal with huge amounts of data.

"We need a global network capable of sending these files around the world, not just across a university or even a country," said Caltech associate professor Steven Low, who led the software development, during an interview. "And since we're using software, we're able to address the problem without making any hardware changes to existing systems."

Fast results with FAST

In the computing world, data moves about the electronic ether with the help of the Transmission Control Protocol (TCP), a software algorithm that manages Internet congestion between networked computers. The protocol chops files into small packets of information, as opposed to sending an entire data set to a receiving computer in one piece, and adjusts the transfer rate depending on the amount of congestion at any given time.
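The rate adjustment described above is usually sketched as "additive increase, multiplicative decrease": the sender grows its sending window steadily each round trip, and cuts it sharply when congestion (packet loss) is detected. A simplified illustration, with variable names of our own rather than from any real TCP implementation:

```python
# Simplified sketch of TCP-style AIMD congestion control.
# cwnd is the congestion window: how many packets may be in
# flight at once. Real TCP stacks add slow start, timeouts, etc.

def aimd_step(cwnd, loss_detected):
    """Return the next congestion window size."""
    if loss_detected:
        return max(1.0, cwnd / 2)   # multiplicative decrease on congestion
    return cwnd + 1.0               # additive increase per round trip

# A window that grows steadily, then halves when a loss occurs:
cwnd = 1.0
history = []
for rtt in range(8):
    cwnd = aimd_step(cwnd, loss_detected=(rtt == 5))
    history.append(cwnd)
# history -> [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0]
```

The sawtooth pattern this produces is exactly what limits standard TCP on long, fast links: each halving takes many round trips to recover from.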

But the congestion-control scheme used today by millions of researchers and Internet junkies to send digital information was developed in 1988. "It was a time when it couldn't even handle a single uncompressed phone call," said Caltech physics professor Harvey Newman, who led the development of the FAST test-bed.

To update file transfer speeds, Low and his team developed FAST, short for Fast Active queue management Scalable Transmission Control Protocol. The project also included participation from the Stanford Linear Accelerator Center (SLAC) in Menlo Park, California, and the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, among others.

During a November demonstration of the protocol at the SC2002 supercomputing conference in Baltimore, Maryland, researchers were able to transfer data across 2,485 miles (4,000 kilometers) at a speed of about 8.6 gigabits per second. One gigabit per second is the equivalent of 125 megabytes per second.
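A quick back-of-the-envelope check ties this figure to the "DVD in about five seconds" claim above. (The 4.7 GB single-layer DVD capacity is our assumption, not a figure from the demonstration.)

```python
# Sanity-checking the reported 8.6 Gbit/s transfer rate.
rate_bits_per_s = 8.6e9

# One gigabit per second is 125 megabytes per second (8 bits per byte).
megabytes_per_s = 1e9 / 8 / 1e6   # -> 125.0

# A single-layer DVD holds about 4.7 GB; at 8.6 Gbit/s that takes
# roughly the "about five seconds" quoted earlier in the article.
dvd_bytes = 4.7e9
seconds = dvd_bytes * 8 / rate_bits_per_s   # roughly 4.4
```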

"We were able to transfer about 22 terabytes of data over a period of six hours," Newman said. That's a little more than twice the printed collection of the Library of Congress, according to estimates by Caltech researcher Roy Williams. The data transfer is also roughly the equivalent of 9 billion pages of densely typed text, Newman added.
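The 22-terabyte figure is consistent with the peak rate reported above; averaged over the six hours, it works out to roughly 8 gigabits per second (treating a terabyte as 10^12 bytes):

```python
# Average throughput implied by 22 TB moved in six hours.
total_bytes = 22e12          # 22 terabytes (decimal)
duration_s = 6 * 3600        # six hours, in seconds
avg_gbits_per_s = total_bytes * 8 / duration_s / 1e9
# avg_gbits_per_s -> about 8.1, close to the 8.6 Gbit/s peak
```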

Unlike a standard single-stream TCP transfer, the FAST demonstration used 10 parallel streams for its delivery, allowing researchers to send massive amounts of data while still keeping the size of each information packet down to current standards. During a data transfer, FAST monitors network congestion and rapidly adjusts the amount of information being sent to ensure a prompt delivery. The method is about 153,000 times faster than the average telephone modem connection and 6,000 times the speed of the typical DSL line, researchers said.
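FAST's key departure from standard TCP is that it reacts to queueing delay (round-trip times creeping above their minimum) rather than waiting for packet loss. A sketch of the window update rule in the spirit of the FAST TCP design published by Low's group follows; the parameter values here are illustrative, not from the Caltech deployment:

```python
# Sketch of a FAST-style, delay-based window update: the congestion
# signal is RTT inflation over the minimum RTT, not packet loss.

def fast_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
    """One update step. w is the current window in packets,
    base_rtt the minimum observed round-trip time, rtt the
    currently measured one. alpha sets how many packets the
    flow tries to keep queued in the network; gamma smooths
    the update. Growth is capped at doubling per step."""
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# With no queueing delay (rtt == base_rtt) the window grows;
# as queueing builds up, growth slows and levels off instead
# of sawtoothing the way loss-based TCP does.
w = fast_window_update(100.0, base_rtt=0.1, rtt=0.1)   # -> 150.0
```

Because the signal is continuous delay rather than binary loss, the window settles smoothly at an equilibrium, which is what lets FAST hold a high rate steady across long, fast links.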

In comparison tests using only one pathway to send data from the Sunnyvale facility to CERN, a distance of about 6,236 miles (10,037 kilometers), FAST still delivered more than three times the throughput of the standard TCP method.

Share, and be quick about it

The most obvious use for FAST, according to researchers, is in the science community, where there is an inherent need to reliably push data and results from experiment to researcher and back again.

"This [FAST] protocol would be incredibly useful to us," said Sheperd Doeleman, a staff scientist with the MIT Haystack Observatory in Westford, Massachusetts. Doeleman works with an international Very Long Baseline Interferometry (VLBI) project that combines observations from radio telescopes in the United States, Chile, Finland and Spain to make them work as one giant virtual telescope scanning the entire sky. "First of all, it would mean a huge cost savings for us."

The VLBI project typically generates enough data to fill about 36 miles of magnetic tape, which is then shipped from each individual telescope to a correlation center where it can be spliced in with the rest of the observations. Being able to transfer data via the Internet could cut down on recording costs, as well as the time needed to integrate the observations from different VLBI stations.

"Right now we have to ship our data in a van or plane to our correlation centers," Doeleman said. "That turnaround time is really the limiting factor in our operation."

Meanwhile, scientists at CERN are building the Large Hadron Collider, possibly the most powerful research tool in particle physics, which is scheduled to go online in 2007. "It's the Holy Grail for physicists, about like what the Hubble telescope is for astronomers," said Low. "But it's also going to be generating data on the petabyte scale, literally mountains of data per second, and we need a way to store it and send it in real time." One petabyte is 1,000 terabytes.

For now, however, FAST researchers stressed that their system is only a prototype, tested with only a small amount of competing traffic on the network. In a heavily shared environment, such as the open Internet with its millions of users worldwide, it could be a different story.

"If you put this system out on a really high-speed, heavily shared network, then we're not sure it's going to work," Low said of the FAST protocol. "Honestly, if it works as well as it did here, then I'll be surprised."

But Low added that with the prototype system in place, refinements will be made over the next few months and should result in a preliminary FAST protocol ready for distribution by the end of summer.

Not so useful for everyone... yet

But while the FAST protocol may help put reams and reams of electronic research at the fingertips of researchers, the impact on personal computer users, who rarely feel the urge to access data from the VLBI or a particle accelerator, may be less apparent.

"This system is designed to combine high speed over long distances," Low said. "As for your basic home computer user, the current protocol works okay for now." But growing applications in commercial computing and the entertainment industry could increase demand in upcoming years.

FAST researchers also added that while the new protocol would be a boon to scientists, it could mean more woes for an entertainment industry already plagued by computer piracy, since faster transfer speeds could also allow faster bootlegging. Extra security measures would be necessary, and are being studied separately from the FAST project, they added.