
No-wait data centers: Data-transmission delays across server farms can be reduced by 99.6 percent

Date: July 16, 2014
Source: Massachusetts Institute of Technology
Summary: Big websites usually maintain their own "data centers," banks of tens or even hundreds of thousands of servers, all passing data back and forth to field users' requests. Like any big, decentralized network, data centers are prone to congestion: Packets of data arriving at the same router at the same time are put in a queue, and if the queues get too long, packets can be delayed. Researchers have designed a new network-management system that, in experiments, reduced the average queue length of routers in a Facebook data center by 99.6 percent -- virtually doing away with queues.

Big websites usually maintain their own "data centers," banks of tens or even hundreds of thousands of servers, all passing data back and forth to field users' requests. Like any big, decentralized network, data centers are prone to congestion: Packets of data arriving at the same router at the same time are put in a queue, and if the queues get too long, packets can be delayed.

At the annual conference of the ACM Special Interest Group on Data Communication, in August, MIT researchers will present a new network-management system that, in experiments, reduced the average queue length of routers in a Facebook data center by 99.6 percent -- virtually doing away with queues. When network traffic was heavy, the average latency -- the delay between the request for an item of information and its arrival -- shrank nearly as much, from 3.56 microseconds to 0.23 microseconds.

Like the Internet, most data centers use decentralized communication protocols: Each node in the network decides, based on its own limited observations, how rapidly to send data and which adjacent node to send it to. Decentralized protocols have the advantage of being able to handle communication over large networks with little administrative oversight.

The MIT system, dubbed Fastpass, instead relies on a central server called an "arbiter" to decide which nodes in the network may send data to which others during which periods of time. "It's not obvious that this is a good idea," says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of the paper's coauthors.

With Fastpass, a node that wishes to transmit data first issues a request to the arbiter and receives a routing assignment in return. "If you have to pay these maybe 40 microseconds to go to the arbiter, can you really gain much from the whole scheme?" says Jonathan Perry, a graduate student in electrical engineering and computer science (EECS) and another of the paper's authors. "Surprisingly, you can."
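In outline, the exchange works like a reservation system. The Python sketch below is illustrative only -- the class and field names are assumptions, not the actual Fastpass implementation -- but it shows the request/allocation round trip the researchers describe: a sender asks the arbiter for permission, and the arbiter replies with a timeslot (and, in the real system, a route) during which both endpoints are reserved for that transfer.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Allocation:
    timeslot: int          # when the sender may start transmitting
    path: List[str]        # switches/links to traverse (purely illustrative)

class Arbiter:
    """Toy central scheduler; names and fields are assumptions, not Fastpass's API."""
    def __init__(self) -> None:
        self.next_free: Dict[str, int] = {}   # endpoint -> first timeslot at which it is idle

    def request(self, src: str, dst: str, num_slots: int) -> Allocation:
        # Grant the earliest slot at which both endpoints are idle,
        # then mark them busy for the duration of the transfer.
        slot = max(self.next_free.get(src, 0), self.next_free.get(dst, 0))
        self.next_free[src] = slot + num_slots
        self.next_free[dst] = slot + num_slots
        return Allocation(timeslot=slot, path=[src, "spine-switch", dst])

arbiter = Arbiter()
print(arbiter.request("web-42", "cache-7", num_slots=3))   # granted timeslot 0
print(arbiter.request("web-42", "cache-9", num_slots=1))   # must wait: timeslot 3

The point of the design is that this small detour through the arbiter buys a network in which no two transfers ever contend for the same endpoint at the same time, so queues never build up.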

Division of labor

Balakrishnan and Perry are joined on the paper by Amy Ousterhout, another graduate student in EECS; Devavrat Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science; and Hans Fugal of Facebook.

The researchers' experiments indicate that an arbiter with eight cores, or processing units, can keep up with a network transmitting 2.2 terabits of data per second. That's the equivalent of a 2,000-server data center with gigabit-per-second connections transmitting at full bore all the time.
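The equivalence is straightforward arithmetic, checked below with the article's own figures:

servers, gbps_per_link = 2000, 1
total_tbps = servers * gbps_per_link / 1000
print(total_tbps)   # 2.0 Tbps at full bore, in line with the 2.2 Tbps the eight-core arbiter sustained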

"This paper is not intended to show that you can build this in the world's largest data centers today," Balakrishnan says. "But the question as to whether a more scalable centralized system can be built, we think the answer is yes."

Moreover, "the fact that it's two terabits per second on an eight-core machine is remarkable," Balakrishnan says. "That could have been 200 gigabits per second without the cleverness of the engineering."

The key to Fastpass's efficiency is a technique for splitting up the task of assigning transmission times so that it can be performed in parallel on separate cores. The problem, Balakrishnan says, is one of matching source and destination servers for each time slot.

"If you were asked to parallelize the problem of constructing these matchings," he says, "you would normally try to divide the source-destination pairs into different groups and put this group on one core, this group on another core, and come up with these iterative rounds. This system doesn't do any of that."

Instead, Fastpass assigns each core its own time slot, and the core with the first slot scrolls through the complete list of pending transmission requests. Each time it comes across a pair of servers, neither of which has received an assignment, it schedules them for its slot. All other requests involving either the source or the destination are simply passed on to the next core, which repeats the process with the next time slot. Each core thus receives a slightly attenuated version of the list the previous core analyzed.
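A minimal Python sketch of that pipelined assignment follows. It is an illustration of the description above, not the authors' code: each "core" owns one timeslot, greedily schedules the requests whose source and destination are both still free in that slot, and hands the leftover requests to the next core.

from typing import List, Tuple

Request = Tuple[str, str]   # (source server, destination server)

def assign_slot(requests: List[Request]) -> Tuple[List[Request], List[Request]]:
    busy = set()                           # endpoints already matched in this slot
    scheduled, leftover = [], []
    for src, dst in requests:
        if src not in busy and dst not in busy:
            busy.update((src, dst))        # both endpoints now taken for this slot
            scheduled.append((src, dst))
        else:
            leftover.append((src, dst))    # passed on to the next core/slot
    return scheduled, leftover

def schedule(requests: List[Request], num_slots: int) -> List[List[Request]]:
    timetable, pending = [], list(requests)
    for _ in range(num_slots):
        if not pending:
            break
        slot, pending = assign_slot(pending)
        timetable.append(slot)
    return timetable

# Four servers with overlapping demands:
demo = [("A", "B"), ("A", "C"), ("D", "B"), ("C", "D")]
print(schedule(demo, num_slots=3))
# [[('A', 'B'), ('C', 'D')], [('A', 'C'), ('D', 'B')]]

In the example, A-to-B and C-to-D share the first slot because they involve four distinct machines, while A-to-C and D-to-B are deferred to the next core's slot -- the "slightly attenuated version of the list" handed down the pipeline.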

Bottom line

Today, to avoid latencies in their networks, most data center operators simply sink more money into them. Fastpass "would reduce the administrative cost and equipment costs and pain and suffering to provide good service to the users," Balakrishnan says. "That allows you to satisfy many more users with the money you would have spent otherwise."

Networks are typically evaluated according to two measures: latency, or the time a single packet of data takes to traverse the network, and throughput, or the total amount of data that can pass through the network in a given interval.


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty. Note: Materials may be edited for content and length.

