Researchers at the Georgia Institute of Technology have created the fastest detailed computer simulations of computer networks ever constructed -- simulating networks containing more than 5 million network elements. This work will lead to improved speed, reliability and security of future networks such as the Internet, according to Professor Richard Fujimoto, lead principal investigator of the project, which is funded by the Defense Advanced Research Projects Agency (DARPA).
These "packet-level simulations" model individual data packets as they travel through a computer network. Downloading a web page to one's home computer or sending an e-mail message typically involves transmitting several packets through the Internet. Packet-level simulations provide a detailed, accurate representation of network behavior (e.g., congestion), but are very time-consuming to run.
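To illustrate the idea (this is a minimal, hypothetical sketch, not the Georgia Tech simulator), a packet-level simulation tracks each packet individually as it passes through network elements, which is why congestion effects such as queueing delay emerge naturally from the model:

```python
# Minimal packet-level simulation sketch (hypothetical example, not the
# simulator described in the article): packets pass through a single FIFO
# router with a fixed forwarding rate, and queueing delay emerges when
# packets arrive faster than the router can forward them.

LINK_RATE_PPS = 100.0            # assumed: router forwards 100 packets/second
SERVICE_TIME = 1.0 / LINK_RATE_PPS

def simulate(arrival_times):
    """Return the departure time of each packet through one FIFO router."""
    departures = []
    free_at = 0.0                          # time the router finishes its current packet
    for t in sorted(arrival_times):
        start = max(t, free_at)            # packet waits if the router is busy
        free_at = start + SERVICE_TIME     # transmit (serve) the packet
        departures.append(free_at)
    return departures

# A burst of 5 packets arriving together at t=0: each successive packet
# queues behind the previous one, so its delay grows by one service time.
print(simulate([0.0] * 5))
```

A full network simulator applies this same per-packet bookkeeping across millions of routers, links and traffic sources, which is what makes the computation so expensive.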
Engineers and scientists routinely use such simulations to design and analyze new networks and to understand phenomena such as Denial of Service attacks that have plagued the Internet in recent years. Because of the time required to complete the simulation computations, most studies today are limited to modeling a few hundred network components such as routers, servers and end-user computers.
"The end goal of research on network modeling and simulation is to create a more reliable and higher-performance Internet," says Fujimoto. "Our team has created a computer simulation that is two to three orders of magnitude faster than simulators commonly used by networking researchers today. This finding offers new capabilities for engineers and scientists to study large-scale computer networks in the laboratory to find solutions to Internet and network problems that were not possible before."
The Georgia Tech researchers have demonstrated the ability to simulate network traffic from over 1 million web browsers in near real time. In other words, the simulators can model a minute of operation of such a large-scale network in only a few minutes of clock time.
Running on the high-performance computers at the Pittsburgh Supercomputing Center, the Georgia Tech simulators used as many as 1,534 processors working simultaneously on the simulation computation, enabling them to model more than 106 million packet transmissions in one second of clock time -- two to three orders of magnitude faster than simulators commonly used today. In comparison, the next closest packet-level simulations of which the research team is aware have simulated only a few million packet transmissions per second.
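The reported figures imply a per-processor simulation rate, which a quick back-of-envelope calculation makes concrete (the even split of work across processors is an assumption; the article does not state how the workload was divided):

```python
# Back-of-envelope check of the reported aggregate throughput (assumes an
# even division of work across processors, which the article does not state).
total_packets_per_sec = 106_000_000   # reported aggregate simulation rate
processors = 1_534                    # processors used at Pittsburgh

per_processor = total_packets_per_sec / processors
print(round(per_processor))           # roughly 69,000 simulated packets/sec per processor
```

Even at tens of thousands of simulated packets per second per processor, reaching the reported aggregate rate requires coordinating all 1,534 processors on one simulation, which is the core challenge of parallel discrete-event simulation.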
The research team plans to present their findings at the IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS) in October. Team members include Mostafa Ammar, Regents' Professor of Computing; Kalyan Perumalla, post-doctoral research faculty; George Riley, assistant professor in the School of Electrical and Computer Engineering; and Fujimoto. Graduate students involved in the project include Alfred Park (Computing) and Talal Jaafar (Electrical and Computer Engineering).
Major funding was provided by the Network Modeling and Simulation Program of the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation. The cluster computing platforms at Georgia Tech were obtained through a grant from Intel.
The above post is reprinted from materials provided by Georgia Institute Of Technology. Note: Content may be edited for style and length.