
New computational technique relieves logjam from massive amounts of data

Date: August 1, 2012
Source: Michigan State University
Summary: It's relatively easy to collect massive amounts of data on microbes, but the files are so large that it takes days simply to transmit them to other researchers and months to analyze them once they arrive. Researchers have now developed a new computational technique that relieves the logjam these "big data" issues create.

It's relatively easy to collect massive amounts of data on microbes, but the files are so large that it takes days simply to transmit them to other researchers and months to analyze them once they arrive.

Researchers at Michigan State University have developed a new computational technique, featured in the current issue of the Proceedings of the National Academy of Sciences, that relieves the logjam that these "big data" issues create.

Microbial communities living in soil or the ocean are quite complicated. Their genomic data is easy enough to collect, but the data sets are so big that they overwhelm today's computers. In the paper, C. Titus Brown, an MSU assistant professor of bioinformatics, demonstrates a general technique that can be applied to most microbial communities.

The interesting twist is that the team created a solution using small computers, a novel approach considering most bioinformatics research focuses on supercomputers, Brown said.

"To thoroughly examine a gram of soil, we need to generate about 50 terabases of genomic sequence -- about 1,000 times more data than generated for the initial human genome project," said Brown, who co-authored on the paper with Jim Tiedje, University Distinguished professor of microbiology and molecular genetics. "That would take about 50 laptops to store that much data. Our paper shows the way to make it work on a much smaller scale."

Analyzing DNA data with traditional computing methods is like trying to eat a large pizza in a single bite: the huge influx of data bogs down computers' memory and causes them to choke. The new method employs a filter that folds the pizza up compactly, using a probabilistic data structure that lets computers nibble at slices of the data and eventually digest the entire sequence. The technique yields a roughly 40-fold decrease in memory requirements, allowing scientists to plow through reams of data without a supercomputer.
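The "special data structure" behind that filter is what the paper calls a probabilistic de Bruijn graph: the short DNA words (k-mers) that make up the assembly graph are stored in a Bloom filter, a compact bit array that trades a small chance of false-positive lookups for a large memory saving, and the graph's edges are recovered implicitly by querying each possible one-base extension. The sketch below illustrates the idea in miniature; it is our illustration of the concept, not the authors' released software, and the class and function names are ours.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus several hash functions.
    Lookups can return false positives but never false negatives, which
    is what makes the k-mer graph 'probabilistic'."""

    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

K = 21  # k-mer length; 21 is a common choice for assembly

def load_reads(bloom, reads):
    """Record every k-mer from every read in the Bloom filter."""
    for read in reads:
        for i in range(len(read) - K + 1):
            bloom.add(read[i:i + K])

def neighbors(bloom, kmer):
    """The de Bruijn graph is implicit: a k-mer's successors are found
    by extending it one base and asking the filter about membership."""
    for base in "ACGT":
        nxt = kmer[1:] + base
        if nxt in bloom:
            yield nxt

# Toy usage: index one read, then walk one step in the implicit graph.
bloom = BloomFilter(size_bits=10_000, num_hashes=4)
read = "ACGTACGTACGTACGTACGTACGT"
load_reads(bloom, [read])
# Typically prints ['CGTACGTACGTACGTACGTAC'], barring rare false positives.
print(list(neighbors(bloom, read[:K])))
```

Because the filter never forgets a k-mer it has stored, no real connections are lost; the cost is an occasional spurious edge, which stays rare as long as the bit array is sized sensibly for the number of k-mers.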

Brown and Tiedje will continue to pursue this line of research, and they encourage others to improve on it as well. The researchers have made the complete source code and ancillary software publicly available to encourage extension.

"We want this program to continue to evolve and improve," Brown said. "In fact, it already has. Other researchers have taken our approach in a new direction and made a better genome assembler."


Story Source:

Materials provided by Michigan State University. Note: Content may be edited for style and length.


Journal Reference:

  1. J. Pell, A. Hintze, R. Canino-Koning, A. Howe, J. M. Tiedje, C. T. Brown. Scaling metagenome sequence assembly with probabilistic de Bruijn graphs. Proceedings of the National Academy of Sciences, 2012; DOI: 10.1073/pnas.1121464109

