Science News

Scientists build a “periodic table” for AI

Date:
March 4, 2026
Source:
Emory University
Summary:
Choosing the right method for multimodal AI—systems that combine text, images, and more—has long been trial and error. Emory physicists created a unifying mathematical framework that shows many AI techniques rely on the same core idea: compress data while preserving what’s most predictive. Their “control knob” approach helps researchers design better algorithms, use less data, and avoid wasted computing power. The team believes it could pave the way for more accurate, efficient, and environmentally friendly AI.

Artificial intelligence is now routinely used to combine and interpret different kinds of information, including text, images, audio, and video. Yet one major obstacle remains. Developers must decide which algorithm is best suited for a specific task, and that choice is often complicated and time-consuming in the fast-growing field of multimodal AI.

Physicists at Emory University have proposed a clearer, more systematic approach. Writing in The Journal of Machine Learning Research, they describe a new mathematical framework that organizes AI methods and guides the design of algorithms for specific problems.

"We found that many of today's most successful AI methods boil down to a single, simple idea -- compress multiple kinds of data just enough to keep the pieces that truly predict what you need," says Ilya Nemenman, Emory professor of physics and senior author of the study. "This gives us a kind of 'periodic table' of AI methods. Different methods fall into different cells, based on which information a method's loss function retains or discards."

A loss function is the mathematical formula that measures how far an AI model's predictions deviate from the correct answer. During training, the system continually adjusts itself to reduce that error. The lower the loss, the better the model performs.
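The idea can be made concrete with a toy example (an illustrative sketch, not taken from the paper): a simple mean-squared-error loss, and a one-parameter model that repeatedly adjusts itself to reduce that loss.

```python
# Illustrative sketch: a mean-squared-error loss measures how far a
# model's predictions deviate from the correct answers.
def mse_loss(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Toy training data with a true relationship y = 2x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

# A one-parameter model y = w * x, trained by nudging w downhill on the loss.
w = 0.0
lr = 0.05
for _ in range(200):
    preds = [w * x for x in xs]
    # Gradient of the MSE loss with respect to w.
    grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    w -= lr * grad  # adjust the model to reduce the loss

# After training, w is close to 2 and the loss is near zero.
```

The lower the final loss, the closer the model's predictions sit to the true answers.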

"People have devised hundreds of different loss functions for multimodal AI systems and some may be better than others, depending on context," Nemenman says. "We wondered if there was a simpler way than starting from scratch each time you confront a problem in multimodal AI."

The Variational Multivariate Information Bottleneck Framework

To address that question, the team created a general mathematical structure for building problem-specific loss functions. Their method focuses on deciding what information should be preserved and what can be discarded. They call it the Variational Multivariate Information Bottleneck Framework.

"Our framework is essentially like a control knob," says co-author Michael Martini, who worked on the project as an Emory postdoctoral fellow and research scientist in Nemenman's group. "You can 'dial the knob' to determine the information to retain to solve a particular problem."
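A hypothetical, heavily simplified sketch of what such a "knob" could look like (this is not the paper's actual formulation): an information-bottleneck-style loss that penalizes both prediction error and the amount of information retained, weighted by a tunable parameter beta.

```python
# Hypothetical sketch of an information-bottleneck-style loss.
# beta is the "control knob": large beta rewards aggressive compression,
# small beta rewards keeping detail that aids prediction.
def bottleneck_loss(prediction_error, information_kept, beta):
    return prediction_error + beta * information_kept

# Two candidate encodings of the same data (made-up numbers):
encodings = {
    "rich": {"prediction_error": 0.05, "information_kept": 8.0},  # keeps a lot
    "lean": {"prediction_error": 0.20, "information_kept": 1.0},  # compresses hard
}

# Dialing beta changes which encoding the loss prefers.
results = {}
for beta in (0.01, 0.5):
    scores = {
        name: bottleneck_loss(e["prediction_error"], e["information_kept"], beta)
        for name, e in encodings.items()
    }
    results[beta] = min(scores, key=scores.get)

# At low beta (accuracy matters most) the rich encoding wins;
# at high beta (compression matters most) the lean one wins.
```

In this sketch, different settings of beta correspond to different cells of the "periodic table": each choice of what to retain or discard yields a different loss function, and thus a different method.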

"Our approach is a generalized, principled one," adds Eslam Abdelaleem, first author of the paper. Abdelaleem began the work as an Emory PhD candidate in physics before graduating in May and moving to Georgia Tech as a postdoctoral fellow.

"Our goal is to help people to design AI models that are tailored to the problem that they are trying to solve," he says, "while also allowing them to understand how and why each part of the model is working."

Using the framework, AI developers can propose new algorithms, forecast which ones are likely to succeed, estimate how much training data they will need, and anticipate possible failure points.

"Just as important," Nemenman says, "it may let us design new AI methods that are more accurate, efficient and trustworthy."

A Physics-Driven Perspective on Machine Learning

The researchers approached AI design differently from many in the machine learning community.

"The machine-learning community is focused on achieving accuracy in a system without necessarily understanding why a system is working," Abdelaleem explains. "As physicists, however, we want to understand how and why something works. So, we focused on finding fundamental, unifying principles to connect different AI methods together."

Abdelaleem and Martini began by working through equations by hand, searching for the core idea beneath the complexity of modern AI techniques.

"We spent a lot of time sitting in my office, writing on a whiteboard," Martini says. "Sometimes I'd be writing on a sheet of paper with Eslam looking over my shoulder."

The effort stretched over several years. They developed mathematical foundations, reviewed them with Nemenman, tested ideas on computers, and often had to return to the drawing board after pursuing approaches that did not work.

"It was a lot of trial and error and going back to the whiteboard," Martini says.

A Eureka Moment and a Smartwatch Surprise

Their breakthrough came when they identified a single principle describing the balance between compressing data and reconstructing it. The idea captured the tradeoff at the heart of many AI methods.

"We tried our model on two test datasets and showed that it was automatically discovering shared, important features between them," Martini says. "That felt good."

After the intense push that led to this insight, Abdelaleem checked his Samsung Galaxy smartwatch as he was leaving campus. The device uses AI to monitor health signals such as heart rate. That day, however, it misread his excitement.

"My watch said that I had been cycling for three hours," Abdelaleem says. "That's how it interpreted the level of excitement I was feeling. I thought, 'Wow, that's really something! Apparently, science can have that effect.'"

Testing the Framework and Looking Ahead

To evaluate their approach, the team applied the framework to dozens of existing AI methods.

"We performed computer demonstrations that show that our general framework works well with test problems on benchmark datasets," Nemenman says. "We can more easily derive loss functions, which may solve the problems one cares about with smaller amounts of training data."

Because the framework helps eliminate unnecessary features, it may also lower the computational demands of AI systems.

"By helping guide the best AI approach, the framework helps avoid encoding features that are not important," Nemenman says. "The less data required for a system, the less computational power required to run it, making it less environmentally harmful. That may also open the door to frontier experiments for problems that we cannot solve now because there is not enough existing data."

The researchers hope others will apply the framework to design algorithms tailored to specific scientific challenges.

They are also continuing to expand the work themselves. One area of interest is biology, including efforts to identify patterns related to cognitive function.

"I want to understand how your brain simultaneously compresses and processes multiple sources of information," Abdelaleem says. "Can we develop a method that allows us to see the similarities between a machine-learning model and the human brain? That may help us to better understand both systems."


Story Source:

Materials provided by Emory University. Note: Content may be edited for style and length.


Journal Reference:

  1. Eslam Abdelaleem, Ilya Nemenman, K. Michael Martini. Deep Variational Multivariate Information Bottleneck -- A Framework for Variational Losses. Journal of Machine Learning Research, September 2, 2025.

