
The breakthrough that makes robot faces feel less creepy

Columbia engineers have taught a robot to learn lip movements by observation, much like a human learning in front of a mirror.

Date:
January 16, 2026
Source:
Columbia University School of Engineering and Applied Science
Summary:
Humans pay enormous attention to lips during conversation, and robots have struggled badly to keep up. A new robot developed at Columbia Engineering learned realistic lip movements by watching its own reflection and studying human videos online. This allowed it to speak and sing with synchronized facial motion, without being explicitly programmed. Researchers believe this breakthrough could help robots finally cross the uncanny valley.
FULL STORY

When people talk face to face, nearly half of their attention is drawn to the movement of the lips. Despite this, robots still have great difficulty moving their mouths in a convincing way. Even the most advanced humanoid machines often rely on stiff, exaggerated mouth motions that resemble a puppet's, assuming they have a face at all.

Humans place enormous importance on facial expression, especially subtle movements of the lips. While awkward walking or clumsy hand gestures can be forgiven, even small mistakes in facial motion tend to stand out immediately. This sensitivity contributes to what scientists call the "uncanny valley," a phenomenon in which robots that look almost, but not quite, human appear unsettling rather than lifelike. Poor lip movement is a major reason robots can seem eerie or emotionally flat, but researchers say that may soon change.

A Robot That Learns to Move Its Lips

On January 15, a team from Columbia Engineering announced a major advance in humanoid robotics. For the first time, researchers have built a robot that can learn facial lip movements for speaking and singing. Their findings, published in Science Robotics, show the robot forming words in multiple languages and even performing a song from its AI-generated debut album "hello world_."

Rather than relying on preset rules, the robot learned through observation. It began by discovering how to control its own face using 26 separate facial motors. To do this, it watched its reflection in a mirror, then later studied hours of human speech and singing videos on YouTube to understand how people move their lips.

"The more it interacts with humans, the better it will get," said Hod Lipson, James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering and director of Columbia's Creative Machines Lab, where the research took place.


Robot Watches Itself Talking

Creating natural-looking lip motion in robots is especially difficult for two main reasons. First, it requires advanced hardware, including flexible facial material and many small motors that must operate quietly and in perfect coordination. Second, lip movement is closely tied to speech sounds, which change rapidly and depend on complex sequences of phonemes.

Human faces are controlled by dozens of muscles located beneath soft skin, allowing movements to flow naturally with speech. Most humanoid robots, however, have rigid faces with limited motion. Their lip movements are typically dictated by fixed rules, which leads to mechanical, unnatural expressions that feel unsettling.

To address these challenges, the Columbia team designed a flexible robotic face with a high number of motors and allowed the robot to learn facial control on its own. The robot was placed in front of a mirror and began experimenting with thousands of random facial expressions. Much like a child exploring their reflection, it gradually learned which motor movements produced specific facial shapes. This process relied on what researchers call a "vision-to-action" language model (VLA).
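To make that idea concrete, the sketch below illustrates how mirror-based self-modeling could work in principle: the robot issues random motor commands ("motor babbling"), records the facial landmarks its camera sees in the mirror, and fits an inverse model from desired face shapes back to motor commands. The `robot` interface, network layout, and training details here are hypothetical illustrations, not the Columbia team's published implementation.

```python
# Minimal sketch of mirror-based self-modeling, assuming a hypothetical
# `robot` object with 26 facial motors and a camera viewing its reflection.
import torch
import torch.nn as nn

NUM_MOTORS = 26          # facial actuators described in the article
LANDMARK_DIM = 2 * 68    # e.g., 68 two-dimensional face landmarks (an assumption)

class InverseFaceModel(nn.Module):
    """Maps a desired facial-landmark configuration to motor commands."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LANDMARK_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_MOTORS), nn.Tanh(),  # commands normalized to [-1, 1]
        )

    def forward(self, landmarks):
        return self.net(landmarks)

def collect_babbling_data(robot, num_samples=5000):
    """Issue random expressions and record what the mirror camera observes."""
    commands, observed = [], []
    for _ in range(num_samples):
        cmd = torch.rand(NUM_MOTORS) * 2 - 1        # random facial expression
        robot.set_motors(cmd)                        # hypothetical robot API
        landmarks = robot.observe_face_landmarks()   # hypothetical camera readout
        commands.append(cmd)
        observed.append(landmarks)
    return torch.stack(observed), torch.stack(commands)

def train_inverse_model(landmarks, commands, epochs=50):
    """Fit landmarks -> motor commands, so target lip shapes can be reproduced."""
    model = InverseFaceModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(landmarks), commands)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```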

Learning From Human Speech and Song

After understanding how its own face worked, the robot was shown videos of people talking and singing. The AI system observed how mouth shapes changed with different sounds, allowing it to associate audio input directly with motor movement. With this combination of self-learning and human observation, the robot could convert sound into synchronized lip motion.
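One way to picture this audio-to-motion step is a sequence model that converts speech features into per-frame motor commands, which the self-learned face model then executes. The sketch below shows a plausible arrangement under that assumption; the feature extractor, network, and dimensions are illustrative and not the architecture reported in Science Robotics.

```python
# Minimal sketch of an audio-to-lip-motion mapping, assuming a simple
# mel-spectrogram front end and a recurrent network; illustrative only.
import torch
import torch.nn as nn
import torchaudio

NUM_MOTORS = 26
N_MELS = 80

class AudioToLipMotion(nn.Module):
    """Predicts a motor-command trajectory from a speech or singing clip."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=N_MELS)
        self.encoder = nn.GRU(input_size=N_MELS, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_MOTORS)

    def forward(self, waveform):
        # waveform: (batch, samples) -> mel features: (batch, frames, n_mels)
        feats = self.mel(waveform).transpose(1, 2)
        hidden, _ = self.encoder(feats)              # one hidden state per audio frame
        return torch.tanh(self.head(hidden))         # (batch, frames, NUM_MOTORS)

# Training would regress these commands against lip shapes extracted from human
# videos (mapped through the robot's self-learned inverse face model), so the
# system needs only the sound of the audio, never its meaning.
```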

The research team tested the system across multiple languages, speech styles, and musical examples. Even without understanding the meaning of the audio, the robot was able to move its lips in time with the sounds it heard.

The researchers acknowledge that the results are not flawless. "We had particular difficulties with hard sounds like 'B' and with sounds involving lip puckering, such as 'W'. But these abilities will likely improve with time and practice," Lipson said.

Beyond Lip Sync to Real Communication

The researchers stress that lip synchronization is only one part of a broader goal. Their aim is to give robots richer, more natural ways to communicate with people.

"When the lip sync ability is combined with conversational AI such as ChatGPT or Gemini, the effect adds a whole new depth to the connection the robot forms with the human," said Yuhang Hu, who led the study as part of his PhD work. "The more the robot watches humans conversing, the better it will get at imitating the nuanced facial gestures we can emotionally connect with."

"The longer the context window of the conversation, the more context-sensitive these gestures will become," Hu added.

Facial Expression as the Missing Link

The research team believes that emotional expression through the face represents a major gap in current robotics.

"Much of humanoid robotics today is focused on leg and hand motion, for activities like walking and grasping," Lipson said. "But facial affection is equally important for any robotic application involving human interaction."

Lipson and Hu expect realistic facial expressions to become increasingly important as humanoid robots are introduced into entertainment, education, healthcare, and elder care. Some economists estimate that more than one billion humanoid robots could be produced over the next decade.

"There is no future where all these humanoid robots don't have a face. And when they finally have a face, they will need to move their eyes and lips properly, or they will forever remain uncanny," Lipson said.

"We humans are just wired that way, and we can't help it. We are close to crossing the uncanny valley," Hu added.

Risks and Responsible Progress

This work builds on Lipson's long-running effort to help robots form more natural connections with people by learning facial behaviors such as smiling, eye contact, and speech. He argues that these skills must be learned through observation rather than programmed through rigid instructions.

"Something magical happens when a robot learns to smile or speak just by watching and listening to humans," he said. "I'm a jaded roboticist, but I can't help but smile back at a robot that spontaneously smiles at me."

Hu emphasized that the human face remains one of the most powerful tools for communication, and scientists are only beginning to understand how it works.

"Robots with this ability will clearly have a much better ability to connect with humans because such a significant portion of our communication involves facial body language, and that entire channel is still untapped," Hu said.

The researchers also acknowledge the ethical concerns that come with creating machines that can emotionally engage with humans.

"This will be a powerful technology. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks," Lipson said.


Story Source:

Materials provided by Columbia University School of Engineering and Applied Science. Note: Content may be edited for style and length.


Journal Reference:

  1. Yuhang Hu, Jiong Lin, Judah Allen Goldfeder, Philippe M. Wyder, Yifeng Cao, Steven Tian, Yunzhe Wang, Jingran Wang, Mengmeng Wang, Jie Zeng, Cameron Mehlman, Yingke Wang, Delin Zeng, Boyuan Chen, Hod Lipson. Learning realistic lip motions for humanoid face robots. Science Robotics, 2026; 11 (110) DOI: 10.1126/scirobotics.adx3017

