Using artificial intelligence to control digital manufacturing
Researchers train a machine-learning model to monitor and adjust the 3D printing process to correct errors in real time
- Date:
- August 2, 2022
- Source:
- Massachusetts Institute of Technology
- Summary:
- A new computer vision system watches the 3D printing process and adjusts velocity and printing path to avoid errors. Training the system in simulation, researchers avoid the costly trial-and-error associated with setting 3D printing parameters for new materials.
Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex, costly conundrum.
Often, an expert operator must use manual trial-and-error -- possibly making thousands of prints -- to determine ideal parameters that consistently print a new material effectively. These parameters include printing speed and how much material the printer deposits.
MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time.
They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on the fly if material or environmental conditions change unexpectedly.
"This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy," says senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). "If you have manufacturing machines that are more intelligent, they can adapt to the changing environment in the workplace in real-time, to improve the yields or the accuracy of the system. You can squeeze more out of the machine."
The co-lead authors are Mike Foshey, a mechanical engineer and project manager in the CDFG, and Michal Piovarci, a postdoc at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former technical associate with the CDFG. The research will be presented at the Association for Computing Machinery's SIGGRAPH conference.
Picking parameters
Determining the ideal parameters of a digital manufacturing process can be one of the most expensive parts of the process because so much trial-and-error is required. And once a technician finds a combination that works well, those parameters are ideal only for that specific situation. The technician has little data on how the material will behave in other environments, on different hardware, or if a new batch exhibits different properties.
Using a machine-learning system is fraught with challenges, too. First, the researchers needed to measure what was happening on the printer in real time.
To do this, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system shines light at material as it is deposited and, based on how much light passes through, calculates the material's thickness.
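The article doesn't spell out that calculation, but a common way to turn light transmission into a thickness estimate is a Beer-Lambert-style attenuation model. The short Python sketch below assumes that model; the function name and the `attenuation_coeff` calibration constant are illustrative stand-ins, not details from the paper.

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation_coeff):
    """Estimate deposited-material thickness from how much light passes through.
    Assumes a Beer-Lambert-style attenuation model, with attenuation_coeff
    calibrated per material. Illustrative only, not the authors' calibration."""
    ratio = np.clip(transmitted / incident, 1e-6, 1.0)  # guard against log(0)
    return -np.log(ratio) / attenuation_coeff

# Example: a pixel where 60 percent of the light reaches the camera
thickness = estimate_thickness(np.array([0.6]), np.array([1.0]), attenuation_coeff=2.5)
```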
"You can think of the vision system as a set of eyes watching the process in real-time," Foshey says.
The controller would then process images it receives from the vision system and, based on any error it sees, adjust the feed rate and the direction of the printer.
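As a rough mental model of that feedback loop, here is a minimal, hand-written proportional update. It stands in for the learned neural-network policy the researchers actually use, and every name and constant in it is an assumption made for illustration.

```python
def control_step(target_thickness, observed_thickness, feed_rate, gain=0.1):
    """One illustrative closed-loop update: compare the observed layer thickness
    to the target and nudge the material feed rate. The real controller is a
    learned neural policy, not this hand-written proportional rule."""
    error = target_thickness - observed_thickness   # positive: too little material deposited
    return feed_rate * (1.0 + gain * error)         # deposit more when under-filled, less when over-filled
```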
But training a neural network-based controller to understand this manufacturing process is data-intensive, and would require making millions of prints. So, the researchers built a simulator instead.
Successful simulation
To train their controller, they used a process known as reinforcement learning in which the model learns through trial-and-error with a reward. The model was tasked with selecting printing parameters that would create a certain object in a simulated environment. After being shown the expected output, the model was rewarded when the parameters it chose minimized the error between its print and the expected outcome.
In this case, an "error" means the model either dispensed too much material, placing it in areas that should have been left open, or did not dispense enough, leaving open spots that should be filled in. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more and more accurate.
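A minimal sketch of such a reward, assuming a simple pixel-count penalty over target and printed masks (the authors' exact reward function is not given in the article):

```python
import numpy as np

def print_reward(target_mask, printed_mask):
    """Illustrative reward: penalize material placed where the target is empty
    (over-deposition) and empty spots that should have been filled
    (under-deposition). Both inputs are boolean arrays of the print area."""
    over = np.logical_and(printed_mask, ~target_mask).sum()   # material in open areas
    under = np.logical_and(~printed_mask, target_mask).sum()  # unfilled target areas
    return -float(over + under)  # fewer errors -> higher reward
```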
However, the real world is messier than a simulation. In practice, conditions typically change due to slight variations or noise in the printing process. So the researchers created a numerical model that approximates noise from the 3D printer. They used this model to add noise to the simulation, which led to more realistic results.
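The article doesn't describe the noise model itself; a minimal sketch of the idea, assuming a simple Gaussian perturbation of the deposited track width, might look like this:

```python
import numpy as np

def noisy_deposition(commanded_width, rng, rel_std=0.05):
    """Perturb the simulated deposition so the simulator behaves more like the
    real printer. A plain Gaussian perturbation is shown purely as an
    illustration; the authors fit a numerical noise model to their hardware."""
    noise = rng.normal(loc=0.0, scale=rel_std * commanded_width)
    return max(commanded_width + noise, 0.0)

rng = np.random.default_rng(0)
simulated_width = noisy_deposition(0.4, rng)  # e.g. a 0.4 mm track, slightly off-nominal
```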
"The interesting thing we found was that, by implementing this noise model, we were able to transfer the control policy that was purely trained in simulation onto hardware without training with any physical experimentation," Foshey says. "We didn't need to do any fine-tuning on the actual equipment afterwards."
When they tested the controller, it printed objects more accurately than any other control method they evaluated. It performed especially well at infill printing, which is printing the interior of an object. Some other controllers deposited so much material that the printed object bulged up, but the researchers' controller adjusted the printing path so the object stayed level.
Their control policy can even learn how materials spread after being deposited and adjust parameters accordingly.
"We were also able to design control policies that could control for different types of materials on-the-fly. So if you had a manufacturing process out in the field and you wanted to change the material, you wouldn't have to revalidate the manufacturing process. You could just load the new material and the controller would automatically adjust," Foshey says.
Now that they have shown the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They'd also like to see how the approach can be modified for scenarios where there are multiple layers of material, or multiple materials being printed at once. In addition, their approach assumed each material has a fixed viscosity ("syrupiness"), but a future iteration could use AI to recognize and adjust for viscosity in real time.
Additional co-authors on this work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing Group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano in Switzerland; Szymon Rusinkiewicz, the David M. Siegel '83 Professor of computer science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.
The work was supported, in part, by the FWF Lise-Meitner program, a European Research Council starting grant, and the U.S. National Science Foundation.
Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Adam Zewe. Note: Content may be edited for style and length.
Journal Reference:
- Michal Piovarci, Michael Foshey, Jie Xu, Timothy Erps, Vahid Babaei, Piotr Didyk, Szymon Rusinkiewicz, Wojciech Matusik, Bernd Bickel. Closed-Loop Control of Direct Ink Writing via Reinforcement Learning. arXiv.org, Jan. 27, 2022 (submitted); DOI: 10.48550/arXiv.2201.11819