Researchers train a machine learning model to monitor and adjust the 3D printing process to correct errors in real time

Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex and expensive puzzle.

Often, a skilled operator must rely on manual trial and error (possibly making thousands of prints) to determine the ideal parameters that consistently print a new material effectively. These parameters include printing speed and how much material the printer deposits.

MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time.

They used simulations to teach a neural network how to adjust printing parameters to minimize error, then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.

This work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could make it easier for engineers to incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians adjust the printing process on the fly if material or environmental conditions change unexpectedly.

“This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” says senior author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have manufacturing machines that are more intelligent, they can adapt in real time to a changing workplace environment, to improve the yields or the accuracy of the system. You can squeeze more out of the machine.”

Lead co-authors are Mike Foshey, a mechanical engineer and project manager at CDFG, and Michal Piovarci, a postdoc at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former technical associate at CDFG. The research will be presented at the Association for Computing Machinery’s SIGGRAPH conference.

Picking parameters

Determining the ideal parameters of a digital manufacturing process can be one of the most expensive parts of that process because so much trial and error is required. And once a technician finds a combination that works well, those settings are ideal only for one specific situation. There is little data on how the material will behave in other environments, on different hardware, or if a new batch exhibits different properties.

Using a machine learning system also presents many challenges. First, the researchers had to measure what was happening on the printer in real time.

To do this, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system shines light at the material as it is deposited and, based on how much light passes through, calculates the material’s thickness.
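The article does not spell out how transmitted light maps to thickness, but a standard way to model light passing through a translucent material is the Beer-Lambert law. The sketch below illustrates that idea; the `attenuation` constant and the function name are hypothetical, not details from the paper.

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation=2.0):
    """Estimate material thickness from transmitted light using the
    Beer-Lambert law: I = I0 * exp(-k * t), so t = -ln(I / I0) / k.

    `attenuation` (k, per mm) is a hypothetical calibration constant
    that would be measured for each material.
    """
    ratio = np.clip(transmitted / incident, 1e-6, 1.0)  # avoid log(0)
    return -np.log(ratio) / attenuation

# Darker pixels (less transmitted light) imply a thicker deposit.
incident = np.full(4, 1.0)
transmitted = np.array([1.0, 0.8, 0.5, 0.2])
thickness = estimate_thickness(transmitted, incident)
```

In a real setup the camera image would give one transmitted-light reading per pixel, yielding a thickness map of the deposited bead.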

“You can think of the vision system as a set of eyes watching the process in real time,” Foshey says.

But developing a neural-network-based controller that understands this manufacturing process is data-intensive, and would require making millions of prints. So the researchers built a simulator instead.

Simulated success

To design their controller, they used a process known as reinforcement learning, in which the model learns through trial and error with a reward. The model was tasked with selecting printing parameters that would create a certain object in a simulated environment. After being shown the expected output, the model was rewarded when the parameters it selected minimized the error between its print and the expected outcome.

In this case, an “error” means the model either dispensed too much material, placing it in areas that should have been left open, or did not dispense enough, leaving open spots that should be filled in. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more and more accurate.
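The error definition above (too much material in open areas, too little in filled ones) can be sketched as a simple occupancy-grid comparison. This is an illustrative reconstruction, not the paper's actual reward; the grid representation and function names are assumptions.

```python
import numpy as np

def print_error(deposited, target):
    """Count cells where a simulated print disagrees with the target.

    `deposited` and `target` are 2D occupancy grids (1 = material).
    Both over-deposition (material where the target is empty) and
    under-deposition (gaps where the target is filled) are errors.
    """
    over = np.logical_and(deposited == 1, target == 0)
    under = np.logical_and(deposited == 0, target == 1)
    return int(over.sum() + under.sum())

def reward(deposited, target):
    # Reinforcement-learning reward: fewer errors -> higher reward.
    return -float(print_error(deposited, target))

target = np.array([[1, 1], [1, 0]])
perfect = target.copy()
sloppy = np.array([[1, 0], [1, 1]])  # one gap plus one blob of excess
```

An RL agent choosing print parameters would run many simulated prints and update its policy toward parameter choices that keep this reward close to zero.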

However, the real world is messier than a simulation. In practice, conditions typically change due to slight variations or noise in the printing process. So the researchers built a numerical model that approximates the noise of the 3D printer. They used this model to add noise to the simulation, which led to more realistic results.
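One common way to realize such a noise model is to perturb each simulated deposition with a small bias and random jitter, so the controller trains on the kind of variation real hardware produces. The sketch below illustrates the concept under those assumptions; the bias and jitter values are hypothetical, not the paper's measured parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_deposition(commanded_width, bias=0.02, jitter=0.05):
    """Hypothetical printer noise model: the deposited bead width
    drifts from the commanded value by a small systematic bias plus
    Gaussian jitter, approximating real-hardware variation."""
    noise = bias + rng.normal(0.0, jitter)
    return max(0.0, commanded_width + noise)

# Training on many noisy rollouts exposes the controller to the
# variation it will encounter on the physical printer.
widths = [noisy_deposition(0.4) for _ in range(1000)]
mean_width = sum(widths) / len(widths)
```

Because the policy only ever sees noisy depositions during training, it learns corrections that remain valid when the same noise appears on real hardware, which is what enables the zero-fine-tuning transfer described next.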

“The interesting thing we found was that, by implementing this noise model, we were able to transfer the control policy that was trained purely in simulation onto hardware without any physical experimentation,” Foshey says. “We didn’t need to do any fine-tuning on the actual equipment afterwards.”

When they tested the controller, it printed objects more accurately than any other control method they evaluated. It performed especially well at infill printing, which is printing the interior of an object. Some other controllers deposited so much material that the printed object bulged up, but the researchers’ controller adjusted the printing path so the object stayed level.

Their control policy can even learn how materials spread after being deposited and adjust parameters accordingly.

“We were also able to design control policies that could control for different types of materials on the fly. So if you had a manufacturing process in the field and you wanted to change the material, you wouldn’t have to revalidate the manufacturing process. You could just load the new material and the controller would automatically adjust,” Foshey says.

Now that they have shown the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They would also like to see how the approach can be modified for scenarios where there are multiple layers of material, or multiple materials being printed at once. In addition, their approach assumed that each material has a fixed viscosity (“syrupiness”), but a future iteration could use AI to recognize and adjust for viscosity in real time.

Other co-authors of this work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing Group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano in Switzerland; Szymon Rusinkiewicz, the David M. Siegel ’83 Professor of Computer Science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.

The work was supported, in part, by the FWF Lise-Meitner program, a European Research Council starting grant, and the U.S. National Science Foundation.
