
MIT invents 3D printer that watches itself making objects

A new type of 3D printer developed by MIT researchers uses computer vision to monitor itself, letting it create objects faster and from a wider range of materials than traditional printers.

This allows engineers to use materials they couldn’t use before, opening the door to more sophisticated and useful creations, such as a robotic gripper in the shape of a human hand controlled by flexible, reinforced “tendons,” the university said.

In typical inkjet 3D printers, nozzles deposit tiny droplets of resin onto a surface, which are then smoothed with a scraper or roller and cured with UV light. However, if the material hardens slowly, the roller can crush or smear it. This limits the types of materials such printers can work with.

However, MIT’s new contactless 3D printing system does not require mechanical components to smooth the resin used to mold objects, allowing it to work with materials that cure more slowly than the acrylates traditionally used in 3D printing.

Materials that cure more slowly often have better properties, such as increased elasticity, greater durability, and a longer lifespan.

“We have directly fabricated a wide range of complex high-resolution composite systems and robots: tendon-powered hands, pneumatically actuated walking manipulators, pumps that mimic a heart, and metamaterial structures,” wrote the scientists from MIT, MIT spinout Inkbit, and ETH Zurich in a paper describing their work.

According to the team, the new 3D printer can also print 660 times faster than comparable 3D inkjet printers.

Printing complex devices

The study builds on the MultiFab, an affordable multi-material 3D printer that the researchers first introduced in 2015. The MultiFab, equipped with thousands of nozzles, could deposit tiny droplets of resin that were then UV-cured, allowing high-resolution printing with up to 10 different materials simultaneously.

In their most recent project, the team focused on developing a non-contact printing technique to expand the range of materials used to create more complex devices. They invented a method called Vision-Controlled Jetting, which integrates four high-frame-rate cameras and a pair of lasers to continuously monitor the print surface. As the nozzles eject tiny droplets of resin, the cameras capture images of the freshly deposited material.

The system’s computer vision converts these images into a detailed depth map in less than a second. This map is compared to the CAD model of the item being manufactured, allowing precise adjustments to resin deposition to match the intended design. This automated process can fine-tune each of the printer’s 16,000 nozzles, providing exceptional control over the smallest details of the printed device.
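To make that loop concrete, here is a minimal sketch of the scan-compare-correct cycle. It is a toy illustration, not the team’s published algorithm: the function names, the proportional gain, and the one-height-value-per-nozzle depth map are all assumptions.

```python
import numpy as np

GAIN = 0.5  # proportional correction gain; illustrative value only

def correct_next_layer(measured: np.ndarray,
                       target: np.ndarray,
                       planned_volumes: np.ndarray) -> np.ndarray:
    """Adjust per-nozzle droplet volumes from a scanned depth map.

    measured        -- surface height under each nozzle (mm), from the cameras
    target          -- height the CAD slice calls for (mm)
    planned_volumes -- droplet volume each nozzle was about to jet
    """
    error = target - measured                  # > 0 where the print sits too low
    adjusted = planned_volumes * (1.0 + GAIN * error)
    return np.clip(adjusted, 0.0, None)        # volumes cannot go negative

# Toy depth map for three of the 16,000 nozzles: one low, one high, one on target.
measured = np.array([0.08, 0.12, 0.10])
target = np.full(3, 0.10)
print(correct_next_layer(measured, target, np.full(3, 1.0)))
```

Because this correction is computed from camera data rather than enforced by a scraper or roller, slower-curing resins never get touched, which is what makes the contactless approach possible.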

AI for printing

The MIT project is part of a growing effort to use machine learning for 3D printing. AI-generated designs can be immediately implemented in print, allowing for rapid prototyping and testing, said Nat Trask, a professor of engineering at the University of Pennsylvania, in an interview.
“For metamaterial design in particular, people are printing small, intricate, tiled patterns that combine to create a desirable mass mechanical response,” Trask added. “While people have been pursuing this for some time, the patterns and geometry are limited by the complexity that people can reason through designs. Using generative models, the same tools as DALL-E that generate images of cats playing basketball on the moon, people can explore more complex designs that target different material responses.”

In the old way of designing things, a human would spend days building a computer model of a design and then running simulations based on solving large systems of equations, Trask said. In recent years, machine learning (ML) has been able to replace these simulations with predictions that are 1,000 times faster.

“In the next few years, I expect to see machine learning tools that can predict the behavior of a part on the fly. This allows AI/ML to not only suggest new printing geometries, but also have a feedback loop that explores designs using online physics models,” said Trask.
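A toy sketch of both ideas together, the fast surrogate and the design feedback loop, might look like the following. The “physics” here is a stand-in analytic function, and scikit-learn’s random forest stands in for whatever ML model one would actually use; nothing below comes from Trask’s own work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def slow_simulation(designs: np.ndarray) -> np.ndarray:
    """Stand-in for solving a large equation system: maps two
    design parameters to a scalar response (e.g., stiffness)."""
    return np.sin(3 * designs[:, 0]) + designs[:, 1] ** 2

# Run the expensive simulator once to build training data...
X_train = rng.uniform(-1, 1, size=(500, 2))
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, slow_simulation(X_train))

# ...then close the loop: propose many candidate designs, screen them
# with the near-instant surrogate, and verify only the shortlist with
# the slow physics model.
candidates = rng.uniform(-1, 1, size=(5000, 2))
shortlist = candidates[np.argsort(surrogate.predict(candidates))[-10:]]
scores = slow_simulation(shortlist)
print("best design:", shortlist[scores.argmax()], "score:", scores.max())
```

The screen-then-verify pattern is the key design choice: the surrogate absorbs thousands of cheap queries so the expensive solver only runs on a handful of promising candidates.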

AI enables users to implement image-based process monitoring and control more effectively, said Ben Schrauwen, senior vice president and general manager of Oqton, a 3D printing company, in an interview. He said the discovery and development of new polymers and alloys can be accelerated through the use of AI models that understand molecular and atomic structures and interactions.

“For example, you could have ChatGPT-like interfaces to interact with large corpora of research literature and develop ideas,” he added. “AI is also used to automatically recognize and segment 3D images of human anatomy and automate the design of dental parts. Following this idea, AI can already automatically identify 3D models and suggest the optimal segmentation, support placement, orientation and nesting of parts for 3D printing.”

Oqton has developed AI-based software that allows dental labs to automatically prepare files for 3D printing instead of relying on manual preparation by operators and technicians. Automated file preparation can have a strong impact on the entire manufacturing process.
“We have seen dental labs save hours of manual work per day through AI-based automated support generation,” said Schrauwen. “The result is that labs can print more parts with the same number of machines. As more organizations and industries realize that 3D printing is so cost-effective and fast, we can expect more of them to adopt this technology.”

“The overall impact of AI on additive manufacturing (AM) workflows is that the process is more predictable, and technicians can send a job to print overnight and have confidence in the morning that it is ready for the next steps,” he continued.

AI also plays a key role in improving quality control procedures, said Schrauwen. One example is the ability to monitor all 3D printing jobs from a single, unified view.

“This makes it easy to know the status of all jobs in the process, track status changes, view video feeds from the build platform as the job progresses and keep an eye on live sensor readings,” he said. “Unlike traditional Manufacturing Execution Systems (MES), a next-generation AI-based solution can pull information from different machines and applications in a unified experience, identify problems and suggest solutions.”
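The unified view Schrauwen describes implies a normalization layer that maps each machine’s native status payload onto one common record. A minimal sketch of that shape, with entirely hypothetical vendors, field names, and payloads:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class JobStatus:
    machine: str
    job_id: str
    state: str       # e.g. "printing", "done", "error"
    progress: float  # 0.0 to 1.0

# Hypothetical per-vendor adapters: each translates one machine's
# native payload into the shared JobStatus record.
def from_vendor_a(raw: dict) -> JobStatus:
    return JobStatus("vendor-a", raw["job"], raw["phase"], raw["pct"] / 100)

def from_vendor_b(raw: dict) -> JobStatus:
    return JobStatus("vendor-b", raw["id"], raw["status"], raw["done_ratio"])

def unified_view(feeds: list[tuple[Callable, dict]]) -> list[JobStatus]:
    """Normalize every machine's feed so all jobs can be tracked,
    filtered, and flagged from one place."""
    return [adapter(raw) for adapter, raw in feeds]

for job in unified_view([
    (from_vendor_a, {"job": "crown-42", "phase": "printing", "pct": 65}),
    (from_vendor_b, {"id": "bridge-07", "status": "error", "done_ratio": 0.1}),
]):
    print(job)
```

Once every machine reports in the same schema, the problem-spotting and solution-suggesting layers Schrauwen mentions can be built once rather than per vendor.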