ChatGPT Designs Its First Robot - for Picking Tomatoes

First human-AI collaboration to build a robot to tackle humanity's challenges

Jun 10, 2023 / 3 Min
Photo: TU Delft

In an interesting exploration of AI capabilities, researchers at the Delft University of Technology (TU Delft) and EPFL have collaborated with OpenAI's viral chatbot ChatGPT to tackle a new challenge: designing a robot. 

Seeking to address one of humanity's future challenges, the researchers used their dialogue with ChatGPT to narrow the focus to food supply and arrived at the concept of a tomato-harvesting robot.

Helpful suggestions
The researchers, assistant professor Cosimo Della Santina, PhD student Francesco Stella from TU Delft, and Josie Hughes from EPFL, followed all of ChatGPT's design decisions. The input proved particularly valuable in the conceptual phase, according to Stella: "ChatGPT extends the designer's knowledge to other areas of expertise. For example, the chat robot taught us which crop would be most economically valuable to automate." 
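
The article does not publish the team's prompts, but the kind of conceptual-phase dialogue Stella describes can be reproduced with any chat-style LLM API. The sketch below is illustrative only: it assumes the openai Python package and a hypothetical prompt asking which crop is most economically valuable to automate, and it is not the researchers' actual workflow.

```python
# Illustrative sketch, not the TU Delft/EPFL workflow.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the environment;
# the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "You are helping an engineer design an agricultural harvesting robot."},
    {"role": "user",
     "content": "Which crop would be most economically valuable to automate harvesting for, and why?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```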

But ChatGPT also came up with useful suggestions during the implementation phase: "Make the gripper out of silicone or rubber to avoid crushing tomatoes" and "a Dynamixel motor is the best way to drive the robot." The result of this partnership between humans and AI is a robotic arm that can harvest tomatoes.
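
Neither suggestion comes with implementation details in the article, but driving a Dynamixel servo is a well-documented workflow through ROBOTIS's dynamixel_sdk Python package. The snippet below is a minimal sketch under assumed hardware: an X-series servo (Protocol 2.0, ID 1) on /dev/ttyUSB0, with a made-up goal position for closing a soft silicone gripper. It is not the researchers' code.

```python
# Minimal sketch, not the TU Delft/EPFL implementation.
# Assumes a ROBOTIS X-series Dynamixel (Protocol 2.0, ID 1) on /dev/ttyUSB0,
# driven via the official `dynamixel_sdk` package; the gripper position is made up.
from dynamixel_sdk import PortHandler, PacketHandler

ADDR_TORQUE_ENABLE = 64   # X-series control table: torque enable
ADDR_GOAL_POSITION = 116  # X-series control table: goal position
DXL_ID = 1                # assumed servo ID
GRIPPER_CLOSED = 2200     # hypothetical position that closes the silicone gripper gently

port = PortHandler("/dev/ttyUSB0")
packet = PacketHandler(2.0)

if port.openPort() and port.setBaudRate(57600):
    # Enable torque, then command the gripper to close around the tomato.
    packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
    packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, GRIPPER_CLOSED)
    port.closePort()
```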

ChatGPT as a researcher
The researchers found the collaborative design process to be positive and enriching. "However, we did find that our role as engineers shifted towards performing more technical tasks," says Stella. In Nature Machine Intelligence, the researchers explore the varying degrees of cooperation between humans and large language models (LLMs), of which ChatGPT is one. In the most extreme scenario, the AI provides all the input to the robot design, and the human blindly follows it. In this case, the LLM acts as the researcher and engineer, while the human acts as the manager in charge of specifying the design objectives.

Risk of misinformation
Such an extreme scenario is not yet possible with today's LLMs. And the question is whether it is desirable. "In fact, LLM output can be misleading if it is not verified or validated. AI bots are designed to generate the 'most probable' answer to a question, so there is a risk of misinformation and bias in the robotic field," Della Santina says. Working with LLMs also raises other important issues, such as plagiarism, traceability, and intellectual property.

"Ultimately, an open question for the future of our field is how LLMs can be used to assist robot developers without limiting the creativity and innovation needed for robotics to rise to the challenges of the 21st century," Stella concludes.

Sources: tudelft.nl, Tech Times.