
Will AI need a body to come close to human-like intelligence?

Does AI need a body to ever come close to achieving something like human intelligence? And if it does, what kind of body would it need?

The first robot I remember is Rosie from The Jetsons, soon followed by the urbane C-3PO and his faithful sidekick R2-D2 in The Empire Strikes Back. But my first disembodied AI was Joshua, the computer in WarGames that tried to start a nuclear war – until it learned about mutually assured destruction and chose to play chess instead.

At age seven, this changed me. Could a machine understand ethics? Emotion? Humanity? Did artificial intelligence need a body? These fascinations deepened as fictional non-human intelligences grew more complex: the android Bishop in Aliens, Data in Star Trek: TNG, and more recently Samantha in Her and Ava in Ex Machina.

But these aren’t just speculative questions anymore. Roboticists today are wrestling with whether artificial intelligence needs a body. And if so, what kind?

And then there’s the “how” of it all: if embodied intelligence is the path to true artificial general intelligence (AGI), could soft robots be the key to that next step?

The limits of disembodied AI

Recent papers are beginning to show the cracks in today’s most advanced (and notably disembodied) AI systems. A new study from Apple examined so-called “Large Reasoning Models” (LRMs) – language models that generate reasoning steps before answering. These systems, the paper notes, perform better than standard LLMs on many tasks, but fall apart when problems get too complex. Strikingly, they don’t just plateau – they collapse, even when given more than enough computing power.

Worse, they fail to reason consistently or algorithmically. Their “reasoning traces” – how they work through problems – lack internal logic. And the more complex the challenge, the less effort the models seem to expend. These systems, the authors conclude, don’t really “think” the way humans do.

“What we are building now are things that take in words and predict the next most likely word ... That’s very different from what you and I do,” Nick Frosst, a former Google researcher and co-founder of Cohere, told The New York Times.

Cognition is more than just computation

How did we get here? For much of the 20th century, artificial intelligence followed a model called GOFAI – “Good Old-Fashioned Artificial Intelligence” – which treated cognition as symbolic logic. Early AI researchers believed intelligence could be built by processing symbols, much like a computer executes code. Abstract, symbol-based thinking certainly doesn’t need a body to advance.

This idea began to fray when early robots built on symbolic AI failed to handle messy, real-world conditions. Researchers in psychology, neuroscience, and philosophy began to ask a different question, informed by studies of animal and even plant intelligence: organisms of every kind adapt, learn, and respond to complex environmental conditions, and they do it through physical interaction, not symbol manipulation.

In humans, the enteric nervous system that governs our gut is often called the “second brain” because it uses the same types of cells and chemicals as the brain to manage digestion. These are, incidentally, the same components an octopus tentacle uses to sense and react locally, within the limb.

All of this raises a question: what if the foundation of adaptable intelligence is that it’s distributed throughout an organism, rather than living only in a brain disconnected from the physical world?

This is the central idea of embodied cognition. Acting, sensing, and thinking are not separate – they are one process. As Rolf Pfeifer, the Director of the University of Zurich’s Artificial Intelligence Laboratory told EMBO Reports: “Brains have always developed in the context of a body that interacts with the world to survive. There is no algorithmic ether in which brains arise.”

Embodied intelligence: A different kind of thinking

So we may need smarter bodies to go along with the AI, and Cecilia Laschi, a pioneer in soft robotics, thinks smarter equals softer. After years working with rigid humanoid robots in Japan, she shifted her research to soft-bodied machines, inspired by the octopus – an animal that has no skeleton and whose limbs think for themselves.

“If you have a human robot walking you control all different movements,” she says in an interview with New Atlas. “If there is something different on the terrain, you have to reprogram a little bit.”

But animals don’t need to re-think and plan out their walking motions. “Our knee is compliant,” she explains. “We compensate for uneven ground mechanically, without using the brain.” This is embodied intelligence – the idea that some elements of cognition can be outsourced to the body.

Embodied intelligence has clear advantages from an engineering perspective: offloading perception, control, and decision-making to a robot’s physical structure reduces the computational demands on the main robot brain, producing machines that can function more effectively in unpredictable environments.
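To make that concrete, here is a minimal numerical sketch – my own illustration, not drawn from Laschi’s work – of a disturbance being absorbed mechanically: a body mass rides on a passive spring-damper “knee” and rolls over a sudden bump in the terrain. The mass, stiffness, damping, and bump height are all arbitrary.

```python
# A 1-D body mass on a passive spring-damper "knee" crossing a 5-cm bump.
# The deflection is corrected by the mechanics alone -- no controller runs.
# All parameters are illustrative, not taken from any real robot.

m, k, c = 70.0, 8000.0, 400.0   # body mass (kg), leg stiffness (N/m), damping (N*s/m)
dt, T = 0.001, 2.0              # integration step (s), simulated duration (s)

x, v = 0.0, 0.0                 # body-height deviation (m) and its velocity (m/s)
for step in range(int(T / dt)):
    t = step * dt
    terrain = 0.05 if t > 0.5 else 0.0       # bump appears at t = 0.5 s
    a = (-k * (x - terrain) - c * v) / m     # spring-damper reaction, F = ma
    v += a * dt                              # simple explicit Euler update
    x += v * dt
    if step % 200 == 0:
        print(f"t={t:4.2f}s  body deflection = {x * 100:6.2f} cm")
```

A rigid leg would transmit the bump straight to the body and leave software to sense and correct the jolt; here the spring-damper settles onto the new terrain height on its own, which is the compliant knee Laschi describes.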

In a May special issue of Science Robotics, Laschi defines it like this: “Motor control is not entirely managed by the computing system … motor behavior is partially shaped mechanically by external forces acting on the body.” Behavior is shaped by environment, and intelligence is learned through experience, not pre-programmed into software.

Seen this way, intelligence isn’t just about faster chips or bigger models – it’s about interaction. Key to advancing this intelligence is the field of soft robotics, which uses materials like silicone or special fabrics to enable more flexible robot bodies. These bodies are adaptive, fluid, and capable of real-time learning. A soft robotic arm, like an octopus tentacle, can grasp, explore, and respond without needing to calculate every move.

Flesh and feedback: How to make materials think for themselves

To make soft robotics work as well as a tentacle, though, roboticists have to move away from programming for every possibility and instead design new ways for machines to sense and react. To build machines with lifelike autonomy, researchers are turning to a new concept: autonomous physical intelligence (API).

Ximin He, Associate Professor of Materials Science and Engineering at UCLA, has pioneered work in this space by designing soft materials – like responsive gels and polymers – that don’t just react to stimuli but also regulate their own movement using built-in feedback.

“We’ve been working on creating more decision-making ability at the material level,” He tells New Atlas in an interview. “Materials that change shape in response to a stimulus can also ‘decide’ how to modulate that stimulus based on how they deform – correcting or adjusting their next motion.”

In 2018, He’s lab demonstrated this with a gel that could self-regulate its movement. Since then, they’ve shown that the same principle applies to a range of soft materials, including liquid crystal elastomers that work efficiently in air.

The key to API is nonlinear time-lag feedback. In traditional robots, a control system analyzes sensory data and tells the machine what to do. He’s approach embeds this logic directly in the materials themselves.

“In robotics, you need sensing and actuation – but also decision-making between them,” He explains. “That’s what we’re embedding physically, using feedback loops.”

He compares this to biological systems. Negative feedback, like our glucose regulation or a thermostat, works to correct overshoots. Positive feedback amplifies change. Nonlinear feedback combines both, allowing for controlled, rhythmic behaviors – like a pendulum or walking gait.
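In code, the difference between the two basic feedback signs comes down to a sign change on the gain. This toy loop – my illustration, not from the article – shows negative feedback shrinking a deviation back toward a set point while positive feedback amplifies it; the gains are arbitrary.

```python
def feedback_demo(x, gain, steps=6):
    # Repeatedly apply x <- x + gain * x and print the deviation from the set point (0).
    for _ in range(steps):
        x += gain * x
        print(f"{x:+.3f}", end="  ")
    print()

feedback_demo(1.0, -0.5)   # negative feedback: +0.500  +0.250  +0.125 ... corrected
feedback_demo(1.0, +0.5)   # positive feedback: +1.500  +2.250  +3.375 ... amplified
```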

“A lot of natural motion – like walking or swimming – relies on periodic, steady movement,” He says. “With nonlinear, time-lagged feedback, we can design soft robots to move forward, backward, forward again – without needing external control at each step.”
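Putting the two ingredients together, a saturating (nonlinear) feedback acting on a time-delayed copy of the state is enough to generate exactly that kind of self-sustained rhythm. The sketch below is a toy model of my own, not He’s actual material dynamics: the state’s rate of change depends on what the state was tau seconds earlier, and once the delay and gain are large enough, the system settles into a steady oscillation with no external controller driving it.

```python
import math
from collections import deque

dt, tau, gain = 0.01, 2.0, 5.0    # time step (s), feedback delay (s), feedback gain
delay_steps = int(tau / dt)
history = deque([0.0] * delay_steps, maxlen=delay_steps)  # buffer of past states

x = 0.1                           # small initial deformation
for step in range(4000):
    delayed = history[0]          # the state as it was tau seconds ago
    history.append(x)             # record the current state; the oldest drops out
    x += dt * -math.tanh(gain * delayed)  # saturating feedback on the delayed state
    if step % 400 == 0:
        print(f"t={step * dt:5.1f}s  x = {x:+.3f}")
```

With a short delay or a low gain the same loop simply settles back to rest, as in the negative-feedback case above; the rhythm only appears when the lag makes each correction arrive chronically late.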

This represents a major step forward from soft robots that rely on external stimuli to function, as He and her colleagues shared in a recent review paper. By integrating sensing, control, and actuation into the material itself, researchers like He are paving the way toward machines that don’t just react, but decide, adapt, and act on their own.

The future is soft (and smart)

Soft robotics is a nascent field, but it holds profound promise. Laschi points to surgical tools like endoscopes that could simultaneously examine and react to sensitive human tissue, or rehabilitation devices that flex and adapt to a patient’s needs, as early and obvious applications.

So, to move from AI to AGI, machines may need bodies – specifically, soft and adaptable ones. Most life on Earth, including humans, learns by moving, touching, failing, and adjusting. We know how to deal with an unpredictable, chaotic world – something today’s AIs still struggle with. We know what an apple is not because we read a definition, but because we’ve held one, tasted one, dropped it, bruised it, cut it, squeezed it, watched it rot.

That kind of knowledge – tacit, sensory, contextual – is hard to teach a model that’s only ever seen text or pixels. Direct connection to, and feedback from, the real world sidesteps the limitations of language that LLMs currently face, and offers AI the potential to build a different understanding of the world: a conception from its own point of view, not a human one, but something different. If a soft robot were given different kinds of sensory inputs (think infrared vision, low-frequency hearing, or the ability to smell cancer or other diseases), it could even develop an alternative (and possibly super-useful) understanding of life on Earth.

“If you want to develop something like human intelligence in a machine, the machine has to be able to acquire its own experiences,” Giulio Sandini, a Professor of Bioengineering at the University of Genoa, Italy, told EMBO Reports. You must let it learn from experience, as children do. And that likely requires a body.
