
On humanoid AI

INTRO: Does real AI need to think like a human? Should we strive to create humanoid AI? Is it more than prejudice to consider humanoid AI more advanced? To produce real AI, must we look further than logical thinking alone? Does it need its own body? Should that body be biological? Consciousness plus body: from knowing to understanding? Is our ultimate goal to create a superhuman species?


It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures… If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life … I understand the appeal of this view, because I co-founded SwiftKey…Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language… But… most of the time I’m reminded that we’re nowhere near achieving human-like AI.


Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment. Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols ... But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.
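To see why fixed symbols struggle with ambiguity, here is a minimal sketch (our illustration, not from the article): a toy rule table that assigns each word one meaning, and so has nothing to say once the same word carries a different sense in context.

```python
# Toy symbolic system: hand-written rules mapping symbols to fixed meanings.
# The names RULES and interpret() are hypothetical, for illustration only.

RULES = {
    "bank": "financial institution",
    "bat": "flying mammal",
}

def interpret(phrase):
    """Look up a phrase's meaning in the fixed symbol table."""
    return RULES.get(phrase, "unknown")

print(interpret("bank"))        # financial institution
# In "river bank" the same symbol means something else entirely,
# but the rigid rule system cannot tell the difference:
print(interpret("river bank"))  # unknown
```

Every ambiguous sense demands another hand-written rule, which is why this approach collapsed under the "myriad shades of interpretation" found in real language.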


In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data… i.e. ‘machine learning’… a bottom-up approach in which algorithms discern relationships by repeating tasks… Machine learning has produced many tremendous practical applications in recent years. … But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information…
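The bottom-up, statistical approach can be sketched in a few lines (our illustration, not SwiftKey's actual method): a tiny bigram model that "learns" next-word patterns purely by counting a corpus, with no hand-written rules at all.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus; a real system would train on vast text data.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which -- the pattern is discovered, not programmed.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # 'cat' -- seen twice after 'the', versus 'mat' once
```

Scaled up by many orders of magnitude, this counting-and-generalising strategy powers word prediction and much of modern machine learning, yet it still encodes no bodily stake in the world.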


… in 2005 the biologist James Shapiro at the University of Chicago … argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli … long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism…


I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights… The motivating drive of most AI algorithms is to infer patterns from vast sets of training data ... By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment… for an AI algorithm… as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.


What about warnings that "intelligent machines could end mankind"? … I believe we're still very far from needing to worry about anything approaching human intelligence, and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.


- By Ben Medlock -


From AEON. Selected & edited by SoCientists.
