As AI-powered smart assistants find their way into our lives, we seek reference points to help us deal with their unfamiliar presence.
Amazon’s Echo and Google Home are recent inventions, but for centuries we have toyed with the idea of artificial bodies and minds. So as we try to make sense of these new arrivals, we have a wealth of imagery to draw on: from the Golem of Jewish legend and Shelley’s Frankenstein, to Kubrick’s 2001: A Space Odyssey and HBO’s Westworld.
Our first encounters with AI create an opportunity to revisit our views on humanity and intelligence. We can ask: how is popular culture shaping our relationship with AI? Does it distort our expectations? And where does real AI differ most from fictional AI?
“I’m sorry, I don’t understand the question” – as Amazon’s Alexa states her inability to comprehend my simple request, her calm, bland, measured manner echoes the most famous AI of all – HAL.
“I’m sorry, Dave, I’m afraid I can’t do that” – with these words the smart assistant HAL 9000 pronounces its homicidal verdict on the spaceship’s crew. In doing so, HAL assumes a true sense of agency.
Alexa is nothing like HAL, of course. She’s nowhere near as intelligent, and not so menacing. When we compare Alexa to her fictional counterparts, she is disappointing. But what is it about her performance and character that falls short of our expectations? Is it the computing power, the analytical and problem-solving ability, or something else? Popular representations of AI can help us answer the question.
Sci-fi often portrays intelligent machines as lacking humanity and as inherently malevolent. Consider the parasitic machines of The Matrix.
But some of the most interesting recent depictions of AI opt for a Nietzschean mélange of the ‘all too human’ and the ‘Übermensch’ (superhuman). They are uncannily human in physique, speech and behaviour, yet possess superhuman abilities. The androids of Westworld and Ex Machina are cast in our own image, yet they are far more capable. They become a reflection of who we would like to be.
This anthropomorphism is central to fictional portrayals of AI and shapes how we perceive it. Westworld’s inward journey to consciousness, with suffering as its cornerstone, makes it easy for us to sympathise with the androids as they discover their humanity.
In Ex Machina, Ava’s ability to charm and manipulate Caleb is also rooted in her humanity. Her capacity to understand emotions, to have dreams and fears, matters more to the story than her data-processing abilities. Her escape, like the android takeover at the end of Westworld, is brutal and bloody. But both are endings that are undeniably human: they are political and moral acts of resurgence and revenge.
The disembodied Samantha in ‘Her’ is hyper-intelligent, able to hold a thousand simultaneous conversations. Yet it is her human qualities, not these superpowers, that define her relationship with Theodore. Her intelligence lies in her acute emotional perceptiveness, her ability to connect and to care.
In seeking the company of OSes, Theodore and others do not seek something new, least of all something artificial. They seek human emotion and companionship, empathy and support in a strange and lonely world.
The success of the human-bot relationship depends on the AIs’ ability to be human – the ‘superpowers’ are an added bonus.
Two Types of Intelligence: Computing and Human
These takes on AI are more philosophical than typical sci-fi: they betray a preoccupation with our own desires, dreams and fears, and are less concerned with AI’s capabilities. The story they tell is about our desire to animate AI with much more than the intelligence needed to play chess or analyse a dataset. They show we are looking for AI to be more human.
Without the emotional, moral and social facets of human intelligence, a smart assistant is less like Samantha and more like HAL. Our own research suggests that with voice-based assistants, communication becomes more intimate. The act of speaking anthropomorphises the AI. Speaking is something we have done for millennia. The moment an AI speaks and assumes a vocal form, it comes to life; as we hear it talk, we imbue it with personality.
The computing power and problem-solving abilities of current AI lag behind those of fictional AI. But this is not the cause of our disappointment with the AI we encounter in daily life.
We are like Caleb in ‘Ex Machina’, Theodore in ‘Her’ and William in ‘Westworld’, who long for their android love interests to be ‘real’. Shaped by their experiences, we have high expectations of AI. We expect better conversational, emotional and empathic capacities. We want AIs to be more ‘human’ – more humanity and intuition, not just more computing intelligence.
When we create AIs in our own image, we celebrate ourselves and our own, all-too-human vision of intelligence. As AI moves from fiction to fact, we trigger the AI effect: the magic-shattering moment when an AI’s ability to perform a task is no longer captivating. We come to see what the AI does as mere computation rather than a display of intelligence.
As long as AI fails to live up to what we imagined it to be, we will be disappointed. So we should stop looking to popular culture for insight into the technical capabilities of AI. Instead, we should see it as an opportunity to reflect on the very nature of intelligence, humanity and consciousness. A chance to confront questions about who we are and who we want to be.
// Anna Zavyalova