The way we feel about artificial intelligence is changing
Our love-hate relationship with AI is set to become even more conflicted. Right now, we love it when it helps us, improves our day and saves us time. We hate it when it doesn’t understand our commands or embarrasses us in public.
Over time, slowly, without us stopping to notice, artificial intelligence will transform our relationship with tech, whether that’s via our phones, smart speakers or appliances. Yet AI hasn’t yet earned the same strength of feeling as, say, our phones. The jolt as you pat your pocket and it isn’t there. The grief over a smashed screen.
But AI is coming. Amazon’s Alexa is breaking out from her Echo speaker prison and Siri is heading to our homes.
The future stages of how we feel towards AI will directly relate to how useful, intuitive and personalised AI can be. Of course, we will be interacting with AI via various devices. The next time you think that you hate your smartphone, or get pissed off with your smart thermostat, remember who you’re really in a relationship with.
Stage one: Fantasy & Fear
Our fictional visions for our relationship with AI – friendly, subservient, equal, overpowering, rebellious, authoritative – have been well documented, so we won’t bore you here. Cortana, Marvin, Rosie, Skynet, HAL, Data, the Cylons, TARS, the list goes on.
In the manner of a terrified sixteen-year-old running through all the potential fairy tales and crises before a first date, science fiction has presented us with so many different outcomes for how we will live alongside artificial intelligence that it’s difficult to reconcile them with the limited AI already serving us. Especially when it can be so damn frustrating.
Stage two: Frustration & Anger
Some of your early encounters with AI will no doubt have been through mobile, wearable and smart home tech that focuses on AI’s ability to learn and make predictions based on training data, otherwise known as machine learning. Google Assistant/Now on your Android phone, perhaps.
The thing about virtual assistants, health and fitness recommendations or the connected home is that it’s very easy to forget all the requests the AI got right the moment it gets the one you needed wrong. Hello, frustration and anger. At this point, some people abandon the feature; others persevere in the (intentionally or not) selfless act of making the system more intelligent the more they use it.
Stage three: Acceptance
Moov CEO Meng Li’s 2014 take on AI outsmarting our emotions is a great read about our desire to please our lifestyle-improving gadgets, the same way we would with a (good) personal trainer. When AI starts to make our life better (by helping us get fit, lose weight, feel healthy), that’s when we can either overcome the frustrations of imperfect systems or keep trying until we find one that works for us. The same can apply to the smart home as we learn what works, what doesn’t and what we can live with when it comes to the voice assistants in our smart speakers.
Stage four: Infatuation
Seriously. It’s clear that as long as AI struggles to perform beyond a limited set of interactions, there is no initial infatuation stage to be had. That’s unless you count mucking about in the first hour of owning a new gadget or downloading a new app. So the haze of infatuation, bordering on addiction (we’ll get to that in a minute), could very well come later.
One of the next examples of this could be chat bots, which, according to Mark Zuckerberg, will replace downloading apps. “I’ve never met anyone who likes calling businesses,” he said at Facebook’s F8 conference.
So instead of getting a little thrill every time you open the Uber or Deliveroo apps, the rush could come instead from chatting and interacting with an artificially intelligent bot via simple text messages inside Facebook Messenger.
If and when this next gen of chat bots works as advertised, they will be new and fresh and could even make smartwatches really, really useful.

It’s not just bots at Facebook, either. At the beginning of the year, Zuckerberg said it was his personal mission for 2016 to build himself a simple AI for his home, “kind of like Jarvis in Iron Man.”
Stage five: Trust
AI isn’t just becoming more adept at recognising conversations, it also has a nice line in identifying images. In The Ambient office, we use a Netatmo Welcome smart security camera – when its facial recognition algorithms pick up a face it doesn’t recognise and the saved faces (the Ambient team) aren’t in, Paul gets an alert and the camera starts recording.
We don’t just trust the hardware (the camera) itself; we trust the AI algorithms to know the difference between my face – which it sees and tags every day – and other brown-haired women with chubby cheeks. Every time it correctly identifies us – and, more importantly, correctly identifies the people who share an office with us as strangers – we trust it a little bit more.
This is the stage that assistants based on voice recognition and natural language processing need to reach: trust, on our part, that they will give us what we’re asking for or, eventually, what we need without asking.
A separate but no less interesting question is this: if AI is making decisions about which information to present to us, which recommendations to give us or which settings to change in our smart home, which decisions will we trust it enough to outsource? What time we exercise and what to eat afterwards? How hot to warm the house? Whether to respond to an email now or later? Would we trust an AI fitness coach to the extent that it could exaggerate, bend the truth or even lie to motivate us?
Then there’s the connected car, in which trusting AI with passenger safety could get more difficult for human drivers as more and more control is relinquished. Tesla’s Autopilot feature is an example of how AI might help us get to fully autonomous vehicles. But the truth is that all the car manufacturers are at it. Toyota has just opened a Research Institute at the University of Michigan to study AI, materials science and robotics. The primary focus will be autonomous/chauffeured driving.
“Although the industry, including Toyota, has made great strides in the last five years, much of what we have collectively accomplished has been easy, because most driving is easy,” said Dr. Gill Pratt, TRI’s chief executive, at the GPU Technology Conference in San Jose. “Where we need autonomy to help most is when the driving is difficult. It’s this hard part that TRI intends to address.”
Stage six: Reliance
One of the most promising AI applications on the verge of launching is Viv, an open Siri rival from Dag Kittlaus, who co-founded Siri and then sold it to Apple. Viv is being demoed at TechCrunch Disrupt NY in May and, judging by various hints, looks likely to launch to the public in 2017. It’s designed to add an intelligent interface across various platforms and pull in information from any app, site or individual that wishes to teach or supply Viv.
Viv will be a little different to the AI of Moov or Facebook Messenger. One of the goals of a virtual assistant or interface like this is total reliance on the part of its human users. Of the “can’t live without Viv” variety. The first, and often last, port of call. That means accuracy. Viv and Siri “know what they’ve been taught”, but it’s no coincidence that the interaction mimics the one we would have with a real-life assistant we might rely on. Human-ish names and increasingly conversational language don’t add much to the accuracy or usefulness of the AI; apart from saving time, the extras are emotional.
“Talking is seven times faster than typing so I knew this was going to be how we’re going to interact with our devices,” said Kittlaus, speaking about the pre-iPhone Siri at SXSW. “We knew we had to give this assistant a personality. We actually made Siri an alien but one that is familiar enough with our pop culture. Siri’s favourite colour is a kind of green but with more dimensions.”
Stage seven: Love & Fear
We started off with science fiction, so let’s return to it. Over the past few years, there have been some particularly well-realised visions of our future relationships with AI. One is that we could end up being so convinced by, and attracted to, our AI assistants (perhaps accessed via a practically invisible wireless earbud) that we fall in love with them. See Spike Jonze’s Her. The other is that we could trust – or is it underestimate? – an AI robot so completely that it eventually becomes dangerous, to us and others. See (amongst others) Alex Garland’s Ex Machina.
Neither one of those outcomes looks likely to happen in our near future when you consider that we can’t even get Siri to correctly recognise what restaurant we’re asking about.
Just as a postscript to the ‘fear’ scenario, remember Dag Kittlaus, the Viv CEO who co-founded Siri? After he sold Siri to Apple, he wrote an as-yet-unpublished dystopian novel about a Siri-like virtual assistant that gets out of control and becomes a threat to humanity. “I’d like to see some limits built into AI as a precaution,” he said on stage at SXSW, with a sheepish laugh.
Even scarier is Roko’s Basilisk, a thought experiment which suggests that a malevolent AI could come to power in the future and punish anyone who failed to help bring it into existence. In the experiment, that punishment could extend to humans not only in the future but in the past, i.e. our present (it’s complicated). Whole institutions are dedicated to making sure future AI is friendly, and Elon Musk has said AI is potentially more dangerous than nuclear weapons.
How AI feels about us could be much more important than how we feel about AI. In other words, it may be best to skip the frustration, anger and fear stage for your own good.
How do you feel about present and future AI? Let us know in the comments.