I guess by now we have all heard of the very controversial Google Duplex demonstration at Google I/O 2018, where a human-voice synthesized bot called several local businesses and interacted with humans who had no idea they were talking to a machine. Many of us were fascinated by the technological progress on display. A part of me was fascinated just like that. But to me the truly fascinating discussion is about ethics, specifically the AI ethics questions that come along with approaches like Duplex.
For now Duplex is “only” able to converse in closed domains, in conversational exchanges that are functional, with strict limits on what is going to be said. Google is calling Duplex an “experiment”. According to them it is explicitly not a finished product, and there is no guarantee that it will ever be widely available in this form, or widely available at all.
But the baseline discussion of the related issues and implications is very much worth having now. In fact it opens up a Pandora’s box of ethical and social challenges. And the “experiment” stage is likely going to change, and given recent experience, that could happen faster than we might think.
These are the main issues/questions I see:
- Is it okay for an AI system to purposefully deceive a human and camouflage its own machine-based origin? The voice styles Google used in the demonstration were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased. Duplex even uses ‘ums’ and ‘ahs’ — to make the “conversational experience more comfortable”.
- Does Google have an obligation to tell people that they are talking to a machine? During the demo they did not say a word about this fundamental topic. (Meanwhile they have issued a statement that they will do so: “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified.”)
- What is the point of artificially including ‘ums’ and ‘ahs’ (should a conversation still take place) when the voice assistant introduces itself with something like “Hello, you’re talking to a piece of software” (at which point the receiver is likely to hang up)? So how exactly should such systems tell someone that they are speaking to an AI? A really tricky question…
- I generally see a paradox here (and I am keen to learn how things will play out in reality): humans do not particularly like speaking to machines; they prefer to speak to other humans. So how will people react when a machine sounds very human on the one hand but reveals that it is a machine on the other?
- Does technology that mimics humans in such a way erode our trust in what we see and hear? What does this mean for our future where technology and AI-based systems reach deeper and deeper into our everyday life?
- Does technology like this increase the digital divide in our society? There are the privileged tech people who are “in the know” and are able to offload bothersome conversations to be carried out by a machine. And on the other hand there are those people (in this case, often low-paid service workers) who receive these calls and have to deal with the robots. (Although perhaps we end up, sooner or later, in a future where robot assistants of humans/citizens deal with robot assistants of businesses and municipalities directly.) And going a step further, is all this driving the overarching development in which humans become second-class citizens compared to the tools that are claimed to be here to help us?
- How many people does Google’s way of handling this (“Wow, our software is so convenient for you, and it achieves that by deceiving service personnel”) alienate on the way to responsibly increasing the reach of AI-based systems in our lives? There is a big risk that this approach clearly increases the share of people who fear that AI technologies are being developed without proper oversight or regulation.
- The fact that Google added the remark that such systems will reveal their true nature only after the initial uproar increases the worry that Google views ethics more as an after-the-fact consideration than as a key ingredient of design and introduction. How deep and nuanced is Google’s true appreciation of the ethical concerns at play around AI technologies?
These are the main implications I see:
- Artificial systems that do not reveal their true identity have the potential to increase human suspicion of technology and need to be handled with extreme care. Google says that it hopes a set of social norms will organically evolve that make it clear when the caller is an AI. Well, I sure do hope that we will put some more consideration into this as a society, including our regulatory bodies.
- We need to be extremely careful with the fraudulent and malicious potential that becomes possible. If AI-powered bots can freely pose as humans, the scope for mischief is incredible, ranging from pure hoaxes to automated scam calls (people might find themselves confronted with automated ‘liars’ on a massive scale) to the full-blown scenario of misinformation, propaganda, fake news, etc. Supercharged, that is. So providers need to be spot-on when it comes to spam and scam detection. The broader problem of the societal effects of misinformation and propaganda remains a bigger challenge that we already struggle with today.
- We will likely see a sharp aggravation at the point when we let these assistants speak with our own voices. Check out [https://lyrebird.ai/] to see for yourself where this tech stands. Will we have automated assistants call some of our (more distant) friends (or family?) to tick off this annoying need to say hi from time to time?
- We need to develop measures to sustain authenticity in a digital world full of AI-based assistants. Independent of whether calling AI software actually says that it is AI software, we need receiver-side mechanisms for authenticity verification.
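To make the idea of receiver-side verification a bit more concrete, here is a minimal sketch of one conceivable approach: a caller (human or bot) signs its self-declared identity with a secret shared via some trusted registry, and the receiving side checks the signature before trusting the claim. Everything here — the protocol, the registry, the function names — is a hypothetical illustration, not anything Google or anyone else has announced.

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would be provisioned by a
# trusted caller-identity registry, not hard-coded.
SECRET = b"registry-shared-secret"

def sign_caller_claim(call_id: str, caller_type: str, secret: bytes = SECRET) -> str:
    """Caller side: produce an HMAC-SHA256 tag over the self-declared
    caller type ("ai" or "human") bound to a specific call ID."""
    message = f"{call_id}|{caller_type}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_caller_claim(call_id: str, caller_type: str, tag: str,
                        secret: bytes = SECRET) -> bool:
    """Receiver side: check that the tag matches the declared caller type,
    using a constant-time comparison to avoid timing leaks."""
    expected = sign_caller_claim(call_id, caller_type, secret)
    return hmac.compare_digest(expected, tag)

tag = sign_caller_claim("call-42", "ai")
print(verify_caller_claim("call-42", "ai", tag))     # True
print(verify_caller_claim("call-42", "human", tag))  # False: forged claim fails
```

A real deployment would of course need public-key signatures rather than a shared secret, plus integration into telephony infrastructure — the point is only that the disclosure claim must be verifiable on the receiving end, not merely spoken.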
- No matter what security, control and authentication measures providers and developers of such services come up with, there is a risk that the algorithms running such services will end up in the wrong hands. Or that they find their way out of the big Silicon Valley players’ data centers and are run locally. Google’s Duplex system is (at least for now) computationally costly, i.e. Google cannot and should not just release it as software that anyone can run on their home computers. But that might change quicker than we think. It always has. Moore’s law…
- Human interaction is likely to suffer. If it is impossible to spot the difference between a human and a machine on the phone, there is a big risk of us becoming much more suspicious as a result. “Assistant” services built around anthropomorphizing features such as natural language increase the risk of misinterpretation. Think of Duplex’s ‘ums’ and ‘ahs’. They are not just weird (at least that is the association we have in May of 2018); they are misleading and deceptive, undermining people’s trust not only in the service but, more widely still, in other people generally. This way, AI-bot-driven phone calls might make us all a little bit more brusque, if not ruder. Those new “digital talkers” will start getting on our nerves, with a corresponding impact on our patience, respect and trust in conversation in general. Thus small talk and its underlying social value, whether during phone calls or in everyday situations on the street, will be challenged.
- If Google really is the front-runner and owner of the cutting edge of this technology, they might be in a superior position to leverage it to increase their domination in “organizing the world’s information and making it universally accessible”. What if they not only put it into our hands to delegate those pesky phone calls to local businesses, but also have their systems call local businesses themselves to obtain valuable meta-information that makes Google’s services even more relevant (and monetizable)?
Where do we go from here?
Well, all of this is not to say I don’t like the technological progress behind this. But I do believe this is a technology that requires very conscious handling when it comes to designing and using it. As autonomous systems get more powerful and capable of performing tasks we would normally expect a human to be doing, the ethical considerations around those systems scale just as fast as the potential applications.
We have not done a really good job of applying ethics to technology in the last 25 years of the internet. Let’s learn from this experience and the related shambles (social network filter bubbles and echo chambers, personal data exploitation, wild micro-targeted advertising chases, fake news and online propaganda, etc.). With AI assistants acting on our behalf, we are entering a whole new ball game. We’re really just getting started.
For those of you who enjoy memes – even on serious topics – there are of course people already mocking Duplex, and it’s hilarious. 🙂