On 8 May, Google revealed that one of its apps would soon be able to book appointments for you over the phone. Some saw this as an important step forward in artificial intelligence (AI). But others were not as impressed.
Praise and criticism
At its conference, Google demonstrated new technology called Google Duplex. It booked an appointment with a human on the receiving end of the call. The AI navigated questions and answers in a convincingly human way, including conversational cues such as “mm-hmm”.
Reactions were divided. Some outlets praised Google Duplex as a “giant leap” in AI progress. Some even said that, in certain respects, it passes the Turing Test, which evaluates whether a person can distinguish a machine from another human. But others argued the demonstration showed a “deceitful and unethical” technology.
The tech company quickly responded to this criticism. On 10 May, it announced that Google Duplex would identify itself. In a statement to The Verge, the company said:
We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.
But the fact that Google didn’t think to include this disclosure before its initial presentation suggests, to some, a long-standing ethical problem at the company and in the wider tech sector.
An article in TechCrunch pointed out that:
the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.
Google’s experiments do appear to have been designed to deceive … Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience…
On Twitter, tech critic Zeynep Tufekci also criticised the ethics of Google Duplex:
Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
— zeynep tufekci (@zeynep) May 9, 2018
Tufekci says that a clearly defined line between human and AI is essential for protecting people’s trust. And, crucially, she argues that this should have been a consideration from day one of Google Duplex’s design. She also points to the overt deceit of the demonstration, saying it showed tech companies such as Google “[have] not learned a thing”.
Google’s relationship with ethics has often been in question. The US military uses Google’s image recognition technology to analyse drone footage, something thousands of the company’s employees spoke out against earlier this year. And in November 2017, a lawsuit was launched claiming that Google unlawfully harvested data from iPhone users in 2011 and 2012 by bypassing default security settings.
The company’s ethics in the real world have also come under fire. The Google Bus – transport laid on for employees getting to and from work – has become a symbol of gentrification in California. These buses have been damaged and vandalised in protest at the tech company’s presence in the Bay Area driving up living costs. And in Berlin, Germany, a proposed Google site is facing fierce local resistance. Campaigners cite “gentrification, displacement and privatisation of public space” as their concerns.
The ethics of technological development is a deeply complex subject. What appears on the surface to be a move towards greater control and convenience for society can hide darker motives. In its 2015 book To Our Friends, anarchist group The Invisible Committee said:
With Google, what is concealed beneath the exterior of an innocent interface and a very effective search engine, is an explicitly political project… One never maps a territory that one doesn’t contemplate appropriating.
And this has played out in recent tech scandals such as Cambridge Analytica’s data harvesting.
Google’s history of questionable ethics, both on- and offline, shows that ethics have consistently taken a back seat during technological development. And now that these technologies are integral to the fabric of society, that’s a massive problem.
– Not all tech developers are ethics-phobic. The Institute of Electrical and Electronics Engineers’ ‘Ethically Aligned Design’ publication attempts to advance the discussion for ethical AI.
– Check out the Electronic Frontier Foundation for information and practical solutions for digital security.
– Watch the tech-critical documentary Stare Into the Lights, My Pretties for free.
Featured image via YouTube