What Are The Ethical Issues Of Google -- Or Anyone Else -- Conducting AI Research In China?
from the don't-be-evil,-but-AI-first? dept
AI is hot, and nowhere more so than in China:
The present global verve about artificial intelligence (AI) and machine learning technologies has resonated in China as much as anywhere on earth. With the State Council’s issuance of the "New Generation Artificial Intelligence Development Plan" on July 20 [2017], China's government set out an ambitious roadmap including targets through 2030. Meanwhile, in China's leading cities, flashy conferences on AI have become commonplace. It seems every mid-sized tech company wants to show off its self-driving car efforts, while numerous financial tech start-ups tout an AI-driven approach. Chatbot startups clog investors' date books, and Shanghai metro ads pitch AI-taught English language learning.
That's from a detailed analysis of China's new AI strategy document, produced by New America, which includes a full translation of the development plan. Part of AI's hotness is driven by all the usual Internet giants piling in with lots of money to attract the best researchers from around the world. One of the companies that is betting on AI in a big way is Google. Here's what Sundar Pichai wrote in his 2016 Founders' Letter:
Looking to the future, the next big step will be for the very concept of the "device" to fade away. Over time, the computer itself -- whatever its form factor -- will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.
Given that emphasis, and the rise of China as a hotbed of AI activity, the announcement in December last year that Google was opening an AI lab in China made a lot of sense:
This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.
Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China's strong engineering teams.
So far, so obvious. But an interesting article on the Macro Polo site points out that there's a problem with AI research in China. It flows from the continuing roll-out of intrusive surveillance technologies there, as Techdirt has discussed in numerous posts. The issue is this:
Many, though not all, of these new surveillance technologies are powered by AI. Recent advances in AI have given computers superhuman pattern-recognition skills: the ability to spot correlations within oceans of digital data, and make predictions based on those correlations. It's a highly versatile skill that can be put to use diagnosing diseases, driving cars, predicting consumer behavior, or recognizing the face of a dissident captured by a city's omnipresent surveillance cameras. The Chinese government is going for all of the above, making AI core to its mission of upgrading the economy, broadening access to public goods, and maintaining political control.
As the Macro Polo article notes, Google is unlikely to allow any of its AI products or technologies to be sold directly to the authorities for surveillance purposes. But there are plenty of other ways in which advances in AI produced at Google's new lab could end up making life for Chinese dissidents, and for ordinary citizens in Xinjiang and Tibet, much, much worse. For example, the fierce competition for AI experts is likely to see Google's Beijing engineers headhunted by local Chinese companies, where knowledge can and will flow unimpeded to government departments. Although arguably Chinese researchers elsewhere -- in the US or Europe, for example -- might also return home, taking their expertise with them, there's no doubt that the barriers to doing so are higher in that case.
So does that mean that Google is wrong to open up a lab in Beijing, when it could simply have expanded its existing AI teams elsewhere? Is this another step toward re-entering China after it shut down operations there in 2010 over the authorities' insistence that it should censor its search results -- which, to its credit, Google refused to do? "AI first" is all very well, but where does "Don't be evil" fit into that?
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: ai, artificial intelligence, china, research
Companies: google
Reader Comments
when you get big enough...
Nor is all this tech we love to play with... the smartphones, the ubiquitous videocams, the "smart" devices -- cars, electric meters, and so on. We see this with our impact on the environment, too.
Yes, Google is opening an AI lab in China... but how much would it change China's direction if it didn't? Wouldn't the Chinese just set one up themselves?
[ link to this | view in thread ]
My guess is, China has no qualms about making AI clones.
[ link to this | view in thread ]
It's the dangers associated with mass private data being collected and shared by governments and corporations without transparency or user consent.
The issue is no longer just about invasion of privacy - it's the targeted psychological and behavioural (and physical) manipulation that becomes possible when you amass huge quantities of identifiable person-level data on a societal scale.
In the worst cases these data could be used to identify and track (and get rid of) political enemies of tyrannical governments. It's not like we haven't seen that happen before.
But there are other serious dangers, such as optimised, targeted political advertising being used to "game" elections (there is plenty of evidence that this has occurred over the last few cycles) and optimally targeted marketing of worthless or harmful products to vulnerable and easily manipulated populations (e.g. what the big video game publishers are currently doing, or what junk food and booze companies do). We need to stop pretending and acknowledge that humans, and communities of humans, can be "hacked" using big data and machine learning.
China is worse than Google, and Google is worse than many others. But it is governments that are failing to protect our personal data from being collected by these actors.
Personal data needs to legally belong to, and be controlled by, the person it is about. Maybe then we could start seeing the incredible potential of AI being used for more beneficial purposes.
[ link to this | view in thread ]
Hmm
Google AI - China
https://en.wikipedia.org/wiki/Historic_recurrence
[ link to this | view in thread ]
Dave, I don't understand "ethics"
[ link to this | view in thread ]