Navi Pillay, the U.N.’s human rights chief, has strongly urged the U.S. and other countries to suspend use of AI-assisted software that would “bring race and sex discrimination into the domain of analytics,” citing its implications for privacy, freedom of expression and criminal investigations.
The research is being carried out by Naspers, a tech company based in South Africa, and overseen by the company’s AI division, DeepMind.
Dr. Pillay says the technology “allows machines to apply machine learning to speech, photos, text, video, and the movements of people” and could be used to “exploit gender stereotypes,” “relieve stress” and even “deter crime.” She said that because the use of AI has been “marketed in a way which is ‘passive’” and “without being ‘actively coercive,’” governments “may still overlook its very sinister intentions.”
Pillay argues the advances in AI are being made with little regard for human rights and privacy. “Given the very powerful and fundamentally unethical potential of this technology, we should all be very concerned about the irreversible and serious adverse effects it could have on human rights and on our relationship with other human beings,” she said.
But she notes that infringement of privacy and freedom of expression is already under way in developing countries including India, Vietnam, Tanzania, Zambia, and Argentina, all of which are bound by the same international commitments to freedom of expression and privacy as the U.S.
She says AI-assisted analytics have the potential to “eliminate the very sensitive categories of the human rights to privacy and to freedom of expression and information, through computational operations that could be labeled as ‘bias prevention,’ ‘morality rehabilitation,’ ‘health screening,’ ‘reduction of stigma’ and other descriptive words or more general advertising terminology.”
Pillay argues technology, “which is able to analyze and modify my words – even my speech, body gestures and even gestures in photographs in ‘real time’ – is a significant encroachment on my right to freedom of expression and of privacy.”
As a result, she has asked the U.S. and other states developing AI for their immigration, police and criminal investigations to withhold use of the technology. She urges the U.S. Department of Homeland Security to suspend the use of AI to identify immigrants in the United States and to exclude individual demographic data from its automated systems.
An official at the CIA, who asked not to be named, says the government and private firms are trying to harness AI. “It’s not the same as ‘robot deportation,’” says the official. “It’s about finding a way to identify and remove people without singling them out, for example by a particularly menacing face or language.”
In a similar vein, Microsoft President Brad Smith recently acknowledged that the company had inadvertently used AI to identify American citizens as foreigners, even though they had never been accused of wrongdoing or been the subject of a U.S. government subpoena.
Naspers’s international corporate affairs account on Twitter posted that the company has stopped using the technology for purposes unrelated to data security and will inform users when its algorithms are developed and become commercially available.
“This technology will continue to be used, where appropriate, in AI communities, academia, development partners and public-sector organizations, in order to find the best solutions to the problems of discrimination and in order to enhance human rights protection and privacy for us all,” said Pillay.