U.N. human rights chief calls for a moratorium on AI use

GENEVA — The U.N. Human Rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications that don't comply with international human rights law.

Applications that should be banned include government social scoring systems that judge people based on their behavior, as well as certain AI-based tools that categorize people into groups by ethnicity or gender.

AI-based technologies can be a force for good, but they can also have harmful, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights, Bachelet said in a statement.

Her comments came with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people's lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

"It is not about not having AI," Peggy Hicks, the rights office's director of thematic engagement, told journalists as she presented the report in Geneva. "It's about recognizing that if AI is going to be used in these very critical human rights function areas, it's got to be done the right way. And yet we simply haven't put in place a framework to ensure that happens."

Bachelet didn't call for an outright ban on facial recognition technology, but said governments should halt the scanning of people's features in real time until they can show the technology is accurate, won't discriminate and meets certain privacy and data protection standards.

While countries weren't mentioned by name in the report, China is among the countries that have developed facial recognition technology, particularly for surveillance in the western region of Xinjiang, where many of its minority Uyghurs live. The key authors of the report said naming specific countries wasn't part of their mandate and doing so could even be counterproductive.

In the Chinese context, as in other contexts, Hicks said she is concerned about transparency and about discriminatory applications directed at particular communities.

She cited several U.S. and Australian court cases where artificial intelligence had been wrongly applied. The report also voices wariness about tools that try to deduce people's emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

Use of emotion recognition systems by public authorities, for example to scrutinize individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights such as the rights to privacy, liberty and a fair trial, the report says.

The report's recommendations echo the thoughts of many political leaders in Western democracies, who hope to tap AI's economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

European Union regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by EU officials in 2021 would restrict some uses of AI, such as real-time scanning of facial features, and ban others that could threaten people's safety or rights.

President Joe Biden's administration has expressed similar concerns, though it hasn't yet outlined a detailed approach to addressing them. A newly formed group called the Trade and Technology Council, jointly led by European and American officials, has sought to collaborate on developing shared rules for AI and other tech policy.

Efforts to limit the riskiest uses of AI have been backed by Microsoft and other U.S. tech giants hoping to guide the rules affecting the technology. Microsoft has worked with and provided funding to the U.N. rights office to help improve its use of technology, but Hicks said funding for the report came through the office's regular budget.

Western nations have been at the forefront of expressing concerns about the discriminatory use of AI.

"If you think about the ways in which AI might be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary," U.S. Commerce Secretary Gina Raimondo said during a virtual conference in June. "We have to make sure we don't let that happen."

She was speaking with Margrethe Vestager, the European Commission's executive vice president for the digital age, who suggested some artificial intelligence uses should be off-limits completely in "democracies like ours." She cited social scoring, which can close off someone's privileges in society, and the broad, blanket use of biometric identification in public space.