UN urges moratorium on use of AI threatening human rights

15 September 2021, 15:04

UN High Commissioner for Human Rights Michelle Bachelet. Picture: PA

Applications that should be prohibited included government ‘social scoring’ systems that judge people based on their behaviour.

The UN is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the UN High Commissioner for Human Rights, also said that countries should expressly ban AI applications that did not comply with international human rights law.

Applications that should be prohibited included government “social scoring” systems that judge people based on their behaviour and certain AI-based tools that categorise people into clusters by ethnicity or gender for example.

AI-based technologies could be a force for good but they could also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Ms Bachelet said in a statement.

Her comments came with a new UN report examining how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

She did not call for an outright ban on facial recognition technology, but said governments should halt the real-time scanning of people’s features until they can show the technology is accurate, does not discriminate and meets certain privacy and data protection standards.

While the report did not name countries, China in particular has rolled out facial recognition technology, notably as part of surveillance in the western region of Xinjiang, where many of its Uighur minority live.

The report also voiced concern about tools that try to deduce people’s emotional and mental states by analysing their facial expressions or body movements, saying such technology was susceptible to bias and misinterpretation and lacked a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report said.

The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

Microsoft and other US tech giants are backing efforts to set limits on the riskiest uses (Niall Carson/PA)

European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.

US president Joe Biden’s administration has voiced similar concerns about such applications, although it has not yet outlined a detailed approach to curtailing them.

A newly formed group called the Trade and Technology Council, jointly led by American and European officials, has sought to collaborate on developing shared rules for AI and other tech policy.

Efforts to set limits on the riskiest uses have been backed by Microsoft and other US tech giants who hope to guide the rules affecting the technology they have helped to build.

By Press Association
