First international guideline on AI safety published by UK standards body
16 January 2024, 11:14
The British Standards Institution has produced advice on how organisations can develop safe and responsible artificial intelligence products.
A first-of-its-kind international standard on how to safely manage artificial intelligence (AI) has been published by the UK’s national standards body.
The guidance sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards.
It has been published by the British Standards Institution (BSI) and offers direction on how businesses can responsibly develop and deploy AI tools both internally and externally.
It comes amid ongoing debate about the need to regulate the fast-moving technology, which has become increasingly prominent over the last year thanks to the public release of generative AI tools such as ChatGPT.
The UK held the first global AI Safety Summit last November, where world leaders and major tech firms met to discuss the safe and responsible development of AI, as well as the potential long-term threats the technology could pose.
Those threats included AI being used to create malware for cyber attacks and, if humans were to lose control of the technology, even posing an existential risk to humanity.
Susan Taylor Martin, chief executive of BSI, said of the new international standard: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.
“The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology which, in turn, offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world.
“BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”
The guidance includes requirements to create context-based risk assessments, as well as additional controls for both internal and external AI products and services.
Scott Steedman, director general for standards at BSI, said: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework.
“While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them.
“In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.
“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI.
“Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy.
“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”