The UK government has signed the first international treaty on artificial intelligence, in a move aimed at preventing misuse of the technology such as the spread of misinformation or the use of biased data to make decisions.
Under the legally binding agreement, states must implement safeguards against any threats AI poses to human rights, democracy and the rule of law. The treaty, called the Framework Convention on Artificial Intelligence, was drawn up by the Council of Europe, an international human rights organisation, and was signed on Thursday by the EU, the UK, the US and Israel.
Justice Secretary Shabana Mahmood said AI had the capacity to “radically improve” public services and “accelerate” economic growth, but must be adopted without compromising basic human rights.
“This convention is an important step towards ensuring that these new technologies can be harnessed without eroding our oldest values, such as human rights and the rule of law,” she said.
Below is a summary of the convention and its impact on the use of AI.
What is the purpose of the convention?
According to the Council of Europe, its aim is to “fill any legal loopholes that may result from rapid technological advances.” Recent advances in AI (the term for computer systems that can perform tasks typically associated with intelligent beings, such as learning and problem-solving) have triggered a regulatory scramble around the world to mitigate the potential flaws of the technology.
This means there is a patchwork of regulations and agreements covering the technology, from the EU AI Act to last year’s Bletchley declaration at the inaugural global AI safety summit, and a voluntary testing regime signed by a number of countries and companies at the same meeting. Thursday’s agreement is an attempt to create a global framework.
The treaty states that AI systems must comply with a set of principles, including: protection of personal data, non-discrimination, safe development and human dignity. As a result, governments are expected to introduce safeguards, such as curbing misinformation generated by AI and preventing systems from being trained with biased data, which could lead to erroneous decisions in a variety of situations, such as applications for employment or benefits.
Who is covered by the treaty?
It covers the use of AI by public authorities and the private sector. Any company or body using relevant AI systems must assess their potential impact on human rights, democracy and the rule of law, and make that information publicly available. People should be able to challenge decisions made by AI systems and lodge complaints with authorities. Users of AI systems should also be aware that they are dealing with an AI and not a human being.
How will it be implemented in the UK?
The UK now needs to check whether the treaty’s various provisions are already covered by existing legislation, such as the Human Rights Act, which incorporates the European Convention on Human Rights, and other human rights laws. The government is preparing a consultation on a new draft artificial intelligence bill.
“Once the treaty is ratified and enters into force in the UK, existing laws and measures will be enhanced,” the government said.
As for the imposition of sanctions, the convention provides that authorities can prohibit certain uses of AI. For example, the EU AI Act prohibits systems that create facial recognition databases by scraping images from CCTV footage or the internet. It also prohibits systems that score people based on their social behaviour.