A group of policy experts convened by the EU recommended prohibiting the use of AI for mass surveillance and the mass 'scoring of individuals' – a practice that may involve gathering varied information about citizens, everything from criminal records to their behavior on social media, and then using it to assess their moral or ethical integrity.
The recommendations are part of the EU's ongoing efforts to establish itself as a leader in so-called "ethical AI." Earlier this year, it issued its first guidelines on the subject, which stated that AI deployed in the EU should be trustworthy and "people-oriented."
The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods for monitoring the impact of AI. However, the document is currently only a set of recommendations, not a blueprint for legislation.
Notably, the suggestions that the EU should prohibit AI-enabled mass scoring and restrict mass surveillance are among the report's relatively few concrete recommendations. (More often, the authors simply suggest that further research is needed in one area or another.)
The fear of AI mass scoring stems largely from reports about China's emerging social credit system. This program is often presented as a dystopian tool that gives the Chinese government enormous control over citizens' behavior, enabling it to issue punishments (such as banning someone from traveling on high-speed trains) in response to ideological violations (such as criticizing the Communist party on social media).
However, more recent, nuanced reporting suggests that this system is less Orwellian than it seems. It is split into dozens of pilot programs, most of which focus on eliminating everyday corruption in Chinese society rather than punishing potential thoughtcrime.
Experts have also noted that similar surveillance and punishment systems already exist in the West, run not by governments but by private companies. Given this context, it is unclear what an EU-wide ban on "mass scoring" would actually cover. Would it also apply to the activities of insurance companies, creditors, or social media platforms, for example?
Elsewhere in the report, the EU's experts suggest that citizens should not be "subject to unjustified personal, physical or mental tracking or identification" with the help of AI. This could include using AI to identify emotions in someone's voice or to track their facial expressions, they suggest. But again, these are methods companies already use, for tasks such as tracking employee productivity. Should such activity be prohibited in the EU?
Uncertainty about the scope of the report's recommendations is matched by criticism that such policy documents are toothless.
Fanny Hidvegi, a member of the expert group that wrote the report and a policy analyst at the non-profit Access Now, said the document was too vague and lacked "clarity about safeguards, red lines and enforcement mechanisms." Other stakeholders have criticized the EU's process as being driven by corporate interests. Philosopher Thomas Metzinger, another member of the AI expert group, has pointed out how early "red lines" on how AI should not be used were watered down to mere 'critical concerns.'
So while the EU's experts may advise it to ban AI-powered mass surveillance and scoring, there is no guarantee that legislation will follow to prevent these harms.