EU outlines regulation prohibiting certain uses of AI systems, but leaves door open for abuse
The European Union has published a new framework to regulate the use of artificial intelligence across the bloc’s 27 member states... The regulations cover a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrive at a time when countries around the world are grappling with the ethical ramifications of artificial intelligence. Like the EU’s data privacy law, the GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global annual revenue, though such maximum penalties are rarely imposed in practice.
... “It is a landmark proposal of this Commission. It’s our first ever legal framework on artificial intelligence,” said European Commissioner Margrethe Vestager during a press conference. “Today we aim to make Europe world-class in the development of secure, trustworthy, and human-centered artificial intelligence. And, of course, the use of it.”
Civil rights groups are wary of some aspects of the proposal. One of its most significant components is a ban on four specific AI use cases throughout the EU. These bans are intended to protect citizens from applications that infringe on their rights, but critics say some prohibitions are too vaguely worded to actually prevent harm.
One such prohibition is a ban on the use of real-time “remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,” which would include facial recognition. The regulation adds, though, that there are numerous exceptions to this prohibition, including letting police use such systems to find the “perpetrator or suspect” of any criminal act that carries a minimum three-year sentence...
Sarah Chander, a senior policy advisor for digital rights group EDRi... said the proposal only offered “a veneer of fundamental rights protection.” Chander told The Verge that banning mass biometric surveillance outright was “the only way to safeguard democracy.”
... [A]ll AI systems classified as high-risk will also have to be indexed in a new EU-wide database. Daniel Leufer, a Europe policy analyst at Access Now, told The Verge that this was an unexpected and welcome addition to the proposal.
“It’s a really good measure, because one of the issues we have is just not knowing when a system is in use,” said Leufer. “For example, with Clearview AI we rely on investigative journalism and leaks to find out if anyone’s using it.” A database would provide “basic transparency about what systems are in use... enabling us to do our job.”