EU regulators pass the planet's first sweeping AI regulations
The European Parliament has approved sweeping legislation to regulate artificial intelligence, nearly three years after the draft rules were first proposed. Officials reached an agreement on AI development in December. On Wednesday, members of the parliament approved the AI Act with 523 votes in favor, 46 against and 49 abstentions.
The EU says the regulations seek to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field." The act defines obligations for AI applications based on potential risks and impact.
The legislation has not become law yet. It's still subject to lawyer-linguist checks, and the European Council needs to formally endorse it. But the AI Act is likely to come into force before the end of the legislature, ahead of the next parliamentary election in early June.
Most of the provisions will take effect 24 months after the AI Act becomes law, but bans on prohibited applications will apply after six months. The EU is banning practices that it believes will threaten citizens' rights. "Biometric categorization systems based on sensitive characteristics" will be outlawed, as will the "untargeted scraping" of images of faces from CCTV footage and the web to create facial recognition databases. Clearview AI's activity would fall under that category.
Other applications that will be banned include social scoring; emotion recognition in schools and workplaces; and "AI that manipulates human behavior or exploits people’s vulnerabilities." Some aspects of predictive policing will be prohibited too, namely when it's based solely on assessing someone's characteristics (such as inferring their sexual orientation or political opinions) or on profiling them. Although the AI Act by and large bans law enforcement's use of biometric identification systems, such use will be allowed in certain circumstances with prior authorization, such as to help find a missing person or prevent a terrorist attack.
Applications that are deemed high-risk — including the use of AI in law enforcement and healthcare — are subject to certain conditions. They must not discriminate, and they need to abide by privacy rules. Developers also have to show that the systems are transparent, safe and explainable to users. As for AI systems that the EU deems low-risk (like spam filters), developers still have to inform users that they're interacting with AI-generated content.
The law also sets out rules for generative AI and manipulated media. Deepfakes and any other AI-generated images, videos and audio will need to be clearly labeled. AI models will have to respect copyright laws as well. "Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research," the text of the AI Act reads. "Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works." However, AI models built purely for research, development and prototyping are exempt.
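The Act doesn't mandate any single mechanism for reserving those rights, but one machine-readable convention is the W3C community group's TDM Reservation Protocol, which can signal a reservation via a tdm-reservation HTTP response header. The sketch below is a minimal illustration assuming that convention; the example.com URL is hypothetical, and a real crawler would need to handle the protocol's other signaling channels too.

```python
# Before mining a page, check whether the publisher has signaled a
# text-and-data-mining rights reservation. This assumes the W3C community
# TDM Reservation Protocol's "tdm-reservation" response header; the AI Act
# itself does not prescribe any particular opt-out mechanism.
import urllib.request

def tdm_rights_reserved(url: str) -> bool:
    """Return True if the response carries a tdm-reservation: 1 header."""
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.headers.get("tdm-reservation", "0").strip() == "1"

if tdm_rights_reserved("https://example.com/article"):  # hypothetical URL
    print("Rights reserved: obtain authorization before text and data mining.")
else:
    print("No header-level reservation (other opt-out signals may still apply).")
```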
The most powerful general-purpose and generative AI models (those trained using a total computing power of more than 10^25 FLOPs) are deemed to have systemic risks under the rules. The threshold may be adjusted over time, but OpenAI's GPT-4 and DeepMind's Gemini are believed to fall into this category.
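To get a feel for the scale of that threshold, a common back-of-the-envelope estimate puts the training compute of a dense transformer at roughly 6 × parameters × training tokens. The Python sketch below uses that heuristic — an illustrative assumption, since the Act counts total training compute rather than prescribing any formula — to check a hypothetical model against the 10^25 FLOP line.

```python
# Rough check of whether a training run crosses the AI Act's 10^25 FLOP
# systemic-risk threshold, using the common heuristic that training
# compute ~= 6 * parameters * training tokens (an approximation, not
# anything defined in the Act).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens

# Example: a hypothetical 1-trillion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 10e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# 6.00e+25 FLOPs -> systemic risk: True
```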
The providers of such models will have to assess and mitigate risks, report serious incidents, provide details of their systems' energy consumption, ensure they meet cybersecurity standards and carry out state-of-the-art tests and model evaluations.
As with other EU regulations targeting tech, the penalties for violating the AI Act's provisions can be steep. Companies that break the rules will be subject to fines of up to €35 million ($51.6 million) or up to seven percent of their global annual turnover, whichever is higher.
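The "whichever is higher" rule amounts to taking the maximum of the two amounts. A minimal sketch, with a hypothetical turnover figure for illustration:

```python
# The AI Act's top fine tier: the greater of a fixed cap or a share of
# worldwide annual turnover. The turnover figure below is hypothetical.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # seven percent

def max_fine(annual_turnover_eur: float) -> float:
    """Maximum fine: whichever of the two amounts is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# A company with €2 billion in worldwide annual turnover:
print(f"€{max_fine(2_000_000_000):,.0f}")  # €140,000,000 — 7% exceeds the cap
```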
The AI Act applies to any model operating in the EU, so US-based AI providers will need to abide by its rules, at least in Europe. Sam Altman, CEO of OpenAI, suggested last May that his company might pull out of Europe were the AI Act to become law, but later said the company had no plans to do so.
To enforce the law, each member country will create its own AI watchdog, and the European Commission will set up an AI Office. The office will develop methods to evaluate models and monitor risks in general-purpose models. Providers of general-purpose models that are deemed to carry systemic risks will be asked to work with the office to draw up codes of conduct.

This article originally appeared on Engadget at https://www.engadget.com/eu-regulators-pass-the-planets-first-sweeping-ai-regulations-190654561.html?src=rss