The EU has become the first jurisdiction in the world to pass legislation regulating artificial intelligence, but some say it does not go far enough, while others argue that "additional constraints" could harm companies.
Since the launch of ChatGPT, European policymakers have rushed to develop rules and warnings for technology companies, and this week marked a monumental milestone in establishing the EU's artificial intelligence (AI) rules.
On Wednesday, the European Parliament approved the Artificial Intelligence Act, which takes a risk-based approach and requires companies to ensure their products are legally compliant before releasing them to the public.
The next day, the European Commission asked Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube and X to detail how they are limiting the risks of generative AI.
The EU's main concerns include AI hallucinations (when models make mistakes and fabricate information), the viral spread of deepfakes, and automated AI manipulation that could mislead voters in elections. But the technology community has its own grievances about the legislation, and some researchers say it does not go far enough.
Tech monopolies
Brussels deserves "real praise" for being the first jurisdiction in the world to pass regulation mitigating many of the risks of AI, but the final deal has some problems, said Max von Thun, Europe Director at the Open Markets Institute.
He told Euronews Next that there were “significant loopholes for public institutions” and “relatively weak regulations for the largest foundation models that cause the most harm”.
A foundation model is a machine learning model trained on broad data that can be adapted to perform a wide range of tasks, such as writing poetry. ChatGPT is built on one such model.
But von Thun's biggest concern is technology monopolies.
"The AI Act is unable to deal with the biggest threat currently posed by AI: its role in reinforcing and entrenching the extreme power that a few dominant technology companies already have over our personal lives, economies, and democracies," he said.
Similarly, he said the European Commission should be wary of monopolistic abuses in the AI ecosystem.
"The EU needs to understand that the scale of the risks posed by AI is closely related to the size and power of the dominant companies developing and deploying these technologies. It cannot successfully deal with the former without tackling the latter," von Thun said.
Last month, the threat of AI monopolies was brought into the spotlight when French startup Mistral AI was revealed to be partnering with Microsoft.
It came as a shock to some in the EU, as France had sought concessions in the AI Act for open-source companies like Mistral.
“Historic moment”
However, some startups welcomed the clarity the new regulations bring.
Alex Combessie, co-founder and CEO of French open-source AI company Giskard, said: "The final adoption of the EU AI Act by the European Parliament is both a historic moment and a relief."
He told Euronews Next: "While the law imposes additional constraints and rules on developers of high-risk AI systems and foundation models deemed to pose 'systemic risks', we are confident that these checks and balances can be implemented effectively."
“This historic moment paves the way for a future where AI is used responsibly to foster trust and keep everyone safe,” he said.
The law differentiates the risks posed by foundation models based on the computing power used to train them: AI models trained above a certain compute threshold face stricter regulation.
This classification is seen as a starting point and, like the law's other definitions, may be revised by the Commission over time.
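The tiering logic described above can be sketched in a few lines. This is an illustrative simplification, not the regulation's actual wording: the 10^25 floating-point-operations threshold reflects the figure cited in the Act's final text for presuming "systemic risk" in general-purpose AI models, while the function and tier names below are hypothetical.

```python
# Illustrative sketch of the AI Act's compute-based tiering.
# Assumption: a single cumulative-training-compute threshold (10^25 FLOPs,
# as cited in the final text) separates the stricter regulatory tier.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def classify_model(training_flops: float) -> str:
    """Return the regulatory tier implied by training compute alone."""
    if training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Stricter obligations: risk assessments, incident reporting, etc.
        return "general-purpose AI with systemic risk"
    # Baseline tier: transparency and documentation obligations.
    return "general-purpose AI"

print(classify_model(5e25))  # a frontier-scale training run
print(classify_model(1e23))  # a smaller model
```

In practice the Commission can adjust this threshold or designate models on other grounds, which is why the classification is described as a starting point.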
“Public goods”
However, not everyone agrees with this classification.
"From my point of view, AI systems used in the information space should be classified as high-risk and subject to stricter rules, which is clearly not the case in the adopted EU AI Act," said Katharina Tügel, policy manager at the Forum on Information and Democracy.
"The Commission, which has the power to amend the use cases of high-risk systems, could explicitly designate AI systems used in the information space as high-risk, given their impact on fundamental rights," she told Euronews Next.
"It should not just be private companies driving our common future. AI must be a public good," she added.
But others argue that companies also need to have a say and be able to cooperate with the EU.
"It is vital that the EU harnesses the dynamism of the private sector, which will power the future of AI. Getting this right will make Europe more competitive and more attractive to investors," said Julie Linn Teigland, EY Europe, Middle East, India and Africa (EMEIA) Managing Partner.
However, she said businesses inside and outside the EU need to prepare proactively for the law coming into force: "This means taking steps to ensure you have an up-to-date inventory of the AI systems you are developing or deploying, and determining your position in the AI value chain to understand your legal responsibilities."
“Bittersweet taste”
For startups and small businesses, this can mean even more work.
"This decision has a bittersweet taste," said Marianne Tordeux-Bitker of French startup association France Digitale.
"The AI Act addresses major challenges around transparency and ethics, but, despite some accommodations planned for start-ups and small businesses, notably through regulatory sandboxes, it imposes substantial obligations on every company that develops AI.
"We fear that the text will simply create additional regulatory barriers that benefit American and Chinese competitors, and reduce our chances of seeing European AI champions emerge," she added.
“Effective implementation”
However, with the AI Act now passed, enforcement becomes the next challenge.
"The focus now shifts to its effective implementation and enforcement. This also calls for a renewed focus on complementary legislation," Risto Uuk, head of EU research at the non-profit Future of Life Institute, told Euronews Next.
Such complementary measures include the AI Liability Directive, which aims to support claims for damages caused by AI-enabled products and services, and the EU AI Office, which aims to streamline enforcement of the regulation.
"A key point in ensuring the law is worth the paper it is written on is that the AI Office has the resources to carry out the tasks assigned to it, and that the General-Purpose AI Code of Practice is drafted well, with civil society included," he said.