Earlier this month, the European Union approved new regulations on artificial intelligence technology in a framework called the EU AI Act. The law aims to govern how AI technologies are developed and deployed amid growing concerns about both the risks and applications of artificial intelligence across sectors such as government, health care and education.
According to the EU's website for the law, the new rules will classify AI systems based on their “risk” and will ban AI practices and systems that pose “unacceptable risks,” including biometric categorization systems that infer sensitive attributes such as race, political opinions and trade union membership, except for “labeling or filtering lawfully acquired biometric data sets or when law enforcement categorizes biometric data.” The regulation also newly bans emotion recognition technology in workplaces and educational institutions, except for medical or safety reasons, and designates certain “high-risk” AI systems, such as tools used in education for assessment and admissions purposes, which must be designed so that deployers can implement human oversight.
For U.S. edtech companies, experts say, these new regulations could affect what tools they can sell to customers in the EU, what they can offer to international students and, ultimately, what resources they can devote to development. Moreover, the EU AI Act could eventually trigger policy changes in the U.S. as well.
Shaila Rana, an IT professor at Purdue University, said the law focuses specifically on regulating AI tools used in education, and she pointed out that U.S.-based companies and universities that do business with customers in the European Union will need to pay attention to the new rules.
Just as U.S. tech companies and organizations must comply with EU data privacy regulations when doing business with European customers, she said, companies will need to “think twice before developing and deploying” AI systems in EU markets. She added that she hopes the new regulations will serve as a model for how the U.S. approaches regulation of the AI industry.
“It will be similar to how GDPR [General Data Protection Regulation, the EU's data privacy law] works [for companies based in the United States]. Even if a company doesn't have data on EU nationals, ultimately if it wants to expand into the EU, it will have to comply with GDPR,” she said. “When it comes to educational technology, we need to follow these baselines in case the U.S. adopts similar legal requirements, or in case we have international students from the EU, and consider the regulatory requirements and obligations outlined in this legislation.”
Bernard Marr, a business and technology writer who has written extensively about AI for publications such as The Guardian and The Wall Street Journal, said in an email to Government Technology that the regulations include a complete ban on technologies that threaten personal safety and rights, including AI that manipulates the behavior of vulnerable people such as children. This may include devices such as voice-activated toys that could encourage harmful behavior. He said the EU regulations generally “focus on protecting vulnerable groups,” and pointed out that they are therefore likely to have an impact on the EU market, particularly for organizations and businesses that want to deploy edtech tools for K-12 students.
“Given that many U.S. companies operate globally, complying with these international standards requires a fundamental shift toward incorporating ethical AI practices and privacy-preserving technologies into the development process,” he wrote. “This adaptation requirement highlights a broader trend that requires companies, regardless of where they operate, to prioritize user safety and data protection, thereby reshaping the global technology development environment to meet these overarching regulatory expectations.”
Marr said the EU AI Act's influence is likely to extend far beyond Europe's borders, shaping how AI tools are designed and deployed around the world in the coming years.
“The EU's AI regulations are likely to trigger similar policies in the U.S., especially as concerns about privacy, bias and the ethical use of AI continue to grow,” he said in an email. “The comprehensive, principles-based approach taken by the EU could serve as a model for U.S. policymakers, advocating a balanced path that fosters innovation in education technology while addressing key ethical challenges.”
As for how the law could affect U.S. policymakers, Nazanin Andalibi, an assistant professor of information at the University of Michigan, said she hopes the EU regulation's ban on emotion recognition technology in workplaces and education will spark a similar movement in the United States. She said banning emotion recognition technology was a wise move on the part of the EU, based on research into its potential negative effects.
“I would love to see the United States move in this direction,” she said. “Harms to workers from emotion recognition technology include not only harm to worker performance and employment status, but also harm to qualities such as privacy and well-being, [and concerns about] bias, discrimination and prejudice in the workplace.”
Susan Ariel Aaronson, a professor of international affairs at George Washington University, said the regulatory focus on high-risk AI tools is also a step in the right direction, but added that she recommends seeking more transparency from technology developers about how AI tools work and what data they use.
“I think it's really important to say how the model was built, what data it used and how it got that data, for understanding [AI] hallucinations and other issues associated with different LLM models,” she said of future policy considerations.