
Europe is trying to regulate AI. That could backfire.

The EU's AI legislation has been questioned by some commentators. Getty Images
  • The EU Parliament has approved the Artificial Intelligence Act.
  • The act, born in 2021, plans to categorize AI risk and ban unacceptable use cases.
  • One AI expert said it risks creating "AI policy tax havens" as countries try to attract investment.

The European Union is forging ahead with its plans to regulate artificial intelligence.

On Wednesday, the European Parliament approved the Artificial Intelligence Act, with 523 votes in favor, 46 against, and 49 abstentions.

"Europe is NOW a global standard-setter in AI," Thierry Breton, the European internal market commissioner, said on X. "We are regulating as little as possible — but as much as needed!"


It's the first sweeping attempt by a major regulator to control AI and protect citizens from the technology's potential risks. Other countries, including China, have already brought in rules around specific uses of AI.

The legislation has been questioned by some commentators, such as AI and deepfakes expert Henry Ajder, who described it as "very ambitious." While he called it an overall positive step, he warned it risked making Europe less competitive globally.

"My concern is that we will see companies explicitly avoiding developing in certain regions where there is robust and comprehensive regulation," he told Business Insider. "There will be countries that almost act as AI policy tax havens where they explicitly avoid enforcing harsh legislation to try to attract certain kinds of organizations."


The act has been in the works for some time. First mooted in 2021, it was provisionally agreed in negotiations with member countries in December 2023.

The EU legislation assigns AI applications to three risk categories, with applications that pose an unacceptable risk set to be banned.

High-risk applications assigned to the second category will be subject to specific legal requirements, while applications in the third will be left largely unregulated.


Key milestone

Neil Serebryany, CEO of California-based CalypsoAI, told BI that while the "Act includes complex and potentially costly compliance requirements that could initially burden businesses, it also presents an opportunity to advance AI more responsibly and transparently."

He called the legislation a "key milestone in the evolution of AI" and an opportunity for companies to consider social values in their products from the earliest stages.

The regulation is expected to come into force in May, provided it passes final checks. Implementation of the new rules will then be phased in from 2025.


Exactly how the rules will apply to businesses is also still relatively vague.

Avani Desai, CEO of cybersecurity firm Schellman, said the act may have a similar impact to the EU's General Data Protection Regulation (GDPR) and require US companies to meet certain requirements to operate in Europe.

Companies uncertain about the rules can expect more details on the specific requirements in the coming months as the EU Commission establishes the AI Office and begins to set standards, said Marcus Evans at law firm Norton Rose Fulbright.


"The first obligations in the AI Act will come into force this year and others over the next three years, so companies need to start preparing as soon as possible to ensure they do not fall foul of the new rules," he added.
