
World’s first AI regulation takes effect in EU

Tech giants and global policymakers are facing a new reality as the world’s first AI regulation launches in the European Union


The European Union’s Artificial Intelligence Act, the world’s first comprehensive AI regulation, came into force on Thursday, marking a pivotal moment in the global governance of artificial intelligence and setting off a worldwide rush for compliance that could reshape the tech industry far beyond EU borders.

The landmark legislation introduces a nuanced, risk-based approach to AI regulation, categorising systems from minimal to unacceptable risk. It imposes strict requirements on high-risk AI applications and outright bans certain AI uses deemed to threaten fundamental rights, signalling a new era of regulatory oversight in the rapidly evolving field of artificial intelligence.

“The EU AI Act marks a significant milestone in the regulation of artificial intelligence and will inevitably shape the way that companies who both develop and implement AI will approach the technology,” Endava CTO Matt Cloke told Arabian Business.

Global impact and extraterritorial reach

One of the most significant aspects of the EU AI Act is its extraterritorial reach: the rules cover not only AI systems developed within the EU but also those offered to EU customers or affecting EU citizens, regardless of where the provider is based.

This global impact is poised to set a new standard for AI governance worldwide, potentially influencing regulatory frameworks in other jurisdictions.

“One of the most notable aspects of the EU AI Act is its extra-territorial effect. In other words, the act not only applies to AI systems developed within the EU but also to those offered to EU customers or affecting EU citizens, regardless of where the providers are located,” Cloke said.

Risk-based categorisation

The Act introduces a tiered system of risk categories for AI systems:

  1. Minimal risk: Most AI systems, including recommender systems and spam filters, fall into this category and face no obligations under the Act.
  2. Specific transparency risk: Systems like chatbots must disclose to users that they are interacting with a machine. AI-generated content, including deep fakes, must be labelled as such.
  3. High risk: These systems must comply with strict requirements, including risk mitigation systems, high-quality datasets, activity logging, detailed documentation, clear user information, human oversight, and robust cybersecurity measures.
  4. Unacceptable risk: AI systems considered a clear threat to fundamental rights are banned. This includes systems that manipulate human behaviour, allow social scoring, or engage in certain forms of predictive policing.

The new regulation presents both challenges and opportunities for businesses worldwide. Jacob Beswick, Director of AI Governance at Dataiku, emphasised the urgency for businesses to prepare.

“Today marks the EU AI Act officially coming into force and given its extraterritorial application, many businesses will be preparing to comply with the new rules in order to continue operations within the EU,” Beswick told Arabian Business.

“As one of the most comprehensive pieces of AI regulation to be passed to date, preparing for compliance is both a step into the unknown as well as an interesting bellwether as to what might be to come in terms of AI-specific regulatory obligations across the globe.”

Beswick outlined several key steps businesses should take over the next 18 months to ensure compliance:

  1. Take stock of AI assets and review operationalised AI systems within Europe.
  2. Qualify these assets, understanding their intended purpose and the technologies used.
  3. Determine where systems fall within the Act’s risk tiers.
  4. Begin taking action to mitigate non-compliance risks and avoid potential business disruptions.

Implementation timeline and enforcement

While the Act has entered into force, the majority of its rules will not start applying until August 2, 2026, the European Commission said in a statement on Thursday. However, prohibitions on AI systems deemed to present an unacceptable risk will apply after just six months, and rules for general-purpose AI models will take effect after 12 months.

Enforcement will be carried out by national competent authorities in EU Member States, overseen by the European AI Office at the EU level. Companies found non-compliant could face fines of up to 7 percent of global annual turnover for violations involving banned AI applications.

“AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy,” Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, said.

“The European approach to technology puts people first and ensures that everyone’s rights are preserved.”

Preparing for compliance

As businesses begin to grapple with the new regulatory landscape, Cloke from Endava sees potential benefits beyond mere compliance. “For these companies, the EU AI Act offers both a challenge and an opportunity,” he noted.

“While the compliance requirements may initially seem daunting, they also present a chance to differentiate themselves by adopting best practices in AI governance. The emphasis on transparency and human oversight over the technology which this act brings aligns with growing public and consumer expectations around ethical use.”

The European Commission is taking steps to facilitate compliance and implementation. These include:

  1. Developing guidelines to define and detail how the AI Act should be implemented.
  2. Facilitating co-regulatory instruments like standards and codes of practice.
  3. Launching the AI Pact, inviting AI developers to voluntarily adopt key obligations ahead of legal deadlines.
  4. Opening a call for expression of interest to participate in drawing up the first general-purpose AI Code of Practice.

Global implications and future outlook

As the EU sets this global benchmark for AI regulation, the race is on for companies worldwide to adapt. Those that align with the new standards early may gain a competitive edge in trust and credibility in an increasingly scrutinised AI landscape.

Cloke predicts a ripple effect beyond the EU’s borders. “As the EU sets a global benchmark for AI regulation, companies that adapt to these standards early on will be better positioned to gain trust and credibility in the market,” he said.

“Ultimately, the Act encourages a balanced approach to innovation, ensuring that the benefits of AI are recognised in a safe way and I will be surprised if I don’t see other countries implementing similar regulations.”

The EU AI Act represents a significant step towards a framework for responsible AI development and use, one that could shape the future of AI governance on a global scale.
