The EU’s AI Act: What You Need to Know

As of Sunday, February 2, the European Union’s regulators can officially ban AI systems deemed to pose “unacceptable risk” or harm. This marks a major milestone under the EU’s AI Act, a comprehensive regulatory framework that aims to ensure the ethical and responsible use of artificial intelligence.

What is the AI Act?

The EU’s AI Act, which was approved in March 2024 and went into effect on August 1, 2024, introduces rules for AI systems based on their risk levels. February 2 is the first compliance deadline, and it focuses on banning AI applications that fall under the “unacceptable risk” category.

Here’s a quick breakdown of the EU’s risk classification for AI:

  1. Minimal risk (e.g., email spam filters): No regulatory oversight.
  2. Limited risk (e.g., customer service chatbots): Light-touch oversight.
  3. High risk (e.g., AI for healthcare recommendations): Heavy oversight.
  4. Unacceptable risk: Prohibited entirely.

What is considered “unacceptable risk”?

According to Article 5 of the AI Act, the following AI applications fall under the unacceptable risk category and are banned:

  • AI used for social scoring, such as building risk profiles based on behavior.
  • Manipulative AI designed to influence decisions subliminally or deceptively.
  • AI that exploits vulnerabilities related to age, disability, or socioeconomic status.
  • Predictive AI attempting to foresee crimes based on appearance.
  • AI using biometrics to infer personal traits like sexual orientation.
  • Systems that gather real-time biometric data in public for law enforcement purposes.
  • Emotion-detecting AI in workplaces or schools.
  • AI that expands facial recognition databases by scraping online images or security footage.

Companies violating these rules could face fines of up to €35 million (~$36 million) or 7% of their global annual revenue from the preceding financial year, whichever is higher.

When will enforcement begin?

While companies are expected to comply by February 2, fines and enforcement won’t kick in until later. By August, the EU will identify competent authorities and finalize enforcement provisions.

According to Rob Sumroy, head of technology at Slaughter and May, “Organizations are expected to be fully compliant by February 2, but the next big deadline is in August. That’s when fines and enforcement will take effect.”

What about voluntary compliance?

In September, over 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact, a voluntary agreement to align with the AI Act’s principles ahead of enforcement. However, some major players like Meta, Apple, and French AI startup Mistral opted out.

Sumroy notes that most companies are unlikely to engage in the banned practices anyway. The real challenge lies in the clarity of compliance guidelines, which are still being developed.

“For organizations, a key concern is whether clear guidelines, standards, and codes of conduct will arrive in time,” Sumroy said. “So far, the working groups are meeting their deadlines on the code of conduct for developers.”

Are there exceptions?

The AI Act does allow exceptions under specific circumstances. For instance:

  • Law enforcement may use biometric systems in public places for targeted searches, such as finding an abduction victim, with proper authorization from governing bodies.
  • Emotion-detecting AI in workplaces or schools can be used if there’s a valid medical or safety justification.

The European Commission has also promised additional guidelines by early 2025, following stakeholder consultations. However, these have not yet been published.

Overlapping regulations

Sumroy highlights that AI regulation doesn’t exist in isolation. Other legal frameworks, such as GDPR, NIS2, and DORA, may overlap with the AI Act, creating challenges like conflicting incident reporting requirements.

“It’s crucial for organizations to understand how these laws fit together, not just the AI Act itself,” Sumroy said.

As the EU moves closer to implementing its AI Act, ensuring compliance and understanding the broader regulatory landscape will be critical for organizations navigating this new era of AI governance.
