Navigating the EU AI Act: Compliance, Risks, and Business Strategy

Paulina Pasławska

Sep 26, 2024 • 13 min read

With the EU AI Act set for full implementation in 2026, businesses must prepare for a new regulatory framework governing AI. The Act aims to ensure that AI systems are developed and used in a safe, transparent, and ethical manner.

This legislation brings both challenges and opportunities for companies using AI. Product leaders need to proactively ensure compliance by understanding prohibited AI practices, maintaining documentation, and preparing for conformity assessments. With the deadline fast approaching, teams should start now, seeking external legal expertise if needed.

In this article, we’ll outline the key strategies, risks, and actions teams should take to align with the EU AI Act before 2026 and mitigate legal and operational challenges.

What is the EU AI Act?

The EU AI Act is a regulatory framework designed to promote trustworthy, human-centric AI across the European Union. Its primary goal is to ensure AI systems meet high standards for health, safety, fundamental rights, democracy, and environmental protection. By addressing potential AI risks, the Act provides safeguards that support innovation while building trust through compliance with ethical and legal standards.

Who does the EU AI Act apply to?

The EU AI Act applies to various stakeholders involved in the development, deployment, and distribution of AI systems. It covers:

  • AI providers: Companies or individuals placing AI systems or general-purpose AI models on the EU market, whether based within or outside the Union.
  • Deployers of AI systems: Entities using or implementing AI systems within the EU.
  • Providers and deployers outside the EU: Those based outside the EU whose AI systems are used within the Union.
  • Importers and distributors: Entities responsible for bringing AI systems into the EU market.
  • Product manufacturers: Companies integrating AI systems into their products and selling them under their own name.
  • Authorized representatives: Representatives ensuring compliance for non-EU-based providers selling AI systems in the EU.
  • Affected persons: Individuals or entities within the EU impacted by AI systems.

Identifying your role is crucial for determining your specific obligations and managing risks related to compliance, liability, and safety. Providers must ensure that AI systems comply with EU regulations before they are brought to market, while deployers focus on the safe and lawful use of AI. Importers, distributors, and representatives are responsible for making sure the AI systems they handle meet EU standards.

How does the EU AI Act classify AI systems by risk?

The EU AI Act uses a risk-based classification system to regulate AI systems according to their impact on safety, fundamental rights, and societal well-being. This tiered approach ensures that regulations match the level of risk posed. AI systems fall into four categories:

  • Prohibited AI practices: These systems are banned due to their severe threat to individuals' rights or safety. They include AI that manipulates behavior, exploits vulnerabilities based on age or disability, or is used for indiscriminate surveillance and social scoring by governments. Such systems violate fundamental rights and are not allowed in the EU.
  • High-risk AI systems: AI used in sectors like healthcare, law enforcement, education, or critical infrastructure faces strict requirements. These include risk assessments, conformity evaluations, and transparency obligations to ensure they are safe, transparent, and non-discriminatory. High-risk systems significantly impact people's safety or rights, making regulatory oversight essential.
  • Limited-risk AI systems: These systems, such as those used in customer service or content recommendations, require transparency measures. For example, users must be informed they are interacting with AI. Though they face fewer regulatory checks than high-risk systems, transparency is key to maintaining trust.
  • Minimal-risk AI systems: AI systems that pose little to no risk, such as basic tools for everyday tasks or entertainment, are not subject to additional regulatory requirements. This category supports innovation by focusing regulatory resources on higher-risk AI applications.

By classifying AI systems in this way, the EU AI Act allows regulators to concentrate on high-risk applications while enabling flexibility and innovation in lower-risk areas. Businesses must assess their AI systems to determine their risk category and ensure compliance with the Act’s legal and ethical standards.
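To make this triage concrete, here is a minimal Python sketch of how a team might record the four-tier classification internally. The use-case names and the mapping are illustrative assumptions drawn from the examples above, not an official taxonomy; a real assessment must follow the Act's annexes and qualified legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    PROHIBITED = "prohibited"   # banned outright (Article 5)
    HIGH = "high"               # strict requirements apply
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no additional requirements

# Illustrative, non-exhaustive mapping of use cases to tiers,
# based on the examples discussed above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "behavioral_manipulation": RiskTier.PROHIBITED,
    "medical_diagnosis": RiskTier.HIGH,
    "law_enforcement_biometrics": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "content_recommendation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown use cases to HIGH so they get reviewed
    # by a human rather than waved through.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a manual review instead of silently treating an unassessed system as low risk.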

How can organizations apply the EU AI Act?

To comply with the EU AI Act, organizations must first assess their role in the AI delivery chain and then take the appropriate steps. Here's how:

  • Identify your role: Determine if your organization is an AI provider, deployer, importer, distributor, manufacturer, or authorized representative to define your obligations under the Act.
  • Conduct a risk assessment: Categorize your AI systems as high-risk, limited-risk, or minimal-risk. High-risk systems, such as those used in healthcare or law enforcement, require stricter compliance measures.
  • Implement compliance measures:
    • For providers: Ensure AI systems meet safety, transparency, and accountability standards through conformity assessments and risk management. Establish monitoring and data governance practices.
    • For deployers: Ensure AI systems comply with legal and ethical guidelines, and regularly monitor them for biases or unintended outcomes.
    • For importers/distributors: Verify that imported or distributed AI systems comply with EU regulations.
  • Set up an accountability framework: Create processes to document compliance, manage risks, and handle incidents. Maintain detailed records and establish oversight mechanisms.
  • Collaborate with experts: Work with legal, compliance, and AI professionals, especially for high-risk systems, to avoid penalties.
  • Adapt AI systems and processes: Modify algorithms, improve data governance, and ensure AI models are explainable if necessary to meet EU standards.
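As a concrete illustration of the "identify your role," "risk assessment," and "accountability framework" steps, here is a minimal Python sketch of one entry in a hypothetical internal compliance register. The field names are assumptions chosen for illustration, not fields mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI compliance register."""
    system_name: str
    role: str                      # "provider", "deployer", "importer", ...
    risk_tier: str                 # "prohibited", "high", "limited", "minimal"
    intended_purpose: str
    last_risk_assessment: date
    conformity_assessed: bool = False
    human_oversight_measures: str = ""
    incident_log: list[str] = field(default_factory=list)

# Example entry for a high-risk system.
record = AISystemRecord(
    system_name="triage-assistant",
    role="deployer",
    risk_tier="high",
    intended_purpose="Prioritize incoming patient referrals",
    last_risk_assessment=date(2024, 9, 1),
    human_oversight_measures="Clinician reviews every recommendation",
)
```

Keeping records like this up to date is what makes conformity assessments and audits a routine exercise rather than a scramble.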

Business risks: Fines, penalties, and reputational impact

The EU AI Act enforces strict compliance measures, with significant penalties for violations, including financial, operational, and reputational risks. This section outlines the business risks tied to non-compliance.

1. Financial penalties for non-compliance

The EU AI Act imposes heavy fines based on the type of violation and the organization’s role in the AI lifecycle:

  • Violation of prohibited AI practices (Article 5): The most severe infractions, like using AI systems that exploit vulnerabilities or enable social scoring, can result in fines up to €35 million or 7% of the company's worldwide annual turnover, whichever is higher.
  • Non-compliance with AI obligations: Failing to meet obligations—such as safety, transparency, or accuracy requirements—can result in fines up to €15 million or 3% of annual turnover. This includes:
    • Providers’ obligations (Article 16)
    • Authorized representatives (Article 22)
    • Importers and distributors (Articles 23 and 24)
    • Deployers’ obligations (Article 26)
    • Requirements for notified bodies (Articles 31, 33, and 34)
  • Providing false information: Offering misleading or incomplete information to regulators can result in fines of up to €7.5 million or 1% of annual turnover.

For small and medium-sized enterprises (SMEs), each fine is capped at the lower of the two amounts, reflecting their financial capacity while still ensuring accountability.
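The "whichever is higher" arithmetic is easy to get wrong, so here is a small Python sketch of the maximum-fine calculation across the three tiers above. The function and tier names are illustrative assumptions; actual penalties are set case by case by regulators, using the criteria in the next section.

```python
def max_fine_eur(violation: str, annual_turnover_eur: float,
                 is_sme: bool = False) -> float:
    """Maximum possible fine under the EU AI Act's three penalty tiers."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),  # Article 5 violations
        "obligation_breach":   (15_000_000, 0.03),  # e.g. Articles 16, 22-24, 26
        "false_information":   (7_500_000,  0.01),
    }
    fixed_cap, pct = tiers[violation]
    turnover_based = pct * annual_turnover_eur
    # Large companies face whichever amount is HIGHER;
    # for SMEs the Act caps fines at whichever is LOWER.
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)

# A company with a €2B worldwide turnover violating Article 5:
# 7% of turnover (€140M) exceeds the €35M figure, so €140M applies.
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```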

2. Criteria for determining penalties

When assessing fines, authorities consider the specifics of each case. Key factors include:

  • Nature, gravity, and duration: The severity and impact of the breach, including the AI system’s purpose and the number of people affected.
  • Multiple regulatory actions: Whether other EU Member States have already imposed penalties for the same breach.
  • Previous infringements: Past violations under EU or national laws for similar activities.
  • Size and market share: The entity’s annual turnover and market presence.
  • Aggravating or mitigating factors: Financial gains or losses avoided due to the violation, and the level of cooperation with authorities.
  • Degree of responsibility: Efforts taken by the operator to prevent or mitigate the violation.
  • Voluntary disclosure: Whether the operator reported the breach to authorities.
  • Intentional or negligent conduct: Whether the violation was deliberate or due to negligence.
  • Remedial actions: Steps taken to reduce harm to affected individuals.

3. Reputational consequences

Beyond financial penalties, non-compliance with the EU AI Act can lead to serious reputational damage. AI systems are under increasing scrutiny, and violations can tarnish an organization's market standing. Reputational risks include:

  • Loss of consumer trust: Public awareness of violations may lead to decreased consumer confidence, especially in sensitive sectors like healthcare, finance, and education.
  • Erosion of business partnerships: Non-compliance could strain relationships with partners, investors, and other stakeholders.
  • Impact on competitiveness: Long-term reputational harm could hinder a company’s ability to compete, particularly in industries where ethical AI practices are critical.

The EU AI Act enforces ethical, responsible AI development. Organizations that fail to comply face not only financial penalties but also reputational risks that can undermine long-term success.

AI system considerations for product leaders

To ensure compliance with the EU AI Act, product leaders must take a proactive approach in developing or integrating AI tools. Key strategic considerations include:

  • Stay updated on AI Act changes: Regularly monitor updates to the Act, as regulations may evolve with AI advancements. Adjust your AI systems and processes accordingly.
  • Understand your product and processes: Determine whether your product uses AI systems and falls under the Act’s scope. Ensure compliance, especially where AI interacts with consumers or handles sensitive data.
  • Know prohibited practices: Familiarize yourself with prohibited AI practices (Article 5), ensuring your systems do not inadvertently violate these rules through, for example, behavior manipulation or exploitation of vulnerabilities.
  • Comply with high-risk system requirements: If your product involves high-risk AI (e.g., healthcare or law enforcement), meet stricter obligations like safety assessments and risk management. Ensure compliance based on your role as a provider, deployer, or importer.
  • Maintain transparency and documentation: Keep thorough records of your AI systems' development, training, and usage. Document decisions, risk assessments, and conformity procedures to demonstrate your commitment to ethical AI practices.
  • Mitigate risks: Conduct regular audits, strengthen AI governance, and implement safeguards to prevent biases or errors in decision-making.

Product leaders must ensure their AI systems comply with the EU AI Act, especially when deploying high-risk technologies such as emotion recognition systems and remote biometric identification systems. These systems, often used in sectors like border control, law enforcement, and education, are subject to strict oversight and transparency requirements. Human oversight is crucial for high-risk AI to safeguard users and protect fundamental rights. While AI-enabled video games fall under minimal-risk AI, systems involved in sensitive sectors must meet stricter regulatory requirements to mitigate potential harm.

Leaders should also be aware that certain AI applications, such as social scoring or real-time remote biometric identification in publicly accessible spaces, are prohibited under the Act due to the unacceptable risks they pose to fundamental rights. Ensuring compliance across the AI value chain is essential, particularly when integrating models that could fall into these high-risk categories.

The role of the European Parliament and the European Commission

The European Parliament and European Commission are key players in shaping and enforcing the comprehensive horizontal legal framework of the EU AI Act. Their role is to ensure that AI technologies align with the principles of trustworthy AI.

The European Commission has established the European AI Office, which monitors compliance and enforces AI regulations across the EU, alongside national competent authorities designated by Member States. Updates on how new technologies, such as AI-enabled video games, generative AI, and spam filters, are regulated will be published in the Official Journal of the EU.

The Act also introduces the specific task of monitoring systemic risks associated with general-purpose AI models.

Timeline and next steps: What product teams should do now

With the EU AI Act set for full implementation by 2026, product teams need to act quickly to align their AI systems and processes with the new regulations. Although there’s still time to prepare, the complexity of the law means it’s important to start now to ensure smooth compliance.

Navigating the AI Act can be challenging, especially for smaller teams or businesses without in-house legal and technical expertise. Seeking external support from legal advisors, software houses, or AI specialists familiar with AI regulations can streamline compliance efforts and ensure your artificial intelligence systems meet the required standards.

Summary

By understanding the Act’s requirements, conducting thorough risk assessments, and implementing effective compliance measures, companies can take the lead in ethical AI development.
