Oct 10, 2024·10 min

Anticipating the AI Act to support your Artificial Intelligence projects


Artificial Intelligence (AI) is transforming businesses at an unprecedented pace, but it also brings new responsibilities. The AI Act, the world's first dedicated legislation on AI, published in the Official Journal of the European Union on July 12, 2024, establishes a regulatory framework to ensure that AI is developed and used ethically and safely. Similar to the GDPR, which forced companies to rethink data management, the AI Act imposes strict requirements, particularly for high-risk AI systems. Anticipating this regulation is crucial not only to avoid penalties but also to gain a competitive advantage in the European market.

One of the major challenges of this regulation lies in its gradual implementation, with some aspects still in the experimental phase. Gaining a deep understanding of the AI Act is essential for applying it strategically, without stifling innovation, while ensuring continuous compliance and optimal risk management.

This article provides a comprehensive analysis of the AI Act and the actions companies need to take to achieve compliance while capitalizing on the opportunities presented by artificial intelligence.

What is the AI Act?

The AI Act is a 'product regulation,' meaning it governs artificial intelligence products marketed in the European market. This regulation follows a risk-based approach, classifying AI systems according to their use rather than the underlying technology. In other words, it is not the technology itself that determines whether an AI system is permitted or prohibited, but its specific application.

The AI Act regulates the use of AI systems in products and services offered in Europe, ranging from merely informing users to outright bans on certain systems, depending on the identified risks. Non-commercial research activities are not covered, as the AI Act focuses on products intended for commercialization.

Like the GDPR, the AI Act has extraterritorial reach. It applies to all companies, whether based inside or outside the EU, as long as they market or use AI systems within the EU. For example, an American company providing AI models to European businesses would be required to comply with this regulation.

Regulatory duality between the EU and the US: AI Act vs. Cloud Act

The EU's AI Act and the US Cloud Act clash over data protection, digital sovereignty, and compliance requirements. While the AI Act focuses on safeguarding fundamental rights and ensuring transparency by imposing strict compliance obligations on companies operating within the EU, the Cloud Act grants US authorities the ability to access data held by American companies, regardless of where the data is stored.

The extraterritorial reach of the Cloud Act can be avoided by opting for European alternatives to American solutions. French cloud services, such as OVHcloud, S3NS, Scaleway, and Clever Cloud, offer comparable solutions in terms of cost and performance while complying with European legal requirements. Choosing these providers helps reduce the risk of legal conflicts and makes it easier to navigate the various regulatory obligations, particularly for systems handling private or even confidential data.

Focus on the different companies affected by the AI Act

The 4 main objectives of the AI Act

  1. Ensuring safety and respect for fundamental rights: the AI Act aims to guarantee that AI systems marketed in Europe are safe and aligned with the fundamental rights and values upheld by the EU.

  2. Providing legal certainty: it establishes a clear legal framework to encourage investment and drive innovation in the AI sector.

  3. Strengthening governance and enforcement of existing laws: the Act seeks to enhance the enforcement of regulations concerning fundamental rights and safety standards for AI systems.

  4. Creating a single market for AI: it aims to foster the development of a unified market for reliable and legally compliant AI applications, while avoiding fragmentation within the European market.

Risk assessment for AI: the 5 levels established by the AI Act

The 5 levels of risk established by the AI Act

Level 1: Unacceptable risk

This risk level applies to AI systems that involve the subliminal manipulation of individuals, the exploitation of social vulnerabilities, social scoring, emotion inference in workplaces or schools, and biometric categorization based on sensitive characteristics such as ethnicity or religion.

Example: imagine a city implementing a surveillance system using facial recognition technology in public spaces like train stations and airports. This system detects faces in real time, alerting security officers when a person wanted by law enforcement is identified, allowing for rapid intervention. Although intended to enhance public safety, such real-time remote biometric identification in publicly accessible spaces falls into the unacceptable-risk category and is prohibited, except in narrowly defined law-enforcement situations.

Level 2: High risk

High-risk AI systems primarily include those already regulated under existing European regulations (e.g., medical devices, Machinery Directive). However, new AI applications in sectors such as education, vocational training, access to essential private services (e.g., banking credit or insurance), and essential public services like healthcare, emergency calls, and the judiciary are also covered.

Example: an insurance company uses AI to assess customer risk profiles based on sensitive data. The system determines policy conditions and premiums, but may introduce discriminatory biases.

Level 3: Specific risk

This category covers AI systems that directly interact with individuals, generate content, or detect emotions. In such cases, users must be informed that they are interacting with an AI system.

Example: a bank deploys a chatbot to respond to customer inquiries about accounts, credit cards, and loans. This AI system can detect emotions, such as stress or frustration, to adjust responses empathetically and defuse tensions. For transparency, the bank must inform users that they are interacting with an AI, not a human advisor, to avoid confusion and maintain trust.

Level 4: Risk associated with general-purpose AI

Providers of general-purpose AI models must publish a sufficiently detailed summary of the datasets used to train them, ensure copyright compliance, and supply technical documentation.

Example: a company develops an AI system for voice recognition using audio samples for training. It must prove that the audio files were legally acquired, describe their origin, and explain how the AI was trained while respecting copyright laws on the content used.

Level 5: Systemic risk associated with general-purpose AI

For general-purpose AI models trained with a cumulative compute budget exceeding 10^25 floating-point operations (FLOPs), additional transparency requirements apply, including:

  • Adversarial testing and evaluation,

  • Risk mitigation strategies,

  • Reporting of major incidents,

  • Cybersecurity measures,

  • Analysis of energy consumption.

For context, building the computing infrastructure needed to train a model at the 10^25 FLOP scale would involve a potential total investment of several tens of billions of euros, plus substantial annual expenditure for its operation.
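
To make this threshold concrete, here is a minimal Python sketch using the widely cited "6 × parameters × training tokens" rule of thumb for estimating training compute. Only the 10^25 FLOP threshold comes from the AI Act; the approximation and the model figures below are illustrative assumptions.

```python
# A rough, illustrative estimate of training compute using the common
# "6 * parameters * training tokens" rule of thumb. Only the 10^25 FLOP
# threshold comes from the AI Act; the model figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs (AI Act threshold)

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above systemic-risk threshold:", flops > SYSTEMIC_RISK_THRESHOLD)
```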

Example: a hospital uses an AI model trained with more than 10^25 floating-point operations to analyze the medical records of millions of patients. Biases or errors in the training data could lead to incorrect diagnoses, potentially impacting public health and spreading inaccuracies to other healthcare facilities across Europe. The model's providers would need to inform the European Commission of the risks, conduct tests to detect biases, and implement strategies to limit their spread, ensuring patient safety and rights.

Preparing for the AI Act Now

Strengthen AI governance

With the growth of Artificial Intelligence regulations, rigorous governance is essential. Use the EU regulation as a foundation to optimize existing governance practices.

Create a comprehensive inventory

Compile a complete inventory of AI systems, whether deployed or in development, identifying whether your organization acts as a provider or a deployer (user) of each one. Assess the risk associated with each system and determine its specific obligations.
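
As a starting point, the inventory can be as simple as one structured record per system. Below is a minimal Python sketch with hypothetical field names capturing the elements the AI Act keys on: the system's purpose, your role (provider or deployer), its risk level, and the resulting obligations.

```python
# A minimal sketch of an AI system inventory entry. Field names are
# hypothetical; adapt them to your own risk framework.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"   # you place the system on the EU market
    DEPLOYER = "deployer"   # you use a system supplied by someone else

class RiskLevel(Enum):
    UNACCEPTABLE = 1
    HIGH = 2
    SPECIFIC = 3            # transparency obligations apply
    GENERAL_PURPOSE = 4
    SYSTEMIC = 5

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    role: Role
    risk_level: RiskLevel
    obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="customer-support-chatbot",
        purpose="Answer account and loan questions",
        role=Role.DEPLOYER,
        risk_level=RiskLevel.SPECIFIC,
        obligations=["Inform users they are interacting with an AI"],
    ),
]
```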

Adapt your AI acquisition practices

When evaluating AI solutions, ask providers about their compliance with the AI Act. Update your acquisition criteria to explicitly incorporate compliance requirements in RFPs.

Train multidisciplinary teams

Organize targeted training sessions for all teams (technical and non-technical), including legal, risk management, HR, and operational departments. These trainings should cover regulatory obligations at every stage of the AI lifecycle, such as data quality and provenance, bias management, and documentation.

Implement dedicated internal audits

Similar to financial audits, establish an independent audit function to review the compliance of AI-related practices throughout the AI lifecycle. This ensures rigorous oversight of controls and risk management.

Adapt risk management frameworks

Revise your risk management frameworks to incorporate AI-specific elements. An adjusted framework will facilitate compliance with current and future regulations.

Ensure strong data governance

Implement robust data governance practices for datasets used in AI models, ensuring their documentation and compliance with regulations like the GDPR.
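
In practice, this documentation can take the form of a structured "datasheet" per training dataset. The following Python sketch uses hypothetical fields to record provenance, licensing, and GDPR lawful basis; adapt the schema to your own governance framework.

```python
# A minimal "datasheet" sketch for documenting a training dataset.
# Field names are hypothetical; align them with your governance framework.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str                   # provenance of the data
    license: str                  # copyright / licensing terms
    contains_personal_data: bool
    gdpr_lawful_basis: str        # e.g. "consent", "contract performance"
    last_reviewed: str            # ISO date of the last governance review

training_sets = [
    DatasetRecord(
        name="claims-history-2023",
        source="internal CRM export",
        license="internal use only",
        contains_personal_data=True,
        gdpr_lawful_basis="contract performance",
        last_reviewed="2024-09-15",
    ),
]
```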

Optimize AI transparency

Enhance the transparency of AI systems, especially generative models, by developing skills in explainable AI techniques. This will meet regulators' expectations regarding interpretability.
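
As one illustration, permutation importance is a simple, model-agnostic technique for documenting which inputs a model relies on. The sketch below uses scikit-learn on synthetic data; it is one of many explainability methods (alongside SHAP values, counterfactuals, etc.), not a technique mandated by the AI Act.

```python
# One simple, model-agnostic interpretability technique: permutation
# importance. Shuffling a feature and measuring the drop in score shows
# how much the model relies on it. Data and model here are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```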

Integrate AI into compliance processes

Leverage AI tools to automate compliance tasks, such as monitoring regulatory requirements and managing internal processes. AI can also help maintain a system inventory and generate necessary documentation.
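
For instance, once the inventory exists as structured data, generating compliance documentation from it is straightforward to automate. The sketch below, with hypothetical fields and format, renders a plain-text summary per system.

```python
# A minimal sketch of generating compliance documentation from a structured
# inventory. The fields and report format are hypothetical.
inventory = [
    {"name": "credit-scoring-model", "risk_level": "high",
     "obligations": ["human oversight", "technical documentation"]},
    {"name": "support-chatbot", "risk_level": "specific",
     "obligations": ["inform users they are interacting with an AI"]},
]

def compliance_report(systems: list[dict]) -> str:
    lines = ["AI Act compliance summary", "=" * 25]
    for s in systems:
        lines.append(f"- {s['name']} (risk: {s['risk_level']}): "
                     + "; ".join(s["obligations"]))
    return "\n".join(lines)

print(compliance_report(inventory))
```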

Engage in proactive dialogue with regulators

With compliance modalities still being developed, it is crucial to engage with the European AI Office to clarify regulatory expectations and requirements. Resolving grey areas early will help you anticipate future changes.

The AI Act: a strategic opportunity for your business

According to a PwC study, AI could contribute to a 14% increase in global GDP by 2030, provided its development is rigorously regulated to prevent ethical and security issues. In this context, the AI Act plays a key role in providing a solid regulatory framework. A report by the European Commission estimates that this regulation could save European companies billions of euros annually by avoiding non-compliance costs.

An ethical and responsible approach

By adhering to these stringent regulations, companies not only meet legal requirements but also turn these obligations into a competitive advantage in an environment where ethical and security issues related to AI are increasingly important. Adopting an ethical approach to AI, focused on transparency, safety, and data protection, helps companies strengthen their reputation and reliability. The AI Act encourages companies to go beyond mere technical and legal requirements by considering the social impact of their technologies.

Balancing compliance and innovation

The challenge for companies is to reconcile regulatory compliance with innovation. The AI Act, designed to encourage innovation, allows developers, including SMEs and startups, to work confidently. As Guillaume Avrin, National Coordinator for Artificial Intelligence at the Directorate General for Enterprises, states, "well-designed regulation stimulates innovation." In the absence of a clear framework, developers hesitate to use AI for fear of being unable to prove compliance in case of issues.

To foster innovation, the AI Act also establishes "regulatory sandboxes", supervised testing environments where companies can develop and test their AI models. This flexibility, overseen by competent authorities, allows companies to experiment without immediately being subject to all legal requirements, providing a secure space for innovation.

Penalties for non-compliance with the AI Act

Penalties under the AI Act can reach 1% to 7% of the company's annual global turnover or €7.5 million to €35 million, whichever is higher, depending on the severity of the violation.

The amount of the fine depends on the nature of the non-compliance: violating bans on certain practices, failing to meet the strict requirements for high-risk AI systems, or not complying with transparency obligations for specific-risk systems. The company's size is also taken into account: for SMEs and startups, the lower of the two amounts applies.
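
A quick worked sketch of this logic, assuming the "whichever is higher" rule described above (for SMEs, the lower amount applies instead); the turnover figure is hypothetical:

```python
# A worked example of the ceiling logic: a percentage of worldwide annual
# turnover or a fixed amount, whichever is higher. Tiers follow the maxima
# cited above; the turnover figure is hypothetical.
def max_fine(turnover_eur: float, pct: float, floor_eur: float) -> float:
    return max(turnover_eur * pct, floor_eur)

turnover = 1_000_000_000  # hypothetical €1bn worldwide annual turnover
print(f"Prohibited practices:  up to €{max_fine(turnover, 0.07, 35e6):,.0f}")
print(f"Incorrect information: up to €{max_fine(turnover, 0.01, 7.5e6):,.0f}")
```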

AI Act: key dates to remember

The AI Act roll-out schedule

Conclusion

Pierre Jarrijon, head of AI acceleration at Bpifrance, aptly summarizes the impact of the AI Act: "It will clarify things and establish a level playing field." Beyond providing a framework, he also sees economic opportunities: "Every requirement of the AI Act will create a new business."

France, with its 600 AI startups, is fully embracing this transformation through significant investments. On May 21, the day the AI Act was approved by the Council of the European Union, the President announced an additional €400 million investment dedicated to training AI specialists, aiming to train 100,000 people per year in the sector, along with the creation of a new investment fund by the end of 2024. AI is a strategic priority for the government, which has allocated €2.5 billion to it from the France 2030 program.

To assist in adapting to the new requirements of the AI Act, specialized agencies are available to help you make the most suitable choices for your business strategy. They can also swiftly train your teams in best practices for artificial intelligence. The experts at BeTomorrow are ready to answer any questions and advise on the best strategy for optimizing AI integration within your organization.

Feel free to contact us!
