
The EU AI Act: A Guide for AI Companies

Industry Insights
The EU AI Act is a crucial regulatory development for AI companies, especially those leveraging GPU-powered infrastructure to build advanced solutions. This guide will help you understand what the Act entails and how you can effectively navigate its requirements while optimizing your AI development with the best cloud solutions.

Keywords: EU AI Act, AI compliance, GPU-powered infrastructure, cloud solutions for AI, GPU-driven AI workloads, AI regulations, scalable cloud infrastructure, AI innovation, AI transparency, AI scalability.

The European Union has a history of creating regulations that shape technology. Following the GDPR, DSA, and DMA, the latest addition to the EU's regulatory framework is the EU AI Act. This regulation, which entered into force in August 2024, will change how AI is developed, used, and perceived across Europe. As with any major regulation, it brings both challenges and opportunities. Spoiler: this blog is a rollercoaster of compliance hurdles, but with a happy ending for those who partner with the right cloud provider. And if you missed our last episode, where we first introduced the EU AI Act and its grand entrance into the AI world, you can catch up here.

Understanding the EU AI Act

The EU AI Act is designed to create a framework for the safe and ethical development of artificial intelligence. It classifies AI systems into four levels of risk: minimal, limited, high, and unacceptable. The goal is to ensure that AI technology serves the interests of all EU citizens by imposing regulations proportional to the risk involved. Minimal and limited risk AI, such as chatbots and spam filters, require little to no regulation, while high-risk systems, like those used in healthcare, transportation, and law enforcement, must comply with strict guidelines, including risk assessments, documentation, and ongoing monitoring. At the highest level, unacceptable risk AI—such as social scoring systems—is outright banned. The Act also emphasizes transparency, human oversight, and the responsible use of AI data to maintain safety and fairness.
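To make the four-tier scheme concrete, here is a minimal, illustrative Python sketch. The tier assignments and obligation summaries below are simplified examples for orientation only, not legal classifications; real classification depends on the Act's annexes and legal review.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g., spam filters: no new obligations
    LIMITED = "limited"            # e.g., chatbots: transparency duties
    HIGH = "high"                  # e.g., healthcare or hiring AI: strict requirements
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring: banned outright

# Simplified, illustrative mapping of use cases to tiers (not a lookup
# table you could rely on in practice).
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskLevel.MINIMAL,
    "customer_chatbot": RiskLevel.LIMITED,
    "medical_diagnosis_support": RiskLevel.HIGH,
    "social_scoring": RiskLevel.UNACCEPTABLE,
}

def obligations(level: RiskLevel) -> str:
    """Summarize the regulatory posture attached to each tier."""
    return {
        RiskLevel.MINIMAL: "No new obligations.",
        RiskLevel.LIMITED: "Transparency obligations (e.g., disclose AI use).",
        RiskLevel.HIGH: "Risk assessments, documentation, human oversight, monitoring.",
        RiskLevel.UNACCEPTABLE: "Prohibited.",
    }[level]

for use_case, level in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {level.value} -> {obligations(level)}")
```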

[Figure: EU AI Act risk levels. Classification for AI systems: the four levels of risk as described by the EU AI Act]

Challenges in the AI Act

Red tape & compliance costs

One of the main criticisms of the EU AI Act is the burden of compliance, particularly for high-risk AI systems. Companies developing these systems need to navigate a maze of risk assessments, documentation, and audits. For small and medium-sized enterprises (SMEs), the cost of compliance could range between 1% and 2.7% of their revenue; for a company with €10 million in annual revenue, that translates to roughly €100,000 to €270,000. This is a substantial hit that could place SMEs at a disadvantage compared to larger corporations with deeper pockets.

Risk to innovation

The Act’s stringent requirements may slow down the pace of AI innovation in Europe. High-risk AI projects will need to undergo detailed scrutiny before launch, delaying time to market. In a competitive landscape, this could mean that AI startups and new entrants struggle to innovate as quickly as their counterparts in less-regulated regions like the United States. French President Emmanuel Macron recently highlighted this concern, stating, "Our former model is over. We are overregulating and underinvesting. In the two to three years to come, if we follow our classical agenda, we will be out of the market," underscoring the potential risks of overregulation.

Complexity for open source developers

The Act’s provisions around foundation models and general-purpose AI systems are particularly concerning for open-source AI projects. Open-source models like those developed by EleutherAI and BigScience rely on community-driven efforts without the resources to manage heavy documentation or auditing. Requirements like the Quality Management System could become obstacles for smaller, volunteer-based open-source projects, potentially stifling contributions from grassroots innovators.

Commercial Use of Open-Source Models: Companies using open-source models must still ensure compliance with transparency, risk-assessment, and documentation requirements, especially if the model is deployed in a high-risk AI system. It is also important to check model licenses for region-specific restrictions: Meta's exclusion of its latest multimodal models from Europe, for instance, may be tied to regulatory complexities introduced by the Act. License terms should be reviewed directly on platforms like Hugging Face before adopting a model.

Opportunities in the AI Act

Leading in Ethical AI

The EU AI Act positions Europe as a leader in ethical AI. By defining specific risk levels—ranging from minimal to unacceptable—the regulation sets clear boundaries on what’s permissible. This is aimed at safeguarding citizens from potentially harmful AI practices, such as social scoring and biased decision-making in critical areas like healthcare and education. By doing so, the Act could help foster consumer trust in AI solutions developed in Europe.

Harmonization across the EU

Another advantage of the EU AI Act is its standardization across the European Union. For AI companies, having a consistent set of rules means less guesswork when operating in multiple EU countries. It creates a uniform market with aligned expectations and regulations, making it easier for companies to expand within the EU. By setting these standards, the EU also aims to influence global AI regulation, potentially leveling the playing field for companies that already meet these rigorous standards.

Support for safe open source development

Despite its challenges, the AI Act also recognizes the importance of open ecosystems. Recommendations from open-source stakeholders, such as exempting non-commercial AI components from the Act's strictest requirements, aim to protect grassroots development. This means initiatives like BigScience's BLOOM or EleutherAI's models can continue to grow while complying with proportionate, reasonable regulations.

The role of data management in AI compliance

To meet the EU AI Act's standards, AI companies must go beyond model development and focus on how they handle data. This means robust access management, strong encryption protocols, and user roles and permissions set in alignment with GDPR. Implementing comprehensive AI audit and traceability measures allows companies to keep logs of data access, processing activities, and model training. This traceability is critical for transparency and accountability, enabling businesses to trace decisions back to the data and algorithms used.
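As a rough illustration, the sketch below shows one way such an audit trail might look in Python. The event fields, roles, and hashing scheme are our own assumptions for the example; the Act requires traceability but does not prescribe this structure.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative audit-trail sketch: every data access and training run is
# logged with a content hash so decisions can later be traced back to the
# exact data and model version involved. Field names are assumptions.
AUDIT_LOG = []  # in production: append-only, tamper-evident storage

def sha256_of(payload: bytes) -> str:
    """Content hash linking a log entry to the exact artifact touched."""
    return hashlib.sha256(payload).hexdigest()

def log_event(actor: str, action: str, artifact: bytes, role: str) -> dict:
    """Record who did what to which data, and under which role/permission."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "role": role,      # role-based access aligned with GDPR duties
        "action": action,  # e.g., "read", "train", "export"
        "artifact_sha256": sha256_of(artifact),
    }
    AUDIT_LOG.append(event)
    return event

# Example usage: log a training-data access and a training run.
training_data = b"...training examples..."
log_event("alice@example.com", "read", training_data, role="ml-engineer")
log_event("pipeline-ci", "train", b"model-weights-v1", role="service-account")
print(json.dumps(AUDIT_LOG, indent=2))
```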

What Does This Mean for Your AI Company?

As an AI company in Europe, navigating the EU AI Act will require thoughtful adaptation. Here are some key aspects to focus on:

  • Understanding Risk Categories: Classify your AI system into one of the four risk levels: minimal, limited, high, or unacceptable. This classification will determine the regulatory obligations you need to meet. High-risk and unacceptable-risk systems face stricter scrutiny, making compliance crucial.
  • Comprehensive Documentation: Keep thorough records of training data, model architecture, and risk management. Documentation will support compliance and facilitate smoother audits.
  • Secure Data Systems and Management: Alongside model development, how you handle data plays a critical role in compliance with the EU AI Act. Implement robust access management and strong encryption, aligned with GDPR, and set proper user roles and permissions to protect against unauthorized access. Establish comprehensive AI audit and traceability measures to maintain logs of data access, processing activities, and model training (along the lines of the audit-trail sketch above). This ensures transparency and accountability, enabling you to trace decisions back to the data and algorithms used, which is crucial for meeting regulatory obligations.
  • Regular Risk Assessments: Identify biases and security concerns through routine risk assessments. Addressing these proactively helps ensure ongoing compliance.
  • Monitor Post-Deployment: After deployment, continue to track your system’s performance to catch emerging issues quickly and stay compliant (a minimal monitoring sketch follows this list).
  • Open Source Considerations: Clarify the use of open-source components in your AI. While non-commercial open-source AI is often exempt, commercial use may trigger additional obligations.
  • Partnering with Experts: Work with providers like Genesis Cloud for GPU-powered infrastructure that supports compliance, scalability, and security.
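
To ground the post-deployment monitoring point above, here is a minimal drift-check sketch in Python. The baseline value, tolerance, and confidence metric are illustrative assumptions on our part; the EU AI Act mandates ongoing monitoring for high-risk systems but does not prescribe specific metrics or thresholds.

```python
import statistics

# Minimal post-deployment monitoring sketch: compare live prediction
# confidence against a validation baseline and flag drift for review.
# Both values below are assumed for illustration, not mandated anywhere.
BASELINE_MEAN_CONFIDENCE = 0.87  # measured during validation (assumed)
DRIFT_TOLERANCE = 0.10           # alert if live mean drops this far below baseline

def check_for_drift(live_confidences: list[float]) -> bool:
    """Return True (and alert) when live performance drifts from baseline."""
    live_mean = statistics.mean(live_confidences)
    drifted = (BASELINE_MEAN_CONFIDENCE - live_mean) > DRIFT_TOLERANCE
    if drifted:
        print(f"ALERT: mean confidence {live_mean:.2f} vs baseline "
              f"{BASELINE_MEAN_CONFIDENCE:.2f}; trigger a risk re-assessment")
    return drifted

# Example: a window of recent production predictions.
recent = [0.71, 0.68, 0.75, 0.70, 0.73]
check_for_drift(recent)
```

In practice, an alert like this would feed into the regular risk assessments and documentation described above, closing the loop between monitoring and compliance records.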

Your move: Navigating the EU AI Act

The EU AI Act is neither a silver bullet nor a death sentence for AI innovation. It’s a complex framework that brings both challenges and opportunities. For AI-driven companies, understanding these nuances will be key to thriving in this new regulatory environment. At Genesis Cloud, we’re here to help you navigate these changes—from ensuring compliance to scaling your AI projects securely and cost-effectively.

Keep accelerating

The Genesis Cloud team 🚀

Never miss out again on Genesis Cloud news and our special deals: follow us on Twitter, LinkedIn, or Reddit.

Sign up for an account with Genesis Cloud here. If you want to find out more, please write to contact@genesiscloud.com.
