A Practical Overview of the EU AI Act for Business Leaders
Europe’s landmark AI Act is here – and it’s reshaping how companies worldwide build and deploy AI systems. Much like GDPR shook up data privacy globally, the EU AI Act introduces sweeping rules for AI with extraterritorial reach. In other words, even if your enterprise is based outside Europe, the Act likely applies if you offer AI-driven products or services in the EU market or if your AI systems’ outputs are used within the EU. With key provisions rolling out from August 2025 onward, tech leaders and CxOs must understand the Act’s requirements now to avoid hefty penalties and business disruptions.
What is the EU AI Act?
The AI Act aims to ensure that AI is safe, respects fundamental rights, and supports innovation in the European Union.
It introduces a risk-based regulatory framework: the greater an AI system’s potential impact on safety and fundamental rights, the stricter the requirements.
The AI Act classifies AI systems into four risk categories:
• Unacceptable risk: practices that are banned outright, such as social scoring and manipulative AI.
• High risk: systems with a significant impact on people’s safety or fundamental rights.
• Limited risk: systems subject to transparency obligations (e.g., chatbots must disclose that users are interacting with AI).
• Minimal risk: the vast majority of AI applications, which face no new obligations.
High-risk AI systems (including applications in HR, finance, healthcare, and education) must comply with strict requirements around human oversight, explainability, bias mitigation, cybersecurity, and documentation.
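To make this triage concrete for an internal review, here is a minimal, hypothetical sketch in Python. The category names follow the Act, but the domain-to-category mapping is a simplified illustration and not legal advice.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # sensitive domains, e.g. HR, credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # no new obligations

# Simplified, illustrative mapping -- real classification requires legal
# analysis of the specific use case against the Act's annexes.
HIGH_RISK_DOMAINS = {"hr", "finance", "healthcare", "education"}
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

def triage(use_case: str, domain: str, user_facing_chatbot: bool = False) -> RiskCategory:
    """Rough first-pass triage of an AI use case for an internal inventory."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskCategory.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH
    if user_facing_chatbot:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

print(triage("cv_screening", "hr"))  # RiskCategory.HIGH
```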
Penalties for non-compliance:
Up to €35 million or 7% of global annual turnover (whichever is higher) for the most serious violations, such as prohibited AI practices. Lower tiers apply to other infringements: up to €15 million or 3% of turnover for most obligations, and up to €7.5 million or 1% for supplying incorrect information to authorities.
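Because the cap is “whichever is higher”, exposure scales with company size. A quick illustrative calculation (the turnover figure is hypothetical):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound for the most serious violations: EUR 35M or 7% of global turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 2B global annual turnover: 7% = EUR 140M, which exceeds EUR 35M.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```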
Understanding GPAI and When You Become a Provider
GPAI (General-Purpose AI) refers to versatile AI models — such as LLMs — that are not tailored for a single application but can be adapted across various tasks and industries. Examples: GPT-4, Claude 3, Gemini 1.5, open-weight LLMs.
Starting August 2, 2025, new obligations apply to GPAI providers.
Key Clarification:
If you develop, provide, fine-tune, or significantly modify a GPAI model (e.g., modifying its weights or capabilities), you may be categorized as a provider for the modified model.
Fine-tuning a GPAI model = Compliance responsibilities for that modification.
Even if you don’t build the base model, if you modify it meaningfully, you have obligations for your version.
Core Obligations for GPAI Providers
Providers of GPAI models must:
• Maintain up-to-date technical documentation covering the model’s training and testing process.
• Provide information and documentation to downstream providers who build AI systems on top of the model.
• Put in place a policy to comply with EU copyright law, including honoring text-and-data-mining opt-outs.
• Publish a sufficiently detailed summary of the content used to train the model.
Open-source GPAI models:
If a model is genuinely open-source (weights, architecture, and license terms allow free use and modification), obligations are lighter. Copyright compliance and the training-content transparency summary remain mandatory, however.
Exception: If a model poses systemic risk, even open-source providers face full obligations.
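This tiering can be summarized in a small decision sketch. The function and flag names are illustrative assumptions, and real determinations (e.g., what counts as a “significant” modification or a “truly” open-source release) require legal analysis.

```python
def gpai_obligation_tier(fine_tuned_by_you: bool,
                         truly_open_source: bool,
                         systemic_risk: bool) -> str:
    """Rough sketch of which GPAI obligation tier applies to your organization."""
    if systemic_risk:
        # Systemic-risk models carry full obligations, open-source or not.
        return "full obligations + systemic-risk duties"
    if truly_open_source:
        # Lighter regime, but copyright policy and training-content
        # transparency remain mandatory.
        return "copyright compliance + training-content summary"
    if fine_tuned_by_you:
        # Fine-tuning makes you a provider for the modified model.
        return "provider obligations for your modified model"
    return "standard GPAI provider obligations"

print(gpai_obligation_tier(fine_tuned_by_you=True,
                           truly_open_source=False,
                           systemic_risk=False))
```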
Systemic-Risk GPAI Models: Extra Obligations
Models trained with cumulative compute exceeding 10²⁵ FLOPs (frontier models such as GPT-4, Claude 3 Opus, or Gemini 1.5) are presumed to be systemic-risk models.
Additional obligations include:
• Performing state-of-the-art model evaluations, including adversarial testing.
• Assessing and mitigating systemic risks at the EU level.
• Tracking, documenting, and reporting serious incidents to the AI Office.
• Ensuring an adequate level of cybersecurity for the model and its infrastructure.
Systemic risk is determined by computational scale and potential impact on public safety, democracy, rights, and the environment.
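To get a rough sense of whether a model approaches the 10²⁵ FLOPs threshold, a common back-of-envelope heuristic from the scaling-laws literature estimates training compute as about 6 × parameters × training tokens. This is an approximation, not the Act’s official measurement method, and the model figures below are hypothetical.

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute using the common 6*N*D heuristic."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the AI Act

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs -> systemic risk presumed: False
```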
Timeline for Compliance and Events
May 2025 – Code of Practice Finalization
The voluntary Code of Practice for GPAI providers is expected to be finalized. While not legally binding, it provides best practices for compliance with the AI Act.
August 2, 2025 – GPAI Model Compliance Obligations Begin
Applies to developers, providers, and fine-tuners of General-Purpose AI models. Requirements include the following (a minimal record sketch follows the list):
• Publishing training data summaries
• Ensuring copyright compliance
• Risk assessment and mitigation
• Serious incident reporting
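As one way to operationalize the first requirement, below is a minimal, hypothetical record structure for a training-data summary. The AI Office publishes an official template for this summary; the fields here are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataSummary:
    """Illustrative record for a public training-data summary (hypothetical schema)."""
    model_name: str
    data_sources: list[str] = field(default_factory=list)  # e.g. web crawls, licensed corpora
    modalities: list[str] = field(default_factory=list)    # e.g. text, code, images
    copyright_policy_url: str = ""                         # link to your TDM opt-out policy
    collection_period: str = ""                            # e.g. "2021-01 to 2024-06"

summary = TrainingDataSummary(
    model_name="acme-lm-v1",
    data_sources=["public web crawl", "licensed news archive"],
    modalities=["text"],
    copyright_policy_url="https://example.com/copyright-policy",
    collection_period="2021-01 to 2024-06",
)
```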
August 2, 2026 – High-Risk AI System Compliance Becomes Mandatory
Applies to high-risk AI systems used in HR, finance, healthcare, education, etc. Full compliance with AI Act requirements will be enforced.
Requirements include:
• Human oversight
• Explainability
• Documentation
• Bias mitigation
• Cybersecurity
Early preparation is critical for enterprises involved in developing or fine-tuning AI models.
Core and High-Risk AI Compliance Obligations
Enterprises building high-risk AI systems must (a minimal logging sketch follows this list):
• Implement a risk management system spanning the full AI lifecycle.
• Apply data governance so that training, validation, and test data are relevant, representative, and as error-free as possible.
• Maintain technical documentation and automatic event logging for traceability.
• Design for effective human oversight and provide clear instructions for use.
• Achieve appropriate levels of accuracy, robustness, and cybersecurity.
• Complete a conformity assessment and register the system in the EU database before placing it on the market.
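To illustrate the logging and traceability requirement, here is a minimal sketch using Python’s standard logging module. The field names are illustrative assumptions; actual record-keeping obligations depend on the system and applicable harmonized standards.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_decision(model_id: str, input_ref: str, output: str,
                 confidence: float, reviewed_by_human: bool) -> None:
    """Append a traceable record of an automated decision (illustrative fields)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "input_ref": input_ref,  # a reference, not raw personal data
        "output": output,
        "confidence": confidence,
        "reviewed_by_human": reviewed_by_human,
    }))

log_decision("cv-screener-v2", "application-8841", "shortlist", 0.87, reviewed_by_human=True)
```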
Special Attention to Key Sectors:
High-risk domains are priority areas for strict audits and enforcement by regulators.
The Road Ahead for Enterprises
Pragmatic steps companies should take now:
• Inventory your AI assets: understand whether you use GPAI models or build high-risk AI systems (a minimal registry sketch follows this list).
• Establish internal governance for AI development and deployment.
• Set up transparency and auditability standards early.
• Ensure traceability of model training datasets and third-party AI tools.
• Engage with trusted partners like Superbo’s GenAI Fabric, designed for transparency, modularity, security, and regulatory readiness.
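For the first step, here is a minimal sketch of what an AI-asset inventory entry might look like (all names and fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One entry in an internal AI inventory (illustrative fields)."""
    name: str
    vendor_or_internal: str  # e.g. "OpenAI API" or "internal"
    uses_gpai: bool          # built on a general-purpose model?
    fine_tuned: bool         # did we modify the base model?
    domain: str              # e.g. "hr", "marketing"
    risk_category: str       # from your triage, e.g. "high"

inventory = [
    AIAsset("cv-screener", "internal", uses_gpai=True, fine_tuned=True,
            domain="hr", risk_category="high"),
    AIAsset("support-chatbot", "OpenAI API", uses_gpai=True, fine_tuned=False,
            domain="customer-service", risk_category="limited"),
]

high_risk = [a.name for a in inventory if a.risk_category == "high"]
print(high_risk)  # ['cv-screener']
```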
Conclusion: Compliance as a Competitive Edge
The AI Act may seem daunting, but it offers forward-looking companies a real opportunity:
Responsible AI = Competitive Advantage
Companies that move early — ensuring their AI models and systems are compliant, explainable, and trustworthy — will differentiate themselves in a market increasingly demanding transparency and ethical AI use.
By acting today, enterprises can not only stay ahead of regulation but build the foundation for long-term leadership in the AI-driven economy.