Living with the EU AI Act: Governance Patterns Emerging in Q4 2024
Three months after the EU AI Act entered into force, organisations are discovering what compliance really means. From building AI inventories to navigating risk classification and GPAI obligations, early adopters are developing practical frameworks. Learn the essential governance patterns emerging from real-world implementation in Q4 2024.
11/18/2024 · 4 min read


Three months after the European Union's Artificial Intelligence Act entered into force on 1 August 2024, organisations are beginning to understand what compliance really means in practice. While full enforcement remains two years away, forward-thinking companies are already adapting their governance structures, building AI inventories, and wrestling with the complexities of risk classification. The early lessons from these pioneers are proving invaluable.
The AI Inventory Imperative
The first challenge organisations face is deceptively simple: identifying every AI system within their operations. Creating an AI asset inventory helps organisations determine whether they are dealing with an AI system or an AI model, and it serves as the foundation for all subsequent compliance activities.
What seemed straightforward on paper has proven complex in practice. AI systems are embedded across enterprise functions—from customer service chatbots to supply chain optimisation tools, from HR recruitment platforms to financial forecasting models. Many organisations are discovering AI applications they didn't realise existed, often deployed by individual departments without central oversight.
Leading organisations are establishing cross-functional teams that combine legal, compliance, IT, engineering, and product expertise. These core teams are supported by AI champions in departments across the organisation, who act as the first point of contact for new policies, processes, and tools. This distributed approach ensures comprehensive coverage while keeping subject matter expertise where it matters most.
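To keep the exercise tractable, some teams are capturing each discovered system as a structured record from day one, so that ownership and classification can be tracked centrally. The sketch below shows one possible shape for such a record; the field names and the Python representation are illustrative choices, not anything the Act prescribes.

from dataclasses import dataclass


@dataclass
class AIAssetRecord:
    """One entry in an organisation-wide AI inventory (illustrative fields only)."""
    name: str                          # e.g. "Customer support chatbot"
    owner_department: str              # department that deployed or commissioned it
    is_model: bool                     # True for an underlying model, False for a deployed system
    vendor: str                        # empty string if built in-house
    intended_purpose: str              # what it is actually used for, in plain language
    risk_level: str = "unclassified"   # filled in later, during risk classification


# Example entry surfaced by a departmental AI champion
record = AIAssetRecord(
    name="CV screening assistant",
    owner_department="HR",
    is_model=False,
    vendor="",
    intended_purpose="Rank incoming job applications",
)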
Navigating Risk Classification
Once the inventory is complete, the real work begins: risk classification. The AI Act defines four levels of risk for AI systems—unacceptable, high, limited, and minimal—with obligations scaling in proportion to the assessed risk level.
High-risk systems are subject to the most stringent requirements and cover applications in areas such as biometric identification, critical infrastructure, education, employment, law enforcement, and border control. Determining whether a system qualifies as high-risk involves navigating both Annex I (AI embedded in regulated products) and Annex III (standalone high-risk use cases).
A critical subtlety is emerging around profiling. Any Annex III system used for profiling natural persons—the automated evaluation or prediction of individuals' traits, preferences, or behaviour—is always classified as high-risk, regardless of other factors. This has caught many HR and marketing departments by surprise, as recruitment tools and customer segmentation systems suddenly face far stricter requirements than anticipated.
For organisations uncertain about their classification, the Act permits providers to document an assessment that a system listed in Annex III does not pose a significant risk (an option unavailable where the system performs profiling). However, this requires robust justification and still triggers registration obligations, creating a compliance burden that many are choosing to avoid by simply accepting the high-risk designation.
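The triage logic described above can be sketched as a short decision helper, which some teams use as a first pass before handing borderline cases to legal. The function and parameter names below are illustrative, and the rules are a deliberately simplified reading of the Act rather than a legal determination.

def triage_risk(
    prohibited_practice: bool,
    annex_i_product: bool,
    annex_iii_use_case: bool,
    profiles_natural_persons: bool,
    no_significant_risk_documented: bool = False,
) -> str:
    """First-pass triage of an AI system's risk tier (simplified, not legal advice)."""
    if prohibited_practice:
        return "unacceptable"  # banned outright from 2 February 2025
    if annex_i_product:
        return "high"  # AI embedded in products covered by Annex I legislation
    if annex_iii_use_case:
        # Profiling of natural persons removes the 'no significant risk' exemption.
        if no_significant_risk_documented and not profiles_natural_persons:
            return "not high-risk (documented assessment; registration still required)"
        return "high"
    # Everything else falls into the limited or minimal tiers, which a fuller
    # assessment (e.g. transparency duties for chatbots) would distinguish.
    return "limited or minimal"


# Example: a recruitment tool that profiles candidates
print(triage_risk(False, False, True, True))  # -> "high"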
The GPAI Challenge: When Your LLM Product Needs Special Attention
General-purpose AI models—particularly large language models—represent a distinct regulatory challenge. GPAI models display significant generality and can competently perform a wide range of distinct tasks, regardless of how the model is placed on the market.
For organisations building LLM-based products, understanding the distinction between GPAI models and GPAI systems is crucial. The AI Act distinguishes between "AI models" and "AI systems"—models are the underlying algorithms, while systems incorporate those models into specific applications. A company fine-tuning an open-source LLM for a specific use case may find itself classified as a provider, with all the attendant obligations.
All GPAI model providers must prepare technical documentation and instructions for use, put in place a policy to comply with the Copyright Directive, and publish a summary of the content used for training. For models trained with computation exceeding 10²⁵ floating-point operations, additional systemic-risk obligations apply, including model evaluations, adversarial testing, incident reporting, and cybersecurity protections.
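Whether the 10²⁵ FLOP threshold is even in play can usually be settled with a back-of-the-envelope estimate. The sketch below uses the common 6 × parameters × training-tokens heuristic for training compute; that approximation and the example figures are assumptions for illustration, not something the Act specifies.

# Systemic-risk compute threshold named in the AI Act for GPAI models
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6 * N * D heuristic."""
    return 6.0 * n_parameters * n_training_tokens


# Illustrative example: a 70B-parameter model trained on 2 trillion tokens
flops = estimate_training_flops(70e9, 2e12)       # ~8.4e23 FLOPs
exceeds = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS  # False: well below 1e25
print(f"{flops:.2e} FLOPs, systemic-risk threshold exceeded: {exceeds}")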
The first draft of the Code of Practice for GPAI models, published in mid-November, previews a voluntary compliance pathway, though organisations must carefully assess whether signing it will genuinely reduce their burden or simply make their commitments more visible to regulators.
Practical Compliance Checklists Taking Shape
From the trenches of early implementation, practical compliance frameworks are emerging. For high-risk systems, organisations are developing workflows around:
Risk Management: Establishing continuous risk assessment processes throughout the AI lifecycle, from design through deployment and monitoring.
Data Governance: Ensuring training, validation, and testing datasets meet quality standards, with documented approaches to identifying and mitigating bias.
Technical Documentation: Maintaining comprehensive records that enable authorities to understand system capabilities, limitations, and decision-making processes.
Human Oversight: Designing systems with meaningful human review capabilities, ensuring automated decisions can be challenged and corrected.
Transparency: Providing clear information to deployers and end-users about system capabilities, limitations, and appropriate use cases.
For GPAI model providers, checklists focus on copyright compliance policies, training data summaries, and for systemic-risk models, adversarial testing protocols and incident response procedures.
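Several teams are also turning these checklists into machine-readable artefacts so progress can be tracked per system and rolled up for reporting. The structure below is one possible shape, covering the five high-risk workstreams above plus the GPAI items; the keys, owners, and status values are an illustrative convention, not a regulatory format.

# Illustrative per-system compliance tracker (owners and statuses are examples)
HIGH_RISK_CHECKLIST = {
    "risk_management":         {"owner": "Compliance",  "status": "in_progress"},
    "data_governance":         {"owner": "Data Office", "status": "not_started"},
    "technical_documentation": {"owner": "Engineering", "status": "in_progress"},
    "human_oversight":         {"owner": "Product",     "status": "not_started"},
    "transparency":            {"owner": "Legal",       "status": "not_started"},
}

GPAI_PROVIDER_CHECKLIST = {
    "copyright_policy":      {"owner": "Legal",       "status": "in_progress"},
    "training_data_summary": {"owner": "Engineering", "status": "not_started"},
    # Only relevant above the systemic-risk compute threshold:
    "adversarial_testing":   {"owner": "Security",    "status": "not_applicable"},
    "incident_response":     {"owner": "Security",    "status": "not_applicable"},
}


def open_items(checklist: dict) -> list[str]:
    """Return the checklist areas that still need work."""
    return [k for k, v in checklist.items() if v["status"] in ("not_started", "in_progress")]


print(open_items(HIGH_RISK_CHECKLIST))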
The Governance Gap Analysis
Perhaps the most valuable insight from Q4 2024 is the recognition that AI Act compliance isn't entirely new ground. Many of the requirements under the Act overlap with obligations enterprises already meet under the GDPR, so they will benefit heavily from taking cues from their privacy officers when addressing areas like risk management, data governance, and transparency.
Smart organisations are conducting gap analyses to identify which existing processes can be adapted rather than building everything from scratch. Quality management systems, data protection impact assessments, and cybersecurity frameworks often provide solid foundations that need augmentation rather than wholesale replacement.
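In practice, the gap analysis often reduces to a simple mapping from each AI Act requirement area to the existing process that partially covers it, with the unmapped areas becoming the build list. The mapping below is purely illustrative; which processes actually transfer will vary by organisation.

# Illustrative gap analysis: AI Act requirement area -> existing process (None = gap)
GAP_ANALYSIS = {
    "risk_management":         "enterprise risk management / DPIA process",
    "data_governance":         "GDPR records of processing and data quality controls",
    "technical_documentation": "quality management system documentation",
    "transparency":            "GDPR privacy notices",
    "human_oversight":         None,  # frequently a genuine gap needing a new process
}

gaps = [area for area, existing in GAP_ANALYSIS.items() if existing is None]
print("Areas needing new processes:", gaps)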
Looking Ahead: The February Milestone
As organisations settle into their compliance journeys, the next major milestone looms: 2 February 2025, when prohibitions on unacceptable-risk AI systems take effect. While most organisations aren't deploying social scoring systems or manipulative AI, the deadline serves as a wake-up call. The compliance timeline is compressed, and organisations that haven't started their inventories and risk assessments are rapidly running out of runway.
The patterns emerging in Q4 2024 suggest that successful AI Act compliance isn't about legal compliance teams working in isolation. It requires cross-functional collaboration, pragmatic risk assessment, and a willingness to adapt existing governance structures rather than creating parallel compliance apparatus. For organisations willing to learn from early implementers, the path to compliance is becoming clearer—even if the destination remains challenging.

