EU AI Act in Force, What LLM Builders Need to Know Right Now
The EU AI Act entered force August 2024 with staggered compliance deadlines now approaching. Understand crucial obligations for general-purpose LLM providers, systemic risk requirements for frontier models, high-risk classification triggers for specific applications, and why internal tools aren't exempt—plus immediate actions organizations must take.
9/2/2024 · 3 min read


The European Union's AI Act officially entered into force on August 1, 2024, marking the world's first comprehensive regulatory framework for artificial intelligence. For organizations building or deploying large language models, this isn't distant regulatory noise—it's immediate compliance reality with staggered timelines that demand attention now.
The Tiered Timeline
The AI Act implements a phased approach. Prohibited AI practices face bans within six months—by February 2025. These include social scoring systems, real-time biometric identification in public spaces (with limited exceptions), and manipulative AI that exploits vulnerabilities. Most LLM applications don't trigger these prohibitions, but conversational systems designed to manipulate behavior or exploit children require immediate review.
General-purpose AI models face obligations starting August 2025, twelve months after entry into force. This timeline matters urgently. Organizations building or fine-tuning foundation models need compliance infrastructure operational within a year, and building that infrastructure requires starting immediately.
Most high-risk AI systems must comply by August 2026, with an extended deadline of August 2027 for high-risk AI embedded in products already governed by EU safety legislation. Organizations developing such systems should begin classification and documentation now; a two-year window evaporates quickly when addressing technical requirements, governance processes, and audit trails.
General-Purpose AI: New Obligations for Foundation Models
The Act distinguishes between standard general-purpose AI and models with "systemic risk"—those trained with computational power exceeding 10^25 FLOPs. GPT-4, Claude 3, and similar frontier models clearly qualify. Smaller models may not trigger systemic risk provisions but still face general-purpose obligations.
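Because the systemic-risk presumption hinges on training compute, a quick back-of-the-envelope screen can flag models that may cross the line. The sketch below uses the common 6 × parameters × tokens approximation for dense transformer training compute; that heuristic and the example figures are assumptions for illustration, not a legal determination.

```python
# Rough screen against the AI Act's systemic-risk compute threshold (10^25 FLOPs).
# Uses the ~6 * parameters * training-tokens approximation for dense transformer
# training compute; actual figures depend on architecture and training setup.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for systemic risk

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute for a dense transformer (6ND heuristic)."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```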
For all general-purpose AI models, providers must:
Maintain technical documentation covering training data, computational resources, testing procedures, and known limitations. This isn't lightweight paperwork—the documentation must enable authorities to assess model capabilities and risks. Organizations that haven't systematically documented training processes face substantial retroactive work.
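One practical starting point is keeping this documentation in machine-readable form from day one rather than reconstructing it later. The sketch below is a minimal internal record; the field names are illustrative assumptions, not the Act's official documentation template.

```python
# Illustrative internal record for model technical documentation.
# Fields are assumptions for bookkeeping, not the official template;
# map them to the Act's documentation requirements with legal review.
from dataclasses import dataclass

@dataclass
class ModelTechnicalDocumentation:
    model_name: str
    training_data_sources: list[str]       # provenance of training corpora
    training_compute_flops: float          # estimated total training compute
    evaluation_procedures: list[str]       # benchmarks and red-team tests run
    known_limitations: list[str]           # documented failure modes
    energy_consumption_kwh: float | None = None

doc = ModelTechnicalDocumentation(
    model_name="internal-llm-v2",
    training_data_sources=["licensed-news-corpus", "public-domain-books"],
    training_compute_flops=6.3e24,
    evaluation_procedures=["MMLU", "internal adversarial prompt suite"],
    known_limitations=["hallucinates citations", "weak on low-resource languages"],
)
```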
Provide transparency about training content. Providers must publish a sufficiently detailed summary of the data used for training and supply documentation that lets downstream deployers understand the model's capabilities and limits. If your model was trained on copyrighted material, you must have a policy for respecting rights holders' machine-readable opt-outs under EU copyright law. For models trained on web scrapes or undocumented sources, this creates immediate compliance challenges.
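One screening step teams can automate is checking scraped sources for machine-readable reservations. The sketch below checks robots.txt signals with Python's standard library; it is an illustrative screen only, since robots.txt is not the only valid way rights holders express an opt-out, and any hit should go to rights review rather than be treated as a final answer.

```python
# Minimal screen for machine-readable opt-out signals on scraped sources.
# robots.txt disallow rules are one common (not legally exhaustive) signal of a
# text-and-data-mining reservation; treat any disallowed source as "needs review".
import urllib.robotparser
from urllib.parse import urlparse

def crawl_allowed(source_url: str, user_agent: str = "MyTrainingCrawler") -> bool:
    """Return False if the site's robots.txt disallows this crawler for the URL."""
    parsed = urlparse(source_url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, source_url)

sources = ["https://example.com/articles/1"]  # illustrative source list
flagged = [url for url in sources if not crawl_allowed(url)]
print("Sources needing rights review:", flagged)
```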
Implement usage policies and safeguards to prevent generating illegal content. While LLMs already implement safety measures, the Act formalizes this requirement with potential legal liability for inadequate controls.
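In practice this usually means gating both prompts and responses behind a policy check. The sketch below is a minimal wrapper; `violates_usage_policy` is a placeholder standing in for whatever moderation rules or classifier an organization actually uses.

```python
# Hedged sketch of an output safeguard: run a policy check before and after
# generation. The keyword rule is a stand-in for a real moderation approach.

BLOCKED_MESSAGE = "This request conflicts with our usage policy."

def violates_usage_policy(text: str) -> bool:
    """Placeholder policy check; replace with your moderation classifier or rules."""
    banned_markers = ["how to build a weapon"]  # illustrative only
    return any(marker in text.lower() for marker in banned_markers)

def safe_generate(prompt: str, generate) -> str:
    """Wrap a generation callable with pre- and post-generation policy checks."""
    if violates_usage_policy(prompt):
        return BLOCKED_MESSAGE
    response = generate(prompt)
    return BLOCKED_MESSAGE if violates_usage_policy(response) else response
```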
Systemic risk models face additional burdens: model evaluation requirements, adversarial testing, incident reporting mechanisms, and cybersecurity measures. Organizations offering API access to frontier models must implement monitoring systems that detect misuse patterns and potential systemic risks.
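For API providers, misuse detection can start as simply as tracking policy-flagged requests per key over a sliding window and escalating when a threshold is crossed. The sketch below illustrates that pattern; the window, threshold, and incident format are assumptions to adapt to your own monitoring stack.

```python
# Illustrative misuse-pattern monitor for an LLM API: count policy-flagged
# requests per API key in a sliding window and emit an incident record when a
# threshold is crossed. Window, threshold, and record format are assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
FLAG_THRESHOLD = 25

_flag_history: dict[str, deque] = defaultdict(deque)

def record_flagged_request(api_key: str, reason: str, now: float | None = None) -> dict | None:
    """Log a policy-flagged request; return an incident dict if the key crosses the threshold."""
    now = now if now is not None else time.time()
    history = _flag_history[api_key]
    history.append(now)
    # Drop events that have aged out of the sliding window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= FLAG_THRESHOLD:
        return {"api_key": api_key, "flag_count": len(history), "latest_reason": reason, "ts": now}
    return None
```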
High-Risk Applications: When Your LLM Use Triggers Scrutiny
The Act defines high-risk AI systems by application domain rather than underlying technology. LLMs deployed in specific contexts automatically qualify:
Employment decisions: Resume screening, candidate evaluation, or performance assessment systems using LLMs are high-risk. HR departments deploying AI-powered hiring tools face conformity assessments, human oversight requirements, and bias monitoring obligations.
Credit scoring and financial decisions: LLMs analyzing creditworthiness or making lending recommendations trigger high-risk classification. Financial institutions must implement rigorous testing, documentation, and human review processes.
Education and training: AI systems that determine educational pathways or qualification access are high-risk. Automated essay grading or admissions support requires compliance infrastructure.
Law enforcement and justice: Any LLM application assisting judicial decisions, assessing recidivism risk, or supporting law enforcement investigations faces the strictest requirements.
High-risk system deployers must maintain detailed logs, ensure human oversight, implement accuracy and robustness testing, and establish quality management systems. These aren't checkbox exercises—they require organizational processes, technical infrastructure, and ongoing maintenance.
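Concretely, logging for a high-risk deployment needs to capture not just model inputs and outputs but the human-oversight outcome attached to each decision. The sketch below appends one auditable record per decision to a JSONL file; the fields and storage format are assumptions, not a mandated schema.

```python
# Sketch of a per-decision audit record for a high-risk deployment: inputs,
# outputs, and the human review outcome, appended to durable storage.
import json
import datetime

def log_high_risk_decision(path: str, *, system_id: str, input_summary: str,
                           model_output: str, human_reviewer: str,
                           human_decision: str, override: bool) -> None:
    """Append one auditable record of an AI-assisted decision and its human review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,   # avoid storing raw personal data where possible
        "model_output": model_output,
        "human_reviewer": human_reviewer,
        "human_decision": human_decision,
        "human_overrode_model": override,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```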
Internal Tools: Not Always Exempt
A common misconception holds that internal AI tools escape regulation. This is false. If your internal recruiting tool screens applicants, it's high-risk regardless of whether candidates see the AI. If your internal credit model influences lending decisions, compliance applies.
The distinction is end-use impact, not deployment context. Internal tools affecting employment, credit, education, or legal outcomes trigger obligations. Productivity tools like coding assistants or document summarizers generally avoid high-risk classification unless they make consequential decisions.
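A simple classification helper can make that end-use test explicit when inventorying systems. The sketch below is a rough triage aid using shorthand domain labels, not the Act's Annex III wording, and is no substitute for legal review.

```python
# Minimal triage sketch: risk tier follows the end use, not whether the tool is
# internal or customer-facing. Domain labels are illustrative shorthand.
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "law_enforcement", "justice"}

def classify_use_case(domain: str, makes_consequential_decision: bool) -> str:
    """Return a rough risk tier for an LLM use case."""
    if domain in HIGH_RISK_DOMAINS and makes_consequential_decision:
        return "high-risk"
    return "review-needed" if domain in HIGH_RISK_DOMAINS else "minimal/limited-risk"

# An internal resume-screening tool lands in the same tier as a customer-facing one.
print(classify_use_case("employment", makes_consequential_decision=True))      # high-risk
print(classify_use_case("productivity", makes_consequential_decision=False))   # minimal/limited-risk
```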
Immediate Action Items
Organizations deploying LLMs in the European market—or serving European users—should act now:
Classify your systems: Determine whether your LLM applications qualify as general-purpose, high-risk, or neither. This classification drives all subsequent obligations.
Audit training data and documentation: For models you've trained or fine-tuned, ensure you can demonstrate compliance with transparency and copyright obligations.
Implement governance structures: Designate compliance officers, establish risk management processes, and create incident reporting mechanisms before deadlines arrive.
Review vendor relationships: If you're deploying third-party models, ensure providers will meet their obligations. Your compliance depends partially on upstream providers' actions.
Monitor guidance publications: The EU AI Office is publishing implementation guidance through 2024 and 2025, including codes of practice for general-purpose AI. Regulatory interpretation will evolve—static compliance strategies will fail.
The Strategic Reality
The AI Act represents regulatory certainty after years of ambiguity. While compliance imposes costs, it also creates competitive advantages for organizations that address requirements proactively. Those rushing toward deadlines will face scrambled implementations and potential penalties. Those treating compliance as strategic groundwork will build robust, trustworthy systems that serve European markets confidently.

