The 2023 AI Regulation Moment: EU AI Act, US Executive Actions, and Global Responses
The EU AI Act approaches final passage with risk-based requirements and foundation model provisions. Biden's Executive Order mobilizes federal agencies while the UK pursues pro-innovation principles. Companies face mounting compliance obligations across jurisdictions over the next 12-24 months, requiring documentation, risk frameworks, and cross-functional governance now.
11/13/2023 · 4 min read


After years of speculation about when governments would meaningfully regulate artificial intelligence, 2023 has delivered a decisive answer: now. From Brussels to Washington to Beijing, regulatory frameworks are rapidly taking shape, each reflecting different priorities and philosophies about how to govern transformative technology. For companies deploying AI systems, the regulatory landscape that seemed distant and theoretical just months ago is becoming concrete and imminent.
The EU AI Act: Nearing the Finish Line
The European Union's AI Act, in development since 2021, is approaching final passage with trilogue negotiations between Parliament, Council, and Commission now in their critical phase. The framework's risk-based approach has remained consistent: higher-risk applications face stricter requirements, while low-risk systems operate relatively freely.
The Act classifies AI systems into four risk categories. Unacceptable-risk applications—social scoring systems, real-time biometric surveillance in public spaces, exploitation of vulnerable groups—face outright bans. High-risk systems used in employment, education, law enforcement, or critical infrastructure must meet stringent requirements including risk assessments, data governance standards, human oversight, and transparency obligations. Limited-risk systems, such as chatbots, carry lighter transparency duties (users must be told they are interacting with AI), while minimal-risk systems face no new obligations.
Recent negotiations have focused intensely on foundation models. The rapid emergence of GPT-4 and similar systems caught regulators somewhat unprepared, forcing amendments to address general-purpose AI that wasn't central to the original draft. The current proposal would impose transparency requirements on foundation model providers—documenting training data, publishing capability evaluations, implementing safeguards against illegal content generation.
Foundation model providers with "systemic risk"—likely models beyond certain capability or scale thresholds—would face additional requirements including adversarial testing, incident reporting, and cybersecurity measures. This two-tier approach attempts to regulate powerful AI without stifling smaller developers.
Implementation timelines remain under negotiation, but early provisions could take effect by late 2024, with full enforcement beginning 2025-2026. The regulation's extraterritorial reach means non-EU companies offering services in European markets must comply, making the Act a de facto global standard whether other jurisdictions adopt similar frameworks or not.
US Executive Action and Agency Mobilization
The United States has taken a characteristically different approach: executive action rather than comprehensive legislation, with regulatory authority distributed across existing agencies rather than centralized in new bodies.
President Biden's Executive Order on Safe, Secure, and Trustworthy AI, signed October 30, represents the most significant US government intervention to date. Rather than creating new restrictions, it mobilizes existing regulatory authorities toward AI oversight. The Department of Commerce will develop guidelines for AI system testing and evaluation. The National Institute of Standards and Technology will establish AI safety standards. Federal agencies must assess AI use in their operations and implement risk management frameworks.
Notably, the order invokes the Defense Production Act to require companies training foundation models above certain computational thresholds (initially set at 10^26 operations) to report training runs and share safety test results with the government. This represents unprecedented federal visibility into AI development, though implementation details remain unclear.
The approach emphasizes voluntary commitments from industry leaders. Major AI companies pledged to red-team models before release, develop watermarking systems for AI-generated content, and share vulnerability information—commitments that are aspirational rather than legally binding but signal expected norms.
Congressional efforts toward comprehensive AI legislation have stalled amid partisan disagreements about appropriate scope and federal versus state authority. Sector-specific regulation through agencies like the FTC, FDA, and SEC appears more likely than overarching framework legislation in the near term.
UK's Pro-Innovation Stance
The United Kingdom has deliberately positioned itself as the pro-innovation alternative to EU regulation. Rather than comprehensive legislation, the UK emphasizes existing regulators applying AI principles within their domains—the financial regulator overseeing AI in banking, healthcare regulators governing medical AI applications.
The AI Safety Summit at Bletchley Park in early November showcased this approach. By convening international leaders and AI company executives, the UK sought to establish itself as a global coordinator on AI safety without imposing rigid regulatory frameworks that might discourage development. The summit's Bletchley Declaration, a joint statement on frontier AI risks signed by 28 countries and the EU, including both the United States and China, underscored that coordinating role.
The UK's principles-based approach emphasizes safety, transparency, fairness, accountability, and contestability—but leaves implementation to sector-specific regulators with domain expertise. Critics argue this creates regulatory fragmentation and insufficient enforcement. Proponents counter that flexibility encourages innovation while addressing context-specific risks.
The contrast with the EU is deliberate. Post-Brexit Britain seeks competitive advantage by offering a lighter regulatory environment to attract AI investment while the EU imposes heavier compliance burdens. Whether this gambit succeeds depends partly on whether the EU framework proves innovation-inhibiting, or whether clear rules actually attract investment by reducing uncertainty.
China's Distinctive Framework
China has moved aggressively to regulate AI, particularly generative systems, with distinct priorities reflecting its governance model. Requirements for large language models include government approval before public deployment, content filtering to align with "socialist values," and mandatory real-name registration for users.
The Cyberspace Administration of China's Interim Measures for the Management of Generative AI Services, in force since August 2023, emphasize content control and social stability over the privacy and individual-rights concerns prominent in Western frameworks. AI systems must not generate content that could undermine state authority, spread false information, or violate socialist core values.
This creates difficult tradeoffs for global AI companies. Operating in China requires technical and content adaptations potentially incompatible with Western deployments. Some companies may choose to exit the Chinese market rather than compromise global systems; others may maintain separate China-specific deployments.
What Companies Should Prepare For
Organizations deploying AI systems should prepare for significantly increased compliance obligations over the next 12-24 months:
Documentation requirements will intensify. Companies should begin maintaining detailed records of AI system development: training data sources, testing methodologies, identified risks, mitigation measures, and deployment decisions. The EU AI Act and similar frameworks will require this documentation for high-risk systems.
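As a concrete illustration, the sketch below shows what a minimal internal record for one AI system might look like. The schema and field names (AISystemRecord and its attributes) are hypothetical assumptions, not a format prescribed by the EU AI Act or any regulator; they simply mirror the documentation categories listed above.

```python
# A minimal sketch of an internal AI system record, assuming a hypothetical
# in-house schema -- not a format prescribed by the EU AI Act or any regulator.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """Running documentation for one AI system, kept from development onward."""
    system_name: str
    intended_purpose: str
    training_data_sources: list[str]   # provenance of datasets used
    testing_methodology: str           # how the system was evaluated
    identified_risks: list[str]        # harms, bias, and safety concerns found
    mitigation_measures: list[str]     # what was done about each identified risk
    deployment_decisions: list[str] = field(default_factory=list)  # dated go/no-go notes
    last_reviewed: date = field(default_factory=date.today)


# Example entry for a hypothetical hiring-screening model
record = AISystemRecord(
    system_name="resume-screening-v2",
    intended_purpose="Rank inbound job applications for recruiter review",
    training_data_sources=["internal ATS records 2019-2023 (anonymized)"],
    testing_methodology="Holdout evaluation plus demographic parity checks",
    identified_risks=["Potential bias against career-gap candidates"],
    mitigation_measures=["Removed employment-gap features; added human review step"],
)
```

Keeping this as structured data rather than scattered documents makes it easier to export whatever format regulators eventually require.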
Risk assessment frameworks should be implemented now. Identify which AI systems qualify as high-risk under emerging regulations. Conduct impact assessments evaluating potential harms, bias, and safety concerns. Establish governance processes for AI deployment decisions.
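A first-pass triage can start as simply as the sketch below, which maps deployment domains to provisional tiers modeled on the risk categories described earlier. The domain names and the mapping itself are illustrative assumptions; actual classification depends on the final legal text and legal review, not a lookup table.

```python
# A minimal sketch of a first-pass risk triage, assuming hypothetical internal
# categories modeled on the EU AI Act tiers described above.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: risk assessment, documentation, human oversight"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"


# Illustrative mapping from deployment domain to a provisional tier;
# the HIGH domains mirror those named in the article.
PROVISIONAL_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "employment": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}


def triage(domain: str) -> RiskTier:
    """Return a provisional tier; unknown domains default to HIGH pending review."""
    return PROVISIONAL_TIERS.get(domain, RiskTier.HIGH)


print(triage("employment"))  # RiskTier.HIGH
```

Defaulting unknown domains to the strictest reviewable tier keeps borderline systems from silently escaping the impact-assessment process.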
Transparency mechanisms need development. Expect requirements to disclose AI use to users, explain decision-making processes, and provide appeal mechanisms for adverse automated decisions. Companies relying heavily on AI should prepare clear explanations of how systems work and affect users.
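One way to operationalize this is to attach disclosure, explanation, and appeal information to every automated decision a user receives, as in the hypothetical sketch below. The AutomatedDecisionNotice structure and its field names are illustrative assumptions, not drawn from any regulation or standard.

```python
# A minimal sketch of a user-facing notice for an automated decision,
# assuming a hypothetical response format -- field names are illustrative.
from dataclasses import dataclass


@dataclass
class AutomatedDecisionNotice:
    decision: str        # outcome communicated to the user
    ai_disclosure: str   # plain-language statement that AI was involved
    explanation: str     # main factors behind the decision
    appeal_channel: str  # how to request human review


notice = AutomatedDecisionNotice(
    decision="Application not shortlisted",
    ai_disclosure="This decision was produced with the help of an automated screening system.",
    explanation="Key factor: minimum experience requirement not met for this role.",
    appeal_channel="Reply to this message to request review by a human recruiter.",
)
```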
Cross-functional governance becomes essential. AI regulation won't fit neatly into legal, IT, or product teams. Organizations need governance structures spanning legal, technical, ethics, and business functions to navigate complex compliance requirements.
Geographic complexity will increase. Different jurisdictions are adopting incompatible requirements. Companies operating globally may need regionalized AI systems, particularly regarding content moderation, data handling, and transparency requirements.
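In practice, this often means externalizing jurisdiction-specific behavior into configuration rather than hard-coding it, as in the hypothetical sketch below. The regions, flags, and values shown are illustrative assumptions, not a statement of what any jurisdiction actually requires.

```python
# A minimal sketch of per-jurisdiction deployment configuration; the keys and
# values are hypothetical and would need legal review per market.
REGION_CONFIG = {
    "eu": {
        "ai_disclosure_required": True,      # transparency obligations under the AI Act
        "documentation_level": "high_risk",  # full technical documentation retained
        "data_residency": "eu-only",
    },
    "us": {
        "ai_disclosure_required": True,      # anticipating sector-specific agency rules
        "documentation_level": "internal",
        "data_residency": "us-preferred",
    },
    "cn": {
        "ai_disclosure_required": True,
        "content_filtering_profile": "cn-regulatory",  # separate China-specific deployment
        "data_residency": "cn-only",
    },
}
```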
The regulatory landscape remains fluid, with final requirements still emerging. But the direction is clear: self-regulation is ending, and meaningful government oversight is beginning. Companies treating AI governance as a future problem will find themselves scrambling to retrofit compliance into systems designed without regulatory constraints.
The 2023 regulatory moment marks AI's transition from experimental technology to regulated infrastructure—with all the compliance complexity that entails.

