The fourth quarter of 2024 has brought significant developments in AI regulation worldwide, with the European Union’s AI Act implementation advancing and other jurisdictions responding with their own frameworks.
EU AI Act Implementation Progress
The EU AI Act, which entered into force on August 1, 2024, continues its phased implementation:
Immediate Prohibitions (effective February 2, 2025):
- Emotion recognition systems in workplaces and educational institutions
- Social scoring systems, whether operated by public authorities or private actors
- Biometric categorization inferring sensitive characteristics
- Untargeted scraping for facial recognition databases
Upcoming Requirements (2025-2027):
- General-purpose AI models face transparency and documentation obligations from August 2025
- Most high-risk AI systems must meet strict requirements by August 2026
- Requirements extend to high-risk AI embedded in regulated products by August 2027
European companies are scrambling to audit their AI systems and establish compliance frameworks, creating a new industry of AI governance consultants.
US Federal Developments
The Biden administration’s October 2023 Executive Order on AI continues shaping the landscape:
- NIST has released updated AI risk management guidelines
- Major AI labs have submitted safety testing reports
- Procurement rules now require AI impact assessments for federal contracts
- Critical infrastructure AI deployments face enhanced scrutiny
State-level action has also intensified, with California’s SB 1047 (though vetoed) sparking national debate about AI safety legislation.
UK’s Pro-Innovation Approach
The UK continues its sector-specific regulatory approach:
- The Financial Conduct Authority has issued AI guidance for financial services
- The Medicines and Healthcare products Regulatory Agency (MHRA) has published AI medical device guidance
- The Competition and Markets Authority is investigating AI market concentration
- The AI Safety Institute is conducting model evaluations
China’s Expanding Framework
China has released additional AI governance measures:
- Generative AI service requirements now fully enforced
- New guidelines on AI-generated content labeling
- Expanded rules on algorithmic recommendation systems
- Cross-border data transfer restrictions affecting AI training
Corporate Compliance Challenges
Companies face mounting compliance pressures:
Multi-jurisdictional complexity: Global companies must navigate differing requirements across the EU, US, UK, and Asian markets.
Documentation burdens: High-risk AI systems require extensive technical documentation, risk assessments, and monitoring systems.
Supply chain accountability: Organizations using third-party AI systems must verify vendor compliance.
Talent shortages: Demand for AI governance professionals exceeds supply, driving up compliance costs.
Industry Self-Regulation
Voluntary frameworks continue alongside government action:
- Major AI labs maintain their voluntary safety commitments
- The Partnership on AI has expanded its responsible AI guidelines
- IEEE standards for AI ethics are gaining adoption
- Industry associations are developing sector-specific best practices
Practical Implications
For AI developers and deployers, key actions include:
- Audit existing systems against emerging requirements
- Document AI development processes thoroughly
- Establish governance committees for AI oversight
- Train staff on compliance requirements
- Engage with regulators proactively
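The first two actions above, auditing existing systems and documenting them, often begin with a simple inventory that maps each AI system to a risk tier and flags compliance gaps. The sketch below is purely illustrative: the tier names loosely mirror the EU AI Act's structure, but the class fields, function names, and example systems are hypothetical, and real risk classification requires legal analysis, not a string label.

```python
from dataclasses import dataclass

# Illustrative risk tiers loosely mirroring the EU AI Act's structure.
TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    risk_tier: str          # one of TIERS; assigning it needs legal review
    documented: bool = False  # technical documentation on file
    assessed: bool = False    # risk assessment completed

def audit_gaps(systems):
    """Return names of high-risk systems missing documentation or assessment."""
    return [
        s.name
        for s in systems
        if s.risk_tier == "high" and not (s.documented and s.assessed)
    ]

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystem("resume-screener", "high", documented=True, assessed=False),
    AISystem("spam-filter", "minimal"),
]
print(audit_gaps(inventory))  # → ['resume-screener']
```

In practice such an inventory would live in a governance tool rather than a script, but even a minimal registry like this makes the documentation and assessment gaps concrete enough to assign owners and deadlines.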
Looking Ahead
2025 will bring crucial regulatory deadlines:
- EU AI Act prohibited practices take effect (February)
- General-purpose AI model obligations under the EU AI Act begin (August)
- Additional US federal guidance expected
- Global coordination efforts through G7 and OECD
The regulatory landscape for AI is now firmly established, and companies that delay compliance risk significant penalties and competitive disadvantage.