EU AI Act Enforces New Regulations on AI and Personal Data Protection
Daily Brief

The EU AI Act, adopted June 13, 2024 and in force since August 1, 2024, sets risk-based AI rules tied to personal data protection. It mandates GDPR alignment and new duties for providers of high-risk AI systems.

daily-brief, regulation

The EU AI Act is now in force and it doesn’t replace GDPR—it stacks on top of it. Data and ML teams building or deploying AI systems that touch personal data should expect tighter documentation, risk controls, and transparency obligations, especially for high-risk use cases.

EU AI Act in force: GDPR alignment plus new provider duties for high-risk AI

The International Trademark Association (INTA) breaks down how the EU AI Act supplements GDPR when AI systems process personal data. The European Parliament adopted the AI Act on June 13, 2024, and it entered into force on August 1, 2024. The law uses a risk-based framework, with additional requirements for “high-risk” systems that could affect individuals’ rights.

For organizations working with personal data, the key operational point is dual compliance: AI systems must meet both the AI Act’s governance and risk-management expectations and GDPR’s established obligations. INTA notes the AI Act does not redefine GDPR roles like “controller”; instead it points back to GDPR for those definitions. The Act also reinforces transparency expectations—users should be aware when they are interacting with AI—and includes provisions that allow limited biometric data use for law enforcement, highlighting the Act’s attempt to balance rights protection with public-interest uses.

  • Compliance becomes a two-track program: privacy, security, and ML governance can’t be handled as separate workstreams. Teams will need integrated evidence that systems meet GDPR requirements while also satisfying AI Act documentation and risk controls for high-risk systems.
  • Documentation and risk controls move from “nice to have” to enforceable: providers of high-risk AI should plan for more formalized risk mitigation procedures and traceable system documentation that can survive audits and regulatory scrutiny.
  • Transparency is a product requirement: user-facing disclosures about AI interaction are not just policy language; they can affect UX, customer support, and downstream contractual representations.
  • Biometric safeguards are a liability hotspot: even with limited allowances (e.g., law enforcement contexts), biometric processing raises the bar for governance, security controls, and internal approvals—especially where personal data protection is implicated.