EU Proposes Major Reforms to GDPR and AI Act Amidst Privacy Concerns
Daily Brief

daily-brief · regulation · privacy

The European Commission’s proposed “Digital Omnibus” would reshape how companies justify using personal data for AI training under GDPR while pushing out key AI Act deadlines. At the same time, two U.S. controversies—prison call analytics and deepfake safeguards—underline how fast privacy and misuse risks can erase regulatory goodwill.

EU Digital Omnibus would ease AI training under GDPR and extend AI Act high-risk deadlines

On Nov. 19, 2025, the European Commission proposed a “Digital Omnibus” package to amend both GDPR and the EU AI Act. The proposal would allow organizations to use personal data for AI training under a “legitimate interest” basis, while still requiring safeguards. In parallel, it would extend compliance deadlines for high-risk AI systems from August 2026 to December 2027.

The policy move lands as privacy concerns remain high-profile: Securus Technologies is facing backlash for using Texas prison phone and video call records to train an AI model intended to detect criminal activity, and Public Citizen has urged OpenAI to withdraw its Sora 2 application over concerns about non-consensual deepfakes after researchers reportedly bypassed anti-impersonation protections shortly after launch.

  • Legal basis may get clearer, but the burden shifts to documentation. If “legitimate interest” becomes a more explicit pathway for AI training, teams should expect to formalize legitimate-interest assessments, record safeguards, and be ready to explain necessity and proportionality to regulators and customers.
  • Deadline relief doesn’t remove AI Act work; it changes sequencing. Extending high-risk obligations to December 2027 reduces near-term pressure, but it can also prolong uncertainty for product and procurement teams deciding whether to build, buy, or delay deployments.
  • Consent and provenance will be audited in practice, not theory. The Securus case spotlights “coerced consent” arguments in sensitive settings (including attorney-client privilege concerns). Data leaders should re-check how notices, opt-outs, and contractual permissions map to real-world power dynamics.
  • Anti-impersonation is now a core control for synthetic media. The Sora 2 backlash shows that safety features will be tested immediately. Teams shipping generative media should treat identity/consent checks, abuse monitoring, and rapid takedown workflows as baseline governance, not add-ons.