EU Moves to Amend GDPR for AI Training — Implications for Data Teams
Daily Brief

The European Commission has outlined GDPR amendments intended to make AI training and automated decision-making easier to operationalize—potentially expanding what data can be used, under what safeguards, and with what risk to established EU privacy rights. With a formal “digital omnibus” package expected Nov. 19, 2025, data leaders should prepare for a period of compliance ambiguity and political pushback.

European Commission tees up GDPR amendments aimed at easing AI training

According to Datamation, the European Commission on Nov. 10, 2025 outlined a set of GDPR amendments designed to reduce friction for AI development by creating new processing exceptions for personal data. The package is positioned as a simplification effort to support AI growth, but it has drawn criticism from privacy advocates who argue it would weaken core rights and the privacy-by-design principles embedded in EU law.

Key proposals described include narrowing the definition of personal data to exclude pseudonymized data, enabling processing of special categories of data for AI with “appropriate measures,” and restructuring Article 22 in ways that expand permissions for automated decision-making. The Commission is expected to formally present the omnibus package on Nov. 19, 2025. Datamation also reports that a public consultation closed in Oct. 2025, with critics alleging insufficient transparency and engagement. Named critics include GDPR architect Jan Philipp Albrecht and noyb founder Max Schrems.

  • Training-data strategy could change fast: If pseudonymized data is treated as outside the “personal data” definition, teams may be able to reuse more internal datasets for model training—but only if implementation details and enforcement align with the proposal.
  • Sensitive-data guardrails become a design requirement: Allowing special-category processing for AI with “appropriate measures” shifts pressure onto privacy engineering to define, document, and continuously validate those measures across pipelines.
  • Article 22 uncertainty affects deployment, not just training: Any expansion of automated decision-making permissions would impact product eligibility checks, fraud/risk scoring, and HR tooling—areas where teams often rely on human-in-the-loop controls to manage GDPR risk.
  • Expect a messy transition period: Political backlash suggests a non-linear path (amendments, court challenges, regulator guidance). Data organizations should plan to maintain parallel compliance postures while rules and interpretations stabilize.
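To make the pseudonymization question concrete, here is a minimal sketch of what "pseudonymized" typically means in practice: direct identifiers replaced by keyed hashes before a dataset is reused for training. Everything here (field names, the record, the key-handling comment) is an illustrative assumption, not part of the Commission's proposal; the point is that the mapping remains reversible by the key holder, which is why current GDPR still treats such output as personal data.

```python
import hashlib
import hmac

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Only the key holder can re-link the output to the original value,
    which is the crux of whether such data counts as "personal data."
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical training record; key storage/rotation is out of scope here.
key = b"example-key-stored-separately-from-the-dataset"
record = {"user_id": "u-8841", "email": "ana@example.com", "clicks": 17}
pseudonymized = {
    "user_id": pseudonymize(record["user_id"], key),
    "email": pseudonymize(record["email"], key),
    "clicks": record["clicks"],  # non-identifying features pass through unchanged
}
```

Whether records processed this way fall outside the "personal data" definition is exactly what the proposed amendment would change, so teams should not treat this transformation as anonymization today.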
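The Article 22 point can be sketched the same way. Below is a hypothetical routing function (the threshold, field names, and review queue are assumptions for illustration, not anything in the proposal) showing the human-in-the-loop control many teams use today for decisions with legal or similarly significant effect; any expansion of automated-decision permissions would change when this gate is required.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    score: float                   # model output, e.g. a fraud-risk score
    automated: bool                # True if no human reviewed the decision
    reviewer: Optional[str] = None

def route_decision(score: float, threshold: float = 0.8) -> Decision:
    """Send high-impact scores to human review instead of auto-denying.

    'threshold' is illustrative: above it, the decision is queued for a
    reviewer and marked as not fully automated, a common Article 22
    risk control in eligibility, fraud, and HR tooling.
    """
    if score >= threshold:
        return Decision(score=score, automated=False, reviewer="pending")
    return Decision(score=score, automated=True)
```

Keeping the gate behind a single routing function like this makes it easier to tighten or relax the control later as the final amendment text and regulator guidance land.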