Proofpoint says data loss is now the norm for most organizations—and “agentic” AI workflows are increasing exposure faster than security controls and visibility can keep up. The takeaway for data teams: treat AI agents like high-privilege integrations, and use synthetic data to reduce the blast radius when sharing or testing.
Proofpoint: data loss hit 85% of organizations; careless users lead
Proofpoint’s second annual Data Security Landscape report found that 85% of surveyed organizations experienced at least one data loss incident in the past year. The most common driver was “careless users,” cited in 58% of reported cases.
The report also warns that the rise of AI agents in the workplace is creating an “agentic workspace” where sensitive data moves through more tools, prompts, and automated actions—often without equivalent monitoring, policy enforcement, or clear ownership. Proofpoint flags a governance gap as generative AI adoption grows, with 44% of respondents reporting insufficient oversight of generative AI systems. For data teams, four implications stand out:
- Agentic workflows increase hidden data paths. If agents can read from internal sources and write to external systems, they function like always-on integrations—so access scopes, logging, and egress controls need to be engineered up front, not bolted on later.
- “Careless user” risk is often a design problem. High incident rates tied to user behavior typically point to weak guardrails (overbroad permissions, poor data classification, unclear sharing defaults). Fixing the workflow reduces reliance on training alone.
- Synthetic data is a practical containment tool. For analytics, model training, and testing, synthetic datasets can enable cross-team sharing while avoiding exposure of PII/PHI or trade secrets—especially when insider risk or agent misuse is a concern.
- Governance needs to cover AI systems, not just people. The 44% oversight gap suggests many organizations lack clear controls for where genAI can access data, how outputs are retained, and who approves new agent capabilities.
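The “always-on integration” framing above can be made concrete. Below is a minimal, hypothetical sketch of a default-deny egress guard wrapped around an agent’s outbound actions; every name here (`guarded_send`, `ALLOWED_DESTINATIONS`, the example hostnames) is illustrative and not tied to any specific agent framework or to the Proofpoint report.

```python
# Hypothetical sketch: treat an AI agent like a high-privilege integration by
# forcing every outbound write through an explicit allowlist plus audit log.
# Destinations not engineered up front are denied by default.
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-egress")

# Default-deny: only destinations approved in advance.
ALLOWED_DESTINATIONS = {"internal-wiki.example.com", "tickets.example.com"}

class EgressDenied(Exception):
    """Raised when the agent tries to write somewhere not on the allowlist."""

def guarded_send(url: str, payload: str) -> None:
    """Gate an outbound agent action: allowlist check, then audit log."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DESTINATIONS:
        log.warning("blocked egress to %r", host)
        raise EgressDenied(host)
    log.info("allowed egress to %r (%d bytes)", host, len(payload))
    # ...the actual HTTP call would go here...

guarded_send("https://tickets.example.com/api", "ticket summary")  # allowed
try:
    guarded_send("https://paste.example.org/raw", "data dump")     # blocked
except EgressDenied:
    pass
```

The design point is that the check and the log live in one choke point the agent cannot route around, which is what makes scopes and egress controls auditable rather than advisory.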
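To illustrate the synthetic-data point, here is a small stdlib-only sketch that fabricates a customer table with a realistic schema but no real values, so it can be shared for testing or analytics dry runs. The field names, value ranges, and distributions are assumptions chosen for the example, not taken from the report.

```python
# Hypothetical sketch: generate a synthetic "customers" table that mirrors a
# production schema without containing any real PII. Stdlib only; in practice
# teams often use dedicated synthetic-data libraries with distribution fitting.
import csv
import io
import random

random.seed(7)  # deterministic output for repeatable test fixtures

def synthetic_customers(n: int) -> list[dict]:
    rows = []
    for i in range(n):
        rows.append({
            "customer_id": f"C{i:06d}",           # surrogate key, no real IDs
            "email": f"user{i}@example.test",     # reserved test domain
            "age": random.randint(18, 90),
            "lifetime_value": round(random.lognormvariate(4, 1), 2),
            "region": random.choice(["NA", "EMEA", "APAC"]),
        })
    return rows

def to_csv(rows: list[dict]) -> str:
    """Serialize the synthetic rows to CSV for handoff to another team."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(synthetic_customers(3)))
```

Because every value is generated, a leak of this dataset—by a careless user or a misbehaving agent—exposes schema, not people, which is the containment property the bullet above describes.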
