Regulation · 7 min read

What changes after DPDP for enterprise AI

₹250 crore penalties don't just change the budget for a breach — they change which AI projects make it past the legal review. Walking through what DPDP makes harder, easier, and impossible.

The Digital Personal Data Protection Act 2023 has been on the books for eighteen months and operational since the 2025 Rules. The headline most articles led with — ₹250 crore penalties — is real, but it isn’t the most consequential thing the Act did. The most consequential thing is that it changed which AI projects make it past the legal review.

What DPDP actually requires for AI

Read past the headlines and four obligations matter for any enterprise running AI on personal data:

  • Purpose limitation. Personal data can only be processed for the specific purpose disclosed to the data principal. Training a model on data collected for a different purpose is a violation.
  • Data-fiduciary record-keeping. The fiduciary must be able to demonstrate, on demand, what data was processed, why, by whom, and with what outcome.
  • Reasonable security safeguards. The Act is non-prescriptive on what counts as reasonable; case law and industry practice will fill that in, but encryption, access control, and audit logging are the floor.
  • Notification. Breaches must be reported to the Data Protection Board of India and to affected principals. CERT-In’s 6-hour parallel obligation effectively sets the internal clock.
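The first two obligations reduce to something enforceable in code: a processing action either falls under a consented purpose and gets logged, or it is refused. A minimal sketch, assuming nothing about any real compliance library — the names `ConsentRecord`, `ProcessingRecord`, and `process` are illustrative, not terms from the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    principal_id: str
    purposes: frozenset  # purposes the data principal actually consented to

@dataclass
class ProcessingRecord:
    # The on-demand record the fiduciary must be able to produce:
    # what was processed, why, by whom, and with what outcome.
    principal_id: str
    purpose: str
    processor: str
    outcome: str
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class PurposeViolation(Exception):
    """Raised when processing is attempted outside the consented purpose."""

def process(consent: ConsentRecord, purpose: str, processor: str,
            log: list) -> ProcessingRecord:
    """Gate every processing action on purpose, and log what was allowed."""
    if purpose not in consent.purposes:
        raise PurposeViolation(
            f"{purpose!r} not covered by consent for {consent.principal_id}")
    rec = ProcessingRecord(consent.principal_id, purpose, processor,
                           outcome="processed")
    log.append(rec)
    return rec
```

The point of the sketch: data collected for `service_delivery` raises `PurposeViolation` the moment a `model_training` pipeline asks for it, which is exactly the default the Act imposes.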

What this makes harder

Three categories of AI project that used to slip through legal review now stop at the door:

  • Models trained on customer data collected for service delivery. Without explicit, purpose-bound consent, this is a violation by default.
  • Cloud-resident AI services that egress personal data. Cross-border transfer is restricted; in-country deployment is the safer default.
  • Decisioning systems that can’t reconstruct the rationale. A regulator who can’t see why a particular decision was made will treat the whole system as a black box.

What this makes easier

Counter-intuitively, DPDP also lowered the bar for one specific kind of AI project: deployed, audited, citation-backed decisioning systems. Two reasons. First, the legal team has a clean ‘yes’ criterion now — the system either produces a defensible audit trail or it doesn’t. Second, the cost of doing nothing went up: the alternative to deploying a compliant AI system is letting human reviewers process the same data under the same obligations, which they were already doing imperfectly.

DPDP didn’t kill enterprise AI. It killed the bad versions and made room for the good ones.

What this makes impossible

Some AI patterns are now functionally impossible at Indian enterprise scale:

  • Cloud-LLM-only chatbots handling customer PII without a data-residency story
  • Model-training pipelines that absorb whatever logs are convenient, without consent records
  • Opaque scoring systems where the customer-impacting decision can’t be explained back to the data principal under a Right-to-Information-style request

The decisioning shape that does work

Audit-first agentic decisioning. Every action recorded, every decision traceable, every model output cited back to the source it grounded on. The legal team gets the chain of evidence they need; the operations team gets the cost reduction; the data principal gets a system whose decisions can be explained.
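What "every action recorded, every decision traceable, every output cited" can look like in code — one possible shape, not a reference implementation; the hash-chained log and the `Decision`/`Citation` names are our illustration of the pattern, not anything mandated by DPDP:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Citation:
    source_id: str  # the document the model output grounded on
    excerpt: str    # the grounding passage itself

@dataclass(frozen=True)
class Decision:
    decision_id: str
    input_summary: str
    outcome: str
    citations: tuple  # tuple of Citation: no citation, no decision

class AuditLog:
    """Append-only log where each entry commits to the previous one,
    so after-the-fact edits are detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, decision: Decision) -> str:
        payload = json.dumps(asdict(decision), sort_keys=True)
        h = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"hash": h, "prev": self._prev_hash,
                             "decision": decision})
        self._prev_hash = h
        return h

    def verify(self) -> bool:
        """Recompute the chain: the evidence a legal review can check."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(asdict(e["decision"]), sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The design choice worth noting: chaining the hashes makes the log tamper-evident rather than merely append-only, which is what turns "we kept records" into "we can demonstrate the records are intact" when the Board asks.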

This is the architectural answer to DPDP. It’s also the architectural answer to RBI’s outsourcing direction, IRDAI’s cyber-security guidelines, and CERT-In’s reporting timeline. The four regulations converge on the same primitives, which is fortunate because no one wants to build the same system four times.

Pilot conversations are open.

Talk to us →