
What RBI examiners actually ask for when reviewing an AI decision

Pulled from public examiner notes, conversations with bank CISOs, and the language of the 2023 Master Direction. These are the questions you have to be able to answer in 60 seconds, without opening a model.

When the RBI examiner walks into the room to review an AI decisioning system, they don’t ask the questions the AI vendor prepared you for. They ask the questions bank examiners have been asking for forty years, applied to a system this particular examiner is hearing about for the first time.

The five questions you have to be able to answer in 60 seconds

  • “Show me a specific decision and walk me through how it was made.” The examiner picks one. You produce the audit trail, the citations, the model output, the reviewer disposition. If any of those four pieces is missing or has to be reconstructed, you’re in trouble.
  • “What happens if the model is wrong?” You explain the confidence-floor escalation primitive, the human-reviewer queue, and the override path. The expected answer is ‘the model’s wrong answer never reaches the customer’.
  • “Where does the data live?” The expected answer is ‘in our own environment, in our own cloud region, with our own KMS’. Any other answer triggers the cross-border-transfer conversation, which you don’t want.
  • “What’s the exit plan?” The examiner is looking for evidence that you, as a regulated entity, can stop using this vendor and migrate to another without losing the data or the audit trail. You produce the exit-management plan that was part of the original SOW.
  • “Who signed off on the threshold?” The model’s confidence threshold is a risk-management decision. The expected answer names a board-approved committee, a head of credit, a CRO, or equivalent — not the ML engineer who wrote the prompt.
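The second question, the confidence floor, is the one most teams can describe but fewest can point to in code. A minimal sketch of the escalation primitive (the floor value, class names, and queue label here are illustrative assumptions, not anything prescribed by the Master Direction):

```python
from dataclasses import dataclass

# Hypothetical board-approved value; the point of question five is that
# a named committee, not an engineer, owns this number.
CONFIDENCE_FLOOR = 0.90

@dataclass
class Decision:
    decision_id: str
    model_output: str
    confidence: float

def route(decision: Decision) -> str:
    """Route a model decision. Below the floor, the output never reaches
    the customer: it lands in the human-reviewer queue instead."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review_queue"
    return "auto_approve"
```

The design point is that escalation is a routing decision made outside the model, so "what happens if the model is wrong" has a one-line answer: the low-confidence path is the default path.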

What examiners don’t ask about

Notably absent from this list: the model architecture, the foundation model, the prompt-engineering technique, the embedding strategy. Examiners are not technologists. They’re institutional-controls specialists. They want to see governance, audit, exit, and override.

The examiner’s mental model is ‘outsourced vendor with a black-box process’. Your job is to make every box transparent.

This is freeing once you absorb it. You don’t have to defend GPT-4o vs Claude vs Gemini in front of the examiner. You have to defend the controls around whatever model you chose. The model becomes a vendor like any other — and the framework for managing vendors is one the bank has known how to operate for decades.

What this implies for AI vendor selection

An AI vendor that can’t produce answers to the five questions in 60 seconds is not ready for a regulated bank engagement. Specifically:

  • If the vendor needs to look something up to explain how a specific decision was made, the audit chain is not first-class. Walk.
  • If the vendor talks about ‘model uncertainty’ without naming the threshold or the escalation primitive, the safety story is missing.
  • If the vendor says ‘our cloud’ when asked where the data lives, the data-localisation answer is wrong.
  • If the vendor’s exit-management plan is ‘you can stop paying us’, the regulator is going to push back.
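"First-class audit chain" has a concrete meaning: producing a decision's record is a lookup, not a reconstruction job. A sketch of what that looks like, with all identifiers, field names, and record shapes being illustrative assumptions rather than any regulator's schema:

```python
# Hypothetical audit store: the four pieces the examiner asks for,
# kept together under the decision id at write time.
AUDIT_LOG = {
    "LN-2024-00123": {
        "audit_trail": ["2024-03-01T10:02Z ingest", "2024-03-01T10:03Z model_call"],
        "citations": ["bureau_report#p4", "income_doc#p2"],
        "model_output": {"decision": "approve", "confidence": 0.93},
        "reviewer_disposition": "confirmed by senior credit officer",
    },
}

REQUIRED = ("audit_trail", "citations", "model_output", "reviewer_disposition")

def examiner_walkthrough(decision_id: str) -> dict:
    """Return the complete record for one decision, failing loudly if any
    of the four pieces would have to be reconstructed after the fact."""
    record = AUDIT_LOG[decision_id]
    missing = [k for k in REQUIRED if k not in record]
    if missing:
        raise LookupError(f"audit chain not first-class; missing: {missing}")
    return record
```

If the vendor's equivalent of this function involves joining logs from three systems or replaying the model, that is the "needs to look something up" failure mode from the first bullet.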

The questions are knowable. The answers are knowable. The vendors that can answer them are the vendors that have built for regulated workflows from day one. The vendors that can’t are pitching the wrong product.

Pilot conversations are open.

Talk to us →