Glossary

Plain definitions for the terms on this site.

Agentic AI has a vocabulary problem — every vendor invents its own term for the same thing. These are the words we use at Vihaya, defined so an engineer, a clinical reviewer, or a CFO can read them and get the same picture.

Agentic AI

AI systems where an LLM plans actions, calls tools, observes results, and iterates — rather than producing a single one-shot answer.

An agentic system gives the LLM a loop: read input → decide what to do → call a tool (search, database query, API request) → read the result → decide next step → eventually return a final answer. The loop is what separates agents from chatbots. Vihaya is an agentic platform because every solution it ships runs an LLM through a multi-step pipeline of retrieval, reasoning, decisioning, and write-back.
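The loop above can be sketched in a few lines. This is a minimal illustration, not Vihaya's implementation: the `decide` function stands in for an LLM turn, and the tool registry holds a single stubbed `search` tool.

```typescript
// Minimal sketch of an agentic loop. decide() stands in for an LLM call;
// in a real system each iteration is one model turn.
type Action =
  | { kind: "tool"; name: string; args: string }
  | { kind: "final"; answer: string };

// Hypothetical tool registry: tool name → implementation.
const tools: Record<string, (args: string) => string> = {
  search: (q) => `3 policy chunks matching "${q}"`,
};

// Stub planner: search once, then answer from the observation.
function decide(input: string, observations: string[]): Action {
  if (observations.length === 0) {
    return { kind: "tool", name: "search", args: input };
  }
  return { kind: "final", answer: `Decided using: ${observations.join("; ")}` };
}

function runAgent(input: string, maxSteps = 5): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = decide(input, observations);        // decide next step
    if (action.kind === "final") return action.answer; // return final answer
    observations.push(tools[action.name](action.args)); // call tool, observe
  }
  return "escalate"; // loop budget exhausted → hand off to a human
}
```

The step budget is the safety valve: an agent that can't converge escalates rather than looping forever.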

Related: AI Decisioning, Tool Use

AI Decisioning

Production AI pattern where the model's output is a structured decision (approve / deny / escalate) with attached rationale and citations, not free-form text.

Decisioning is the audit-friendly cousin of generative AI. Where a chatbot returns prose, a decisioning agent returns a typed JSON object: outcome, rationale, confidence score, citation IDs pointing to the policy text it grounded on. Every Vihaya solution produces one decision row per request; the row is immutable and links to its citations and to the audit-trail events that produced it.
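A decision row of this shape might look like the following sketch. Field names are illustrative, not Vihaya's actual schema:

```typescript
// Sketch of a typed decision row (field names are illustrative).
interface Decision {
  outcome: "approve" | "deny" | "escalate";
  rationale: string;
  confidence: number;    // 0..1, model-reported
  citationIds: string[]; // point into the policy store
}

const decision: Decision = {
  outcome: "approve",
  rationale: "Meets criteria in policy UM-114: conservative therapy documented.",
  confidence: 0.91,
  citationIds: ["chunk-2041", "chunk-2044"],
};

// A decisioning agent returns this object, not prose: downstream code
// branches on `outcome`, and auditors dereference `citationIds`.
console.log(JSON.stringify(decision));
```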

Related: Confidence Floor, Audit Trail, Citations

Confidence Floor

A configured threshold below which a decision is force-escalated to a human reviewer, regardless of what the model said.

If the agent says 'approve' with confidence 0.62 but the floor is 0.75, the case routes to a human anyway. This makes the confidence floor a safety primitive, not a UI flag. It's the architectural way Vihaya guarantees human-in-the-loop coverage for low-certainty cases — without depending on the LLM to volunteer its uncertainty.
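The rule is small enough to state as code. A sketch, using the 0.62-vs-0.75 example from the text:

```typescript
// Sketch: the floor only tightens the model's verdict, never loosens it.
type Outcome = "approve" | "deny" | "escalate";

function applyConfidenceFloor(
  outcome: Outcome,
  confidence: number,
  floor: number,
): Outcome {
  // Any non-escalate verdict below the floor is force-escalated.
  if (outcome !== "escalate" && confidence < floor) return "escalate";
  return outcome;
}

// The example from the text: 'approve' at 0.62 against a 0.75 floor.
const routed = applyConfidenceFloor("approve", 0.62, 0.75); // → "escalate"
```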

Related: Escalation Queue, Human-in-the-Loop

Context Mesh

Vihaya's hybrid-retrieval data layer: typed entities, relations, knowledge chunks, and episodic memory in one Postgres instance with pgvector + pg_trgm.

The Context Mesh is the substrate every Vihaya agent reads from and writes to. It combines vector similarity (pgvector) for semantic search, full-text search (pg_trgm) for lexical lookup, and a typed entity-relation graph for structured traversal. It also stores episodic memory — past decisions for the same subject — so future agent runs have context.

Related: Hybrid Retrieval, Ontology, Episodic Memory

Hybrid Retrieval

Retrieval that combines vector similarity, full-text search, and graph traversal in one query — not just embedding cosine match.

Pure vector RAG misses on exact-term matches (drug names, policy codes). Pure lexical search misses on paraphrase. Hybrid retrieval runs both and merges the results, then optionally traverses typed relations from each hit to pick up related entities. This is what makes citations actually useful — the chunks retrieved are the chunks a human reviewer would have pulled.
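One common way to merge the two result lists is reciprocal rank fusion (RRF); the source doesn't specify which merge strategy is used, so treat this as an illustrative sketch:

```typescript
// Sketch: merge ranked result lists with reciprocal rank fusion.
// Each list contributes 1 / (k + rank + 1) to a chunk's fused score.
function rrfMerge(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// A chunk that ranks high in BOTH lists tops the fused ranking.
const vectorHits = ["chunk-3", "chunk-7", "chunk-9"];  // pgvector cosine order
const lexicalHits = ["chunk-7", "chunk-1", "chunk-3"]; // pg_trgm order
const merged = rrfMerge([vectorHits, lexicalHits]);    // "chunk-7" first
```

Graph traversal would then expand each fused hit along typed relations; that step is omitted here.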

Related: Context Mesh, RAG

Eval Gate

A CI-blocking test suite that runs a golden dataset through the live model on every deploy. If the score drifts below threshold, the suite exits non-zero and the deploy is blocked.

Evals are the unit tests for AI systems. The eval gate runs ≥50 hand-curated scenarios through the production model, scores outcome match + citation grounding + confidence on approves, and refuses to ship if the average drops. The golden dataset is built collaboratively with the customer's SME (e.g. medical director) and reviewed when policies change.
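As a CI step, the gate reduces to "score the golden set, exit non-zero below threshold." A sketch — `scoreScenario` is a stand-in for the live model call plus grading:

```typescript
// Sketch of an eval gate as a CI step. Names are illustrative.
interface Scenario { id: string; expected: "approve" | "deny" | "escalate" }

function scoreScenario(s: Scenario): number {
  // Stand-in: a real gate calls the production model and grades
  // outcome match + citation grounding. Here every case scores 1.
  return 1;
}

function evalGate(scenarios: Scenario[], threshold: number): number {
  const avg =
    scenarios.reduce((sum, s) => sum + scoreScenario(s), 0) / scenarios.length;
  console.log(`eval score ${avg.toFixed(3)} (threshold ${threshold})`);
  return avg >= threshold ? 0 : 1; // non-zero exit code blocks the deploy
}

const golden: Scenario[] = [{ id: "pa-001", expected: "approve" }];
const exitCode = evalGate(golden, 0.9); // in CI: process.exit(exitCode)
```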

Related: Golden Dataset, CI/CD

Audit Trail

Append-only log of every action the system takes: who did what, to which resource, with what outcome, with what metadata. Reconstructable from cold storage.

In Vihaya the audit trail is a primitive, not a feature. Every step of every agent run writes one row to a Postgres table that has no UPDATE or DELETE permission. The trail maps to control frameworks (SOC 2 CC4.1 / CC7.2 by default) so audit-evidence collection is a side effect of normal operation, not a quarterly fire drill.
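The "no UPDATE or DELETE permission" property is enforced at the database, not in application code. A sketch of what such a migration might look like (table and role names are illustrative):

```typescript
// Illustrative migration enforcing append-only at the Postgres layer.
const migration = `
  CREATE TABLE audit_events (
    id         bigserial   PRIMARY KEY,
    actor      text        NOT NULL,  -- who
    action     text        NOT NULL,  -- did what
    resource   text        NOT NULL,  -- to which resource
    outcome    text        NOT NULL,  -- with what outcome
    metadata   jsonb       NOT NULL DEFAULT '{}',
    created_at timestamptz NOT NULL DEFAULT now()
  );
  -- The application role may only append; rows can never be rewritten.
  REVOKE UPDATE, DELETE ON audit_events FROM app_role;
  GRANT INSERT, SELECT ON audit_events TO app_role;
`;
```

Revoking the grants means even a buggy or compromised application process cannot rewrite history.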

Related: SOC 2, Compliance

Escalation Queue

The ordered list of cases the agent declined to decide automatically and routed to a human reviewer.

Cases land on the queue when (a) the agent's own verdict was 'escalate' or (b) the confidence floor forced escalation. Every queue item carries the agent's recommendation, rationale, and citations — the reviewer is not starting from scratch. The queue is a first-class API endpoint and surfaces as the customer-facing reviewer UI.

Related: Confidence Floor, Human-in-the-Loop

Citations

Pointers from a decision back to the specific policy chunks the agent grounded its reasoning on.

Every Vihaya decision carries pointers to the exact policy passages it was grounded on, and the reviewer UI dereferences them to show the verbatim source text. Citations are how Vihaya makes LLM output reviewable: not 'trust the model,' but 'here is what it read.'

Related: AI Decisioning, Audit Trail

Durable Agent

An agent run record that survives process crashes, restarts, and replays. Identity, state, and progress are persisted.

Long-running enterprise workflows can't lose state when the Node process dies. A durable agent run is a row that captures who the agent is, what input it received, where it is in its plan, and what tools it has called. If the process restarts, the run resumes from the last checkpoint — the call is idempotent on the decision ID.
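Checkpoint-and-resume can be sketched with an in-memory map standing in for the Postgres run table; names and the four-step plan are illustrative:

```typescript
// Sketch of a durable run: state persists across "crashes" and the
// resume call is idempotent on the decision ID.
interface RunState {
  decisionId: string;  // idempotency key
  stepsDone: string[]; // completed plan steps, in order
}

const runStore = new Map<string, RunState>(); // stand-in for a Postgres table
const plan = ["retrieve", "reason", "decide", "write-back"];

function resumeRun(decisionId: string): RunState {
  // Idempotent: a restarted process picks up the existing row
  // instead of starting a duplicate run.
  const existing = runStore.get(decisionId);
  if (existing) return existing;
  const fresh: RunState = { decisionId, stepsDone: [] };
  runStore.set(decisionId, fresh);
  return fresh;
}

function advance(run: RunState): void {
  if (run.stepsDone.length < plan.length) {
    run.stepsDone.push(plan[run.stepsDone.length]); // checkpoint each step
  }
}

// First process completes two steps, then "crashes"…
const run1 = resumeRun("dec-42");
advance(run1); advance(run1);
// …the restarted process resumes the SAME run at step 3.
const run2 = resumeRun("dec-42");
```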

Related: Idempotency, Agentic AI

Ontology

A typed schema declaring which entity kinds and relation kinds the system understands — domain, range, cardinality, properties.

An ontology lets a tenant declare 'patient', 'policy', 'provider' as entity kinds, and 'covers', 'authored-by', 'requested-for' as relation kinds with domain/range constraints. Writes to the system's knowledge store are validated against the ontology before they land — which is what stops a misclassified entity from quietly poisoning the agent's view of the world months down the line.
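Domain/range validation on write can be sketched as below. The entity and relation kinds echo the examples in the text; the specific domain/range pairs are assumptions for illustration:

```typescript
// Sketch of ontology validation before a relation write lands.
interface RelationKind { domain: string; range: string }

// Illustrative relation kinds with domain/range constraints.
const ontology: Record<string, RelationKind> = {
  covers: { domain: "policy", range: "procedure" },
  "requested-for": { domain: "request", range: "patient" },
};

// Entity ID → declared entity kind.
const entityKinds = new Map<string, string>([
  ["pol-1", "policy"],
  ["proc-9", "procedure"],
  ["pat-3", "patient"],
]);

function validateRelation(rel: string, fromId: string, toId: string): boolean {
  const kind = ontology[rel];
  if (!kind) return false; // unknown relation kind → reject the write
  return (
    entityKinds.get(fromId) === kind.domain &&
    entityKinds.get(toId) === kind.range
  );
}

// 'covers' must go policy → procedure; a patient on either side is rejected.
validateRelation("covers", "pol-1", "proc-9"); // → true
validateRelation("covers", "pol-1", "pat-3");  // → false
```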

Related: Context Mesh, Schema Validation

Prior Authorization (PA)

The healthcare payor process where a clinician requests insurance approval for a procedure or drug before delivery.

Prior auth is the highest-volume, most expensive paperwork in US healthcare — a Fortune-500 payor spends $2B+/yr on utilization management to review PA requests against clinical guidelines (Milliman, InterQual, internal policy bulletins). Vihaya's first shipping solution automates PA decisioning end-to-end while keeping every decision audited, cited, and escalation-protected.

Related: Utilization Management, Healthcare AI

Utilization Management (UM)

Healthcare payor function that reviews requests for medical services against clinical and policy criteria to decide coverage.

UM teams operate the prior-authorization, concurrent review, and retrospective review functions. They are typically staffed with nurse reviewers reading PDFs and EHR exports. Vihaya's PA solution operates within UM, automating the front end of nurse-reviewer screening with an audit chain dense enough for regulator review.

Related: Prior Authorization, Healthcare AI

SOC 2

AICPA framework for evaluating service-organization controls across security, availability, processing integrity, confidentiality, and privacy.

Vihaya's compliance package ships with the SOC 2 baseline pre-seeded. Audit-trail events automatically link as evidence under CC4.1 (monitoring) and CC7.2 (incident detection). A Type II audit attests that controls operated effectively over a period — Vihaya is pre-Type-II as of May 2026 with auditor engagement on the roadmap.

Related: Audit Trail, Compliance

Human-in-the-Loop (HITL)

A workflow where AI handles routine decisions automatically but routes uncertain or high-risk cases to a human.

HITL is the only viable production pattern for high-stakes decisioning. Vihaya bakes HITL in three ways: (1) confidence-floor escalation forces human review below threshold, (2) policy-spec escalation rules allow per-class always-escalate cases, (3) the model itself can return 'escalate' as an explicit outcome. The human reviewer sees the agent's full recommendation chain.
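The three routes compose into one routing decision. A sketch under assumed names (none of this is Vihaya's actual API):

```typescript
// Sketch combining the three HITL routes: explicit model escalation,
// per-class always-escalate policy rules, and the confidence floor.
type Outcome = "approve" | "deny" | "escalate";

interface PolicySpec {
  alwaysEscalate: Set<string>; // request classes that never auto-decide
}

function route(
  outcome: Outcome,
  confidence: number,
  requestClass: string,
  floor: number,
  spec: PolicySpec,
): "auto" | "human" {
  if (outcome === "escalate") return "human";                // (3) model opted out
  if (spec.alwaysEscalate.has(requestClass)) return "human"; // (2) policy rule
  if (confidence < floor) return "human";                    // (1) confidence floor
  return "auto";
}

const spec: PolicySpec = { alwaysEscalate: new Set(["experimental-drug"]) };
route("approve", 0.92, "imaging", 0.75, spec);           // → "auto"
route("approve", 0.92, "experimental-drug", 0.75, spec); // → "human"
```

The ordering matters only for attribution, not outcome: any single tripped route sends the case to a human.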

Related: Confidence Floor, Escalation Queue

Generative Engine Optimization (GEO)

The successor to SEO: optimizing content so it surfaces inside LLM-powered answer engines (Google AI Overviews, Perplexity, ChatGPT browsing, Claude).

Where SEO targeted SERP rankings, GEO targets the answers LLMs synthesize from many sources. Winning GEO requires: clear factual statements, defined-term content, FAQ schemas, citation-friendly stats, llms.txt / llms-full.txt files, and machine-readable structure (JSON-LD). This page is itself a GEO play — answer engines preferentially cite glossary entries because they have a clear term-definition shape.

Related: AEO, SEO

DPDP Act 2023

India's federal personal-data-protection law. Establishes the data-fiduciary role, notice + consent, purpose limitation, and breach-notification obligations.

The Digital Personal Data Protection Act 2023 is India's first comprehensive personal-data law. It applies to any entity processing personal data in connection with goods or services offered in India. Vihaya's append-only audit trail, configurable PII redaction, and consent + notice integration points make it a DPDP-aligned substrate by default. Penalties under the Act run up to ₹250 crore per incident, which is why audit-grade decisioning matters.

Related: Audit Trail, CERT-In

RBI Master Direction (IT Outsourcing)

The Reserve Bank of India's framework governing how banks and NBFCs may outsource IT services — including data localisation, audit rights, and exit-management requirements.

Formally the 'Master Direction on Outsourcing of Information Technology Services, 2023', this RBI direction requires regulated entities to retain examiner audit rights, maintain BCP/DR plans, and ensure data does not leave India without explicit approval. Vihaya deploys inside the bank's VPC (typically AWS Mumbai or Azure South India) precisely so the framework's data-localisation expectations are satisfied by the deployment model, not by paperwork.

Related: DPDP Act 2023, BFSI

IRDAI

Insurance Regulatory and Development Authority of India. Issues the cyber-security and IT-outsourcing guidelines that bind Indian insurers.

IRDAI's cyber-security and IT-outsourcing guidelines bind India's life and general insurers. The guidelines require board-level oversight of outsourced IT, defined audit rights, and incident reporting. Vihaya engagements with Indian insurers include the standard IRDAI artefacts — board-approved outsourcing policy mapping, audit-trail evidence packaging, and the exit-management plan that mirrors RBI's outsourcing direction structure.

Related: DPDP Act 2023, Audit Trail

CERT-In

Indian Computer Emergency Response Team — the national CERT under MeitY. Requires reporting of cyber incidents within six hours of detection.

CERT-In's 2022 directions require any organisation processing data in India to report specified cyber incidents within six hours. Vihaya's audit primitive surfaces incident-detection events into the customer's CERT-In reporting workflow so the six-hour window is operationally achievable, not theoretical.

Related: DPDP Act 2023, Audit Trail

ABDM

Ayushman Bharat Digital Mission — India's national digital health infrastructure, including the Health Information Exchange and the ABHA health ID.

ABDM defines the federated architecture for India's health data, including a FHIR-based Health Information Exchange (HIE), the ABHA (Ayushman Bharat Health Account) ID, and the Healthcare Professionals Registry. Vihaya's Context Mesh ingests structured ABDM data via FHIR R4. Engagements with Indian health insurers and hospital chains include the ABDM HIE adapter as a standard integration.

Related: Prior Authorization, Healthcare AI

See the terms in production.

Every concept above appears in a shipping solution.

Read the prior-auth case study →