Features
Explainability
Capture the rationale, not just the output — so any decision can be explained months or years later.
Explainability is the capability that keeps an advice decision intelligible after the fact. It captures the inputs the model saw, the reasoning the model returned, the parts of that reasoning the human reviewer agreed with, and the parts they overrode. When a customer (or a regulator) asks “why did your firm recommend this?” the answer is on file.
What gets captured
- Sanitised input summary (no raw PII)
- Model rationale text and structured features
- Reviewer's annotations and overrides
- Final decision text shown to the customer
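Taken together, a captured record might look like the following sketch. This is illustrative only — the class and field names are assumptions, not the actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of the captured fields — names are assumptions,
# not the real schema.
@dataclass
class CapturedExplainability:
    input_summary: str          # sanitised input summary, no raw PII
    model_rationale: str        # free-text rationale returned by the model
    structured_features: dict   # structured features the model returned
    reviewer_annotations: list  # reviewer agreements and overrides
    final_decision_text: str    # exactly what the customer saw

record = CapturedExplainability(
    input_summary="Client seeks low-risk retirement income",
    model_rationale="Recommended bond-weighted portfolio given risk profile",
    structured_features={"risk_score": 2},
    reviewer_annotations=["agreed with risk assessment"],
    final_decision_text="We recommend a bond-weighted portfolio.",
)
```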
API endpoints
- GET /v1/explainability/search?clientReference=... — look up an explainability record by client reference
- GET /v1/explainability/:jobId — fetch the full explainability record for a review job
Both endpoints return an ExplainabilityRecord — a synthesised read model that joins the job (including all review actions), the assigned reviewer, the certificate (if issued), the anchoring ledger record (hash, signature, chain position), and firm details. This single response gives auditors a complete trail from submission to outcome without cross-referencing multiple endpoints.
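Building the two request URLs can be sketched as follows. The base host is a placeholder, and this only constructs the URLs — you would issue the GET with whatever HTTP client and authentication your integration uses:

```python
from urllib.parse import urlencode

BASE = "https://api.example.com"  # placeholder host, not the real one

def search_url(client_reference: str) -> str:
    # GET /v1/explainability/search?clientReference=...
    # Query encoding handles references containing special characters.
    query = urlencode({"clientReference": client_reference})
    return f"{BASE}/v1/explainability/search?{query}"

def record_url(job_id: str) -> str:
    # GET /v1/explainability/:jobId
    return f"{BASE}/v1/explainability/{job_id}"
```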
Evidence produced
- Full review action trail (annotations, overrides, checklist ticks) with timestamps
- Anchoring ledger record with cryptographic chain binding
- Certificate linking the outcome to the firm's signing key
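The chain binding in the anchoring ledger works on the usual hash-chain principle: each entry's hash covers the previous entry's hash, so altering any historical record invalidates every entry after it. A minimal sketch (the hashing scheme and payload shape are assumptions, not the ledger's actual format):

```python
import hashlib
import json

def anchor(record: dict, prev_hash: str) -> str:
    # Hypothetical sketch: hash the record together with the previous
    # entry's hash, binding this entry to its position in the chain.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

genesis = "0" * 64
h1 = anchor({"jobId": "job-1", "outcome": "approved"}, genesis)
h2 = anchor({"jobId": "job-2", "outcome": "overridden"}, h1)
# Tampering with job-1's record would change h1, which changes h2,
# which is how auditors detect modification anywhere in the chain.
```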
FCA mapping
- PRIN 7 — Communications with clients
- PRIN 2A.5 — Consumer support outcome
- COBS 9.2 — Suitability assessment record