Features
Bias monitoring
Outcome comparisons across protected characteristics, so you can prove your AI is treating customers fairly — or catch it when it isn't.
Bias monitoring is the capability that compares outcomes across protected characteristics — age band, sex, ethnicity (where collected), disability status, vulnerability flag — and flags any group that is being approved, modified or rejected at a materially different rate from the firm baseline. It is the practical mechanism for proving Consumer Duty's “differential outcomes” obligation.
How it works
Each review job optionally carries a clientSegments object — a flat record of categorical labels (e.g. { "ageBand": "55-64", "riskProfile": "balanced", "productType": "pension" }). Bedrock aggregates completed jobs by each segment dimension over the reporting period, computes per-segment outcome rates, and compares them against the firm-wide baseline. Segments whose rejection or modification rate diverges by more than 5 percentage points (warning) or 10 percentage points (alert) are flagged on the report.
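The aggregation described above can be sketched in a few lines of Python. This is an illustrative sketch, not Bedrock's implementation: the in-memory job records, the `outcome` field name, and the use of rejection rate as the compared outcome are assumptions based on the description; the 5pp/10pp defaults come from the text.

```python
from collections import defaultdict

def bias_flags(jobs, warn_pp=0.05, alert_pp=0.10):
    """Flag segments whose rejection rate diverges from the firm baseline.

    `jobs` is a list of dicts shaped like
    {"outcome": "approved" | "modified" | "rejected",
     "clientSegments": {"ageBand": "55-64", ...}}  (hypothetical shape).
    Returns {(dimension, value): "warning" | "alert"}.
    """
    baseline = sum(j["outcome"] == "rejected" for j in jobs) / len(jobs)

    # Group jobs under every (dimension, value) pair they carry.
    buckets = defaultdict(list)
    for job in jobs:
        for dim, value in job.get("clientSegments", {}).items():
            buckets[(dim, value)].append(job)

    flags = {}
    for (dim, value), group in buckets.items():
        rate = sum(j["outcome"] == "rejected" for j in group) / len(group)
        delta = abs(rate - baseline)
        if delta >= alert_pp:        # diverges by 10pp or more
            flags[(dim, value)] = "alert"
        elif delta >= warn_pp:       # diverges by 5pp or more
            flags[(dim, value)] = "warning"
    return flags
```

The same structure extends naturally to modification rates; only the `outcome` predicate changes.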
Querying the report
The report is computed on-demand from the underlying review jobs — there is no nightly job and no stored snapshot, so the same call moments apart can return different signals as new outcomes land.
- `GET /v1/bias?from=2026-01-01&to=2026-04-01` sets an explicit window. If `from` and `to` are omitted, the default is the last 12 months.
- The `rejectionThreshold` query param sets the absolute rejection-rate guardrail (defaults to 0.3), used as a secondary gating signal alongside the percentage-point deltas.
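A minimal sketch of assembling the request URL. The `api.example.com` base and the helper itself are hypothetical; the parameter names and the server-side defaults (last 12 months, `rejectionThreshold` 0.3) come from the description above.

```python
from urllib.parse import urlencode

def bias_report_url(base="https://api.example.com", from_date=None,
                    to_date=None, rejection_threshold=None):
    """Build a GET /v1/bias URL; omitted params fall back to server defaults."""
    params = {}
    if from_date:
        params["from"] = from_date
    if to_date:
        params["to"] = to_date
    if rejection_threshold is not None:
        params["rejectionThreshold"] = rejection_threshold
    query = urlencode(params)
    return f"{base}/v1/bias" + (f"?{query}" if query else "")
```

Calling it with no arguments relies entirely on the defaults, which is the simplest way to get a rolling 12-month view.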
Evidence produced
- Per-segment outcome rates and the per-dimension flags returned by `GET /v1/bias`.
- The same `clientSegments` blob is persisted on every `ReviewJob` row, so historical bias reports can be re-derived against new thresholds at any time.
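Because the underlying rates survive on the stored rows, a compliance team can sweep candidate threshold pairs without re-running reviews. A hedged sketch, with invented segment keys and rates:

```python
def flags_at(segment_rates, baseline, warn_pp, alert_pp):
    """Re-derive flags for a candidate threshold pair from persisted
    per-segment rejection rates (no access to the raw jobs needed)."""
    out = {}
    for seg, rate in segment_rates.items():
        delta = abs(rate - baseline)
        if delta >= alert_pp:
            out[seg] = "alert"
        elif delta >= warn_pp:
            out[seg] = "warning"
    return out

# Tightening the thresholds surfaces segments the default pair would miss,
# e.g. flags_at(rates, baseline, warn_pp=0.02, alert_pp=0.08).
```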
FCA mapping
- PRIN 2A.4 — Consumer Duty “price and value” outcome
- PRIN 2A.5 — Consumer Duty “consumer support” outcome
- Equality Act 2010 (s.20 reasonable adjustments)