TowerHQ
AI transparency

AI is advisory. Never autonomous.

TowerHQ uses AI to draft ePCR narratives, complete fields, code claims, detect denial patterns, generate appeal letters, redact PHI, and power an in-product assistant. None of it happens without the guardrails below.

Single choke point

All AI inference flows through AIGateway — text, vision, speech-to-text, TTS. Direct provider SDK calls from app code fail CI. Provider, model, PHI redaction, rate limits, budget caps, logs — all enforced in one place.
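The choke-point pattern can be sketched as below. AIGateway is the real component name; every hook name, signature, and field here is an illustrative assumption, not the actual implementation:

```python
from dataclasses import dataclass
import hashlib
import time


@dataclass
class GatewayResult:
    text: str
    provider: str
    model: str
    prompt_hash: str
    latency_ms: float


class AIGateway:
    """Single choke point: every inference call passes through here, so
    provider choice, PHI redaction, rate limits, and logging are enforced
    in one place. Hook names are hypothetical."""

    def __init__(self, redact, call_provider, rate_limiter, log):
        self._redact = redact        # PHI redaction hook (assumed interface)
        self._call = call_provider   # the only place a provider is ever invoked
        self._allow = rate_limiter   # returns False when over rate/budget caps
        self._log = log              # audit log sink

    def infer(self, tenant_id: str, prompt: str, model: str = "demo-model") -> GatewayResult:
        if not self._allow(tenant_id):
            raise RuntimeError("rate/budget cap exceeded")
        clean = self._redact(prompt)  # PHI never leaves unredacted
        start = time.monotonic()
        text, provider = self._call(clean, model)
        latency_ms = (time.monotonic() - start) * 1000
        result = GatewayResult(
            text, provider, model,
            hashlib.sha256(clean.encode()).hexdigest(), latency_ms,
        )
        self._log(tenant_id, result)  # every call is recorded
        return result
```

Because app code only ever holds an AIGateway, a CI lint that fails on direct provider SDK imports is enough to keep the choke point honest.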

Confidence, model version, sign-off

Every AI output carries a confidence score, the model version, and a prompt hash. For anything that affects a claim, patient record, credential, or pay run, a human reviewer accepts, edits, or rejects — and their reviewer ID is stamped on the record.
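A minimal sketch of that output-plus-sign-off record, with all field and method names assumed for illustration:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIOutput:
    """Every AI output carries confidence, model version, and prompt hash;
    high-stakes outputs also carry a reviewer decision before they commit."""
    content: str
    confidence: float          # 0.0-1.0 score attached to every output
    model_version: str
    prompt_hash: str
    reviewer_id: Optional[str] = None
    decision: Optional[str] = None  # "accepted" | "edited" | "rejected"

    def sign_off(self, reviewer_id: str, decision: str,
                 edited: Optional[str] = None) -> None:
        """Stamp the reviewer ID and their decision onto the record."""
        if decision not in ("accepted", "edited", "rejected"):
            raise ValueError("unknown decision")
        if decision == "edited":
            if edited is None:
                raise ValueError("edited decision requires edited content")
            self.content = edited
        self.reviewer_id = reviewer_id
        self.decision = decision

    @property
    def committable(self) -> bool:
        # Nothing affecting a claim, record, credential, or pay run
        # commits without an accepting/editing reviewer on file.
        return self.decision in ("accepted", "edited")
```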

BAA + tenant opt-in

PHI does not reach an external provider without a signed BAA and an explicit tenant opt-in. On-prem Llama fallback is available for tenants who forbid external inference entirely.
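The routing rule reduces to a single conjunction. A sketch, with the tenant-settings keys assumed for illustration:

```python
def select_backend(tenant: dict) -> str:
    """Route inference: PHI reaches an external provider only when a BAA is
    signed AND the tenant has explicitly opted in; everything else falls
    back to the on-prem Llama deployment. Keys are illustrative."""
    if tenant.get("baa_signed") and tenant.get("external_opt_in"):
        return "external"
    return "on_prem_llama"
```

Making on-prem the default (rather than the exception) means a missing or malformed setting fails closed.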

Quarterly model cards

Each production model is reviewed every quarter: tasks, evaluation results on a golden-pair suite, known failure modes, BAA status, deprecation plan. Cards are linked from every feature that uses the model.
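One way to sketch a model card with its quarterly cadence; the schema and the 92-day staleness window are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ModelCard:
    """Per-model card reviewed each quarter. Field names are illustrative."""
    model: str
    tasks: list                 # what the model is used for
    golden_pair_results: dict   # evaluation scores on the golden-pair suite
    known_failure_modes: list
    baa_status: str
    deprecation_plan: str
    last_reviewed: date

    def review_due(self, today: date) -> bool:
        # Quarterly cadence: flag the card once ~one quarter has elapsed.
        return (today - self.last_reviewed).days > 92
```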

What AI does NOT do

  • Does NOT auto-submit claims
  • Does NOT auto-chart without crew sign-off
  • Does NOT auto-release POs
  • Does NOT auto-update credentials
  • Does NOT act on access control without supervisor approval
  • Does NOT let the patient voice agent act on balance or payment requests without a human handoff available
  • Does NOT ever surface PHI to a cross-tenant view
  • Does NOT get its behavior modified by chat memory or transcripts
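A denylist like the one above can be enforced as a hard guard in front of any AI-proposed action. A sketch, with action names and the function signature assumed for illustration:

```python
# Actions AI may propose but never execute autonomously (names illustrative)
AUTONOMOUS_FORBIDDEN = {
    "submit_claim",
    "finalize_chart",
    "release_po",
    "update_credential",
    "change_access_control",
}


def execute_ai_action(action: str, human_approved: bool) -> str:
    """Hard guard: anything on the denylist raises unless an explicit
    human approval accompanies the request."""
    if action in AUTONOMOUS_FORBIDDEN and not human_approved:
        raise PermissionError(f"{action} requires human sign-off")
    return f"executed:{action}"
```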

Logging

Every inference call records prompt, response, provider, model, latency, token usage, and cost. Retention is ten years for any call that touches a billable claim or patient record, per F1336. Tenants can export their own AI logs at any time.
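The log record and its retention rule can be sketched as below. The ten-year floor comes from the policy above; the field names and the shorter default retention are assumptions for illustration:

```python
from dataclasses import dataclass

TEN_YEARS_DAYS = 3653  # retention floor for calls touching billable claims or patient records


@dataclass
class InferenceLog:
    """One record per inference call. Field names are illustrative."""
    prompt: str
    response: str
    provider: str
    model: str
    latency_ms: float
    tokens: int
    cost_usd: float
    touches_claim_or_record: bool

    @property
    def retention_days(self) -> int:
        # Ten years when a billable claim or patient record is involved;
        # the one-year default for other calls is an assumed placeholder.
        return TEN_YEARS_DAYS if self.touches_claim_or_record else 365
```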