We ship AI features in the chart. We're a mental-health clinic. Those two things are supposed to be in tension. They aren't, but only because we made specific decisions early. This post is what we decided and why.
The four things that earn trust (or don't)
Patients don't object to AI in their chart. They object to AI in their chart that nobody told them about, that nobody can explain, that nobody can turn off, and that nobody can audit. Address those four and the temperature drops to zero.
1. Tell them. In intake. Before they consent to anything else.
We disclose AI use in three sentences in the intake form, followed by an opt-out checkbox. About 4% of patients opt out. Almost everyone in the 4% comes back later and opts back in once they've had a few sessions and trust the clinician.
2. Show them what it does. And what it doesn't.
Our patient-facing one-pager (which we're happy to share) covers exactly what the AI sees: session audio (optional), chart text, prior outcome scores. And what it doesn't: no payment data, no insurance reviews, no decisions about prescribing or diagnosis.
3. Off switch. Per session. Per chart.
Any clinician can turn off AI assist for a session in one click. Any patient can ask their clinician to chart by hand for the rest of the relationship. Both options are documented and both are exercised regularly.
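A minimal sketch of how a two-level off switch can work (the names and data model here are hypothetical, not our actual schema): either flag wins, and the default is on.

```python
from dataclasses import dataclass

@dataclass
class AiAssistPrefs:
    """Hypothetical per-encounter preferences for AI assist."""
    chart_opt_out: bool = False    # patient-level: chart by hand for the relationship
    session_opt_out: bool = False  # clinician-level: off for this one session

def ai_assist_enabled(prefs: AiAssistPrefs) -> bool:
    # Either switch disables AI assist; neither party can be overridden.
    return not (prefs.chart_opt_out or prefs.session_opt_out)
```

The design point is that the two switches are independent: the clinician's one-click session toggle never touches the patient's standing chart-level choice.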
4. Audit trail. Every action.
Every AI suggestion is logged separately from the clinician's accepted note. If you ever wanted to know what the AI drafted vs. what the clinician kept, you could pull that report. We've never had to. The transparency is the point.
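One way to keep drafted-vs-kept pullable is to store both texts side by side and diff them on demand. This is an illustrative sketch, not our production logging; the class and field names are invented.

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class NoteAuditEntry:
    session_id: str
    ai_draft: str       # what the AI suggested
    accepted_note: str  # what the clinician signed

@dataclass
class NoteAuditLog:
    entries: list = field(default_factory=list)

    def record(self, session_id: str, ai_draft: str, accepted_note: str) -> None:
        self.entries.append(NoteAuditEntry(session_id, ai_draft, accepted_note))

    def drafted_vs_kept(self, session_id: str) -> str:
        # Unified diff of the AI draft against the signed note.
        e = next(x for x in self.entries if x.session_id == session_id)
        return "\n".join(difflib.unified_diff(
            e.ai_draft.splitlines(), e.accepted_note.splitlines(),
            fromfile="ai_draft", tofile="accepted_note", lineterm=""))
```

Because the draft is logged before the clinician edits, the report exists whether or not anyone ever asks for it.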
What we negotiated with our AI vendor
Three things, in writing, before we shipped.
- BAA in place. PHI processed under enterprise contract. No use for model training.
- In-region (US-only) processing. Data residency confirmed in writing.
- Audit log of every API call from our system, queryable by us, retained for 6 years.
If your AI vendor won't sign a BAA — or won't promise PHI is excluded from training — that's the entire conversation. Walk.
What we still don't let AI do
- Auto-sign a note. Ever. The clinician signs.
- Auto-bill a code. The clinician approves the code suggestion.
- Speak to a patient as a clinician. The AI receptionist is clearly identified as a receptionist, not a provider.
- Make any decision about diagnosis, prescribing, or risk classification.
The shorter the list of what AI is allowed to do alone, the more freely it can do everything else.
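The guardrails above amount to a deny-list gate in front of any AI-initiated action. A minimal sketch, with invented action names, of what that check can look like:

```python
# Actions the AI may never complete alone; each requires a clinician's
# explicit approval. Action names here are illustrative.
BLOCKED_WITHOUT_CLINICIAN = {
    "sign_note",            # the clinician signs
    "bill_code",            # the clinician approves the code suggestion
    "message_as_provider",  # the AI identifies itself as a receptionist
    "diagnose",
    "prescribe",
    "classify_risk",
}

def requires_clinician(action: str) -> bool:
    return action in BLOCKED_WITHOUT_CLINICIAN

def execute(action: str, clinician_approved: bool = False) -> str:
    if requires_clinician(action) and not clinician_approved:
        raise PermissionError(f"{action} requires clinician approval")
    return f"{action}: done"
```

Anything outside the blocked set (drafting, summarizing, scheduling) runs freely, which is the point of keeping the list short.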
What surprised us
Patients took to it faster than clinicians did. The clinicians worried about being judged or surveilled. The patients discovered their notes were finished the same day instead of two weeks later. That changed the conversation faster than any policy doc could.