Reducing alert fatigue in clinical software.

In a typical ICU, a nurse handles hundreds of alerts per shift; the majority are dismissed without action. That's not a tech problem — it's a design problem. Here are the five design decisions we use to materially reduce dismissed alerts when building clinical software.

The temptation: add another alarm.

Every clinical software vendor's first instinct is to add a new alarm pattern: a different chime, a louder color, a vibrating wearable. This makes things worse. The ear is already saturated; new sounds compete with the ones that matter. We spent week one of the engagement establishing one rule:

No new alarm sounds. Period.

The five decisions.

1. AI sorts; humans act.

The model never silences an alert. It re-orders the queue and attaches a one-line rationale to each decision. Clinical staff retain full visibility into every signal; the AI changes the order in which alerts appear, not which alerts appear.
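
To make that contract concrete, here is a minimal TypeScript sketch of "sort, never silence." The type names and the injected scorer are illustrative, not our production schema; the invariant worth encoding is that the output is always a permutation of the input.

    // Sketch only: Alert, RankedAlert, and the injected scorer are
    // illustrative names, not the production schema.
    interface Alert {
      id: string;
      patientId: string;
      message: string;
      raisedAt: Date;
    }

    interface RankedAlert extends Alert {
      score: number;
      rank: number;        // position the nurse sees
      rationale: string;   // the one-line, human-readable reason
    }

    function reorderQueue(
      alerts: Alert[],
      scoreAlert: (a: Alert) => { score: number; rationale: string },
    ): RankedAlert[] {
      const ranked = alerts
        .map((a) => ({ ...a, ...scoreAlert(a) }))
        .sort((x, y) => y.score - x.score)
        .map((a, i) => ({ ...a, rank: i + 1 }));

      // The property that makes this deployable: nothing is ever
      // dropped. Same alerts in, same alerts out, new order.
      if (ranked.length !== alerts.length) {
        throw new Error("reorderQueue must never silence an alert");
      }
      return ranked;
    }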

2. Visual weight, not auditory weight.

Severity gets communicated through type weight, color contrast, and layout — not new sounds. The most critical alerts move to the top of the screen with high contrast; the rest collapse into a quiet pile. The eye can be re-directed; the ear is full.
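
One way to hold the "no new sounds" line structurally is to let severity map only to visual tokens, so there is no audio property for anyone to set. A sketch; the token names and values are illustrative, not our shipped design system.

    // Severity is expressed purely in visual weight. There is no
    // sound field here at all, so "no new alarm sounds" holds by
    // construction. Token values are illustrative.
    type Severity = "critical" | "elevated" | "routine";

    interface VisualWeight {
      fontWeight: 400 | 600 | 700;
      contrast: "high" | "medium" | "low";
      placement: "top" | "list" | "collapsed";
    }

    const WEIGHTS: Record<Severity, VisualWeight> = {
      critical: { fontWeight: 700, contrast: "high", placement: "top" },
      elevated: { fontWeight: 600, contrast: "medium", placement: "list" },
      routine: { fontWeight: 400, contrast: "low", placement: "collapsed" },
    };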

3. Surface = location.

The desktop at the nurses' station shows the full queue with details. iOS in the corridor shows only the next-action item. iOS on rounds shows patient context. Same data, different visual hierarchy per location, which reduces the cognitive load of context-switching.
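
In code, "surface = location" can be one queue rendered through a per-surface view config. A sketch; the surface names and fields are assumptions, not the shipped product.

    // One queue, three renderings. Surface names and field choices
    // are illustrative.
    type Surface = "station-desktop" | "corridor-ios" | "rounds-ios";

    interface SurfaceView {
      items: "full-queue" | "next-action" | "patient-context";
      showDetails: boolean;
    }

    const VIEWS: Record<Surface, SurfaceView> = {
      "station-desktop": { items: "full-queue", showDetails: true },
      "corridor-ios": { items: "next-action", showDetails: false },
      "rounds-ios": { items: "patient-context", showDetails: true },
    };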

4. Audited every Friday.

A clinical safety committee reviews a weekly sample of model decisions. Not just the outcomes, but the model's reasoning. This catches drift early and provides the trail for any incident review. Without this, no clinical team will sign off on AI in the loop.
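
For the Friday review to work, every reorder has to leave a record the committee can sample. A sketch of what we mean; the record shape and field names are assumptions for illustration.

    // Each reorder decision leaves an audit record; the rationale
    // is the same one-liner shown to staff. Shape is illustrative.
    interface AuditRecord {
      alertId: string;
      decidedAt: Date;
      rankBefore: number;
      rankAfter: number;
      rationale: string;
      modelVersion: string;
    }

    // Draw the committee's weekly sample: Fisher-Yates shuffle,
    // then take the first n records.
    function weeklySample(log: AuditRecord[], n: number): AuditRecord[] {
      const pool = [...log];
      for (let i = pool.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [pool[i], pool[j]] = [pool[j], pool[i]];
      }
      return pool.slice(0, Math.min(n, pool.length));
    }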

5. Air-gapped from EHR writes.

The clinical surface reads the EHR; it never writes. Medication, orders, charting: those stay in the certified system. The AI surface is read-only. In our experience, this is what turns a compliance review from "can never approve" into "approved within four weeks."
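
The boundary is easiest to defend when it lives in the type system rather than in policy. A sketch in TypeScript; the method names are illustrative, not a real EHR vendor API.

    // The alert surface is only ever handed a reader. No write
    // interface exists in this package, so "never writes" is a
    // compile-time guarantee, not a code-review catch.
    interface Patient { id: string; name: string; }
    interface Order { id: string; description: string; }

    interface EhrReader {
      getPatient(id: string): Promise<Patient>;
      getActiveOrders(patientId: string): Promise<Order[]>;
      // Deliberately no writeOrder / chart / administer methods.
    }

    async function refreshPatientView(ehr: EhrReader, patientId: string) {
      // Reads only; writes stay in the certified system.
      const patient = await ehr.getPatient(patientId);
      const orders = await ehr.getActiveOrders(patientId);
      return { patient, orders };
    }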

What this pattern delivers, when applied carefully.

  • A meaningful drop in alert dismissals per shift over a 90-day rollout.
  • Auditable model reasoning — every reorder has a human-readable rationale, available to clinical safety review.
  • Faster time-to-bedside on the highest-tier alerts, because they sit at the top of the visual queue.
  • Phased rollout in single-digit weeks once the read-only EHR boundary is set.

What didn't work and why.

  • "Smart silencing." Tried for one week in pilot; immediately reverted. Clinical staff don't want the model to choose what to hide. They want it to choose what to show first.
  • Adding a new "high priority" channel. It reproduced the alarm fatigue inside the new channel within three days. Removed.
  • ML-based severity scoring without explanation. Dropped because it failed the auditability test. Replaced with rule-based scoring augmented by retrieval, so every reorder carries a human-readable rationale; a sketch of that replacement follows this list.
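
Roughly what the replacement looks like: an ordered rule list where the first matching rule wins and supplies both the score and the rationale. The rules and thresholds below are illustrative, not clinically validated, and the retrieval step (pulling supporting guideline text into the rationale) is omitted here.

    // First matching rule wins; every score arrives with its own
    // rationale. Thresholds are illustrative, not clinical advice.
    interface Vitals { spo2: number; heartRate: number; }
    interface ScoreInput { vitals: Vitals; minutesUnacknowledged: number; }

    interface Rule {
      applies(x: ScoreInput): boolean;
      score: number;
      rationale: string;
    }

    const RULES: Rule[] = [
      {
        applies: (x) => x.vitals.spo2 < 88,
        score: 100,
        rationale: "SpO2 below 88%: top of the visual queue",
      },
      {
        applies: (x) => x.minutesUnacknowledged > 30,
        score: 60,
        rationale: "unacknowledged for more than 30 minutes",
      },
    ];

    function scoreAlert(x: ScoreInput): { score: number; rationale: string } {
      const rule = RULES.find((r) => r.applies(x));
      if (rule) return { score: rule.score, rationale: rule.rationale };
      return { score: 0, rationale: "no rule matched: routine order" };
    }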

The shortest version.

Reduce noise by re-ordering, not silencing. Use visual weight instead of new sounds. Make the surface match the location. Read-only at the EHR boundary. Audit visibly. The clinical team is your customer; their cognitive load is your design constraint.


See the private register for the full case study. File an intent if you're building clinical software.