Insurance Rules Are the Real Boss

What deploying AI across multi-location specialty practices teaches you about the workflows nobody ever wrote down, and the operators who hold them in their heads.

By Tom Nork, Head of Client Experience, Parakeet Health

"Just book the appointment."

It's the simplest sentence in healthcare scheduling. It's also the one that humbles every technology vendor who walks in confident.

A patient calls. They want to see a dermatologist. Cool. Easy. Then the questions start.

Are they a new patient or established? Different slot length, different provider availability. What insurance do they have? PPO? They can typically be scheduled two days out. HMO with a primary care doc and an active referral? Now you're looking at about a week. HMO without a PCP on file? A month from now, assuming this clinic accepts the plan at all, in this state, on this provider's panel.

That's not an edge case. That's the workflow. And that's just the insurance dimension.
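The branching above can be sketched as a small function. This is a hypothetical illustration, not Parakeet's actual rules engine; the plan names, field names, and lead times are stand-ins for whatever a given practice actually configures.

```python
from datetime import date, timedelta
from typing import Optional

def earliest_offer_date(insurance: str, has_pcp: bool, has_referral: bool,
                        today: date) -> Optional[date]:
    """Earliest date a new-patient slot may be offered, per the rules above.

    Returns None when the call should be escalated to a human scheduler.
    Lead times here are illustrative; real values are per-practice config.
    """
    if insurance == "PPO":
        return today + timedelta(days=2)    # fast lane: roughly two days out
    if insurance == "HMO" and has_pcp and has_referral:
        return today + timedelta(days=7)    # about a week, referral validated first
    if insurance == "HMO" and not has_pcp:
        return today + timedelta(days=30)   # a month out, if the plan is accepted at all
    return None                             # anything else: hand off to a human
```

Even this toy version makes the point of the essay: the conversation layer is trivial next to the branching underneath it.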

The matrix nobody documented

One of the first things we ask for during implementation isn't a feature request; it's a foundational dependency. We need the practice's insurance acceptance list, broken out by clinic and provider. Without it, you cannot reliably book a new patient. With it, you can scale across dozens of locations.

The list almost never exists in a clean format. It lives in a spreadsheet on someone's desktop, or in three different places that disagree with each other, or, most commonly, in the head of the person who's been managing credentialing for fifteen years.

That person is the gatekeeper of a workflow that the org chart doesn't capture. They know that one provider is technically credentialed with a payer but has been quietly refusing referrals for six months because of a contracting dispute. They know that the rule book says HMO patients need a referral, but the call center has been making case-by-case judgment calls when the referring office is one specific clinic across town. They know which insurance lead times are documented, which are tribal, and which are wrong on the wiki but right in practice.

Note: Insurance rules are the most visible layer of this. Underneath them sits a second matrix of clinic-level scheduling templates, provider preferences, and visit-type configurations, sometimes hundreds of rules per practice. Each one has to be encoded for the AI to be trusted with a single call.
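In data terms, the acceptance list the section describes is a lookup keyed by clinic, provider, and plan. A minimal sketch, with entirely made-up clinic, provider, and plan names:

```python
# Illustrative shape for an insurance acceptance matrix: which plans each
# provider is on panel for, at each clinic. All names are hypothetical.
ACCEPTANCE = {
    ("downtown", "dr_lee"):   {"Acme PPO", "Acme HMO"},
    ("downtown", "dr_patel"): {"Acme PPO"},   # credentialed for HMO, quietly not taking it
    ("westside", "dr_lee"):   {"Acme PPO"},
}

def accepts(clinic: str, provider: str, plan: str) -> bool:
    """True only if this provider, at this clinic, is on this plan's panel."""
    return plan in ACCEPTANCE.get((clinic, provider), set())
```

The key design point is that acceptance is a property of the (clinic, provider, plan) triple, not of the practice. The same provider can answer differently at two locations, which is exactly why a practice-level list is never enough.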

The build inside the build

When we deploy our AI voice agent into a new practice, the hardest part of the work isn't the conversation. It isn't the EHR integration. It's the rules.

Every clinic is a configuration project. Each location has its own provider schedule, its own insurance matrix, its own cancellation patterns, its own quirks about what gets booked online versus what needs a human. A practice with fifty-five clinics isn't one configuration job. It's fifty-five overlapping ones, with shared logic at the parent level and exceptions at every leaf.
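"Shared logic at the parent level and exceptions at every leaf" maps naturally onto layered configuration: parent defaults merged with per-clinic overrides. A minimal sketch, with hypothetical keys and values:

```python
# Parent-level defaults shared by every clinic in the practice.
PARENT_DEFAULTS = {
    "new_patient_slot_minutes": 30,
    "allow_online_booking": True,
    "ppo_lead_days": 2,
}

# Per-clinic exceptions: only the keys that differ from the parent.
CLINIC_OVERRIDES = {
    "clinic_17": {"new_patient_slot_minutes": 45},   # longer intake at this location
    "clinic_03": {"allow_online_booking": False},    # front desk books everything here
}

def effective_config(clinic_id: str) -> dict:
    """Merge parent defaults with this clinic's exceptions (overrides win)."""
    return {**PARENT_DEFAULTS, **CLINIC_OVERRIDES.get(clinic_id, {})}
```

A fifty-five-clinic practice then becomes one shared ruleset plus fifty-five (mostly small) override sets, rather than fifty-five copies that drift apart.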

What this looks like in practice:

  • PPO: scheduled approximately two days out. Fast lane for established workflows.

  • HMO + active referral: around a week out. Referral validation has to happen before the slot is held.

  • HMO, no PCP on file: roughly a month, and only if this clinic accepts the plan at all.

And those are just the lead-time rules. They sit on top of a network of decisions about provider panels, visit types, double-booking thresholds, no-move keywords for cancel-fill campaigns, and the question of whether a patient with a balance owed should be allowed to book at all before the front desk has a chance to talk to them.
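The checks in that paragraph behave like a chain of gates run before any slot is offered: each one either clears the call or routes it to a human. A hedged sketch, with field names invented for illustration rather than drawn from any real schema:

```python
# Hypothetical pre-booking gate chain for the decisions listed above.
# Each gate can clear the call or explain why a human should take over.
def booking_gate(patient: dict) -> str:
    """Return 'book' to proceed, or the reason to hand off to staff."""
    if patient.get("balance_owed", 0) > 0:
        return "handoff: outstanding balance, front desk talks to them first"
    if patient.get("plan") == "HMO" and not patient.get("referral_on_file"):
        return "handoff: referral must be validated before a slot is held"
    return "book"
```

The order matters: the balance question is checked before anything insurance-related, because no lead-time rule applies to a patient the practice doesn't want auto-booked at all.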

Eric Mao, our CPO, has written about why we put more engineering effort into the scheduling rules engine than into the language model itself. From the implementation side, the same truth shows up in a different shape: the AI doesn't get to be smart until the rules are right. And the rules are never just documented. They have to be elicited.

Pseudo-experts

The best health tech operators I know all share one trait. They've become pseudo-experts in workflows they never expected to learn.

I didn't go to school to understand the difference between a primary care referral, a self-referral, and a payer-mandated authorization. I learned it because a patient experience analyst at a multi-state derm group walked me through it, patiently, three different times, until I could draw the decision tree myself.

I didn't grow up wanting to know the operational difference between a single-location instance of a scheduling system and a multi-location one with bridge tables for cross-clinic queries. I learned it because an IT lead at an eyecare practice flagged the architectural distinction in a kickoff call, and I needed to understand it to know what we could and couldn't build for them.

This is the unglamorous part of doing client experience work in healthcare AI. You become a small-time expert in things that don't show up in any product spec: referral lifecycle quirks, telephony skill-pass behavior, the way one EHR handles triple-booking differently than another. None of it is glamorous. All of it is necessary.

The product is the AI. The work is everything else.

Empathy, then leverage

I've started thinking of operational empathy as the precondition for everything else. Before you can automate a workflow, you have to respect it. Before you can simplify it, you have to understand why it got complicated in the first place. Most of the time, the complexity is not stupidity. It's accumulated judgment, layered on year after year by people responding to real situations the rule book didn't anticipate.

The AI doesn't replace any of that. It carries it. Every successful deployment we've shipped has been one where we treated the front desk lead, the call center manager, and the credentialing coordinator as the source of truth, not as obstacles to a faster rollout.

That's what scaling looks like once the pilot is over. It's not a clean replication of one configuration to many. It's the slow, careful work of asking better questions, listening to the answers, and building a system that respects the actual workflow instead of the marketing version of it.

If you're a vendor and your first instinct when a client describes their insurance matrix is to suggest they simplify it, you are going to lose that account. The matrix exists because the business demands it. The job is to encode it, not to argue with it.

The patients calling don't care about any of this. They just want an appointment. The fact that "just book the appointment" requires an entire decision tree behind the curtain is exactly the point. The teams making it look easy every day deserve more credit than they get, and the technology that gets to sit alongside them has to earn its place at that desk.

Curious how a configurable AI voice agent navigates this complexity at scale?

We work with multi-location specialty practices to deploy patient access AI that respects the workflows your operations team has spent years building. Let's talk about what that could look like for yours.

Crafted in San Francisco 🌉

© 2026 Parakeet Health, Inc.
