Our solution suite transforms healthcare operations through innovation
Mizzeto delivers bespoke solutions to health plans and TPAs, driving healthcare innovation with deep expertise in emerging technologies and a client-centric approach. Our services span Medicare, Medicaid, Exchange, Vision, Dental, and Behavioral lines of business.
Health plans utilize Mizzeto’s services to solve daily operational challenges and improve efficiency through customized solutions.
Mizzeto partners with TPAs to streamline processes and ensure healthcare compliance through tailored IT solutions.


At Mizzeto, we are proud to be a minority and women-owned business. We believe that varied perspectives and inclusive thinking drive innovation and creativity, enabling us to deliver cutting-edge healthcare solutions. We are dedicated to building a future where everyone has the opportunity to thrive.
Diverse Leadership: Our leadership team reflects the diversity we champion.
Empowering Talent: We recruit and uplift underrepresented voices.
Inclusive Solutions: Diversity informs our approach to meeting community needs.

The rapid acceleration of AI in healthcare has created an unprecedented challenge for payers. Many healthcare organizations are uncertain about how to deploy AI technologies effectively, often fearing unintended ripple effects across their ecosystems. Recognizing this, Mizzeto recently collaborated with a Fortune 25 payer to design comprehensive AI data governance frameworks—helping streamline internal systems and guide third-party vendor selection.
This urgency is backed by industry trends. According to a survey by Define Ventures, over 50% of health plan and health system executives identify AI as an immediate priority, and 73% have already established governance committees.

However, many healthcare organizations struggle to establish clear ownership and accountability for their AI initiatives. When different departments implement AI solutions independently and without coordination, organizations become fragmented and leave themselves open to data breaches, compliance risks, and massive regulatory fines.
AI Data Governance in healthcare, at its core, is a structured approach to managing how AI systems interact with sensitive data, ensuring these powerful tools operate within regulatory boundaries while delivering value.
For payers wrestling with multiple AI implementations across claims processing, member services, and provider data management, proper governance provides the guardrails needed to safely deploy AI. Without it, organizations risk not only regulatory exposure but also the potential for PHI data leakage—leading to hefty fines, reputational damage, and a loss of trust that can take years to rebuild.
Healthcare AI Governance can be boiled down to three key principles:
For payers, protecting member data isn’t just about ticking compliance boxes—it’s about earning trust, keeping it, and staying ahead of costly breaches. When AI systems handle Protected Health Information (PHI), security needs to be baked into every layer, leaving no room for gaps.
To start, payers can double down on essentials like end-to-end encryption and role-based access controls (RBAC) to keep unauthorized users at bay. But that’s just the foundation. Real-time anomaly detection and automated audit logs are game-changers, flagging suspicious access patterns before they spiral into full-blown breaches. Meanwhile, differential privacy techniques ensure AI models generate valuable insights without ever exposing individual member identities.
Enter risk tiering—a strategy that categorizes data based on its sensitivity and potential fallout if compromised. This laser-focused approach allows payers to channel their security efforts where they’ll have the biggest impact, tightening defenses where it matters most.
On top of that, data minimization strategies work to reduce unnecessary PHI usage, and automated consent management tools put members in the driver’s seat, letting them control how their data is used in AI-powered processes. Without these layers of protection, payers risk not only regulatory crackdowns but also a devastating hit to their reputation—and worse, a loss of member trust they may never recover.
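The risk-tiering strategy described above can be sketched in a few lines. This is an illustrative sketch only: the field names and tier rules below are hypothetical examples, not any payer's actual data model or security policy.

```python
# Illustrative risk-tiering sketch. Field names and tier assignments are
# hypothetical examples, not a real payer's data classification.
PHI_IDENTIFIERS = {"member_name", "ssn", "diagnosis_code", "medical_record_number"}
QUASI_IDENTIFIERS = {"zip_code", "birth_year", "gender"}

def risk_tier(field_name: str) -> str:
    """Assign a coarse sensitivity tier to a data field."""
    if field_name in PHI_IDENTIFIERS:
        return "high"    # direct PHI: strictest controls (encryption, RBAC, audit logs)
    if field_name in QUASI_IDENTIFIERS:
        return "medium"  # re-identification risk when fields are combined
    return "low"         # operational data with minimal exposure

print(risk_tier("ssn"))       # high
print(risk_tier("zip_code"))  # medium
```

In practice, the tier would drive which controls apply: high-tier fields get end-to-end encryption and per-access audit logging, medium-tier fields get aggregation or masking before AI models see them.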
AI should break down barriers to care, not build new ones. Yet, biased datasets can quietly drive inequities in claims processing, prior authorizations, and risk stratification, leaving certain member groups at a disadvantage. To address this, payers must start with diverse, representative datasets and implement bias detection algorithms that monitor outcomes across all demographics. Synthetic data augmentation can fill demographic gaps, while explainable AI (XAI) tools ensure transparency by showing how decisions are made.
But technology alone isn’t enough. AI Ethics Committees should oversee model development to ensure fairness is embedded from day one. Adversarial testing—where diverse teams push AI systems to their limits—can uncover hidden biases before they become systemic issues. By prioritizing equity, payers can transform AI from a potential liability into a force for inclusion, ensuring decisions support all members fairly. This approach doesn’t just reduce compliance risks—it strengthens trust, improves engagement, and reaffirms the commitment to accessible care for everyone.
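The outcome monitoring described above can be sketched as a simple disparity check across demographic groups. This is a minimal illustration with invented records; real bias audits use far richer statistics than a single rate gap.

```python
from collections import defaultdict

# Minimal sketch of monitoring AI decision outcomes across demographics.
# The decision records are invented; real audits use richer statistics.
def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Gap between the best- and worst-served groups."""
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
if max_disparity(rates) > 0.25:   # threshold is an illustrative choice
    print("Disparity exceeds threshold: trigger a fairness review")
```

A check like this, run continuously over production decisions, is what turns "bias detection algorithms that monitor outcomes across all demographics" from a principle into an operational control.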
AI should go beyond automating workflows—it should reshape healthcare by improving outcomes and optimizing costs. To achieve this, payers must integrate real-time clinical data feeds into AI models, ensuring decisions account for current member needs rather than outdated claims data. Furthermore, predictive analytics can identify at-risk members earlier, paving the way for proactive interventions that enhance health and reduce expenses.
Equally important are closed-loop feedback systems, which validate AI recommendations against real-world results, continuously refining accuracy and effectiveness. At the same time, FHIR-based interoperability enables AI to seamlessly access EHR and provider data, offering a more comprehensive view of member health.
To measure the full impact, payers need robust dashboards tracking key metrics such as cost savings, operational efficiency, and member outcomes. When implemented thoughtfully, AI becomes much more than a tool for automation—it transforms into a driver of personalized, smarter, and more transparent care.

An AI Governance Committee is a necessity for payers focused on deploying AI technologies in their organization. As artificial intelligence becomes embedded in critical functions like claims adjudication, prior authorizations, and member engagement, its influence touches nearly every corner of the organization. Without a central body to oversee these efforts, payers risk a patchwork of disconnected AI initiatives, where decisions made in one department can have unintended ripple effects across others. The stakes are high: fragmented implementation doesn’t just open the door to compliance violations—it undermines member trust, operational efficiency, and the very purpose of deploying AI in healthcare.
To be effective, the committee must bring together expertise from across the organization. Compliance officers ensure alignment with HIPAA and other regulations, while IT and data leaders manage technical integration and security. Clinical and operational stakeholders ensure AI supports better member outcomes, and legal advisors address regulatory risks and vendor agreements. This collective expertise serves as a compass, helping payers harness AI’s transformative potential while protecting their broader healthcare ecosystem.
At Mizzeto, we’ve partnered with a Fortune 25 payer to design and implement advanced AI Data Governance frameworks, addressing both internal systems and third-party vendor selection. Throughout this journey, we’ve found that the key to unlocking the full potential of AI lies in three core principles: Protect People, Prioritize Equity, and Promote Health Value. These principles aren’t just aspirational—they’re the bedrock for creating impactful AI solutions while maintaining the trust of your members.
If your organization is looking to harness the power of AI while ensuring safety, compliance, and meaningful results, let’s connect. At Mizzeto, we’re committed to helping payers navigate the complexities of AI with smarter, safer, and more transformative strategies. Reach out today to see how we can support your journey.
Feb 21, 2024 • 2 min read

Most health plan operations leaders can tell you their average handle time and their cost per call. Very few can tell you what a single transferred call actually costs when you follow it all the way through the system.
That transferred call triggers a second interaction at $4.90 or more.[1] It resets the resolution clock. It inflates the member’s frustration, which shows up months later in a CAHPS survey the plan cannot retroactively fix. And if the plan is running a Medicare Advantage contract, that CAHPS score is tied directly to Star Ratings, which determine quality bonus payments worth tens of millions in annual revenue.[2]
The real problem is not the cost of one bad call. It is that the way most health plans measure call center quality today was designed for a different era, and it is structurally incapable of seeing how many bad calls are happening, or why.
First call resolution is the most important metric in any health plan contact center. SQM Group’s benchmarking across more than 100 leading North American healthcare call centers puts the industry average FCR rate at 71%. Only 4% of those centers reach the world-class threshold of 80% or higher.[3]
That means roughly 29% of member calls require a callback, transfer, or follow-up. In some studies, the number is far worse: one analysis found the average healthcare FCR rate sitting at 52%, meaning more than half of all member inquiries go unresolved on first contact.[4]
Each of those unresolved calls carries a compounding cost. SQM Group’s research shows that a 1% improvement in FCR translates to approximately $286,000 in annual operational savings for a typical midsize call center.[5] That is not a theoretical model. That is reduced repeat volume, shorter queues, and lower agent workload.
Now consider the member experience side. Satisfaction drops roughly 15% every time a member has to call back about the same issue.[6] The call that started as a routine benefits question becomes, by the third attempt, a complaint. And complaints have an FCR rate of just 47%.[7]
Healthcare call centers face transfer rates as high as 19%.[8] Each transfer does three expensive things simultaneously.
First, it adds direct cost. A transferred call requires a second agent, a second set of minutes, and often a longer total handle time than a single well-routed interaction. With average handle times running 6.6 minutes and average costs at $4.90 per call, a transferred call effectively doubles the expense of that member interaction.
Second, it destroys member confidence. Talkdesk’s survey of 330 health plan members found that 78% described their experience with their insurers as less than seamless. The leading cause was not claims denials or billing errors. It was poor customer service, cited by 31% of respondents.[9] Being transferred between departments and repeating the same information is the archetype of that frustration.
Third, and most overlooked, transfers create data fragmentation. When a call moves from one agent to another, the wrap-up codes, disposition notes, and resolution status become inconsistent. The first agent may mark the call as resolved because they transferred it. The second agent may not log the original call reason. The result is that the plan’s reporting shows two “handled” calls instead of one unresolved member issue.
Many of these transfers are not agent errors. They are routing failures: an IVR that sends a prior authorization status call to a general benefits queue, or a system that cannot identify a member’s preferred language and routes them to an English-only agent by default. These are infrastructure and configuration problems that compound silently across thousands of calls.
Here is where the structural problem becomes clear.
The traditional approach to call center quality assurance, whether run in-house or through an outsourced partner, reviews between 2% and 5% of total interactions. In many operations, the number sits closer to 2%.[10] That means 95% or more of member calls are never evaluated by anyone.
The math alone makes the approach statistically indefensible. A 3% random sample of 800,000 annual calls captures 24,000 interactions. If 232,000 of those calls are repeat contacts, the sample will catch only a small fraction of them, and it will almost never catch the systemic patterns that cause them.
The deeper issue is not just sample size. It is what the QA program is designed to measure. Most legacy QA scorecards evaluate whether an agent followed a script, greeted the member properly, and used compliant language. They do not measure whether the member’s issue was actually resolved, whether the call could have been prevented by better routing, or whether the same question has been asked 500 times this month because a benefit change was poorly communicated.
When quality measurement is limited to agent-level compliance on a tiny sample, the operational problems that drive repeat calls, unnecessary transfers, and member dissatisfaction remain invisible. QA scores can look strong while member experience deteriorates, because the scorecard and the member’s reality are measuring different things.
For Medicare Advantage plans, this is not just an operational inconvenience. It is a revenue problem measured in tens of millions.
CAHPS survey results have historically carried a 4x weight in CMS Star Ratings calculations. While the weighting shifted to 2x for Star Year 2026, CAHPS measures remain a significant driver of overall ratings. CMS’s proposed rules for 2027 and beyond signal that member experience will become an even larger share of the total score, with CAHPS and HOS projected to make up nearly 40% of total Star weight by 2029.[11]
The financial stakes are hard to overstate. The gap between a 3.5-star and a 4+ star plan can translate to tens of millions of dollars in annual quality bonus payments. In 2026, only about 40% of MA-PD contracts achieved 4 stars or higher, the lowest proportion in over five years.[12]
Every repeat call, every unnecessary transfer, every escalation that leaves a member frustrated is a data point that can move CAHPS scores. A plan cannot fix a bad call center experience with a follow-up mailer.
Consider a mid-size Medicaid managed care plan handling 800,000 member calls per year. At a 71% FCR rate, roughly 232,000 of those calls require a repeat contact. At $4.90 per call, the repeat volume alone represents more than $1.1 million in direct costs annually, and that does not account for the extended handle times, supervisor escalations, or member complaints those calls generate.
Now suppose the plan’s QA program reviews 3% of calls. That is 24,000 calls reviewed out of 800,000. The 232,000 repeat interactions? They are almost entirely invisible, because repeat calls do not cluster conveniently in a random 3% sample.
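The arithmetic behind this worked example, spelled out. All figures come from the text itself ($4.90 per call, 71% FCR, 800,000 annual calls, a 3% QA sample).

```python
# The worked example above, in arithmetic form. Inputs come from the article:
# $4.90 per call, 71% FCR, 800,000 annual calls, 3% QA sample rate.
annual_calls = 800_000
fcr_rate = 0.71
cost_per_call = 4.90
qa_sample_rate = 0.03

repeat_calls = round(annual_calls * (1 - fcr_rate))  # unresolved on first contact
repeat_cost = repeat_calls * cost_per_call           # direct cost of repeat volume
reviewed = round(annual_calls * qa_sample_rate)      # calls QA actually evaluates

print(f"Repeat calls: {repeat_calls:,}")             # 232,000
print(f"Direct repeat cost: ${repeat_cost:,.0f}")    # $1,136,800
print(f"Calls reviewed by QA: {reviewed:,}")         # 24,000
```

The gap between the last two numbers is the structural blind spot: nearly ten repeat interactions occur for every call the QA program ever sees.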
The plan sees a QA dashboard that shows 90%+ compliance scores. The quality team reports stable performance. Meanwhile, CAHPS scores are flat or declining, member complaints are rising, and the CX team cannot pinpoint why.
This is not a failure of the people doing the work. It is a failure of the measurement infrastructure. The plan is making decisions based on what 3% of its interactions reveal, while the other 97% contain the signals that actually explain member experience.
One of the most overlooked drivers of call center inefficiency in health plans is language access. Medicaid and dual-eligible populations frequently include members whose primary language is not English. When these members reach an agent who cannot serve them in their preferred language, the result is almost always a transfer, extended hold time, or an unresolved interaction.
CMS requires that Medicare Advantage and Medicaid managed care plans provide meaningful language access. But compliance is often measured at the policy level, not the interaction level. A plan may have interpreter services available, but if the routing logic does not match members to bilingual agents and QA does not evaluate non-English interactions, language-related service failures become invisible in aggregate metrics.
This matters because the members most affected are often the most vulnerable: elderly, disabled, low-income, or limited English proficient populations whose CAHPS responses carry the same weight as every other member’s. A plan that underserves this segment is not just creating an equity gap. It is creating a Star Ratings exposure that shows up 12 to 18 months later in the measurement cycle.
The answer is not to bring everything in-house or to stop working with operational partners. The answer is to modernize how quality is measured, who owns the data, and what the plan can actually see. Whether your call center is in-house, outsourced, or hybrid, these capabilities separate plans that manage costs from plans that manage outcomes.
100% interaction monitoring, not sampling. Any quality program that evaluates only a fraction of calls will always miss the patterns that drive repeat contacts and member dissatisfaction. AI-powered monitoring across voice, chat, and digital channels is now operationally viable and should be the baseline expectation.
Multilingual QA that matches the member population. If your plan serves Medicaid or Medicare Advantage populations, quality monitoring must cover non-English interactions with the same rigor as English calls. This means native-language evaluation, not post-hoc translation of transcripts.
Plan-owned quality measurement. Regardless of who operates the call center, the plan should own the quality data. When quality measurement is controlled entirely by the team handling the calls, there is no independent check on whether reported performance matches member reality.
Root-cause analytics, not just scorecards. A QA score tells you whether an agent followed a script. It does not tell you why members are calling back, which call types drive the most transfers, or where routing logic is failing. Modern QA surfaces the operational signals behind the numbers.
Direct linkage to CAHPS and Star Ratings strategy. Call center performance and Star Ratings are not separate workstreams. Quality data from member interactions should feed directly into Stars strategy, giving plans the ability to intervene before CAHPS surveys go into the field.
Operational intelligence, not just compliance reporting. The goal is not a cleaner scorecard. It is the ability to see which processes are broken, which member segments are at risk, and which changes will move the metrics that matter.
Mizzeto’s Multilingual QA Solution was built to give health plans 100% visibility into call center quality across every language their members speak. Rather than relying on sampling or siloed scorecards, the platform uses AI to monitor and score every member interaction, surfacing the compliance risks, service failures, and repeat-call drivers that legacy QA methods cannot detect. Whether your call center is in-house, outsourced, or a combination, Mizzeto puts quality oversight and operational intelligence back in the hands of the plan.
The most expensive call in your contact center is not the one that takes 12 minutes. It is the one that generates three more calls, a formal complaint, and a CAHPS response that pulls your Star Rating below the bonus threshold.
Health plans have spent years optimizing the visible costs: average handle time, headcount, per-call rates. The invisible costs, the ones hiding in the 95% of calls nobody reviews, are where the real money is. The plans that figure this out first will not just run more efficient call centers. They will have a structural advantage in Star Ratings, member retention, and the ability to make operational decisions based on what is actually happening.
The call center is not a cost center to be minimized. It is an intelligence asset to be owned.
[1] DialogHealth, “Latest Healthcare Call Center Statistics,” 2025.
[2] Ameridial, “Health Plan Member Services Outsourcing for Star Ratings,” 2026.
[3] SQM Group, “Why FCR Matters to Healthcare Insurance Call Centers.”
[4] Physicians Angels, “Healthcare Call Center Statistics To Know,” 2025.
[5] Talkdesk, “How Payers Can Improve Member Experience with Modern Contact Centers.”
[6] TheAIQMS, “AI QMS for BPO: Scaling Contact Center Quality Without Expanding QA Teams,” 2025.
[7] Enthu.ai, “Call Center Quality Assurance,” 2026.
[8] Press Ganey, “CMS Just Ignited the Biggest Stars Shake-Up in a Decade,” December 2025.
[9] Oliver Wyman, “How Plans Can Win as Medicare Advantage Star Ratings Change,” 2025.
[10] CAQH, “2025 CAQH Index: U.S. Healthcare Avoided $258 Billion,” February 2026.
[11] CMS, “2026 Star Ratings Fact Sheet,” November 2025.
Jan 30, 2024 • 6 min read

Your UM director just told you the team averaged 8.5 days on standard prior auths last quarter. You nodded, made a note, moved on. In six months, that number becomes a regulatory violation.
For years, health plans have complained about prior authorization burdens: opaque decisioning, variable outcomes, slow turnaround, escalating provider frustration. Half-hearted automation efforts and hybrid analog-digital processes made the problem more visible without solving it.
CMS is now codifying expectations in a way that forces every payer to face reality: the way prior authorization has been done cannot survive 2026.
The changes coming from the CMS Interoperability and Prior Authorization Final Rule aren't incremental technical requirements. They're operational inflection points that will expose long-standing design flaws in prior authorization and utilization management. Leaders who wait until enforcement deadlines will find themselves reacting. Those who act now can redesign the system itself.
Most plans are asking: "What do we have to do to comply with CMS by 2026?" That's a tactical question.
The strategic question is: How do we redesign our prior authorization engine so it performs at the speed, transparency, and explainability levels CMS expects without burning clinical resources, inflating costs, or fragmenting operations?
Checking boxes gets you compliant. Redesigning the system gets you competitive.
Beginning January 1, 2026, CMS moves prior authorization from operational best practice to regulatory mandate.
Under the Interoperability and Prior Authorization Final Rule (CMS-0057-F), impacted payers including Medicare Advantage, Medicaid managed care organizations, CHIP, and certain Qualified Health Plans must comply with several non-negotiable requirements¹:
72-hour turnaround for expedited prior authorization requests
Seven calendar days for standard requests
Specific, actionable denial reasons included with every adverse determination
Public reporting of prior authorization metrics including approval rates, denial rates, and average processing times beginning March 31, 2026.²
FHIR-based APIs to support electronic prior authorization workflows and expanded data access
Here's what this means in practice:

These aren't tweaks to existing workflows. They introduce enforceable timelines, public transparency, and standardized data exchange that most legacy UM environments were never built to support.
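The two turnaround clocks above can be sketched as a deadline computation. This is a deliberately simplified sketch: real compliance logic must also handle extensions, pended requests awaiting documentation, and state-specific rules.

```python
from datetime import datetime, timedelta

# Simplified sketch of the CMS-0057-F turnaround clocks: 72 hours for
# expedited requests, seven calendar days for standard ones. Real compliance
# logic must also handle extensions, pends, and state-specific rules.
TURNAROUND = {"expedited": timedelta(hours=72), "standard": timedelta(days=7)}

def decision_deadline(received_at: datetime, priority: str) -> datetime:
    """The latest moment a determination can issue without an SLA breach."""
    return received_at + TURNAROUND[priority]

received = datetime(2026, 1, 5, 9, 30)
print(decision_deadline(received, "expedited"))  # 2026-01-08 09:30:00
print(decision_deadline(received, "standard"))   # 2026-01-12 09:30:00
```

The operational implication is that the clock starts at receipt, not at clinical review, which is why intake latency consumes SLA budget directly.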
The 2026 requirements don't create new operational weaknesses. They expose existing ones.
You Can't Hire Your Way to 72-Hour Compliance
If your prior authorization process depends on manual triage, inconsistent intake validation, or batch review cycles, meeting 72-hour and seven-day mandates becomes structurally challenging. Missed SLAs are no longer internal performance issues. They become regulatory violations.
The constraint is workflow design, not headcount. Adding clinical reviewers may temporarily reduce queue depth, but it doesn't eliminate intake latency, fragmented decision logic, or rework loops that consume days before cases reach clinical evaluation.
Your Denial Rates Are About to Become Public
For the first time, denial rates and processing times will be publicly reported beginning March 31, 2026.²
Plans with high denial rates, particularly those with elevated appeal overturn percentages, will face scrutiny from regulators, providers, and beneficiaries. Appeal overturn rates that were previously internal quality metrics become public signals about determination consistency.
Denials frequently reversed on appeal start looking less like utilization management discipline and more like systematic dysfunction.
Unstructured Intake Creates SLA Risk
Any workflow relying on fax, email attachments, or unstructured documentation creates intake uncertainty. Under 2026 mandates, that uncertainty translates directly into SLA exposure. What was operational inconvenience becomes regulatory vulnerability.
When requests arrive incomplete or in unstructured formats, the clock has already started but clinical review cannot. Days get consumed in follow-up and clarification before actual determination work begins.
Policy Fragmentation Becomes Audit Risk
Medical policies in PDFs. Coverage criteria configured separately in UM systems. Benefit rules embedded in claims engines.
When these layers diverge, denial rationale becomes inconsistent. Inconsistent rationale fuels appeals. Appeal patterns become public metrics tracked by CMS and visible to your provider network.
The 2026 rule requires “a specific reason for a denial”¹ in a manner that allows providers to understand what additional information or clinical criteria would result in approval. Fragmented policy governance makes this level of specificity difficult to maintain consistently across thousands of determinations.
API Implementation Without Operational Alignment Fails
FHIR-based Prior Authorization APIs are mandated under the final rule¹, but successful implementation requires more than technical connectivity.
These APIs demand structured, standardized data; clear mapping of coverage rules; real-time status tracking; and determination traceability. Treating API implementation as a technical bolt-on without aligning internal policy logic and workflow orchestration creates compliance on paper but operational brittleness in practice.
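To make "structured, standardized data" concrete, here is a minimal sketch of the kind of payload a FHIR prior authorization API exchanges, loosely following the FHIR R4 Claim resource with `use` set to `preauthorization`. All identifiers, references, and the member/clinic/payer names are invented for illustration.

```python
import json

# Minimal sketch of a FHIR R4 Claim resource used as a prior authorization
# request (use = "preauthorization"). All identifiers are invented examples;
# production systems follow a full implementation guide, not this sketch.
prior_auth_request = {
    "resourceType": "Claim",
    "status": "active",
    "use": "preauthorization",
    "patient": {"reference": "Patient/example-member"},
    "provider": {"reference": "Organization/example-clinic"},
    "insurer": {"reference": "Organization/example-payer"},
    "created": "2026-01-05T09:30:00Z",
    "priority": {"coding": [{"code": "stat"}]},  # expedited request
    "item": [{
        "sequence": 1,
        "productOrService": {"coding": [{
            "system": "http://www.ama-assn.org/go/cpt",
            "code": "70551",  # example CPT code (MRI, brain, without contrast)
        }]},
    }],
}

print(json.dumps(prior_auth_request, indent=2))
```

Every field here maps to something a legacy fax-based workflow leaves implicit: who the member is, which service is requested, and whether the 72-hour or seven-day clock applies.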
Reporting Infrastructure Will Strain Multiple Teams
Public reporting requires consolidated, accurate, reconcilable data. The rule requires payers to publicly report metrics including prior authorization decisions, denial reasons, and turnaround times².
Most plans currently track these metrics across multiple systems: intake portals, UM platforms, claims engines. Without centralized reporting architecture, compliance becomes a manual reconciliation exercise rather than an automated output.
What Forward-Thinking Plans Are Doing Differently
The plans that will meet and leverage the 2026 expectations approach the problem differently.
They Treat Prior Authorization as a System, Not a Function
Rather than thinking in terms of “PA teams” or “PA tech stacks,” they define a unified decision pipeline: intake → policy → decision → evidence → reporting. Every component must be architected for speed, traceability, and defensibility.
They Engineer Intake for Decision Readiness
Systems that treat intake as a validation and structuring event, not just data capture, dramatically reduce downstream review time. When requests arrive complete and structured, decisions get smarter and faster.
If a significant portion of requests require follow-up for missing clinical documentation, days burn before clinical review even starts. Fixing intake fixes throughput.
They Govern Policy and Logic Centrally
If policy resides in PDFs, disparate tools, and tribal knowledge, automation fails. Aligning policy logic, configuration, and deployment is the prerequisite for defensible, explainable decisions that meet CMS transparency expectations.
Centralized policy governance ensures reviewers apply consistent standards across all determinations, directly impacting appeal rates and public reporting metrics.
They Accelerate FHIR API Adoption Strategically
Forward-leaning plans are adopting FHIR Prior Authorization APIs now, enabling electronic request and response, reducing provider friction, and establishing a foundation for real-time decisioning rather than batch processing.
This isn't just compliance theater. It's infrastructure for the next decade of utilization management.
Most organizations' instinctive reaction to tighter SLAs is staffing expansion. Consider what that investment looks like:
Clinical hiring: Expanding nurse reviewer teams to handle faster turnaround requirements
Reporting resources: Staff to reconcile metrics across systems for public reporting compliance
API implementation: Technical infrastructure for FHIR PA API deployment and provider integration
Policy governance: Often unfunded, leading to continued fragmentation and appeal exposure
The alternative is investing in redesigning the decision pipeline itself. Structured intake, centralized policy logic, and automated workflow orchestration reduce review burden while improving consistency. The ROI isn't just compliance. It's operational leverage.
If you're treating this as a compliance checklist, you're already behind. This is a fundamental redesign of how utilization management operates.
By Q2 2025: Audit your current PA workflow end-to-end. Identify where time gets consumed: intake validation, clinical review queues, policy lookup, documentation rework, peer-to-peer scheduling. Measure your actual turnaround distribution, not averages.
By Q3 2025: Centralize policy governance. Map coverage criteria to decision logic. Ensure clinical reviewers are applying consistent standards that can withstand public scrutiny and audit review.
By Q4 2025: Implement structured intake that validates completeness before requests enter clinical queues. Stand up reporting infrastructure that consolidates metrics in real time.
By Q1 2026: Conduct dry runs of public reporting. Simulate 72-hour expedited workflows under peak volume. Validate FHIR API functionality with key provider groups.
The plans that redesign now won't just comply. They'll operate with structural advantage.
We built Smart Auth after years of working inside health plan operations, seeing firsthand where prior authorization workflows break down. It's designed to make prior authorization decision-ready from intake through policy application and final determination.
Smart Auth structures data at intake, aligns policy logic centrally, and supports the traceability required for timely decisions and transparent reporting. It enables defensible, explainable determinations at the speed CMS expects without requiring massive clinical hiring or fragmented point solutions.
In 2026, prior authorization performance won't be judged internally. It will be measured, reported, and compared publicly. The question isn't whether to redesign. It's whether you start now or spend 2026 firefighting compliance gaps while your metrics become part of the public record.
Jan 30, 2024 • 6 min read

Physicians and their staff complete an average of 39 prior authorization requests per week. They spend roughly 13 hours processing them.¹ When requests get denied, more than 80% of those denials are partially or fully overturned on appeal, meaning the care was appropriate all along.²
That is not a utilization management program working as intended. That is a system generating unnecessary friction, burning clinical resources, and producing decisions it cannot defend.
Auto approvals were supposed to fix this. Route the obvious cases through automatically. Free up clinical reviewers for complex decisions. Cut turnaround times. Reduce provider abrasion.
Most health plans tried some version of this approach. Few got the results they expected.
The math behind auto approvals is straightforward. If a significant share of prior authorization requests are routine, policy aligned, and destined for approval anyway, why route them through manual clinical review?
The problem is execution. In a recent KFF analysis of Medicare Advantage data, denial rates ranged from 4.2% at Elevance Health to 12.8% at UnitedHealth Group.² Those rates might seem low. But when over 80% of denied requests are overturned on appeal, the real story becomes clear: plans are denying care they will ultimately approve, just with extra steps, extra cost, and extra delay.
According to the CAQH Index, only 35% of medical prior authorizations are conducted fully electronically.³ Manual prior authorization transactions cost providers $10.97 each. Fully electronic ones cost $5.79, roughly half. For payers, the gap is even wider: $3.52 per manual transaction versus five cents for a fully electronic one.⁴
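To see what that per-transaction gap means at scale, here is a back-of-the-envelope calculation using the CAQH cost figures cited above. The annual volume and conversion share are hypothetical; only the unit costs come from the source:

```python
# Per-transaction costs cited from the CAQH Index (USD).
PROVIDER_COST = {"manual": 10.97, "electronic": 5.79}
PAYER_COST = {"manual": 3.52, "electronic": 0.05}

def annual_savings(volume: int, share_converted: float) -> dict:
    """Savings if `share_converted` of `volume` manual PAs go fully electronic.
    Volume and share are illustrative inputs, not industry data."""
    converted = volume * share_converted
    return {
        "provider": round(converted * (PROVIDER_COST["manual"] - PROVIDER_COST["electronic"]), 2),
        "payer": round(converted * (PAYER_COST["manual"] - PAYER_COST["electronic"]), 2),
    }

# A plan processing 100,000 manual PAs a year, converting half:
print(annual_savings(100_000, 0.5))
# {'provider': 259000.0, 'payer': 173500.0}
```

Even at modest volumes, the gap compounds into six-figure annual amounts on both sides of the transaction.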
Auto approvals should be eating into these costs. For most plans, they are not.
The failure mode looks the same everywhere. A health plan bolts an auto approval layer onto a prior authorization workflow that was designed for manual review. Every request enters the same intake funnel. Data arrives incomplete or inconsistently structured. Clinical documentation lands as bulk attachments, hundreds of pages of chart notes that no automation can parse reliably.
Under those conditions, auto approval rules get conservative fast. Exceptions multiply. Edge cases pile up. The system cannot distinguish between a straightforward imaging request that matches policy criteria and a complex surgical case requiring genuine clinical judgment. So it sends both to manual review, because it cannot trust its own inputs.
The result: auto approvals exist on paper but barely dent the queue. Reviewers still touch most cases. And the 40% of physician practices that now employ staff exclusively to handle prior authorization paperwork¹ see no relief.
Three specific blockers keep this pattern locked in place.
Intake is broken. Requests arrive via fax, portal, phone, and EDI, often missing required fields. When the system cannot confirm a request is complete, it cannot auto approve it. We wrote about this problem in detail in our piece on modernizing UM intake. The front end is where most prior authorization delay actually begins.
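Closing the intake gap starts with something mundane: refusing to treat a request as workable until its required fields are confirmed present. A minimal sketch of that completeness check (field names are illustrative, not any standard's):

```python
# Hypothetical required fields for a prior authorization request.
REQUIRED_FIELDS = {"member_id", "provider_npi", "procedure_code",
                   "diagnosis_code", "service_date"}

def missing_fields(request: dict) -> set:
    """Return the required fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not request.get(f)}

request = {"member_id": "M123", "provider_npi": "1234567890",
           "procedure_code": "70551", "diagnosis_code": "G43.909"}
print(missing_fields(request))  # {'service_date'}
```

A request with a non-empty result set never reaches a clinical queue; it goes back to the submitter with a specific list of gaps instead of a vague pend.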
Policy logic is fragmented. Medical policies live in PDFs. Clinical criteria are configured in the UM platform. Benefit rules sit in the claims system. No single source of truth exists for “should this request be approved?” When three systems disagree, the default answer is always manual review.
Nobody owns the auto approval rate. UM owns clinical appropriateness. IT owns the platform. Compliance owns regulatory exposure. No single executive is accountable for the percentage of requests that bypass manual review, so nobody optimizes for it.
The health plans getting auto approvals right are not buying better automation. They are fixing the preconditions that make automation possible.
That means restructuring intake so requests arrive complete and policy aligned before any decision logic runs. It means centralizing medical policy so criteria are applied consistently, not interpreted differently by different reviewers on different shifts. And it means surfacing clinical evidence in context: extracting the three data points that matter for a routine imaging request, rather than dumping a 200-page chart into a reviewer’s queue.
This is the approach behind Mizzeto’s Smart Auth. Instead of asking “can this request be auto approved?” Smart Auth asks “is this request decision ready?” That distinction matters. A decision ready request has complete data, matches a known policy pathway, and meets explicit criteria thresholds, so the system approves it with confidence, not with crossed fingers.
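As a sketch of that distinction (this is illustrative pseudologic, not Smart Auth's actual interface; the pathway and criteria names are invented):

```python
# Hypothetical policy pathways: procedure code -> evidence criteria that must hold.
POLICY_PATHWAYS = {
    "70551": {"conservative_therapy_documented": True},  # e.g., MRI brain
}

def is_decision_ready(request: dict) -> bool:
    """Decision ready = complete data + known pathway + criteria met.
    Anything else falls through to manual clinical review."""
    pathway = POLICY_PATHWAYS.get(request.get("procedure_code"))
    complete = all(request.get(f) for f in ("member_id", "clinical_summary"))
    if not (pathway and complete):
        return False
    evidence = request.get("evidence", {})
    return all(evidence.get(k) == v for k, v in pathway.items())

ready = {"member_id": "M123", "procedure_code": "70551",
         "clinical_summary": "6 weeks conservative therapy, persistent symptoms",
         "evidence": {"conservative_therapy_documented": True}}
print(is_decision_ready(ready))                       # True
print(is_decision_ready({**ready, "evidence": {}}))   # False
```

The point of the structure: a request that fails any gate is not denied, it is simply not auto approved, which keeps the automation conservative in the right way.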
CMS is pushing the entire industry in this direction. Under the Interoperability and Prior Authorization Final Rule (CMS-0057-F), impacted payers must respond to standard prior authorization requests within seven calendar days and expedited requests within 72 hours, effective January 1, 2026.⁵ Plans must also publicly report prior authorization metrics, including approval rates, denial rates, and average turnaround times, beginning March 31, 2026.⁵
Those timelines are not aspirational. They are regulatory. And plans that still route 65% of prior authorization through manual or partially manual channels³ will not meet them by hiring more reviewers. The only realistic path is systematic auto approval of decision ready requests, which means fixing intake, policy logic, and data quality first. We laid out the full compliance timeline in The Countdown to 72/7.
The HL7 Da Vinci FHIR Implementation Guides (CRD, DTR, and PAS) provide the technical scaffolding for this shift, enabling real time coverage requirement discovery and electronic prior authorization submission.⁶ Plans that invest in FHIR based infrastructure now are not just meeting a compliance deadline. They are building the foundation for auto approvals that actually scale.
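For readers new to the PAS side of that stack: the prior authorization request itself rides on a FHIR Claim resource. The skeleton below is illustrative only; a real submission must conform to the Da Vinci PAS Implementation Guide profiles and carries far more detail than shown here, and every identifier and code is a placeholder:

```python
import json

# Skeletal FHIR R4 Claim used as a prior authorization request, in the spirit
# of the Da Vinci PAS approach. NOT a complete or conformant PAS resource.
claim = {
    "resourceType": "Claim",
    "status": "active",
    "use": "preauthorization",  # distinguishes prior auth from a claim for payment
    "patient": {"reference": "Patient/example"},
    "insurer": {"reference": "Organization/example-payer"},
    "item": [{
        "sequence": 1,
        "productOrService": {
            "coding": [{"system": "http://www.ama-assn.org/go/cpt", "code": "70551"}]
        },
    }],
}
print(json.dumps(claim)[:60])
```

The practical implication: once the request is a structured resource rather than a fax image, completeness checks and policy matching become programmable steps instead of manual triage.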
If your auto approval rate is stagnant, the problem is not your approval logic. It is everything upstream: incomplete intake, fragmented policy, and data your system cannot trust.
Start by measuring what percentage of prior authorization requests arrive decision ready: complete, structured, and policy aligned before they hit clinical review. That number is your ceiling for sustainable auto approvals. Everything you do to raise it directly reduces manual review volume, improves turnaround performance, and positions your plan for CMS-0057-F compliance.
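Computing that baseline is trivial once each request carries a readiness flag, however you derive it. A sketch (the flag name and sample batch are hypothetical):

```python
def decision_ready_rate(requests: list[dict]) -> float:
    """Share of requests flagged decision ready; 0.0 for an empty batch."""
    if not requests:
        return 0.0
    ready = sum(1 for r in requests if r.get("decision_ready"))
    return ready / len(requests)

batch = [{"decision_ready": True}, {"decision_ready": False},
         {"decision_ready": True}, {"decision_ready": True}]
print(decision_ready_rate(batch))  # 0.75
```

Tracked weekly and segmented by submission channel, this one number shows exactly where upstream fixes are paying off.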
At Mizzeto, Smart Auth was designed to close exactly this gap. Not by approving more requests automatically, but by ensuring the right requests never need manual review in the first place.
If auto approvals exist in your organization but still feel fragile, that fragility is the signal. Let’s talk about it.
¹ American Medical Association, “2024 AMA Prior Authorization Physician Survey,” 2024. ama-assn.org
² KFF, “Medicare Advantage Insurers Made Nearly 53 Million Prior Authorization Determinations in 2024,” January 2025. kff.org
³ CAQH, “Priority Topics: Prior Authorization,” 2024. caqh.org
⁴ 4sight Health, “The Costly Lever of Prior Authorization” (citing 2023 CAQH Index data), February 2024. 4sighthealth.com
⁵ Centers for Medicare & Medicaid Services, “CMS Interoperability and Prior Authorization Final Rule (CMS 0057 F),” January 17, 2024. cms.gov
⁶ CAQH CORE, “Navigating the CMS 0057 Final Rule: A Guide for Implementing Prior Authorization Requirements,” 2024. caqh.org
Jan 30, 2024 • 6 min read

For years, prior authorization improvement efforts have centered on one metric: speed. Faster turnaround times. Shorter queues. Quicker determinations. When backlogs grow, the instinctive response is to push harder, add staff, tighten SLAs, accelerate intake, automate submission.
And yet, despite sustained investment, many health plans find themselves in a familiar place. Requests move faster into the system, but decisions do not come out any cleaner. Appeals rise. Clinical teams feel busier, not better supported. Regulatory scrutiny intensifies.
The problem isn’t that health plans aren’t moving quickly enough. It’s that they’re optimizing for the wrong outcome.
The critical question facing payer executives today is not how to make prior authorization faster. It is how to make authorization outcomes decision-ready.
In theory, prior authorization is a linear process. A request arrives. Medical necessity is assessed. A decision is rendered and communicated. In practice, speed at the front of the process often exposes fragility downstream. Requests arrive sooner, but incomplete. Data flows faster, but inconsistently. Clinical documentation is attached, but not usable.
What feels like progress—shorter intake cycles, higher submission volumes—often masks a deeper inefficiency: decisions still require the same amount of searching, clarifying, and rework. Sometimes more.
When speed becomes the primary goal, organizations optimize how fast work enters the system, not how effectively it can be resolved.
In our experience working with payer organizations, most delays in prior authorization do not occur because reviewers are slow. They occur because reviewers are forced to reconstruct meaning from poorly prepared inputs.
Requests arrive with missing or mis-keyed information. Clinical notes are uploaded as hundreds of unstructured pages. Policy criteria are technically met, but not clearly demonstrated. Nurses and physicians spend their time hunting for evidence rather than applying judgment.
A routine imaging authorization, for example, may arrive with a 200-page chart attached—office notes, lab results, historical encounters spanning years. The information needed to approve the request may exist somewhere in the record, but reviewers must sift through dozens of irrelevant pages to find it. The delay isn’t clinical complexity. It’s the effort required to locate and validate the right signal inside too much noise. That friction compounds downstream, creating a clinical review bottleneck where highly trained staff spend their time searching for context instead of making decisions.
Accelerating intake without addressing these issues simply increases the volume of work that is not ready to be decided. Each incomplete request introduces pauses, clarifications, and handoffs. What should have been a single pass through the system becomes multiple touches across multiple teams.
From the outside, this looks like insufficient capacity. From the inside, it is capacity being quietly consumed by avoidable friction. Across the U.S. health care system, administrative burden tied to prior authorization contributes to multi-billion dollar annual costs, reflecting how inefficient processes absorb payer and provider resources long before clinical review begins.1
This is where many modernization efforts stall. Automation accelerates submission and routing, but PA automation alone does not change the quality of what enters the system. Providers submit more requests because it is easier to do so. Intake teams process them faster. Clinical reviewers inherit the same defects at higher velocity. Speed amplifies whatever already exists—and when work is not decision-ready, it multiplies rework rather than reducing it.
Organizations that consistently control prior authorization performance focus less on turnaround time and more on decision quality at entry.
They ensure requests arrive complete and structured, reducing manual re-keying and downstream correction. Reflecting this shift, a significant share of health plans have already implemented electronic prior authorization systems, signaling the industry's growing emphasis on reducing manual friction.2 They normalize data so policy criteria can be evaluated consistently. They surface the specific clinical evidence needed for a decision, rather than forcing reviewers to search entire records. And they treat policy logic as a shared, governed asset—not something interpreted differently by each reviewer.
As a result, their systems move work through once. Appeals decrease because rationales are timely and clear. Clinical teams spend their time applying judgment instead of assembling context. Speed improves, but as a consequence of better design, not as the primary objective.
The shift is subtle but decisive. The goal is no longer faster authorization. It is fewer touches per authorization.
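That reframing suggests a different number to track. A sketch of a "touches per authorization" metric from a hypothetical event log (the log shape and event names are ours):

```python
from collections import Counter

# Hypothetical handling-event log: (request_id, event_type) per touch.
events = [
    ("A1", "intake"), ("A1", "clinical_review"),
    ("A2", "intake"), ("A2", "clarification"),
    ("A2", "clinical_review"), ("A2", "clinical_review"),
]

touches = Counter(req_id for req_id, _ in events)   # touches per request
avg_touches = sum(touches.values()) / len(touches)  # average across batch
print(avg_touches)  # 3.0
```

A request decided in one pass scores 2 here (intake plus a single review); every clarification loop or re-review pushes the average up, making avoidable friction visible long before it shows up in turnaround time.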
Prior authorization sits at the intersection of cost control, access, and regulatory oversight. As CMS and other regulators increasingly expect decisions to be explainable, not just defensible—as reinforced by the CMS prior authorization rule—the cost of prioritizing speed over clarity rises. Under the CMS Interoperability and Prior Authorization final rule (CMS-0057-F), impacted payers must provide prior authorization decisions within 72 hours for urgent requests and seven calendar days for standard requests, and include specific reasons for denials to improve transparency and explainability of decisions.3 The rule shifts expectations away from throughput alone and toward consistency, traceability, and timely rationale.
Systems that rely on heroics and overtime may hit SLAs in the short term, but they accumulate risk. Systems designed for decision readiness scale more predictably and withstand scrutiny more effectively.
What executives experience as utilization management pressure is rarely a failure of effort. It is a signal that the system has been optimized for motion, not resolution.
At Mizzeto, we work with payer organizations to address this exact gap—connecting intake, clinical review, and policy logic so prior authorization decisions can be made efficiently, consistently, and explainably. This is the design philosophy behind Smart Auth, our prior authorization platform—ensuring requests arrive decision-ready, with structured intake, reduced rework, and clinical evidence surfaced in context rather than buried in charts.
Because in modern utilization management, sustained performance isn’t about pushing teams harder. It’s about removing the friction that never needed to be there in the first place.
If your team is hitting SLAs but appeals keep climbing, let’s talk.
Jan 30, 2024 • 6 min read
