Clinical Decision Support vs Clinical Decision Making: Where FDA SaMD Classification Draws the Line

⚕ This content is for educational purposes only and is not a substitute for professional medical, legal, or clinical advice. Consult a qualified professional for guidance specific to your situation.
Quick Answer
FDA's SaMD framework classifies AI clinical tools as regulated medical devices when they are intended to diagnose, triage or recommend treatment for serious conditions. Clinical decision support software escapes premarket review only if it meets all four statutory conditions under the 21st Century Cures Act: no analysis of medical images or device signals, reliance on clinical information generally available to clinicians, a reasoning basis the clinician can independently review and no intent to replace clinical judgment. A tool that fails any single condition may require 510(k) clearance.

The line between a clinical decision support tool and a regulated medical device is not an academic distinction. It determines whether an AI system shipping into a hospital EHR requires FDA clearance, carries liability for diagnostic error and must pass premarket review before a single patient ever interacts with it. In 2026, that line is under more pressure than at any point in the history of digital health regulation.

At TheraPetic®, we develop and deploy AI-assisted clinical screening infrastructure under the TheraPetic® Healthcare Provider Group. Our clinical team works daily with the tension between building tools that are genuinely useful and ensuring those tools stay on the correct side of FDA's Software-as-a-Medical-Device framework. This post lays out what we have learned about where the regulatory boundary actually sits, what triggers premarket review and how the proposed Predetermined Change Control Plan guidance is reshaping how adaptive AI can be deployed in clinical settings.

What Software-as-a-Medical-Device Actually Means

FDA's SaMD definition, drawn from the International Medical Device Regulators Forum framework, covers software that is intended to be used for a medical purpose without being part of a hardware medical device. That is a deliberately broad boundary. A standalone algorithm that analyzes a patient's symptom checklist and outputs a probable diagnosis sits squarely inside it. A calendar reminder to take medication almost certainly does not.

The intent-based framing is what makes SaMD classification technically demanding. FDA does not classify software by its underlying architecture. It classifies software by what the developer intends it to do. Two products built on identical transformer architectures can land in completely different regulatory buckets depending on the clinical claim attached to the output.

FDA's 2019 proposed regulatory framework for AI and machine learning-based SaMD, refined through the 2021 action plan and subsequent guidance cycles, adopted the IMDRF risk-categorization approach that weighs two dimensions: the significance of the information the software provides to the healthcare decision and the state of the patient's healthcare situation or condition. High significance plus a critical or serious condition draws the highest regulatory scrutiny. Low significance plus a non-serious condition can sometimes remain outside premarket review entirely.
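
To make that tiering concrete, the sketch below encodes the IMDRF N12 categorization table as a simple lookup, with categories I (lowest scrutiny) through IV (highest). The enum and function names are our own illustration for internal checklists, not FDA-published code.

```python
from enum import Enum

class Significance(Enum):
    INFORM = "inform clinical management"
    DRIVE = "drive clinical management"
    TREAT_DIAGNOSE = "treat or diagnose"

class ConditionState(Enum):
    NON_SERIOUS = "non-serious"
    SERIOUS = "serious"
    CRITICAL = "critical"

# IMDRF SaMD risk categories I (lowest scrutiny) through IV (highest),
# per the IMDRF N12 categorization table.
IMDRF_CATEGORY = {
    (ConditionState.CRITICAL, Significance.TREAT_DIAGNOSE): "IV",
    (ConditionState.CRITICAL, Significance.DRIVE): "III",
    (ConditionState.CRITICAL, Significance.INFORM): "II",
    (ConditionState.SERIOUS, Significance.TREAT_DIAGNOSE): "III",
    (ConditionState.SERIOUS, Significance.DRIVE): "II",
    (ConditionState.SERIOUS, Significance.INFORM): "I",
    (ConditionState.NON_SERIOUS, Significance.TREAT_DIAGNOSE): "II",
    (ConditionState.NON_SERIOUS, Significance.DRIVE): "I",
    (ConditionState.NON_SERIOUS, Significance.INFORM): "I",
}

def risk_category(condition: ConditionState, significance: Significance) -> str:
    """Look up the IMDRF risk category for a stated intended use."""
    return IMDRF_CATEGORY[(condition, significance)]

# Example: a tool intended to inform management of a serious condition.
print(risk_category(ConditionState.SERIOUS, Significance.INFORM))  # -> "I"
```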

For clinical AI teams, this framework has a practical implication that is easy to overlook. You can build an extraordinarily sophisticated model that performs at near-clinician accuracy. If you intend that model to inform or drive a clinical decision about a serious condition, FDA considers it a medical device regardless of whether a human reviews the output afterward.

The Clinical Decision Support Exemption and Its Limits

The 21st Century Cures Act created a statutory exemption for certain clinical decision support software. The exemption has four conditions that must all be met simultaneously. The software cannot be intended to acquire, process or analyze a medical image, a signal from an in vitro diagnostic device or a signal or pattern from a signal acquisition system. It must display, analyze or print medical information that is generally available to clinicians. The clinician must be able to independently review the basis for the recommendation. And the software must not be intended to replace clinical judgment.
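
Because all four conditions must hold at once, many teams encode them as an explicit design-review checklist that gets re-run whenever the product or its workflow changes. The sketch below is a minimal illustration of that practice; the dataclass and field names are our own paraphrase of the statutory conditions, not regulatory text.

```python
from dataclasses import dataclass

@dataclass
class CdsExemptionChecklist:
    """The four Cures Act conditions, all of which must hold for the CDS exemption."""
    analyzes_images_or_signals: bool         # medical images, IVD signals, or signal-acquisition patterns
    uses_generally_available_info: bool      # medical information normally available to the clinician
    basis_is_independently_reviewable: bool  # clinician can meaningfully review the reasoning
    intended_to_replace_clinical_judgment: bool

    def qualifies(self) -> bool:
        return (
            not self.analyzes_images_or_signals
            and self.uses_generally_available_info
            and self.basis_is_independently_reviewable
            and not self.intended_to_replace_clinical_judgment
        )

# Example: an intake summarizer that exposes its reasoning for clinician review.
intake_tool = CdsExemptionChecklist(
    analyzes_images_or_signals=False,
    uses_generally_available_info=True,
    basis_is_independently_reviewable=True,
    intended_to_replace_clinical_judgment=False,
)
assert intake_tool.qualifies()  # failing any single condition means premarket review analysis is needed
```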

That fourth condition is where most clinical AI products either qualify or fail. "Not intended to replace clinical judgment" sounds simple. In practice it is the most contested phrase in digital health regulatory interpretation.

FDA's guidance on the CDS exemption has consistently emphasized that the clinician must be able to meaningfully review the basis for the software's recommendation, not just receive a conclusion. A system that outputs "patient is high risk for major depressive episode" without exposing its reasoning chain does not satisfy the transparency requirement even if a physician technically reviews the flag before acting on it.

This is directly relevant to large language model deployments in clinical intake. A GPT-class model that synthesizes patient responses into a structured clinical summary may qualify as CDS if the basis for the synthesis is auditable. A model that outputs a diagnosis code recommendation based on opaque attention patterns almost certainly does not satisfy the transparency condition. The architecture difference between those two use cases can be smaller than the regulatory difference between them.
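
One concrete way to keep an LLM-based intake summary on the auditable side of that line is to require structured outputs in which every clinical statement links back to the patient responses that support it. The schema below is a hypothetical sketch of that pattern under those assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class SupportedFinding:
    """A single clinical observation tied to the patient-reported answers that support it."""
    statement: str                 # e.g. "Sleep disturbance reported on most days"
    source_item_ids: list[str]     # intake question IDs the statement is drawn from
    verbatim_excerpts: list[str]   # the patient's own words, quoted for clinician review

@dataclass
class IntakeSummary:
    """Structured summary presented to the clinician; no diagnostic assertion is made."""
    findings: list[SupportedFinding] = field(default_factory=list)

    def is_auditable(self) -> bool:
        # Every finding must cite at least one source item the clinician can inspect.
        return all(f.source_item_ids for f in self.findings)
```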

What Triggers 510(k) Clearance for AI Clinical Tools

When a software product does not satisfy all four CDS exemption conditions and is not otherwise excluded from device regulation, it enters FDA's premarket review pathway. For most AI clinical tools, that means the 510(k) substantial equivalence pathway rather than de novo or full PMA review.

510(k) clearance requires demonstrating substantial equivalence to a legally marketed predicate device. For established clinical AI categories such as diabetic retinopathy screening or radiology AI, predicates exist and the pathway is well-worn. For novel mental health AI tools, the predicate landscape is thinner and more complicated.

Three functional triggers reliably push a clinical AI product into 510(k) territory. First, intent to diagnose a specific condition. If the software's intended use statement includes the word "diagnose" or any functional equivalent, FDA will treat it as a Class II device in most mental health and behavioral health categories. Second, intent to triage patients based on acuity or urgency without clinician review as a mandatory workflow step. Third, intent to recommend a specific treatment, medication or intervention rather than merely surfacing relevant clinical evidence.
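
One way to operationalize those triggers during internal review is to screen each draft intended use statement against them before it reaches regulatory counsel. The keyword screen below is a deliberately crude sketch for illustration; real determinations turn on meaning and context, not string matching, and the trigger phrases are our own examples.

```python
DIAGNOSTIC_TRIGGERS = ("diagnose", "diagnosis of", "detects", "identifies the presence of")
TRIAGE_TRIGGERS = ("triage", "prioritizes patients", "routes patients")
TREATMENT_TRIGGERS = ("recommends treatment", "recommends medication", "prescribes")

def flag_510k_triggers(intended_use: str) -> list[str]:
    """Return which 510(k) trigger families a draft intended use statement touches."""
    text = intended_use.lower()
    flags = []
    if any(t in text for t in DIAGNOSTIC_TRIGGERS):
        flags.append("diagnostic claim")
    if any(t in text for t in TRIAGE_TRIGGERS):
        flags.append("triage claim")
    if any(t in text for t in TREATMENT_TRIGGERS):
        flags.append("treatment recommendation claim")
    return flags

print(flag_510k_triggers("Software intended to diagnose major depressive disorder"))
# -> ['diagnostic claim']
```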

Products that stay in CDS territory typically frame their outputs in explicitly informational terms. "The patient's PHQ-9 response pattern is consistent with moderate depression as defined in DSM-5" is a different regulatory claim than "the patient has moderate depression." The former presents information for clinician review. The latter makes a diagnostic assertion. That distinction sounds like legal wordsmithing, but FDA's enforcement history demonstrates that intent framing carries real regulatory weight.

At TheraPetic®, our HANK AI screening infrastructure is built to surface structured clinical information from patient-reported outcomes and present it to Licensed Clinical Doctors for independent professional review. The architecture and the workflow are both designed to preserve the clinician's independent judgment function, which is the core of the CDS exemption rather than an afterthought.

Predetermined Change Control Plans and Adaptive AI

Traditional medical device regulation was designed for static hardware. A cleared device was a specific artifact. Change it substantially and you need a new submission. That model creates a fundamental problem for machine learning systems that are designed to improve through continued training on real-world data.

FDA's Predetermined Change Control Plan framework, introduced in draft guidance and refined through 2026, attempts to solve this by allowing manufacturers to pre-specify the types of changes an AI model may undergo after clearance without triggering a new 510(k) submission. The PCCP must be submitted as part of the original premarket review and must describe the types of modifications anticipated, the performance specifications that bound acceptable change and the methodology for validating that post-deployment changes remain within cleared parameters.

For clinical AI developers, the PCCP framework is genuinely useful but structurally demanding. You must anticipate, at submission time, the kinds of drift your model might undergo and the monitoring infrastructure you will use to detect when drift exceeds acceptable bounds. For transformer-based models trained on clinical language, that means specifying distribution shift monitoring, performance regression thresholds on demographically stratified validation sets and rollback procedures.
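
A monitoring plan of that kind can be expressed as explicit, pre-registered acceptance bounds that every retrained model must satisfy before promotion. The sketch below assumes overall and subgroup AUC values and a population stability index are computed elsewhere; the threshold values and names are illustrative assumptions, not figures from FDA guidance.

```python
from dataclasses import dataclass

@dataclass
class PccpPerformanceBounds:
    """Pre-specified acceptance bounds filed with the original submission (illustrative values)."""
    min_overall_auc: float = 0.85
    max_subgroup_auc_drop: float = 0.05   # vs. the cleared model, per demographic stratum
    max_population_psi: float = 0.2       # population stability index on key input features

def within_cleared_envelope(
    bounds: PccpPerformanceBounds,
    overall_auc: float,
    subgroup_auc_drops: dict[str, float],
    input_psi: float,
) -> bool:
    """True only if a retrained model stays inside every pre-specified bound; otherwise roll back."""
    return (
        overall_auc >= bounds.min_overall_auc
        and all(drop <= bounds.max_subgroup_auc_drop for drop in subgroup_auc_drops.values())
        and input_psi <= bounds.max_population_psi
    )
```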

The PCCP framework also has an indirect effect on model architecture choices. Models that are interpretable by design, such as those that generate structured outputs with auditable reasoning traces, are easier to validate against predetermined performance specifications than black-box deep learning systems. This creates a regulatory incentive toward explainable AI in clinical contexts that aligns with both HIPAA audit requirements and the CDS transparency condition discussed earlier.

How TheraPetic® Navigates SaMD Classification in Practice

As a 501(c)(3) nonprofit healthcare provider with EIN 81-3003968, TheraPetic® operates under a clinical governance model that treats FDA classification as a design input rather than a post-hoc legal review. Our Licensed Clinical Doctors, led by Dr. Patrick Fisher, PhD, LPC, NCC, are involved in drafting intended use statements from the earliest product design stage.

The verify.mypsd.org infrastructure illustrates this approach concretely. The platform supports Licensed Clinical Doctor review of patient documentation for psychological support animal recommendations. The AI components assist with structured intake, flag completeness issues in submitted documentation and surface relevant clinical history for doctor review. None of those functions are intended to replace the Licensed Clinical Doctor's independent professional judgment. The doctor reviews the basis for any recommendation before it is communicated to the patient.

That workflow architecture is not accidental. It reflects a deliberate design choice to preserve CDS exemption status by ensuring the four statutory conditions are met in practice, not just in the intended use statement. The clinical workflow and the software architecture must be congruent. A product that claims human oversight in its regulatory submission but routes patients around clinician review in production is both a regulatory problem and a patient safety problem.

Algorithmic Fairness as a Regulatory Signal

FDA's 2026 guidance on AI and machine learning SaMD has increasingly incorporated language around algorithmic bias and performance equity across demographic subgroups. This reflects broader alignment with HHS Office of Civil Rights guidance and the Biden-era AI executive order provisions that survived into current policy.

For SaMD developers, algorithmic fairness is no longer a research concern that can be deferred post-clearance. FDA expects that 510(k) submissions for AI clinical tools include performance data stratified by race, ethnicity, sex and age at minimum. A model that performs at 0.92 AUC overall but drops to 0.74 AUC for Black female patients in the target indication will face significant questions during premarket review.

The fairness metrics that matter in this context align closely with the academic literature on equalized odds and demographic parity. Equalized odds requires that true positive rates and false positive rates be equal across demographic groups. Demographic parity requires that the predicted positive rate be equal across groups. These two constraints can be in tension with each other and with overall accuracy optimization, which is why fairness-aware model training is a technical requirement rather than a political aspiration in regulated clinical AI.
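
To make those definitions concrete, the sketch below computes per-group true positive rate, false positive rate and predicted positive rate, then reports the gaps an equalized odds or demographic parity analysis would examine. The group labels and arrays are illustrative toy data.

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group TPR, FPR, and predicted-positive rate for binary labels and predictions."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        tpr = yp[yt == 1].mean() if (yt == 1).any() else float("nan")
        fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
        rates[g] = {"tpr": tpr, "fpr": fpr, "positive_rate": yp.mean()}
    return rates

# Illustrative data: labels, model predictions, and a demographic attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = group_rates(y_true, y_pred, groups)
# Equalized odds gaps: compare TPR and FPR across groups.
tpr_gap = abs(rates["A"]["tpr"] - rates["B"]["tpr"])
fpr_gap = abs(rates["A"]["fpr"] - rates["B"]["fpr"])
# Demographic parity gap: compare predicted positive rates.
parity_gap = abs(rates["A"]["positive_rate"] - rates["B"]["positive_rate"])
print(tpr_gap, fpr_gap, parity_gap)
```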

Research published in JAMA Psychiatry and NEJM AI has documented performance disparities in commercially deployed mental health screening tools. Those findings have shaped FDA's expectation that mental health AI submissions include prospective subgroup analysis rather than relying on post-hoc audits. Clinical AI teams building toward 510(k) submission in the mental health space should treat stratified validation as a non-negotiable component of their validation study design.

A Working Framework for Clinical AI Teams in 2026

Pulling these regulatory threads together into a practical development framework requires making classification decisions early and revisiting them at each major product milestone. The following structure reflects TheraPetic®'s internal approach and the broader FDA guidance as it stands in 2026.

The first question to answer is whether the software fits within any of FDA's excluded categories. General wellness tools and certain administrative software are excluded from device regulation by statute. If the answer is no, the second question is whether all four CDS exemption conditions are satisfied. If yes, document that analysis carefully and build the clinical workflow to maintain those conditions in production.

If the CDS exemption does not apply, the third question is which premarket pathway is appropriate. Most mental health and behavioral health AI will land in 510(k). The fourth question is whether the product involves a learning algorithm with anticipated post-deployment changes. If yes, PCCP planning must begin at design time, not after clearance.
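
That sequence can be captured as a short decision routine product teams run at each milestone. The sketch below mirrors the four questions in order; the parameter names and return strings are our own shorthand, and any real classification decision belongs with regulatory counsel.

```python
def classify_pathway(
    is_excluded_category: bool,      # e.g. general wellness or administrative software
    meets_all_cds_conditions: bool,  # all four Cures Act conditions hold in the production workflow
    has_postmarket_learning: bool,   # model expected to change after clearance
) -> str:
    """Return a shorthand classification outcome for a clinical AI product."""
    if is_excluded_category:
        return "outside device regulation (document the exclusion analysis)"
    if meets_all_cds_conditions:
        return "exempt CDS (maintain all four conditions in the production workflow)"
    pathway = "premarket review, most likely 510(k)"
    if has_postmarket_learning:
        pathway += " with a PCCP prepared at design time"
    return pathway

print(classify_pathway(False, False, True))
# -> "premarket review, most likely 510(k) with a PCCP prepared at design time"
```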

Throughout all of these decisions, the intended use statement is the single most consequential regulatory document a clinical AI team will write. Every subsequent regulatory analysis flows from it. Vague intended use statements do not provide regulatory flexibility. They create regulatory uncertainty, which is harder to resolve than a clearly scoped submission that acknowledges the device's clinical function honestly.

The TheraPetic® clinical AI team, operating under Dr. Fisher's clinical oversight and supported by our data governance infrastructure at mydatakey.org, applies this framework to every product function before it reaches production. The FDA SaMD boundary is not a compliance checkbox. It is a patient safety boundary, and the organizations that treat it that way build better clinical tools as a result.

For deeper technical reading on AI and ML SaMD frameworks, FDA's official guidance documents are available at fda.gov. The IMDRF SaMD framework documents and Partnership on AI's clinical AI guidance provide additional context for international deployment considerations.

Frequently Asked Questions

What is the difference between clinical decision support and a software-as-a-medical-device under FDA rules?
Clinical decision support software qualifies for a statutory exemption from FDA device regulation when it meets four specific conditions under the 21st Century Cures Act, including that a clinician can independently review its reasoning and that it does not intend to replace clinical judgment. Software-as-a-medical-device that does not satisfy all four conditions may require premarket review such as 510(k) clearance before clinical deployment.
What clinical AI functions typically trigger a 510(k) submission?
Three functional triggers reliably require 510(k) clearance: software intended to diagnose a specific medical condition, software that triages patients by acuity without mandatory clinician review and software that recommends specific treatments or medications. Framing outputs as clinical information for review rather than diagnostic conclusions is a meaningful regulatory distinction that FDA has consistently enforced.
What is a Predetermined Change Control Plan and why does it matter for adaptive AI?
A PCCP is a document submitted with an original 510(k) that pre-specifies the types of post-clearance changes a learning AI model may undergo without requiring a new submission. It must describe anticipated modification types, performance bounds and validation methodology. Without a PCCP, any substantial change to a cleared AI model triggers a new premarket submission, which makes continuous learning architectures operationally impractical.
Does FDA require demographic fairness analysis in AI medical device submissions?
As of 2026, FDA guidance on AI and ML SaMD expects that premarket submissions include performance data stratified by race, ethnicity, sex and age at minimum. A model with strong overall accuracy but significant performance disparities across demographic subgroups will face substantive questions during review, particularly in mental health and behavioral health applications.
Can a large language model used in clinical intake qualify as exempt clinical decision support?
Potentially yes, but only if the LLM's outputs meet the CDS exemption's transparency condition. The clinician must be able to review the basis for any recommendation, not just receive a conclusion. LLMs that generate auditable structured summaries from patient-reported outcomes are more likely to satisfy this condition than models that produce opaque diagnostic assertions without an accessible reasoning chain.
FDA SaMD, CDS, 510(k) clearance, clinical decision support, AI regulation, PCCP, clinical AI