Navigating the Legal and Ethical Landscape of AI-Powered Ambient Scribe Tools
April 22, 2026
By Eric M. Fish

Partner at Hooper, Lundy & Bookman

Ambient scribing tools are rapidly becoming part of everyday clinical practice. The broad integration of tools such as Abridge, Ambience, and DAX Copilot throughout large health systems and independent practices is making documentation powered by artificial intelligence (AI) the first touch point with AI for many family physicians throughout North Carolina. Analysis of the tools’ impact on early adopters and studies of pilot projects support the conclusion that AI-enabled tools are a key component of improving the quality of care. Ambient scribes specifically address the burden of documentation, an enormous challenge that contributes to clinician burnout, job dissatisfaction, and limited physician-patient interaction.

But alongside these benefits, ambient scribes introduce new legal and regulatory obligations of which physicians must be aware. Use of these tools, as well as all other similar AI-enabled tools, requires diligent review of vendor agreements and heightened awareness to navigate issues of ethical and professional responsibility. Although such concerns arise any time a profession adopts novel and transformative technologies, a rigorous and disciplined approach to selecting the appropriate tool, thoughtful review of vendor agreements for nuances created by AI tools, and steps towards responsible use can mitigate risk and facilitate adoption in a manner that captures the benefits promised.

The Need for Enhanced Compliance and Review of Vendor Contracts in AI Ambient Scribe Tools

Ambient scribe technologies are just one part of the suite of AI-enabled tools that vendors are offering to physicians. Nearly every existing service that touches upon administration or delivery of health care has been updated with AI enhancements. Substantial financial investments and innovation-focused state and federal policies have accelerated these updates, bringing a wave of products designed for clinical deployment to market at remarkable speed. Each of these tools is dependent on data, and protected health information will be shared with vendors and subcontractors of those vendors. For physicians and health systems, this fact demands a more deliberate and legally informed approach to tool selection and vendor contracting.

At the outset of any relationship with a vendor of AI-enabled tools, physicians and compliance officers should require vendors to disclose the specific clinical use cases for which the underlying AI model was trained and validated, ensuring that those uses align precisely with the intended deployment. Improper training and validation are consequential and often overlooked risks in contracting. Using a tool trained and refined on one patient population may increase the impact of bias in training data, lead to negative patient outcomes, and expose the user to greater risk of lawsuits or regulatory discipline.

Proper diligence involves requesting information that will limit exposure to legal risks, and physician involvement in the selection process is a key feature of a responsible compliance strategy. Prior to contracting for any tool, physicians and affiliated staff should demand transparency regarding the underlying algorithm, the training data, and whether the model has undergone both internal and external testing. Answers to these questions should be provided in plain-language model cards that are included in contract deliverables. Template model cards, such as the Applied Model Card developed by the Coalition for Health AI (CHAI)[1] and the Model Facts label released by the Health AI Partnership and Duke Institute for Health Innovation (DIHI)[2], establish a common understanding between the user and vendor of the intended use, performance, bias mitigations, deployment plans, and regulatory status of the AI tool.

Because tools such as ambient scribes are actively recording, analyzing, and transmitting clinical conversations to third-party servers, physicians should actively review existing agreements with vendors of these tools to ensure that access to and use of protected health information is sufficiently covered in HIPAA-required business associate agreements. Moreover, standard contract terms may obscure how data obtained from a practice, or from patients as part of a clinical encounter, may be used. Definitions of de-identified data, training, or product improvements may carry a common meaning but have real consequences in a health care setting.

Vendor contracts are written to protect the vendor, leaving the user responsible for bearing the liability of failures created by the models that support the AI tools. Placing the compliance burdens of opaque black-box models on customers creates great risk for those customers unless they carefully negotiate the limitation of liability, indemnification, and disclaimer provisions of the contract. One way to further address this asymmetry is to require vendors to agree to certain performance standards, monitor tools for any degradation in model performance, and establish procedures for regular performance and accuracy reporting. Ensuring that such terms are part of contracts reinforces accountability, helps ensure compliance standards and clinical expectations are met, and ultimately helps limit risk.

The Impact of Ambient Scribes and Other AI Tools on Ethical and Professional Duties

In 2024, the North Carolina Medical Board issued guidance making clear that using AI to support clinical documentation is not a passive act.[3] Physicians are expected to understand the tools they use, verify the accuracy of AI-generated patient notes, and be transparent with patients about how AI is involved in their practice. Nationally, data show that failures in medical record-keeping remain one of the most common reasons physicians face disciplinary action. Introducing AI into the documentation process raises the standard of attention required.

Although ambient technology can produce more thorough records by capturing all relevant clinical details, it also introduces challenges of its own, just as human notetaking does. Because ambient scribes are built on probabilistic large language models (LLMs), these tools may be prone to hallucinations, entering findings or statements into the clinical record that never occurred. Furthermore, the ability to accurately capture a patient’s conversation may be impeded by the audio environment, potentially introducing bias or critical omissions into the permanent medical record.

As with other technologies, familiarity may breed complacency, and physicians may be tempted to approve AI-generated notes with minimal review. It is essential that users of such tools take the time to review the output and confirm that the clinical note reflects their own clinical memory, is in a narrative format they can reliably use in future encounters, and contains enough information to maintain a sense of connection to the patient as a person, rather than merely a source of AI input.

Recent lawsuits brought against a health system in California and a dental provider in Illinois have put issues with ambient scribing squarely before the courts. Both cases center on whether proper consent of all parties, including family members and others in the exam room, was obtained prior to recording the encounter using ambient scribe tools. Although North Carolina is a one-party consent state for recordings and has yet to pass legislation like that enacted in Texas and California requiring disclosure when AI is used to generate clinical records, physicians should continue to monitor changes in general or health care-specific consent laws that could pose additional liability risk. It is a best practice to disclose the use of such technology to the patient at the initial visit, both in writing and verbally, document that this education and consent occurred, and document reaffirmation of consent at each subsequent visit or when the patient is joined by additional caregivers.

Although the integration of AI tools raises concerns for physicians, these concerns are not cause for alarm that should limit adoption or use. Understanding the limitations of these tools and the risks they may create, as well as diligently adhering to existing regulatory frameworks and professional codes of ethics, is a hallmark of responsible use of any innovative tool.

Key Takeaways for Physician Users of AI Tools

Whether you are part of a large health system or an independent practice, the following actionable steps can help address the legal and regulatory obligations created by AI-enhanced technology and support adoption practices that are as trustworthy as they are transformative.

1. Develop a risk-based compliance framework for reviewing AI tools involved in patient care and administrative services, one that also documents the evaluation process, the review of safety and validation data, and the rationale for adoption.

2. Review existing Business Associate Agreements and other vendor contracts to assess data use permissions, identify liability exposure, and address gaps created by AI-integrated services. Ensure compliance with HIPAA by understanding the scope of permitted uses, including applicable exceptions for care coordination and product improvement, to avoid unauthorized disclosures or misuse of protected health information.

3. Create standard contract language and procurement policies to address issues such as data use and security, liability allocation, performance guarantees, and post-deployment monitoring to ensure that the products are used appropriately.

4. Thoroughly review the output of ambient scribing tools for accuracy before approving the record and ensure that the note contains documentation confirming that informed consent to use the technology was provided by all parties present in the clinical setting.

5. Develop patient educational resources that inform patients and caregivers of how AI is used in care and of the steps your practice has taken to responsibly integrate these tools into practice.

Eric Fish is a partner at Hooper, Lundy & Bookman where he advises health care clients on regulatory compliance, digital health, and the integration of AI and emerging technologies into care delivery. He previously served as Chief Legal Officer of the Federation of State Medical Boards (FSMB) where he led initiatives including the drafting of the Interstate Medical Licensing Compact and FSMB’s 2024 guidance on use of AI in clinical settings.
[1] https://mc.chai.org/v0.1/documentation.pdf
[2] https://healthaipartnership.org/model-facts-v2-label-for-hti-1-compliance
[3] https://www.ncmedboard.org/resources-information/professional-resources/laws-rules-position-statements/position-statements/licensee-use-of-innovative-or-new-treatment