HHS Releases 2025 Strategic Plan for AI in Healthcare: What Providers Need to Know

Monday, February 24th, 2025

Artificial intelligence (“AI”) is quickly reshaping healthcare, with applications ranging from clinical decision-making to administrative automation. To address this rapid transformation, the United States Department of Health and Human Services (“HHS”) has released its 2025 Strategic Plan (“Strategic Plan”), offering a first look at how the agency envisions AI being integrated into health, human services, and public health settings.

For healthcare providers, the Strategic Plan offers early insight into HHS’s perspective on AI’s opportunities and risks, as well as the agency’s potential regulatory direction. While the plan addresses AI’s role in research, medical product development, and public health, some of its most relevant insights concern AI’s growing influence in healthcare delivery, including financing, patient care, documentation, billing, and administrative services.

AI adoption continues to outpace official regulatory guidance, and the risks and challenges HHS highlights now will likely inform future regulations and enforcement efforts, making the Strategic Plan a key document for providers navigating AI implementation.

Key AI Opportunities in Healthcare

The Strategic Plan highlights several promising AI applications in healthcare, particularly in patient care, operations, and clinical decision-making:

  1. Enhancing the Patient Experience

    • AI-powered chatbots and virtual assistants can improve patient communication, providing appointment reminders, personalized care guidance, and answers to common questions.

    • Symptom checkers and diagnostic tools can help patients understand their symptoms and make informed decisions about seeking medical care.

  2. Clinical Decision Support

    • AI can assist clinicians by analyzing patient histories, imaging, and medical data to improve diagnostic accuracy.

    • AI can analyze medical history across various providers and treatment settings, ensuring that physicians have access to relevant information to inform their care.

  3. Predictive Analytics for Preventive Care

    • AI can analyze large datasets to identify at-risk populations and guide early intervention strategies, such as targeted screening programs for conditions like diabetes or mental health disorders.

    • Predictive algorithms can assess a patient’s risk of developing chronic diseases, allowing for early intervention and prevention.

  4. Operational Efficiency & Administrative Automation

    • AI can streamline scheduling, billing, insurance claims processing, and other administrative tasks, reducing workload and increasing efficiency.

  5. Telemedicine and Remote Monitoring

    • AI-powered devices can monitor patient vitals like blood pressure and glucose levels, alerting healthcare providers to any abnormalities and reducing unnecessary in-person visits.

    • AI can enhance telemedicine consultations and enable healthcare providers to monitor patients in remote care settings.

Risks & Challenges of AI in Healthcare

Despite its potential, AI integration presents significant risks that providers must navigate carefully. In the Strategic Plan, HHS has identified key concerns, including patient safety, data security, AI bias, and regulatory uncertainty.

  1. Data Privacy & Security Concerns

    • Storing and processing sensitive health data in AI-driven systems increases the risk of data breaches and unauthorized access.

    • AI use must comply with HIPAA; adopting AI tools does not exempt providers from their legal obligations to protect patient data.

  2. Bias in AI Algorithms

    • AI systems are only as good as the data they are trained on. If training data is biased or unrepresentative, AI can produce inaccurate or discriminatory outputs.

    • Providers must vet AI vendors carefully to ensure their systems minimize bias and provide fair, equitable patient care.

  3. Transparency & Explainability Issues

    • Many AI models function as “black boxes,” meaning their decision-making processes are not fully transparent. This may lead to distrust among healthcare providers.

    • Lack of explainability raises liability concerns. If an AI system makes an incorrect recommendation, who is responsible? The provider? The AI vendor? The system developer?

  4. Regulatory & Legal Issues

    • AI’s rapid adoption in healthcare is outpacing regulatory guidance, creating uncertainty around enforcement risks.

    • At this time, HHS and the Centers for Medicare & Medicaid Services (“CMS”) have yet to clarify how AI will be treated in areas such as diagnostics, clinical decision-making (e.g., review of radiology studies), documentation, and billing and coding.

    • Errors made by AI systems will likely be attributed to providers, who remain responsible for government reimbursement compliance.

  5. Workforce Training

    • Healthcare organizations must train staff and providers on AI tools to ensure appropriate use and compliance with internal policies.

  6. Patient Consent & Autonomy

    • Should patients be informed when AI is used in their care?

    • Will patient consent be required before AI-driven decisions impact their treatment plan?

    • These unresolved questions underscore the need for clear AI disclosure and consent policies.

Key Takeaways for Healthcare Providers

The HHS Strategic Plan is not legally binding, but it should be viewed as a roadmap to the opportunities and risks HHS will focus on as it continues to grapple with AI’s impact on healthcare. Healthcare providers should proactively prepare by:

  • Developing Clear AI Policies: Establish clear guidelines ensuring AI supports, rather than replaces, clinical judgment.

  • Investing in Education & Training: Educate employees on AI tools and best practices.

  • Strengthening Data Security: Ensure AI use complies with HIPAA and other applicable privacy laws, and implement robust cybersecurity measures to protect patient data from breaches.

  • Engaging Stakeholders: Collaborate with patients, contractors, vendors, and applicable regulatory bodies to align AI use with ethical and legal standards.

  • Staying Updated on Regulations: Stay abreast of developments in AI regulation and ensure that AI tools comply with all CMS and HHS guidance.

Conclusion: Proceed with Caution

The Strategic Plan offers valuable insight into the key risks HHS has identified in AI use and the precautions it expects healthcare stakeholders to take.

These recommendations may also preview how regulators will evaluate whether providers have acted in good faith when addressing AI-related issues, including patient harm, billing errors, misuse of patient information, and related data breaches.

Before integrating AI into patient care or operations, healthcare providers should consult legal counsel to navigate the complex web of federal and state regulations and ensure compliance with best practices.

Proactively addressing these challenges can help mitigate legal risks, safeguard patient data, and position organizations for AI’s growing role in the healthcare industry.

To ensure compliance and mitigate AI-related risks, contact Matt Wilmot ([email protected]) or Edgar Bueno ([email protected]) at HunterMaclean.