How Your PBM’s AI Use Can Impact Your Health Plan: Key Rewards, Risks, and Practical Tips for Employers
Pharmacy benefit managers (PBMs) are increasingly using artificial intelligence tools in drug benefit administration, a transaction-intensive function involving the calculation of fees, discounts, and rebates to manage costs for plan sponsors and participants. While AI can enable PBMs to operate more efficiently and optimize pricing to help lower drug costs, this new technology raises several data privacy, regulatory, and contractual risks for employers that sponsor health plans. This Insight explains how plan sponsors can spot potential red flags and avoid potential liability while benefiting from the advantages that AI can bring to healthcare costs and administration.
How Your PBM’s AI Use Can Impact Your Health Plan
Relationships between PBMs and health plan sponsors are governed by administrative services agreements (ASAs), which authorize PBMs to manage prescription drug programs. Increasingly, ASAs contain provisions that expressly authorize PBMs to leverage AI in the delivery of services.
However, these provisions are often exceptionally broad, giving PBMs significant discretion to use AI as they see fit. Such a structure raises a host of data privacy, liability, and operational concerns, including:
- Who ultimately bears the risk if the use of AI results in actionable damages?
- What adjudicative functions will the AI perform?
- Where will the PBM deploy AI in managing plans as opposed to relying on traditional administration methods?
- When is the use of AI permissible as an operational enhancement, and when is it barred as a threat to patient privacy?
- How can plan sponsors avoid assuming liability for a PBM’s AI strategy?
- How can a plan sponsor protect itself and its participants from AI-related incidents?
Key Tips for Plan Sponsors Negotiating or Renewing an ASA
Fortunately, many of these concerns can be addressed in your ASA, which should be carefully drafted to avoid giving the PBM power to unilaterally determine where, when, and how it will use AI.
When negotiating the terms of an ASA, you should consider pushing for provisions that:
1. Prohibit the use of AI in adjudicative decision making. To the extent that AI plays a role in making benefit determinations, it presents a serious risk to plan sponsors. If AI improperly denies coverage, a participant may have a cause of action against the sponsor. For that reason, if a PBM proposes using AI, the ASA should explicitly state that AI cannot be used to make or materially influence adverse benefit determinations, coverage denials, utilization review outcomes, or pricing decisions.
2. Restrict the use of PHI for training AI tools and models. To maximize efficacy, AI relies on the steady introduction of massive data sets to become "smarter" and optimize performance. Even so, the protected health information (PHI) of unknowing or unwilling plan participants should not be used to train AI systems. Limits on how PBMs can use PHI to "train" their AI models should be delineated in the ASA. Plan sponsors should also confirm that the business associate agreement (BAA) governing the PBM relationship expressly addresses AI model training as a permissible use or, preferably, prohibits it. If PHI is to be used, it should be handled according to commonly used data protection principles (for example, deidentification, anonymization, or pseudonymization).
3. Shift the risk of using AI to the PBM. Ultimately, if a PBM elects to use AI, it is doing so because it believes the technology will benefit its business. In exchange, the PBM should bear the risk if something goes wrong. Including language in the ASA establishing that "any use of AI or generative AI constitutes a representation of regulatory compliance and fitness for purpose by the vendor" can help protect plan sponsors. Additionally, sponsors would be wise to include indemnification provisions that place any costs associated with AI errors on the PBM.
Three Proactive Steps You Can Take Now
The use of AI in healthcare is a rapidly evolving area whose consequences have yet to be fully seen. It is undeniable, however, that the regulatory landscape is playing catch-up with the businesses deploying the technology, and it is inevitable that policymakers will begin to act more authoritatively where AI affects health.
In the interim, plan sponsors can take the following steps to stay at the forefront of AI-related developments:
- Maintain open channels of communication. Understand whether, and how, your plan's PBM applies AI, and what its strategic vision and tactical plans for AI entail over the short, medium, and long term.
- Establish robust AI-oriented policies and procedures. It is possible that your business is already exploring the use of AI for its own purposes, or that other vendors you transact with use AI to deliver services. Having a set of guardrails around the use of AI is advisable.
- Engage outside experts. One of the defining features of data protection in the U.S. is the lack of uniform rules and regulations. Federal, state, and local governments have all legislated in this domain, and a similar pattern should be expected for AI. Data protection attorneys monitor developments in this area and can work with your organization to design, develop, and deploy compliant best practices.
Conclusion
Fisher Phillips will continue to monitor developments and provide updates as warranted, so make sure you are subscribed to Fisher Phillips’ Insight System to get the most up-to-date information directly to your inbox. If you have questions, please contact your Fisher Phillips attorney, the authors of this Insight, or any member of our Data Protection and Cybersecurity, Employee Benefits and Tax, or Healthcare teams.