
Virginia Lawmaker Reveals Plan to Regulate AI in Healthcare and Beyond in 2026: An Interview With Delegate Michelle Maldonado

Insights

11.20.25

With Democrats taking full control of Virginia’s state government in the recent off-year election, lawmakers who have been stymied by the outgoing Republican governor’s vetoes are now eyeing the opportunity to set up new legal guardrails around the use of artificial intelligence in 2026. In an interview with Fisher Phillips, State Delegate Michelle Maldonado (D), the chief sponsor of a 2025 bill that would have required developers and deployers of AI technology to take steps to protect users from discrimination, said she will reintroduce a narrower version of that legislation focusing on the healthcare sector when the Commonwealth convenes again next year. Here’s what businesses in Virginia – and across the country – should be watching for.

AI Moratorium Threats

The shifting environment presented by recent federal threats to block or discourage states from enacting artificial intelligence legislation required a change in approach from state lawmakers, Maldonado said in a November 14 conversation with FP.

“It really required so many of us to think about how do we move things forward that allows for innovation and protection of people, data, and privacy in this really heated, new, fast-moving environment,” she said. As a result, she said, she is narrowing her legislative focus going forward, starting with efforts to ensure disclosure and transparency.

Fisher Phillips’ David Walton, who co-chairs the firm’s AI, Data, and Analytics Practice Group, noted that while Washington is mulling an AI moratorium, bills like those being offered by Maldonado should be shielded from such a ban.

“It’s not binary, or all or nothing. The administration believes we are in the midst of an AI arms race. It does not want to do anything that hampers our ability to win that race – namely against China,” said Walton. But the type of legislation Maldonado is proposing is “focused on how AI tools are being used for areas like employment. Those kinds of laws should not affect the national security aspects of AI.”

The state delegate is no stranger to AI and the tech field. A former technology attorney, Maldonado worked at AOL early in her career before shifting to various business leadership development roles and then eventually founding her own company.

Legislation she will introduce next session will ensure “that people understand what's being collected, how it's being used, and whether they have agency or decision-making authority around saying that I don't want that part of my data shared or incorporated into a training data model for large language models, for example.”

Tech Bills by The Dozen – Including AI Guardrails for Healthcare

Maldonado, who sits on committees that oversee both technology and labor issues, says she plans to introduce around a dozen technology-related bills for the 2026 session, along with three bills addressing chatbots and legislation on the use of personal data to train artificial intelligence systems.


That includes reintroducing a scaled-back version of her AI anti-discrimination bill, HB 2094, which passed the legislature but was vetoed by outgoing Republican Gov. Glenn Youngkin. Maldonado says the new bill’s application will be limited to the healthcare sector.

“Just like we're focusing on transparency and exposure, we’re also going to be taking 2094 and sort of aligning it with a use-case or sector,” Maldonado explained. “So we're looking at components of that, but in the healthcare sector. There will be discussions of assessment impacts, but it will be narrowed to health.”

While the full details of the new version aren’t yet available, here’s a breakdown of the original HB 2094:

  • Businesses using AI in high-risk decisions would need to exercise reasonable care to prevent algorithmic bias.
  • Provisions would only apply when AI is the principal basis for such decisions.
  • Legislation targeted AI systems specifically designed to make consequential decisions autonomously, excluding lower-risk AI tools.
  • Deployers of AI would need to conduct risk assessments, disclose AI usage to individuals affected, and allow appeals of AI-driven adverse decisions.

What About a Private Right of Action?

Notably, the original version of Maldonado’s AI transparency bill didn’t include a private right of action that would allow aggrieved applicants or employees to file lawsuits against employers and tech developers. Rather, the Virginia Attorney General would have been tasked with enforcing violations, barring individuals from filing their own lawsuits in court.

Despite Democrats’ newly secured control of the Commonwealth, Maldonado seemed cautious about including a private right of action in future AI legislation. “I think we have to be careful,” she said, noting how Democrats lost their last trifecta in the Commonwealth after moving quickly to pass lots of legislation.

But Maldonado did say a private right of action could be appropriate in some areas, such as the use of chatbot companions by minors.

She also expects more conversation surrounding whether the enforcement mechanism should rest with the attorney general, or whether aspects of the law may need enough teeth “to hold people accountable when it comes to the health, safety and well-being of our individuals.”

“Most states are not creating private rights of action for these laws … yet,” added FP’s Walton. But employers should keep an eye out for states moving to enact these protections, which Walton predicts will happen in the future.

Audits and Assessments

Other states that have artificial intelligence legislation on the books have been slow to issue guidance to help employers navigate and ensure compliance with the law.

Maldonado says she’s worked with chambers of commerce and non-profits to address components of past legislation that seemed too onerous, like certain audit provisions.

“The reality is, we don't have infrastructure across industry, where people are skilled and knowledgeable to do audits of artificial intelligence systems. So we've got to build that,” she said. That also includes fostering partnerships and collaborations between the government and private sector to ensure there is an “independent capacity” to conduct system audits.

“In the tech space, it's build fast, break fast, and iterate. But we have to build fast, innovate, be competitive, while we're also protecting,” Maldonado said, adding: “We have to shift some mindsets that just because an industry can do something doesn't mean that it should.”


FP’s Walton expects that if the federal government doesn’t issue a broad state moratorium on AI regulations, states will zero in on issues like impact assessments and bias audits. With movement in both New York City and California on such requirements, Walton says employers “need to be ready for this change.”

“Employers must work with their counsel in developing these assessments and audits,” Walton said. “In the near future, these assessments will be important for both regulatory compliance and in general employment litigation to show juries that you acted reasonably when deploying AI tools.”

Conclusion

We will continue to monitor developments in this space, so make sure you subscribe to Fisher Phillips’ Insight System to receive the most up-to-date information on AI and the workplace. Should you have any questions on the implications of these developments and how they may impact your operations, contact your Fisher Phillips attorney, the author of this Insight, any attorney in our Washington, D.C. office, or any attorney in our AI, Data, and Analytics Practice Group.

Related People

  1. Rebecca Rainey
    Legal Content Reporter
    202.908.1142
  2. David J. Walton, AIGP, CIPP/US
    Partner
    610.230.6105

Service Focus

  • AI, Data, and Analytics
  • Government Relations

Industry Focus

  • Tech

Related Offices

  • Washington, D.C.


©2025 Fisher & Phillips LLP. All Rights Reserved. Attorney Advertising.
