Bipartisan Bill Aims to Protect Whistleblowers at Large Artificial Intelligence Companies

A bipartisan effort to add guardrails around artificial intelligence by protecting whistleblowers cleared the Colorado House Judiciary Committee last week. 

The effort is being led by Democratic Rep. Manny Rutinel and Republican Rep. Matt Soper. Rutinel said that the bill would better protect public safety, business security and worker rights in the AI industry. 

“This bill is about ensuring that when AI developers see real safety threats emerging from their work, they can report those concerns without fear of retaliation,” Rutinel said. 

Rutinel gave examples of AI being used to bypass security systems, run scams, launch autonomous cyberattacks against public infrastructure and even enable the creation of bioweapons. 

“If a whistleblower at a large AI company realizes that their model could be misused for bioterrorism, they should be able to report it to their company or state or federal authorities without retaliation,” Rutinel said. 

The bill as designed would only apply to large AI developers, according to Rutinel. “This bill only applies to AI developers who have spent at least $100 million in the preceding year on model training and have trained at least one model costing $20 million or more,” Rutinel explained. “This ensures that only the largest AI developers, those building the most powerful and potentially risky AI systems, are covered. Small businesses and startups are not affected at all, and neither would industries not training foundation models, like banks.” 

The sponsors noted that due to these definitions, the bill would likely only apply to around 10 companies. 

Soper said that the measure is for whistleblower protection. “When workers in part of the larger AI space, software development space, see risks to our national security, to our systems, they’re encouraged to report that to their managers, and if their managers don’t listen, then to government actors,” Soper said.  

He noted that the people most likely to see the threats potentially posed by the technology are those who are working on it. “If those workers identify serious public safety risks, they should be able to report them internally to relevant state and federal authorities without retaliation,” Soper said. 

John Bliss, a professor at the University of Denver Sturm College of Law, appearing before the committee in a personal capacity, said he supported the bill because he believed it would protect Coloradans from potentially serious harm and shield conscientious workers from retaliation. 

“So what happens when an employee at an AI company sees something that looks dangerous in the model, in the testing of the model, and their employer is proceeding without caution and maybe even deceiving the public about these test results,” Bliss said. “Unlike a more established sector, like the chemical industry, the AI industry is really lacking in regulations. So there might not be a basis for any whistleblower protections, because the actions that they would be speaking out about wouldn’t be illegal and wouldn’t be covered under current regulations.” 

“This means that the employees who might be the only people in the world who really are privy to the threat before a model is released wouldn’t be incentivized to raise an alarm,” Bliss added. “For these reasons, I believe HB-1212 is an effective response.” 

Jennifer Gibson, a co-founder and director of Psst.org, a nonpartisan, nonprofit organization that offers assistance to tech workers who want to speak out, testified in support of the bill. She said that some of her clients have warned regulators, testified before Congress and helped the media cover dangerous practices in tech companies. 

“But the decision to speak out, and the repercussions that come with it, are difficult,” Gibson said. “My clients face retaliation, as well as the loss of income and future prospects.” 

She said she is hearing more concerns from individuals working in the AI space, but that there are no laws that make what the companies are doing illegal. She said that HB25-1212 is important to signal to workers in the state that their lawmakers will have their back and protect them if they raise safety concerns about the technology. 

“In our experience at Psst, these protections and reassurances are critical,” Gibson said. “They can tip the scales as to whether an insider decides to speak out or not. This is especially true in a field like AI, where the technology is moving faster than regulation.” 

Opposition to the bill came from across the state’s business community, with representatives from TechNet, the Colorado Bankers Association, the Colorado Competitive Council and the Colorado Chamber of Commerce all testifying in opposition. 

But businesses weren’t the only ones opposed to the bill. 

Michael McReynolds, the legislative liaison for the Governor’s Office of Information Technology, testified in opposition to the bill on behalf of both his office and Gov. Jared Polis’s administration. 

“While the intent behind the bill may be well meaning, its potential consequences for Colorado’s technology industry are far too significant to ignore,” McReynolds said. “House Bill 25-1212 would place undue burdens and restrictions on companies developing artificial intelligence. This bill’s broad definition of a foundational model could encompass a vast range of AI applications, many of which pose no conceivable threat to public safety.” 

McReynolds also expressed concern over the bill’s private right of action and its framework that could allow for large damage awards. “Perhaps most alarmingly to us, House Bill 25-1212 could inadvertently incentivize companies to relocate their AI development operations outside of Colorado.” 

Meghan Dollar, senior vice president of government affairs at the Colorado Chamber of Commerce, said the tech industry accounts for 10% of the state’s employment and 20% of the state’s GDP. 

“This data is really important to think about when you’re looking at proposals such as this that could adversely impact Colorado and look at other states to invest,” Dollar said. 

She also flagged the private right of action as an area of concern and said the chamber would prefer resolving those concerns through an administrative process. 

The bill’s sponsors brought three amendments to the committee, but only asked for a vote on two. 

The first amendment, which passed, clarified that the bill doesn’t permit the disclosure of federally protected trade secrets and narrowed the bill’s scope by tightening the definition of risk.

That amendment drew an objection from Rep. Yara Zokaie, who believed the bill defined risk too narrowly. The amendment narrowly survived, passing 5-4 with intraparty splits. 

The second amendment adjusted the bill to ensure the law would still capture the largest AI companies, even if model development became cheaper or the building of foundational models was completed. 

Despite her objection to the amendment, Zokaie was still one of the seven votes that allowed the bill to pass out of the House Judiciary Committee. The bill has been scheduled for its second reading since March 7, but at the time of publication hadn’t been brought up for a vote.
