Colorado Has the Nation’s First AI Regulation Law, but It’s Not Set in Stone Yet

On May 17, Gov. Jared Polis signed into law a first-in-the-nation bill regulating artificial intelligence. Senate Majority Leader Robert Rodriguez, the bill’s prime sponsor, told the Senate Judiciary Committee, “At the base of this bill and policy is just accountability, assessments and disclosures that people need to know when they’re interacting with artificial intelligence.” 

The bill’s provisions reflect that, but it didn’t take long for the politicians involved to acknowledge that changes were needed before the law takes effect on Feb. 1, 2026. In a letter released by Polis, Rodriguez and Attorney General Phil Weiser, the trio said they would focus on improving the law in a few specific areas after listening to the concerns of businesses in the state. 

The areas highlighted in the letter include refining the definitions in the bill, focusing regulation on developers of high-risk AI systems rather than small companies that deploy the technology, shifting from a proactive disclosure regime to the traditional enforcement regime, clarifying the consumer right to appeal and other measures “the state can take to become the most welcoming environment for technological innovation while preventing discrimination, especially for early-stage companies.” 

A Novel Requirement for American Businesses 

Regulation of AI is emerging as the technology becomes more consumer-facing than ever before. 

“We’ve had machine learning and other similar technologies that have been helping businesses for a long time,” said Sophie Baum, senior associate and part of the global regulatory team at Hogan Lovells. “But I think the rise of generative AI, and the fact that you have day-to-day folks, normal consumers, interacting with this technology, really spurred the legislators to wake up to this new tech.” 

Senate Bill 24-205, the new Colorado law, has a particular focus: “high-risk” AI systems. Baum noted that the vast majority of the law’s obligations apply to these systems. A system is high-risk if, when it’s deployed or used, it makes, or is a substantial factor in making, a consequential decision. 

“So what does that mean, a consequential decision?” said Baum. “That is a decision that has a material or legal or similarly significant impact on the provision or denial of certain essential services.” 

Baum added that those essential services can include education opportunities, employment opportunities, finance or lending, government services, health care, housing and insurance. She gave an example of how a high-risk system could make a consequential decision. 

“It could be possible, for example, that you would have a binary system where you would be deciding whether or not somebody would be admitted to a college or some other type of educational institution based just on a high-risk AI system that looks at a bunch of inputs and then makes an admit or deny decision without any type of human intervention,” said Baum. 

The bill has similarities to the EU AI Act, which passed in March, but its binary classification of systems as high-risk or not is one of the differences between the two laws. 

“In the Colorado Act, it’s high-risk versus not high-risk,” said Baum. “In the EU AI Act, there’s a couple more levels of systems where you have different obligations that attach based on how you would classify each of those levels.”

As it stands, the bill creates responsibilities for both deployers and developers of high-risk systems. The bill requires them to take reasonable care to protect consumers from algorithmic discrimination in high-risk systems. The bill also creates a disclosure requirement for when consumers interact with artificial intelligence systems, and that requirement applies to “a person doing business in the state, including a deployer or other developer,” according to the bill’s summary. 

Preparing for a Regulated Future 

For companies using AI systems, Baum told Law Week that there are a number of things they can do to prepare for compliance with the new law. In general, businesses should know what AI tools they’re using and, if necessary, let their consumers know when they’re interacting with AI. 

If a company is using a high-risk system, there are further steps it can take. One of those is completing an AI risk assessment. 

“A lot of companies have privacy assessments or standardized processes for that, but they might not have done or completed AI risk assessments yet,” said Baum. “So understanding how to scope those appropriately, what they are, that the right stakeholders are involved, in some cases, conducting them under privilege with counsel can be very beneficial.” 

Baum added that these risk assessments are iterative; under this law, it’s not a matter of doing just one. 

In addition to the risk assessments, Baum noted that transparency is a big focus of the new law, and it’s important for deployers to understand when they use high-risk systems to make consequential decisions and what those decisions mean in the context of their own companies. This includes cases where a decision may be adverse to a consumer, and companies should plan both for individualized disclosures around adverse actions and for general website disclosures. 

“There’s obligations for transparency and certain disclosures on websites about the use of AI technologies,” said Baum. “So those are some of the things that we’re starting to counsel our clients on as they prepare for the law. But it’s a new framework and there’s a lot to do.”
