Summary
The use of artificial intelligence (“AI”) is becoming increasingly commonplace in the healthcare industry as practitioners recognize its benefits. Meanwhile, state and federal legislatures are grappling with the potential for AI systems to discriminate and produce biased results in healthcare and other contexts. In response to this evolving issue, Colorado’s legislature enacted “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (hereinafter Consumer Protection for Artificial Intelligence), which takes effect on February 1, 2026, and will play a significant role in the development of nationwide regulations on AI in the healthcare industry.
By: David Ospina, JHBL Staffer
Introduction
Artificial intelligence is showing great promise in healthcare as medical providers and adjacent businesses adopt and implement AI systems in unique ways.[1] From assisting with diagnosing conditions to completing billing work, it is no wonder that the healthcare industry eagerly adopted AI and continues to automate processes.[2] AI also helps prevent errors caused by biases exhibited by its human counterparts.[3] AI is not perfect, however, and can inherit and perpetuate biases and stereotypes from its input data.[4] State legislatures across the U.S. have created, or are in the process of developing, laws to address the dangers posed by AI, including the perpetuation of bias and systemic discriminatory practices.[5] The Department of Health and Human Services (hereinafter DHHS) also stepped in, publishing a final rule on May 6, 2024, that prohibits algorithmic discrimination in healthcare as AI continues to proliferate.[6]
Background
AI in healthcare is becoming increasingly commonplace as companies offer new AI technologies for a variety of purposes.[7] Hospitals are using AI to assist in diagnosing conditions, transcribing documents, discovering drugs, and completing administrative work.[8] Major companies such as Google, Microsoft, and General Electric have pledged to develop and advance AI in the healthcare industry, and many doctors express optimism about the benefits of AI.[9] One of AI’s major contributions to medical care is its role in improving diagnostic accuracy; some studies demonstrate that AI increases the accuracy of cancer diagnoses by twenty percent.[10]
As AI adoption in healthcare grows, medical groups are becoming concerned because AI systems have been found to perpetuate discriminatory biases.[11] A study published in an American Cancer Society journal found that AI is less likely to diagnose, and subsequently treat, cancer in certain racial groups.[12] In another study, completed in 2019, researchers found that AI was more likely to suggest cesarean delivery to Black and Hispanic expectant mothers than to White expectant mothers with similar characteristics.[13] A more troubling study found that AI could determine a patient’s race without receiving any surrogate information, which implies that controlling AI bias may prove more difficult than assumed.[14] These biases may affect healthcare costs, as AI cost predictions may be racially biased and produce unequal access to care for Black patients.[15] Biases in AI also affect health insurance: investigations have found that major health insurers use flawed AI to deny coverage with little oversight.[16] Patients who have been denied coverage have sued insurance companies over their use of AI to determine coverage.[17]
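To make the cost-bias mechanism described in note 15 concrete, the following is a minimal, hypothetical simulation, not the audited algorithm from the study itself; the function names (simulate_patient, spending_ratio) and every numeric parameter are illustrative assumptions. It sketches how a risk score trained to predict healthcare spending, used as a stand-in for healthcare need, flags fewer patients from a group that incurs less billed cost at the same level of illness.

```python
# Hypothetical sketch of the cost-as-proxy bias described in note 15.
# All distributions and numbers are illustrative, not from the study.
import random

random.seed(0)

def simulate_patient(spending_ratio: float) -> tuple[float, float]:
    """Return (true_illness, observed_cost) for one simulated patient.

    spending_ratio < 1.0 models a group that, for reasons such as unequal
    access to care, incurs less billed cost per unit of actual illness.
    """
    illness = random.gauss(5.0, 2.0)                        # true health need
    cost = illness * spending_ratio + random.gauss(0, 0.5)  # what the payer sees
    return illness, cost

# Group A's spending tracks illness one-to-one; Group B spends only ~80% as
# much at the same illness level (the access gap the study documents).
group_a = [simulate_patient(1.0) for _ in range(10_000)]
group_b = [simulate_patient(0.8) for _ in range(10_000)]

# The "risk score" is simply predicted cost: patients above the 80th
# percentile of cost are flagged for extra care-management resources.
all_costs = sorted(cost for _, cost in group_a + group_b)
threshold = all_costs[int(0.8 * len(all_costs))]

for name, group in (("A", group_a), ("B", group_b)):
    flagged = [illness for illness, cost in group if cost >= threshold]
    share = len(flagged) / len(group)
    avg_need = sum(flagged) / len(flagged)
    print(f"Group {name}: {share:5.1%} flagged, avg. illness of flagged {avg_need:.2f}")

# Far fewer Group B patients are flagged, and those who are flagged must be
# sicker than Group A's to cross the same cost threshold -- the shape of the
# disparity the Science study measured in a real deployed algorithm.
```

Under these assumptions, no one ever tells the model about group membership; the disparity arises entirely from using cost as the training label, which is why simply removing race from the inputs does not cure this kind of bias.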
Given these concerning trends, state legislatures are beginning to craft bills to regulate artificial intelligence.[18] Many states now require businesses to inform consumers of any use of AI in their decision making and to offer consumers the opportunity to opt out of providing information or having AI decide their issues.[19] DHHS has also released its strategic plan for the adoption and use of AI in healthcare settings.[20]
Colorado’s Consumer Protection for Artificial Intelligence bill reaches further than other state bills by providing an affirmative defense against liability for the discriminatory effects of AI if the AI developer uses reasonable care to discover and correct any algorithmic discrimination.[21] The reasonable care element is a duty of care imposed on developers and deployers of AI to protect consumers from algorithmic discrimination.[22] The bill also requires any entity that deploys AI that interacts with consumers to disclose its use to consumers.[23] The law imposes these duties on any developer or deployer who uses AI to make “consequential decisions,” which are decisions that relate to, among other areas, healthcare services.[24] The bill does not, however, create a private right of action and instead tasks the Colorado Attorney General with enforcement.[25]
Analysis
On February 1, 2026, the Consumer Protection for Artificial Intelligence will go into effect, and healthcare providers will immediately feel its effects.[26] Hospitals in Colorado will have to inform consumers every time they interact with AI.[27] A major issue could arise if an AI system that a healthcare provider uses frequently is found to algorithmically discriminate against specific patient populations, because ending use of that system would burden the deployer.[28] If the algorithmic discrimination stems from a lack of data or a systemic issue, the healthcare deployer may choose to continue using the AI, depending on whether it faces litigation.[29] Because the Consumer Protection for Artificial Intelligence bill does not allow for a private right of action, Colorado Attorney General Philip Weiser will make the final determination as to whether his office wishes to litigate claims of algorithmic discrimination and force the deployer to prove that it is taking reasonable measures to mitigate the bias.[30]
Many state legislatures have tried to pass bills meant to fight algorithmic discrimination, but those bills failed for lack of support.[31] State officials should consider looking to Colorado for guidance and adding an affirmative defense to their legislation.[32] While the affirmative defense creates a potential avenue for deployers to use discriminatory AI, healthcare providers may still find such AI useful and may wish to use it while exercising reasonable care to limit harm.[33] The affirmative defense clause made the Consumer Protection for Artificial Intelligence bill more palatable to AI developers and their legislative allies, who might otherwise fight attempts to regulate the growing AI business.[34]
While AI offers many benefits to healthcare, its negative aspects, such as algorithmic discrimination, must be controlled through legislation like Colorado’s recent bill, without sacrificing the benefits AI brings to healthcare.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHBL or Suffolk University Law School.
David Ospina is a second-year law student at Suffolk University Law School pursuing a career in intellectual property law. He graduated from Montana State University with a bachelor’s degree in political science in 2015.
_______________________________________________________________________________________________________________________
[1] See Robert Dahdah, Microsoft makes the promise of AI in healthcare real through new collaborations with healthcare organizations and partners, Microsoft (Mar. 11, 2024), https://blogs.microsoft.com/blog/2024/03/11/microsoft-makes-the-promise-of-ai-in-healthcare-real-through-new-collaborations-with-healthcare-organizations-and-partners/ [https://perma.cc/PW7B-BMG3] (detailing growth of AI in healthcare). According to the report, 79% of healthcare organizations in the US have adopted and used AI in regular business. Id. The report also found major business growth for healthcare organizations that invested in AI. Id.
[2] See Revolutionizing Healthcare: How is AI being Used in the Healthcare Industry?, L.A. Pac. Univ. (Dec. 21, 2023), https://www.lapu.edu/ai-health-care-industry/ [https://perma.cc/Z5CF-Q3S2] (discussing benefits of AI). Examples of the AI used include machine learning, natural language processing, and rules-based expert systems. Id.
[3] See Dahdah, supra note 1; see also Daniel Restrepo & Raja-Elie Abdulnour, How is AI Used in Health Care?, Mass Gen. Brigham (Dec. 15, 2023), https://www.massgeneralbrigham.org/en/about/newsroom/articles/how-is-ai-used-in-health-care [https://perma.cc/M5J3-VZC5] (showing bias in diagnosis and how AI can solve it).
[4] See Restrepo & Abdulnour, supra note 3. When asked what careers a girl would want when she grows up, an AI chatbot is likely to give stereotypical answers like “nurse” or “stay-at-home mother.” Id. When asked the same question about a boy, the AI chatbot is likely to suggest career paths like “CEO” and “doctor.” Id.
[5] See Colo. Rev. Stat. §§ 6-1-1702 to 6-1-1707 (demonstrating successful AI regulation); see also Cal. Health & Safety Code § 1367.01(k) (showing regulation of AI used in healthcare).
[6] See Nondiscrimination in Health Programs and Activities, 89 Fed. Reg. 37,522, 37,642 (May 6, 2024). DHHS developed § 92.210 with input from healthcare professionals. Id. The final rule provides that AI programs cannot discriminate against protected classes, that healthcare entities have a duty to make reasonable efforts to identify the risk of discrimination, and that healthcare entities must make reasonable efforts to mitigate the risk of algorithmic discrimination. Id.
[7] See Haider J. Warraich et al., FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine, 333 JAMA 241, 241 (Oct. 15, 2024). The FDA has authorized close to 1,000 AI-enabled medical devices, ranging from simple machine learning to complex transformer models and generative AI tools. Id.
[8] See Revolutionizing Healthcare: How is AI being Used in the Healthcare Industry?, supra note 2.
[9] See Susanna Vogel, Tech companies pitch AI platforms for healthcare, outline plans for responsible rollout, Healthcare Dive (Oct. 28, 2024), https://www.healthcaredive.com/news/ai-healthcare-google-microsoft-products-hlth-2024/731167/ [https://perma.cc/MGB2-TW2S]. As AI continues to proliferate in healthcare, many medical professionals are cautiously optimistic about its use in medicine. Id.; see also Restrepo, supra note 3 (highlighting doctors’ responses to AI).
[10] See Shuroug A. Alowais et al., Revolutionizing healthcare: the role of artificial intelligence in clinical practice, BMC Med. Educ. 3-11 (Sept. 22, 2023) (exemplifying benefits of AI in healthcare). The authors also noted that AI holds great potential in disease prediction and prevention as well as in personalized treatment. Id.
[11] See U.N., Hum. Rts. Council, Contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Rep. of the Hum. Rts. Council, 5-25, U.N. Doc. A/HRC/56/68 (June 3, 2024). There is a major concern that AI could also contribute to bias by reinforcing discrimination in other fields, such as law enforcement and education. Id.
[12] See Likhitha Kolla & Ravi B. Parikh, Uses and limitations of artificial intelligence for oncology, 130 Cancer 2101, 2107 (2024). The study found that AI algorithms were more likely to struggle to detect the skin lesions used to diagnose skin cancer in Black patients because the AI had been trained on data sets containing little data from Black patients. Id.
[13] See Darshali A. Vyas et al., Challenging the Use of Race in the Vaginal Birth after Cesarean Section Calculator, 29 Women’s Health Issues 201, 201-04 (2019) (explaining plausible reason for AI discriminatory bias). The AI tool was designed to determine whether a cesarean operation was necessary through risk factors like age and weight. Id. It also had, however, two race-based correction factors that increased the likelihood that a cesarean operation would be found necessary for Black and Hispanic women. Id.
[14] See Judy W. Gichoya et al., AI recognition of patient race in medical imaging: a modelling study, 4 Lancet Digit. Health e406, e406-14 (June 2022) (showing AI’s ability to determine race even when not told to). Surrogate information, like bone density or propensity for certain ailments, could be used by AI to determine a patient’s race without being told it directly. Id. The study trained the AI on medical imaging, such as x-rays, mammograms, and CT scans, collected from various public sources without reference to race. Id. The AI, however, was still able to determine patients’ races. Id.
[15] See Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Science 447, 449 (2019) (highlighting disparities in health risk and cost between White and Black patients). The algorithm in the study predicted that the median White and Black patients would generate the same treatment costs at a hospital, despite other data showing that the Black population was significantly sicker, as evidenced by signs of uncontrolled illness. Id. The AI would falsely conclude, based on costs alone, that the Black patients were healthier, and would restrict their access to healthcare services. Id.
[16] See Casey Ross & Bob Herman, Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need, STAT (Mar. 13, 2023), https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/ [https://perma.cc/MQC4-RAXS] (showing algorithmic bias in Medicare plans). Senior citizens were more likely to be denied, or delayed in receiving, coverage for treatment by insurers that used AI. Id.; see also Patrick Rucker et al., How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them, ProPublica (Mar. 25, 2023), https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims [https://perma.cc/FWD7-2PJX] (showing how AI rejected insurance claims with no human oversight).
[17] See Huskey v. State Farm Fire & Cas. Co., No. 22 C 7014, 2023 U.S. Dist. LEXIS 160629, at *2, *3 (N.D. Ill. Sept. 11, 2023). The plaintiff alleged that the defendant insurance company’s use of AI resulted in significant racial disparities in claims processing, and the court held that the plaintiff’s disparate impact claim survived dismissal because there was a prima facie showing of discrimination. Id.; see also Emily Cousins, Cigna Class Action: Algorithm Allegedly Auto-Denies 300,000 Claims, Connecticut Law Tribune (Mar. 12, 2024), https://www.law.com/ctlawtribune/2024/03/12/cigna-class-action-algorithm-allegedly-auto-denies-300000-claims/ [https://perma.cc/HX94-6ZGS] (exposing Cigna’s AI use that led to a class-action lawsuit).
[18] See US State-By-State AI Legislation Snapshot, BCLP (2024), https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html [https://perma.cc/D4YW-8CM4]. At the time of posting, over 30 states had either proposed or enacted legislation to regulate AI. Id. While only a few states have proposed or enacted legislation that directly regulates healthcare AI, many more states have bills regulating AI that makes critical decisions, including decisions relating to a person’s health. Id.
[19] See S.B. 619, 82nd Legis. Assemb., Reg. Sess. (Or. 2023); see also Va. Code Ann. § 59.1-577 (2024) (exemplifying state bills that allow opt-out). But see AB-2930, 2024 Leg., Reg. Sess. (Cal. 2024) (highlighting failed attempt to regulate AI).
[20] See HHS Artificial Intelligence Strategic Plan, Assistant Sec’y for Tech. Pol’y (Jan. 10, 2025), https://www.healthit.gov/topic/hhs-ai-strategic-plan [https://perma.cc/YR9W-FLCB] (explaining DHHS strategic plan for AI use in healthcare).
[21] See S. 24-205, 74th Gen. Assemb., Second Reg. Sess. (Colo. 2024). The bill lists several actions a deployer could take to help prove reasonableness, including implementing a risk management policy, conducting annual reviews of high-risk systems, notifying consumers when a system will be involved in a consequential decision, giving consumers opportunities to correct or appeal AI decisions, and disclosing algorithmic discrimination to the attorney general within 90 days of discovery. Id.
[22] See Colo. Rev. Stat. § 6-1-1701(1)(a) (detailing reasonable care element). Algorithmic discrimination is defined as “[T]he use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.” Id.
[23] See Colo. Rev. Stat. § 6-1-1704 (explaining bill’s regulation of consumer interaction). The disclosure is not required if it would be obvious to the consumer that they are interacting with an AI. Id.
[24] See Colo. Rev. Stat. § 6-1-1701(3) (exemplifying bill’s definition of “consequential decisions”). Other “consequential decisions” are made in relation to essential government services, housing, insurance, or legal services. Id.
[25] See Colorado Anti-Discrimination in AI Law (ADAI) Rulemaking, Colo. Att’y Gen. (May 17, 2024), https://coag.gov/ai/ [https://perma.cc/KL5E-T7BR] (detailing responsibilities of Colorado attorney general).
[26] See Colo. Rev. Stat. § 6-1-1702(1); see also Colo. Rev. Stat. § 6-1-1701(3) (explaining how Consumer Protection for Artificial Intelligence will relate to healthcare providers). Healthcare providers will fall under the Consumer Protection for Artificial Intelligence because they deploy high-risk AI systems that make consequential decisions. Id.
[27] See Colo. Rev. Stat. § 6-1-1704; see also Dahdah, supra note 1 (listing various uses of AI in healthcare).
[28] See Revolutionizing Healthcare: How is AI being Used in the Healthcare Industry?, supra note 2 (highlighting growing use of AI in healthcare). While AI use in clinical practice is currently limited, it is expected to become more extensive over the next ten years. Id.; see also Restrepo, supra note 3 (showing different applications of AI in diagnosis). As AI becomes more commonplace in diagnosis and treatment, the healthcare system will become more efficient and able to treat more patients; pulling the plug on an AI could cause a serious backlog in patient treatment. Restrepo, supra note 3.
[29] See Kolla, supra note 12, at 2107. The authors identified the lack of training data sets featuring dark skin as the cause of the algorithmic discrimination in skin cancer detection. Id. A healthcare provider using an AI with this problem would have to wait until new data were collected to fix it. Id.; see also Gichoya, supra note 14 (predicting difficulty in preventing algorithmic discrimination). The authors are concerned that how the AI determines race is unknown and will be difficult for developers to solve. Gichoya, supra note 14.
[30] See Colo. Rev. Stat. § 6-1-1706(3) (highlighting lack of private action). The lack of a private right of action could limit injured patients’ ability to collect damages and to put pressure on AI developers to correct the AI. Id.; see also Colorado Anti-Discrimination in AI Law (ADAI) Rulemaking, supra note 25 (noting responsibilities of Colorado attorney general).
[31] See US State-By-State AI Legislation Snapshot, supra note 18 (detailing struggle of states to pass AI regulation). Bills meant to stop algorithmic discrimination failed to pass in California, Georgia, New Jersey, New York, and Washington. Id.
[32] See US State-By-State AI Legislation Snapshot, supra note 18 (revealing shortage of AI regulations at state level). Only 13 states have proposed or created bills to address algorithmic discrimination: California, Colorado, Georgia, Illinois, New Jersey, New York, Oklahoma, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, and Washington. Id.
[33] See Colo. Rev. Stat. § 6-1-1706(3) (justifying application of affirmative defense). Even if an AI has a discriminatory bias toward one protected class, it can still provide accurate diagnoses for other patients. Id. If a healthcare provider learns that an AI it is using has a discriminatory bias in certain use cases, it can still use the AI with patients who will not be affected, provided that it observes all other regulations in the Consumer Protection for Artificial Intelligence. Id.; see also Vyas, supra note 13.
[34] See AB-2930, 2024 Leg., Reg. Sess. (Cal. 2024) (highlighting negative response to AI regulations). When the California senate opened the floor to a reading of an AI regulatory bill, no groups representing healthcare or AI development voiced support. Id.