US state-by-state AI legislation snapshot

Summary

BCLP actively tracks the proposed, failed and enacted AI regulatory bills from across the United States to help our clients stay informed in this rapidly changing regulatory landscape. The interactive map will be updated regularly to include legislation that, if passed, would directly impact a business’s development or deployment of AI solutions.

Artificial Intelligence (AI), once limited to the pages of science fiction novels, has now been adopted by more than 40% of enterprise-scale businesses in the United States, and many more organizations are working to embed AI into current applications and processes.[1] As companies increasingly integrate AI into their products, services, processes, and decision-making, they need to do so in ways that comply with the different state laws that have been passed and proposed to regulate the use of AI.


As is the case with most new technologies, the establishment of regulatory and compliance frameworks has lagged behind AI’s rise. This is set to change, however, as AI has caught the attention of federal and state regulators and oversight of AI is ramping up. 

In the absence of comprehensive federal legislation on AI, a growing patchwork of current and proposed AI regulatory frameworks has emerged at the state and local level. Even with federal legislation uncertain, it is clear that momentum for AI regulation is at an all-time high. Consequently, companies stepping into the AI stream face an uncertain regulatory environment that must be closely monitored and evaluated to understand its impact on risk and on the commercial potential of proposed use cases.

To help companies achieve their business goals while minimizing regulatory risk, BCLP actively tracks the proposed and enacted AI regulatory bills from across the United States to enable our clients to stay informed in this rapidly changing regulatory landscape. The interactive map is updated regularly to include legislation that, if passed, would directly impact a business’s development or deployment of AI solutions.[2] Click the states to learn more.

We have also created an AI regulation tracker for the UK and EU to keep you informed in this rapidly changing regulatory landscape.


[1] IBM Global AI Adoption Index 2024.

[2] We have also included laws addressing automated decision-making because AI and automation are increasingly integrated. Not all automated decision-making systems involve AI, however, so businesses will need to understand how their particular systems are designed. We have omitted biometric data, facial recognition, and sector-specific administrative laws.

Alabama

Enacted

H172

H172, enacted May 15, 2024, prohibits a person from distributing, or entering into an agreement to distribute, materially deceptive media. “Materially deceptive media” is “any image, audio, or video” that (1) “depicts an individual engaging in speech or conduct in which the depicted individual did not in fact engage,” (2) “a reasonable viewer or listener would incorrectly believe that the depicted individual engaged in the speech or conduct depicted,” and (3) was created by AI. AI includes any “artificial system or generative artificial intelligence system that performs tasks under varying and unpredictable circumstances without significant human oversight or that can learn from experience and improve performance when exposed to data sets.” A violation occurs if the person knows the media falsely represents someone, the distribution occurs within 90 days before an election, and the person intends to distribute the media and thereby cause a particular result. The creator, sponsor, or purchaser must include a disclaimer informing viewers that the media has been manipulated. A violation results in criminal penalties.

The Attorney General, the depicted person, a candidate for office, and an entity that represents the interests of voters may all seek injunctive relief.

Effective date: October 1, 2024.

Proposed

HB283

Introduced on February 13, 2025, HB283 gives consumers greater control over their personal data. The act allows consumers to: confirm whether a controller is processing any of the consumer’s personal data, correct any inaccuracies in that data, direct a controller to delete it, obtain a copy of it, and opt out of its processing for sale or targeted advertising. It regulates the manner in which a controller can process data and requires a controller that possesses deidentified data to publicly commit to processing the data in a deidentified manner only. Controllers must document a data protection assessment for any activities that present a risk of harm to consumers, for example, the sale of personal data, profiling, or the sale of data for targeted advertising. There is no private right of action.

If enacted, the bill would become effective on October 1, 2025.

HB516

Introduced on April 3, 2025, HB516 provides that the use of a computer to interact with a consumer as part of a commercial transaction in a manner that would deceive the consumer into reasonably believing that the consumer is interacting with a human is an unlawful, deceptive trade practice.

If enacted, the bill would become effective on October 1, 2025.

Alaska

Proposed

SB2

Introduced on January 22, 2025, SB2 is an act governing the disclosure of election-related deepfakes. If a communication is related to a candidate or proposition, it must contain a label (the specific label text is set out in the bill) or, if the content is in audio format, a spoken warning. The bill contains a private right of action for any candidate or proposition group about whom a deepfake is made. The bill is effective upon passage.

SB33

Introduced on January 1, 2025, SB33 prohibits a person from using synthetic media in an electioneering communication with the intent to influence an election. The bill provides a private right of action, but allows a defense to such an action where the communication includes the disclosure statement explicitly spelled out in the bill or where the synthetic media constitutes satire. It provides that an individual who is harmed by such a communication may bring an action in the superior court to recover damages, reasonable attorney fees, and costs from (1) the person who created the communication or retained the services of another to create it; (2) a person who disseminates the communication knowing that it includes synthetic media; or (3) a person who removes the required disclosure statement from an election communication with the intent to influence the election.

The bill also categorizes defamation claims involving synthetic media as claims for defamation per se. The bill is effective upon passage.

Failed

S117

S117, introduced January 16, 2024, would have prohibited a person from making, or retaining the services of another to make, an election-related communication the person “knows or reasonably should know includes a deepfake relating to a candidate or proposition without including” a specified disclosure stating that AI has manipulated or generated the content. The disclosure’s placement and readability requirements depend on the type of media. “Deepfake” constitutes an “image, audio recording, or video of an individual’s appearance, conduct, or spoken words that has been created or manipulated with machine learning, natural language processing, or another computational processing technique in a manner to create a realistic but false image, audio, or video” that a reasonable person would understand to depict a real individual.

H358

Introduced February 20, 2024, H358 would have provided for a defamation action based on the use of synthetic media. More specifically, a person could not knowingly use synthetic media in an electioneering communication with an intent to influence the election. Otherwise, the harmed individual could recover damages against the person who created the communication or retained the services of another to do so, the person who disseminated the communication, or the person who removed the disclosure statement. “Synthetic media” is an “image, audio recording, or video recording of an individual’s appearance, speech, or conduct that is manipulated by artificial intelligence in a manner that creates a realistic but false image, audio recording, or video recording and produces” a certain depiction: one that a “reasonable person would believe is of a real individual in appearance, speech, or conduct but did not actually occur in reality” and that conveys “a materially different understanding or impression that a reasonable person would have from the unaltered, original version.”

H352

Introduced February 20, 2024, H352 would have revised the definition of “person” in a civil action to exclude AI. The bill would have taken effect July 1, 2024.

H306

Introduced February 2, 2024, H306 (Senate version S117) would have required a disclosure if a person knows or reasonably should know that a communication includes a deepfake depicting a candidate or political party in a way intended to injure reputation or deceive a voter. The disclosure must state that the communication has been manipulated or generated by AI and must be easily heard or readable depending on the type of media. “Deepfake” includes “an image, audio recording, or video recording of an individual’s appearance, conduct, or spoken words that has been created or manipulated with machine learning, natural language processing, or another computational processing technique of similar or greater complexity in a manner to create a realistic but false image, audio, or video” that (1) “appears to a reasonable person to depict a real individual saying or doing something that did not actually occur” or (2) “provides a fundamentally different understanding or impression of an individual’s appearance, conduct, or spoken words than the understanding a reasonable person would have from an unaltered, original version of the media.”

Arizona

Enacted

HB2175

Enacted May 12, 2025, HB2175 amends the Arizona Revised Statutes provisions governing healthcare insurance. Specifically, it mandates that medical directors at healthcare insurance companies personally review all denials of claims and prior authorizations that are based on medical necessity. These medical directors must exercise independent medical judgment during these reviews rather than relying solely on external recommendations. The law becomes effective on June 30, 2026.

California

Enacted

AB1008

AB 1008 updates the definition of “personal information” as defined in the California Consumer Privacy Act to clarify that “personal information” can exist in various formats, including artificial intelligence (AI) systems that are capable of outputting personal information.

AB1836

Introduced on January 16, 2024, AB1836 prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings, and similar works without first obtaining the consent of those performers’ estates.

AB2013

Introduced on January 31, 2024, AB2013 requires a developer of an artificial intelligence system or service made available to Californians for use, regardless of whether the terms of that use include compensation, to post on the developer’s internet website, on or before January 1, 2026, documentation regarding the data used to train the artificial intelligence system or service.

The law applies to AI developers, which is defined broadly to mean any person, government agency, or entity that either develops an AI system or service or “substantially modifies it,” which means creating “a new version, new release, or other update to a generative artificial intelligence system or service that materially changes its functionality or performance, including the results of retraining or fine tuning.” The law applies to generative AI released on or after January 1, 2022, and developers must comply with its provisions by January 1, 2026.
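In engineering terms, the required documentation lends itself to a structured, machine-publishable record. The sketch below is a hypothetical illustration only; the field names are ours, not the statute’s, and an actual disclosure should track the categories AB 2013 enumerates.

```python
# Hypothetical sketch of a training-data disclosure record of the kind
# AB 2013 contemplates; the field names are illustrative, not statutory.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingDataDisclosure:
    dataset_name: str
    source: str                    # e.g., owner or URL of the dataset
    contains_personal_info: bool
    collection_period: str         # when the data was gathered
    modified_after_collection: bool

disclosures = [
    TrainingDataDisclosure(
        dataset_name="example-web-corpus",
        source="https://example.com/datasets/web-corpus",
        contains_personal_info=False,
        collection_period="2021-2023",
        modified_after_collection=True,
    ),
]

# Publish as JSON for posting on the developer's website.
print(json.dumps([asdict(d) for d in disclosures], indent=2))
```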

AB2355

Introduced on February 12, 2024, AB2355 requires that electoral advertisements using AI-generated or substantially altered content feature a disclosure that the material has been altered. The law will be enforced by the Fair Political Practices Commission.

AB2602

Introduced on February 14, 2024, AB2602 provides that a provision in an agreement between an individual and any other person for the performance of personal or professional services is unenforceable as it relates to a new performance, fixed on or after January 1, 2025, by a digital replica of the individual’s voice or likeness in lieu of the work the individual would otherwise have performed.

AB2839

Introduced on February 15, 2024, AB2839 expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content.

AB2885

Introduced on February 15, 2024, AB2885 defines artificial intelligence as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” The purpose of this definition is to standardize the definition of AI across various California statutes, including the California Business and Professions Code, Education Code, and Government Code. The law took effect January 1, 2025.

AB3030

Effective January 1, 2025, California’s AB 3030 regulates the use of generative artificial intelligence (“genAI”) in health care provision. The law requires health facilities, clinics, doctor’s offices, and group practices to disclose when they have used genAI to communicate clinical information about health status to patients. In these circumstances, the law requires both: 1) a disclaimer that the communication was created with genAI, and 2) clear instructions about how the patient can contact a human instead.

There are two exceptions to the disclosure requirements: AI-generated communications read and reviewed by a licensed or certified healthcare provider, and administrative tasks like appointment scheduling, even if AI-assisted.

The law does not include a private right of action.

California Consumer Privacy Act

The California Consumer Privacy Act, as amended by the California Privacy Rights Act (CCPA) governs profiling and automated decision-making. The CCPA gives consumers opt-out rights with respect to businesses’ use of “automated decision-making technology,” which includes “profiling” consumers based on their “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The CCPA defines “profiling” as “any form of automated processing of personal information, as further defined by regulations pursuant to paragraph (16) of subdivision (a) of Section 1798.185 [of the CCPA], to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements,” leaving the scope relatively undefined. The CCPA also requires businesses to conduct a privacy risk assessment for processing activities that present “significant risk” to consumers’ privacy or security. “Significant risk” is not defined by the CCPA but may be fleshed out by the regulations.

As of the date of publication, regulations addressing automated decision-making have not been finalized.
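For teams building automated decision-making pipelines, the practical upshot is that a recorded opt-out must be checked before profiling runs. The following is a minimal sketch, assuming a hypothetical consumer-ID store; it illustrates the gating pattern, not the CCPA’s still-unfinalized technical requirements.

```python
# Illustrative sketch only: one way a business might gate automated
# decision-making on a consumer's recorded opt-out. Names and storage
# are hypothetical; the final CCPA regulations will control.
opt_outs: set[str] = set()  # consumer IDs that opted out of ADMT

def record_opt_out(consumer_id: str) -> None:
    opt_outs.add(consumer_id)

def score_applicant(consumer_id: str, features: dict) -> float | None:
    """Return an automated score, or None to route to manual review."""
    if consumer_id in opt_outs:
        return None  # honor the opt-out: no automated profiling
    # ... the automated model would run here ...
    return 0.5

record_opt_out("consumer-123")
assert score_applicant("consumer-123", {}) is None  # routed to human review
```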

AB 566, introduced Feb. 12, 2025, would amend the CCPA to prohibit businesses from providing a browser or mobile operating system that does not include an opt-out setting for consumers.

SB1001

Introduced in 2018 as SB 1001, the Bolstering Online Transparency Act (BOT) went into effect in July 2019. BOT makes it unlawful for a person or entity to use a bot to communicate or interact online with a person in California in order to incentivize a sale or transaction of goods or services, or to influence a vote in an election, without disclosing that the communication is via a bot. The law defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law applies only to communications with persons in California, and only to public-facing websites, applications, or social networks that have at least 10 million monthly U.S. visitors or users. BOT does not provide a private right of action.
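In practice, BOT compliance is often implemented by front-loading the disclosure in the conversation flow. Below is a minimal, hypothetical sketch of that pattern; the disclosure text and function names are illustrative, not prescribed by the statute.

```python
# A minimal, hypothetical sketch of a BOT-style disclosure flow: the bot
# identifies itself before any commercial content is sent. Nothing here is
# drawn from the statute beyond the idea of an up-front disclosure.
BOT_DISCLOSURE = "You are chatting with an automated bot, not a human."

def generate_reply(user_message: str) -> str:
    # Placeholder for the bot's actual response logic.
    return "Thanks for your interest! Here is our current offer."

def start_conversation(user_message: str) -> list[str]:
    # Lead with the disclosure so it precedes any sales-oriented content.
    return [BOT_DISCLOSURE, generate_reply(user_message)]

for line in start_conversation("Tell me about your product"):
    print(line)
```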

SB942

Introduced on January 17, 2024, SB942, the California AI Transparency Act, applies to businesses providing a publicly accessible generative AI system with over one million monthly visitors or users during a 12-month period within the state’s geographic boundaries. The law requires in-scope businesses to create an AI detection tool that allows a user to query the business about whether content was created by the business’s generative AI system. Additionally, the law requires these businesses to include in any AI-generated content a “clear and conspicuous” disclosure, appropriate to the content’s medium, stating that the content was created by AI. This disclosure must be understandable to a reasonable person, not avoidable, and consistent with the communication itself. The law goes into effect on January 1, 2026.
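The statute does not prescribe how a detection tool must work. One naive approach, sketched below purely as an illustration, is for the provider to keep a registry of fingerprints (hashes) of its outputs and let users query it; real systems are more likely to rely on watermarking or provenance metadata, since exact hashes break as soon as content is modified.

```python
# Hypothetical sketch of one way a provider might implement an "AI
# detection tool": a registry of SHA-256 hashes of content the system
# generated, queryable by users. SB 942 does not prescribe this design.
import hashlib

generated_hashes: set[str] = set()

def register_output(content: bytes) -> None:
    """Record a fingerprint of each output the generative system produces."""
    generated_hashes.add(hashlib.sha256(content).hexdigest())

def was_generated_here(content: bytes) -> bool:
    """Answer a user's query: did this system create this exact content?"""
    return hashlib.sha256(content).hexdigest() in generated_hashes

register_output(b"synthetic image bytes ...")
print(was_generated_here(b"synthetic image bytes ..."))  # True
```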

Proposed

AB316

AB 316, introduced on January 24, 2025, would prohibit defendants who develop, modify, or use artificial intelligence from claiming that the AI autonomously caused harm as a legal defense in civil actions. The bill defines AI as systems capable of generating outputs that influence environments based on input data. It aims to ensure accountability by preventing developers or users of AI from shifting liability to the technology itself.

AB331 (reintroduced, companion with AB2930)

Introduced on January 30, 2023, AB 331 would, among other things, require an entity that uses an automated decision tool (ADT) to make a consequential decision (a deployer), and a developer of an ADT, to perform an impact assessment on or before January 1, 2025, and annually thereafter, for any ADT used. The assessment must include, among other things, a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts. The bill requires a deployer or developer to provide the impact assessment to the Civil Rights Department within 60 days of its completion. Before using an ADT to make a consequential decision, deployers must notify any natural person who is the subject of the decision that the deployer is using an ADT to make, or be a controlling factor in making, the decision. Deployers are also required to accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation if a consequential decision is made solely based on the output of an ADT, assuming an alternative process is technically feasible. The bill would also prohibit a deployer from using an ADT in a manner that contributes to algorithmic discrimination.

Finally, the bill includes a private right of action which would open the door to significant litigation risk for users of ADT.

This bill has been reintroduced by Assemblymember Bauer-Kahan; the new bill number is pending.

AB410

AB 410, introduced on February 4, 2025, strengthens bot disclosure requirements. It mandates that any person using a bot to autonomously communicate online must ensure the bot clearly discloses its non-human identity at the start of the interaction, truthfully answers questions about its identity, and does not mislead users. The bill redefines “bot” to include generative AI outputs and applies to bots that could reasonably be mistaken for humans. It also authorizes civil enforcement by state and local prosecutors; there is no private right of action. Penalties include a $1,000 fine per violation.

AB412

AB 412, introduced on February 4, 2025, requires developers of generative AI models to document any copyrighted materials used in training and to provide a public mechanism for rights holders to request this information; developers must respond within 30 days. Each day of noncompliance constitutes a separate violation, and rights holders may pursue civil action.

If enacted, the law would apply to AI systems made available in California on or after January 1, 2026.

AB446

Introduced on February 6, 2025, AB 446 would regulate the use of algorithmic decisions in dynamic pricing. This practice, known broadly as surveillance pricing, involves collecting individualized data about consumers and then using that information to charge a personalized price for a product that differs from the standard price. The bill prohibits the use of individualized or aggregate consumer information in setting a price.

The bill exempts certain pricing structures, including situations where the price difference reflects the supplier’s cost of supplying the product to different consumers and where the price difference is a discount generally available to all consumers.

The bill includes a private right of action with penalties up to the maximum available in small claims court. Treble damages are available if the violation was done knowingly.

AB853

AB 853, introduced on February 19, 2025, expands the California AI Transparency Act. It requires developers of generative AI systems with over one million monthly users to provide a free AI detection tool that identifies whether content was created or altered by their system. The bill also mandates that large online platforms retain and label machine-readable provenance data in AI-generated content. Additionally, it requires capture device manufacturers to offer users the option to embed provenance disclosures in recorded content. It prohibits platforms from hosting generative AI systems that omit permanent disclosures and bans tools designed to remove such disclosures. There is no private right of action. Penalties include a $5,000 fine per violation.

AB1018

AB 1018, introduced on February 21, 2025, regulates the development and deployment of automated decision systems (ADS) used to make consequential decisions such as decisions about: employment, education and vocational training, housing and lodging, essential utilities, family planning, health care, financial services, criminal justice, legal services, mediation/arbitration, elections, government benefits/penalties, places of public accommodation, insurance, and internet/telecommunications access. Beginning January 1, 2027, deployers of covered ADS must provide disclosures to individuals affected by such systems, offer opt-out and appeal options, and submit the systems to third-party audits.

Developers must conduct performance evaluations and share results with deployers. Upon request, developers, deployers, or auditors must provide unredacted evaluation reports to the Attorney General.

There is no private right of action; penalties of up to $25,000 per violation are available.

AB2930 (reintroduced February 13, 2025)

Introduced on February 15, 2024, AB2930 would, among other things, require an entity that uses an automated decision tool (ADT) to make a consequential decision (a deployer), and a developer of an ADT, to perform an impact assessment before first using the ADT, and annually thereafter. The assessment must include, among other things, a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts. The bill requires a deployer or developer to provide the impact assessment to the Civil Rights Department within 60 days of its completion. Before using an ADT to make a consequential decision, deployers must notify any natural person who is the subject of the decision that the deployer is using an ADT to make, or be a controlling factor in making, the decision. Deployers are also required to accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation if a consequential decision is made solely based on the output of an ADT, assuming an alternative process is technically feasible. The bill would also prohibit a deployer from using an ADT in a manner that contributes to algorithmic discrimination. AB2930 is nearly identical to AB331, which advanced from the Assembly Committee on Privacy and Consumer Protection in 2023, but notably does not include a private right of action as AB331 did.

The proposed reintroduction by Assemblymember Bauer-Kahan incorporates AB 2930 with additions. The new bill allows persons subject to an automated decision system (ADS) to appeal the results of that decision and correct any personal information used in the decision.

Unlike AB 2930, this version of the bill would require impact assessments from developers that use ADS to render consequential decisions for 5,999 or more people in a three-year period. These reports, prepared by an auditor, must be furnished to the Attorney General within 30 days of the Attorney General’s request.

If approved, this bill would become operative on Jan. 1, 2027. There is no private right of action.

AB3211

Introduced on Feb. 16, 2024, AB3211 seeks to provide context (called provenance) around synthetic media. The bill would require generative AI providers to mark content with provenance data noting the synthetic nature of the content, the name of the generative AI provider, and the portions of the content that are synthetic. The bill would further require a public-facing tool that allows users to determine whether and how a piece of content was modified.

The bill has specific requirements for large online platforms, defined as public-facing social media platforms, video-sharing platforms, messaging platforms, advertising networks, or standalone search engines with at least 2,000,000 monthly California users during the past year. These platforms must label content when provenance data is available, display that data, and provide an annual transparency report that identifies deceptive synthetic media on the platform.

There is no private right of action. Violations of this bill would result in a $25,000 fine per violation.

SB7

Introduced on December 2, 2024, SB 7 requires employers to provide written notice to employees when an automated decision-making system (ADS) is used to make employment-related decisions, excluding hiring decisions. A worker subject to an ADS decision must be given 30 days to appeal the decision.

There is a private right of action. Civil penalties of $500 per violation are available.

SB11

Introduced on Dec. 2, 2024, SB11 sets requirements for sellers and providers of AI.  Effective Dec. 1, 2026, the bill would require any person or entity who sells or provides AI technology that makes synthetic content to provide a consumer warning that the misuse of the technology can result in civil or criminal liability. There is no private right of action for this provision, and violations are penalized up to $25,000 a day.

The bill goes on to define restricted uses of the name, image, and likeness of another without consent and establishes a claim of damages for violations.

SB53

Introduced Jan. 7, 2025, SB 53 has two primary aims: to protect whistleblowers and to establish CalCompute, a framework for the use of AI to foster innovation, drive research that benefits the public, and expand access to computational resources. The bill would prohibit AI developers from adopting any policies or practices that stop employees from reporting on potential critical risk created by the developer’s use of AI. The bill defines critical risk as “a foreseeable and material risk” that the “development, storage, or deployment” of the model “will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage.” The bill protects whistleblowers at these developers who seek to report to state or federal authorities, or to other employees with authority to address the risk. Employers must provide written notice to their employees of their right to report and provide an anonymous internal reporting option. The bill allows employees to bring suit individually, seeking both damages and injunctive relief.

SB243

SB 243 was introduced on January 30, 2025. This bill regulates the use of AI-powered companion chatbots, particularly those interacting with minors. The bill requires chatbot platform operators to take reasonable steps to prevent chatbots from encouraging addictive behaviors, such as providing unpredictable rewards or promoting excessive engagement. It also mandates that operators implement protocols for detecting and responding to suicidal ideation, including referrals to crisis services. It requires annual reporting to the Office of Suicide Prevention on incidents of suicidal ideation detected by chatbots, with the data to be published online. The bill also mandates third-party audits to ensure compliance.

This bill provides a private right of action and permits actual damages or statutory damages of $1,000 per violation, whichever is higher.

SB295

Introduced Feb. 6, 2025, SB295, the Preventing Algorithmic Collusion Act, would prohibit a person from using, or distributing to two or more persons, a pricing algorithm that uses, incorporates, or was trained with competitor data to set the price of a product. The bill also gives the attorney general power to request a written report detailing the owner and use of the price-setting algorithm, the data entered into the pricing algorithm, and the rules the algorithm relies on.

There is no private right of action, and a violation is punishable by a fine of $5,000 a day.

SB384

Introduced on February 14, 2025, SB 384 prevents sellers from using a price fixing algorithm to set the price or supply of goods or the rent or occupancy levels of a rental property.

There is no private right of action. Violations of this law can result in a $1,000 fine per violation.

SB420

SB 420, introduced on February 18, 2025, would prohibit state agencies from awarding contracts for high-risk automated decision systems unless the contractor certifies that the system complies with civil rights laws and the bill’s provisions. It requires developers and deployers of such systems to conduct impact assessments before deployment and provide them to state agencies and, upon request, to the Attorney General or Civil Rights Department. These assessments must be kept confidential. The bill also authorizes enforcement actions and allows violators a 45-day window to cure violations.

This bill does not apply to entities with 50 or fewer employees. There is no private right of action.

SB468

SB 468, introduced on February 19, 2025, would require businesses deploying high-risk artificial intelligence systems that process personal information to implement a comprehensive information security program. This program must include administrative, technical, and physical safeguards tailored to the business’s size and scope. It also mandates employee training, risk assessments, access controls, encryption, and incident response protocols. It also states that any violation of the duties set forth in the bill constitutes a deceptive trade act or practice under the Unfair Competition Law. The bill grants the California Privacy Protection Agency authority to enforce these provisions and adopt implementing regulations; there is no private right of action.

SB503

SB 503, introduced on February 19, 2025, focuses on the use of generative artificial intelligence in healthcare. Existing law requires health facilities, clinics, and physician offices using generative AI for patient communications to include disclaimers and clear instructions for contacting a human provider.

This bill mandates that developers and deployers of patient care decision support tools identify and mitigate risks of discrimination based on protected characteristics. These tools must be tested for biased impacts at least every three years.

There is no private right of action.

Failed

SB892

SB 892, introduced on January 1, 2024, would impact businesses entering into a contract with state agencies to provide artificial intelligence services by prohibiting such a contract unless the business met California’s Department of Technology safety, privacy, and nondiscrimination standards relating to artificial intelligence services. The Department of Technology to date has not promulgated these standards.

SB970

Introduced on January 25, 2024, SB970 would require any person or entity that sells or provides access to any artificial intelligence technology designed to create content to provide a consumer warning that misuse of the technology may result in civil or criminal liability for the user. The bill would require the Department of Consumer Affairs to specify the form and content of the consumer warning and would impose a civil penalty for violations: failure to comply would be punishable by a civil penalty not to exceed twenty-five thousand dollars ($25,000) for each day that the technology is provided or offered to the public without a consumer warning.

SB1047 (Vetoed by Governor Newsom)

The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, SB 1047, introduced February 7, 2024, would generally have authorized an AI developer of a nonderivative covered model to determine whether the model qualifies for a limited duty exemption before training on that model begins. The “limited duty exemption” would apply to a covered AI model, as defined by the bill, for which the developer can provide reasonable assurance that the model does not, and will not, possess a hazardous capability. “Hazardous capability” means the model creates or uses a “chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties”; causes at least $500,000,000 “of damages through cyberattacks on critical infrastructure via a single incident” or related incidents; causes at least $500,000,000 of damages by engaging in bodily harm to another human or theft of, or harm to, property with the requisite mental state; or poses other comparable “grave threats in severity to public safety and security.” Before starting training, the developer must meet specified requirements, such as the capability to promptly shut the model down, unless the model falls under the limited duty exemption. If an incident occurs, the developer must report each AI safety incident to the Frontier Model Division, a subdivision of the Department of Technology.

SB1229 

Introduced February 15, 2024, SB 1229 would have required property and casualty insurers to disclose, until January 1, 2030, whether they have used AI to make decisions that affect applications and claims review, as specified.

Colorado

Enacted

Colorado Privacy Act

The Colorado Privacy Act (CPA), which went into force on July 1, 2023, provides consumers the right to opt out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects.” The law defines those decisions as “a decision that results in the provision or denial of financial and lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services, or access to essential goods or services.” The CPA further requires that controllers conduct a data protection impact assessment (DPIA) if the processing of personal data creates a heightened risk of harm to a consumer. Processing that presents a heightened risk of harm to a consumer includes profiling if the profiling presents a reasonably foreseeable risk of:

  • Unfair or deceptive treatment of, or unlawful disparate impact on, consumers;
  • Financial or physical injury to consumers;
  • A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or
  • Other substantial injury to consumers.

This means that deployers of automated decision-making (which may or may not use AI) need to ensure that the design and implementation of their systems do not create the heightened risks outlined above, and that such processing is addressed in their DPIAs. On March 15, 2023, the Colorado Attorney General’s Office finalized rules implementing the CPA.
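Operationally, the four heightened-risk criteria above can be treated as a screening checklist applied to each profiling activity. The sketch below is an illustrative self-assessment helper only, not legal advice; the boolean inputs are hypothetical answers from a processing-activity review.

```python
# Illustrative compliance screen: flag profiling activities that may
# trigger a DPIA under the CPA's heightened-risk criteria. The inputs
# are hypothetical self-assessment answers, not statutory terms of art.
def dpia_required(
    risks_unfair_or_deceptive_treatment: bool,
    risks_financial_or_physical_injury: bool,
    risks_offensive_intrusion: bool,
    risks_other_substantial_injury: bool,
) -> bool:
    # Any single heightened-risk factor is enough to warrant an assessment.
    return any([
        risks_unfair_or_deceptive_treatment,
        risks_financial_or_physical_injury,
        risks_offensive_intrusion,
        risks_other_substantial_injury,
    ])

print(dpia_required(False, True, False, False))  # True: assessment needed
```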

HB1147

Enacted on May 24, 2024, HB1147 creates a statutory scheme to regulate the use of deepfakes produced using generative artificial intelligence in communications about candidates for elective office. HB1147 prohibits the distribution of a communication related to a candidate for public office that includes an undisclosed deepfake, with actual malice as to the deceptiveness or falsity of the communication. Violators will be subject to civil penalties. Additionally, a candidate who is the subject of a communication that includes a deepfake and does not comply with the disclosure requirements may bring a civil action for an injunction, for general or special damages, or both.

SB21-169

In 2021, Colorado enacted SB 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, a law intended to protect consumers from unfair discrimination in insurance rate-setting mechanisms. The law applies to insurers’ use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS in “insurance practices,” that “unfairly discriminate” based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.

On February 1, 2023, the Colorado Division of Insurance (CDI) released a draft of the first of several regulations to implement the bill.

On September 21, 2023, the CDI adopted Regulation 10-1-1 – Governance and Risk Management Framework Requirements for Life Insurers. The regulation governs the use of algorithms and predictive models that use external consumer data and information sources (ECDIS). Among other things, the regulation requires all Colorado-licensed life insurers to submit a compliance progress report on June 1, 2024, and an annual compliance attestation beginning on December 1, 2024.

SB24-205

Enacted May 17, 2024, SB24-205 requires both a developer and a deployer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to avoid algorithmic discrimination in the high-risk system. A developer is a person doing business in Colorado who develops or substantially modifies certain AI models or systems, while a deployer is a person doing business in Colorado who deploys certain AI systems.

Algorithmic discrimination is when an AI system materially increases the risk of unlawful differential treatment or impact on an individual or group on the basis of certain protected classes like age, color, disability, ethnicity, race, religion, or sex.

There is a rebuttable presumption that a developer used reasonable care if the developer complied with certain provisions of the bill, including:

  • Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the developer has developed and how reasonably foreseeable risks of discrimination are managed;
  • Disclosing certain reasonably foreseeable risks of discrimination to the AG and deployers within 90 days after discovery of the risk.

There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with certain provisions of the bill, including:

  • Implementing a risk management policy and program for the high-risk system;
  • Completing an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the deployer has deployed and how reasonably foreseeable risks of discrimination are managed;
  • Disclosing certain reasonably foreseeable risks of discrimination to the AG within 90 days after discovery of the risk.

A developer or business that makes available an AI system intended to interact with consumers must disclose to the consumer that they are interacting with an AI system.

There is no private right of action; the AG is exclusively responsible for enforcement. However, a developer or deployer has an affirmative defense if the system involved in the violation complies with a recognized national or international risk management framework and the developer or deployer has taken specified measures to discover violations of this bill.

Proposed

HB1212

Introduced on February 11, 2025, HB1212 protects workers who disclose information about a developer’s foundation AI model if they believe the developer is breaking the law, posing a public security risk, or making false statements about safety. The developers cannot retaliate against these workers and must inform workers of their rights.

There is a private right of action—an aggrieved worker may commence a civil action in district court against a developer for a violation of the bill.  Damages under the bill are the greater of actual damages or $10,000.

Failed

HB24-1057

Introduced January 10, 2024, HB 24-1057 would have prohibited a private landlord from employing or relying on AI or another algorithmic device to calculate the rent to be charged to a tenant. Such use would have been an unfair or deceptive trade practice under the Colorado Consumer Protection Act.

HB1264

Introduced on February 18, 2025, HB1264 prevents discrimination in setting personalized prices or wages using automated decision systems (ADS). It prohibits the use of surveillance data to set discriminatory individualized consumer prices or employee wages. A violation of this prohibition is deemed a deceptive trade practice.

A company using ADS related to wages or pricing must publish procedures to allow the affected individual to correct or challenge the accuracy of the data considered, to receive the data considered, and information regarding how that data was considered in setting the price or wage.

This bill permits both public and private enforcement. The private right of action includes the greater of actual damages or $3,000 for each violation.

SB 318

As the regulatory landscape for AI continues to evolve, Colorado is considering significant changes to its recently enacted AI Act. Senate Bill 318, which follows a nearly year-long effort by the state’s Artificial Intelligence Impact Task Force to address concerns that the AI Act was so far-reaching that it would stifle innovation, would overhaul the AI Act, narrowing its focus and materially reducing the compliance burden on in-scope businesses.

Key Proposed Changes:

  • Revised Definitions: The bill amends several key definitions. For example, “algorithmic discrimination” has been redefined to mean “the use of an artificial intelligence system that results in a violation of any applicable local, state, or federal anti-discrimination law…”. The definition of “developer” is amended to exclude persons offering systems with open model weights or meeting specified conditions, e.g., if they do not engage in material conduct or statements promoting the use of the system in making consequential decisions and include specific disclaimers in contracts and documentation stating that the system is not designed for consequential decisions and that deployers are responsible for compliance if they use it for such purposes. Certain specified technologies are exempted from the definition of “high-risk artificial intelligence system” unless they make, or are a substantial factor in making, a consequential decision.
  • New & Broadened Exemptions: Perhaps the most impactful of the amendments is the new exemption for deployers using high-risk AI systems solely for recruitment, sourcing, or hiring of external candidates, provided that certain conditions regarding employee count and disclosures are met.  Given the prevalence of AI-powered recruitment tools, and the high-rate of adoption across human resource departments, many businesses stand to benefit from this exemption.  Certain developer disclosure requirements do not apply to a developer that meets specific financial criteria (less than $10M from third-party investors, less than $5M annual revenue, operating less than 5 years) and sells or distributes high-risk AI systems that deployers use to make a limited number of consequential decisions per year, with the limit decreasing annually from 10,000 in 2027 to 2,500 in 2029.  Also, compliance with the Fair Credit Reporting Act or federal prudential regulators for banks/credit unions also provides exemptions or deems compliance with the AI Act.
  • Modified Duties: A prior requirement for developers/deployers to notify the attorney general of known/foreseeable risks of algorithmic discrimination appears to have been eliminated. 
  • Enhanced Deployer Obligations (with limitations): In-scope deployers must implement a risk management policy and program and complete annual impact assessments. The required content of impact assessments now explicitly includes analyzing risks of limiting accessibility, unfair trade practices, labor law violations, or Colorado Privacy Act violations. Deployers using high-risk AI systems for consequential decisions must provide consumers with disclosures about the system's purpose, name, developer, deployer contact, and a plain language description of the system's role and data evaluation process. For adverse consequential decisions, deployers must provide a single notice disclosing the principal reasons, the system's contribution, categories/sources of adverse data, consumer rights regarding personal data correction, and an opportunity to appeal non-time-limited/non-competitive adverse decisions based on incorrect data or unlawful information/inferences. Importantly, many of the deployer obligations apply only to high-risk AI systems that make, or are the principal basis for making, consequential decisions. "Principal basis" means making a decision without meaningful human involvement. This is a materially narrower concept than the current “substantial factor” standard.
  • New Notification Requirement for Withheld Information: In-scope businesses that withhold information that would otherwise be subject to disclosure under the AI Act will be required to notify the individual who would otherwise have a right to receive the information, state the basis for the withholding, and provide any non-exempt information.
  • Delayed Enforcement: The attorney general has exclusive authority to enforce the Act, but this authority does not begin until January 1, 2027.  Affirmative defenses are available for businesses that discover and cure a curable violation within seven days or who were otherwise compliant and meet specific criteria, including inadvertence, affecting fewer than 1,000 consumers, and no negligence.

This proposed bill represents a significant refinement of Colorado’s approach to AI regulation, exempting more businesses from its scope and adjusting compliance timelines and requirements for developers and deployers.

Connecticut

Enacted

CTPA

The Connecticut Privacy Act (CTPA), which went into full force on July 1, 2023, provides consumers the right to opt out of profiling if such profiling is in furtherance of automated decision-making that produces legal or other similarly significant effects. Controllers must also perform data risk assessments prior to processing consumer data when such processing presents a “heightened risk of harm.” These situations include certain profiling activities that present a reasonably foreseeable risk of: unfair or deceptive treatment of, or unlawful disparate impact on, consumers; financial, physical or reputational injury to consumers; physical or other intrusion into the solitude, seclusion or private affairs or concerns of consumers that would be offensive to a reasonable person; or other substantial injury to consumers.

Proposed

HB5076

Introduced on January 10, 2025, HB5076 would amend the state’s law to require that any artificial intelligence data center (1) utilize energy derived from renewable sources for not less than fifty percent of the center’s energy consumption requirements, (2) utilize energy storage systems and modern grid infrastructure, (3) implement water conservation measures, and (4) report annual energy consumption, water consumption and emissions; and would provide tax credits, grants and research funding for the development of such centers.

HB5587 and HB5590

Introduced on January 21, 2025, HB5587 and HB5590 would prohibit any health insurer from using artificial intelligence as the primary method to deny health insurance claims.

HB5877

Introduced on January 22, 2025, HB5877 would prohibit the use of artificial intelligence to replace public school educators in providing instruction to and regular interaction with students.

HB6846

Introduced on January 31, 2025, HB6846 would prohibit the distribution of certain deceptive synthetic media within the ninety-day period preceding an election or primary.

SB2 (Reintroduced as part of SB 1484)

Introduced on February 21, 2024, SB 2 would regulate the development and use of automated decision tools (ADT) and high-risk artificial intelligence systems. The following requirements would go into force as of July 1, 2025.

Development Requirements:

  • Documentation: Developers of certain AI systems must provide comprehensive documentation. This documentation should cover:
    • System Behavior: Detailed information about how the AI system operates.
    • Data Used: The datasets utilized by the AI system during development.
    • Risk Assessment: An assessment of potential risks associated with the AI system.
  • Transparency: Developers must ensure transparency in the development process, allowing stakeholders to understand the system’s inner workings.

Deployment Requirements:

  • High-Risk AI Systems: Deployers of high-risk AI systems (those impacting critical areas like criminal justice, education, employment, and healthcare) have additional responsibilities:
    • Risk Assessment: Conduct a thorough risk assessment before deploying the AI system.
    • Documentation: Provide detailed documentation to users and relevant authorities.
    • Transparency: Ensure transparency regarding the AI system’s functioning and potential biases.
    • Compliance: Comply with guidelines set forth by the bill to prevent unintended consequences. 

Artificial Intelligence Advisory Council:

  • The bill establishes an Artificial Intelligence Advisory Council to oversee compliance and provide guidance to developers and deployers.

SB 2 does not establish a qualified individual right to opt out of covered decision-making systems. SB 2 addresses various other AI topics, including synthetic images, and provides for the establishment of a “Connecticut Citizens AI Academy.”

SB447

Introduced on January 10, 2025, SB447 would prohibit health carriers from using artificial intelligence in the evaluation and determination of patient care, to safeguard patient access to testing, medications and procedures.

SB817

Introduced on January 21, 2025, SB817 would prohibit any health insurer from using a software tool, including, but not limited to, artificial intelligence or an algorithm, to automatically downcode or deny a health insurance claim submitted by a health care provider without detailed review by a clinical peer.

SB1292

Introduced on February 13, 2025, SB1292 would require (1) an owner or operator of an artificial intelligence data center to submit quarterly reports to the Commissioner of Energy and Environmental Protection, and (2) the commissioner to adopt regulations concerning water and energy efficiency standards for such data centers.

SB1484

Introduced March 6, 2025, SB1484 provides that an employer may engage in electronic monitoring only to ensure the quality of goods and services, conduct periodic assessments of employee performance, protect the health, safety and welfare of employees, ensure compliance with the law, or administer wages and benefits. Electronic monitoring of employees must be done in the least invasive manner possible to best protect employee privacy. An employer is prohibited from collecting information on a protected characteristic.

Any employer intending to use electronic monitoring must give employees prior written notice detailing the type of monitoring, the intended use of the information collected, how it will be stored, and the employee’s rights.

There is no private right of action, and the penalty for a first violation is $500. If passed, the bill would become effective October 1, 2025.

Failed

HB5450

Introduced March 7, 2024, HB 5450 would have prohibited, within the 90-day period preceding an election or primary, the distribution of certain deceptive synthetic media created by AI. “Deceptive synthetic media” constitutes “any image, audio or video of an individual, and any representation of such individual’s appearance, speech or conduct that is substantially derived from any image, audio or video” that (1) “a reasonable person” would attribute to a real person and (2) was created by AI or by other means.

Delaware

Enacted

Delaware Personal Data Privacy Act

The Delaware Personal Data Privacy Act, which became effective on January 1, 2025, provides consumers the right to opt out of profiling if such profiling is in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer. Controllers must also perform data protection assessments when data processing presents a “heightened risk of harm,” including where a controller processes personal data for purposes of profiling that presents a reasonably foreseeable risk of any of the following: (a) unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (b) financial, physical, or reputational injury to consumers; (c) a physical or other intrusion upon the solitude or seclusion, or private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or (d) other substantial injury to consumers.

District of Columbia

Failed

B114

Introduced on February 2, 2023, B114, the Stop Discrimination by Algorithms Act of 2023 (SDAA), would have prohibited both for-profit and nonprofit organizations from using algorithms that make decisions based on protected personal traits. The bill would have made it unlawful for a DC business to make a decision stemming from an algorithm based on a broad range of personal characteristics, including actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability, in a manner that makes “important life opportunities” unavailable to an individual or class of individuals. Any covered entity or service provider who violated the act would have been liable for a civil penalty of up to $10,000 per violation.

B25-0832

Introduced on June 5, 2024, Bill 25-0832 would have prohibited all candidates, political action committees, political committees, and other entities involved in political advertising from distributing artificial media within 90 days of an election unless the media conforms to certain disclosure requirements. It would have permitted injunctive relief through the Superior Court of the District of Columbia and the issuance of civil fines by the Campaign Finance Board for any violations.

Florida

Enacted

HB919

Enacted April 29, 2024, HB 919 requires certain political advertisements, electioneering communications, or other political content created with generative AI to include a disclaimer. Advertisements falling under this bill include depictions of “a real person performing an action that did not actually occur” and content that “was created with intent to injure a candidate or to deceive regarding a ballot issue.” These advertisements must carry the following disclaimer: “Created in whole or in part with the use of generative artificial intelligence (AI).” The disclaimer must be printed clearly, be readable, and occupy at least 4 percent of the communication, depending on the type of media. Failure to comply can result in civil and criminal penalties. The law took effect July 1, 2024.

Failed

SB850

Introduced on January 19, 2024, SB 850, the Use of Artificial Intelligence in Political Advertising, would have required political campaigns to disclose, through a disclaimer, the use of AI in any “images, video, audio, text, and other digital content” used in ads. The bill sought to address the rising concern of deceptive campaign advertising (deepfakes) by mandating disclaimers on political ads that contain certain content generated through artificial intelligence. Generative artificial intelligence is defined as a “machine based system that can for a given set of human defined objectives emulate the structure and characteristics of input data in order to generate derived synthetic content.” Violators could have faced civil penalties, and anyone suspecting a violation could have filed a complaint with the Florida Elections Commission. The bill would have applied to any person or entity releasing a political advertisement, electioneering communication, or other miscellaneous advertisement, and would have taken effect July 1, 2024, if enacted.

HB369

Introduced on February 4, 2025, HB 369 focuses on the provenance of digital content, and defines “provenance data” as information recording the origin and history of modifications to digital content. “Provenance data” includes information identifying whether some or all of the content has been generated through AI and, if so, the name of the AI tool that was used. Under this bill, providers of AI tools would have to make (1) application tools (a “tool or service that enables the user to apply provenance data, either directly or through the use of third-party technology, to any data that has been modified to include synthetic content”) and (2) free provenance readers (a “tool or service that allows users to identify the provenance data of visual or audio digital content”) available to the public. Social media platforms would also have to retain and make provenance data available for visual or audio content posted on their platforms. Devices that record visual or audio content would have to allow the inclusion of provenance data, and manufacturers would have to ensure that this data can be read by third-party applications. Violations would constitute unfair or deceptive acts or practices.

The bill would have been effective on July 1, 2025, if passed. As of May 3, 2025, the bill has been indefinitely postponed and withdrawn from consideration.

HB1459

HB 1459, introduced January 7, 2024, would have required business entities that produce AI and make it available to the public to publish safety and transparency standards for AI-generated content and videos. The bill would then have required disclosure of certain AI-generated content to better inform consumers that they are using AI. More specifically, the bill would also have subjected political ads to certain requirements.

Enacted

HB203

Signed into law on May 2, 2023, and effective as of July 1, 2023, HB 203 permits an optometrist or ophthalmologist licensed in the state (a “prescriber”) to use an “assessment mechanism” to conduct an eye assessment or generate a prescription for contact lenses or spectacles, subject to the conditions below. An “assessment mechanism” means automated or virtual equipment, an application, or technology designed to be used on a telephone, a computer, or an internet-accessible device that may be used either in person or via telemedicine to conduct an eye assessment, and includes artificial intelligence devices and any equipment, electronic or nonelectronic, used to conduct an eye assessment. An assessment mechanism can be used; provided, however, that:

  • The data obtained from the assessment mechanism is not the sole basis for issuing the prescription.
  • The assessment mechanism alone is not used to generate an initial prescription or the first renewal of the initial prescription.
  • The assessment mechanism is only used where the patient has had a traditional eye examination in the past two years.

Proposed

HB478

Introduced on February 18, 2025, GA HB 478 would amend Georgia’s Fair Business Practices Act to require clear disclosure of AI-generated content used in commerce and trade. HB 478 would be effective on July 1, 2025, if enacted.

Under HB 478, disclosures for AI-generated content involving visual media must be (1) in writing and (2) clearly readable. “Clearly readable” means that the text must be at least 30% of the vertical picture height, be visible for at least 30% of the media’s length for moving images and video, and appear with a reasonable degree of color contrast against the background.

For audio media, the disclosures must be announced at the same volume and rate of speaking, and in each spoken language, used in the content.

Each video recording, audio recording, or image disseminated without such disclosures will constitute a separate violation under the act.
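
For illustration only, the following minimal sketch (not drawn from the bill text) shows how the “clearly readable” thresholds summarized above might be checked programmatically; the function name and parameters are our own assumptions.

    def disclosure_clearly_readable(text_height_px, picture_height_px,
                                    visible_seconds, media_length_seconds,
                                    has_reasonable_contrast):
        # Text must be at least 30% of the vertical picture height.
        height_ok = text_height_px >= 0.30 * picture_height_px
        # For moving images and video, the disclosure must be visible
        # for at least 30% of the media's length.
        duration_ok = visible_seconds >= 0.30 * media_length_seconds
        # "Reasonable degree of color contrast" is treated here as a
        # qualitative judgment supplied by the caller.
        return height_ok and duration_ok and has_reasonable_contrast

    # Example: a 10-second, 1080-pixel-tall video would need disclosure
    # text at least 324 pixels tall, shown for at least 3 seconds.
    print(disclosure_clearly_readable(324, 1080, 3.0, 10.0, True))  # True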

HB566

GA HB 566, introduced on February 21, 2025 as the “NO FAKES Act of 2025,” would protect voice and visual likeness in digital replicas and prevent unauthorized computer-generated representations. The NO FAKES Act would be effective 180 days after being enacted.

Under the NO FAKES Act, individuals have the right to authorize the use of his or her voice or visual likeness in a digital replica. A “digital replica” is a computer-generated, highly realistic electronic representation of an individual’s voice or likeness identifiable in sound recordings, images, or audiovisual works. It includes instances where the individual did not perform or appear, or where their performance has been materially altered. It excludes authorized electronic reproductions, sampling, remixing, mastering, or digital remastering.

The right to authorize one’s voice or visual likeness under the NO FAKES Act is a property right that extends beyond the life of the individual and is not assignable during the individual’s life. However, the right is licensable for no more than 10 years after the individual’s death, unless the licensee can show active and authorized public use of the voice or visual likeness during the 2-year period preceding the expiration of the 10-year period, in which case the license may be renewed for additional 5-year periods.

Violators of this act will be liable if they have actual knowledge that the applicable material is a digital replica, and the digital replica was not authorized by the applicable rightholder.

The NO FAKES Act allows exceptions where digital replicas are used in (1) news, public affairs, or sports broadcasts, (2) documentaries or historical or biographical works, including some degree of fictionalization, unless they falsely appear as authentic works in which the individual participated, or they are embodied in a musical sound recording for audiovisual works and such digital replicas are not protected by the First Amendment, (3) commentary, criticism, scholarship, satire, or parody, or (4) advertisements or commercials for the foregoing purposes where the digital replica is relevant to the subject of the work. Additionally, it will not be a violation if use of the digital replica is fleeting or negligible.

A person will not be secondarily liable for violating the act by manufacturing, importing, offering, providing, or distributing a product or service unless it is primarily designed to create unauthorized digital replicas, has limited commercial use other than producing such replicas, or is marketed or promoted with the knowledge that it will be used to produce unauthorized digital replicas. Additionally, an online service that has an objectively reasonable belief that material claimed to be an unauthorized digital replica does not qualify as a digital replica will not be liable for damages over $1,000,000, regardless of whether the material is ultimately determined to be an unauthorized digital replica.

An “online service” under the act includes (1) any public website, online application, mobile application, or virtual reality forum, (2) a digital music provider, and (3) a social media service or network, provided that the term does not include a service by wire or radio.

The statute of limitations for bringing a claim under this act is 3 years after the date on which the party bringing suit discovered, or should have discovered, the alleged violation.

Individual violators will be liable in an amount equal to the greater of $5,000 per work embodying the digital replica or any actual damages suffered by the injured party, plus profits from the unauthorized use. Online service entities will be liable for the greater of $5,000 per violation or any actual damages plus profits. All other entities will face the greater of $25,000 per work or any actual damages plus profits.
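
To make the damages arithmetic concrete, here is a purely illustrative sketch of the tiers summarized above; the tier labels and function are our own assumptions, not language from the bill.

    # Per-work (or, for online services, per-violation) statutory amounts,
    # per the summary above.
    STATUTORY_AMOUNTS = {
        "individual": 5000,
        "online_service": 5000,
        "other_entity": 25000,
    }

    def nofakes_liability(violator_type, count, actual_damages, profits):
        # Liability is the greater of the statutory amount or actual
        # damages plus profits from the unauthorized use.
        statutory = STATUTORY_AMOUNTS[violator_type] * count
        return max(statutory, actual_damages + profits)

    # Example: an individual who distributed 3 infringing works, causing
    # $8,000 in actual damages and earning $4,000 in profits, would owe
    # max($15,000, $12,000) = $15,000.
    print(nofakes_liability("individual", 3, 8000, 4000))  # 15000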

HB679

Georgia HB 679, introduced on February 27, 2025, would prohibit agreements to fix rental prices. Additionally, it would prohibit landlords from setting rental prices based on a price-fixing function. A “price-fixing function” means (1) collecting historical or contemporaneous data regarding rental prices, supply levels, or rental agreement termination and renewal dates from multiple landlords in the same market, (2) analyzing or processing that data through the use of a system, software or process, including a process that uses machine learning or other artificial intelligence techniques, and (3) recommending rental prices, rental agreement renewal terms, or ideal occupancy levels to a landlord.

Violators could be guilty of a felony and face up to 5 years in prison and/or fines between $1,000 and $5,000.

HB 679 would be effective upon enactment.

HB715

Introduced on March 3, 2025, GA HB 715 would amend Georgia’s fair housing laws to regulate the use of artificial intelligence and automated decision tools in housing decisions to prevent discriminatory housing practices. If enacted, no person would be permitted to use AI or automated decision tools to make determinations regarding the sale, rental, or financing of dwellings or in the provision of brokerage services or facilities for the sale or rental of a dwelling without (1) human review of such decisions and (2) disclosing to any affected individuals that such tools were used.

The state Attorney General may impose administrative penalties of up to $10,000 against violators of this section.

Failed

HB887

Introduced on January 1, 2024, HB 887 would have prohibited the use of artificial intelligence in making certain decisions regarding insurance coverage, health care, and public assistance. In particular, the bill would have prohibited health care activities from being based “solely on results derived from the use or application of artificial intelligence or utilizing decision tools.” The bill would further have required the Georgia Composite Medical Board to review, and override, any decision resulting from AI, and to promulgate regulations on review activities. The bill advanced a similar approach regarding AI and automated decision-making tools in insurance coverage and public assistance.

HB890

Introduced on January 9, 2024, HB 890 would have prohibited discrimination based on age, race, color, sex, sexual orientation, gender, gender expression, national or ethnic origin, religion, creed, familial status, marital status, disability or handicap, or genetic information, including discrimination resulting from the use of or reliance upon artificial intelligence or automated decision tools.

HB986

GA HB 986, introduced January 22, 2024, would have prohibited the publication of “materially deceptive media” (AI-generated content that appears authentic) within 90 days of an election with the intent to influence the election outcome or the administration of the election. It also would have required specific disclosures for the use of AI-generated content in campaign advertisements.

Proposed

HB465

Introduced on January 21, 2025, HI HB 465 would prohibit retailers from using dynamic pricing (including AI-enabled pricing adjustments) for food items sold through or qualifying for federal assistance programs. The act would be effective upon enactment.

Violators would be subject to civil fines up to $5,000 per item, per day. Violators may also be subject to administrative fines of up to $500 for the first offense and up to $1,000 for subsequent offenses.

HB639

Introduced on January 21, 2025, HI HB 639 would require corporations, organizations, or individuals engaging in commercial transactions or trade practices to clearly and conspicuously notify consumers when the consumer is interacting with an artificial intelligence chatbot or other technology capable of mimicking human behaviors. If enacted, HB 639 would take effect on July 1, 3000.

Any corporation, organization, developer, or individual found to be in violation may face a civil penalty of no more than $5,000,000.

Plaintiffs may seek injunctive relief and be awarded no less than $1,000 or threefold the damages sustained by the plaintiff, whichever is greater.

The Attorney General and the Director of the Office of Consumer Protection may also seek injunctive relief.

HB831

HI HB 831, introduced on January 23, 2025, would prohibit the use of algorithmic price-setting in Hawaii’s rental market through a coordinating function.

A “coordinating function” means (1) collecting historical or contemporaneous data regarding prices, supply levels, or rental termination and renewal dates of multiple rental property owners, (2) analyzing or processing such data through use of a system, software, or process that uses computation including by using the data to train an algorithm, and (3) recommending rental prices, renewal terms, or ideal occupancy levels to a rental property owner.

Individual violators of this section may face a fine of up to $100,000 and/or imprisonment for up to 3 years. Entities may face fines up to $1,000,000.

The act would be effective upon enactment.

SB59

SB 59, introduced January 16, 2025, would prohibit users of algorithmic decision-making from making “algorithmic eligibility determinations” in a discriminatory manner. “Algorithmic eligibility determination” is defined as “a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual's eligibility for, or opportunity to access, important life opportunities.” “Important life opportunities” is defined as “access to, approval for, or offer of credit, insurance, education, employment, housing, or place of public accommodation as defined in section 489-2.” Covered entities are prohibited from making “an algorithmic eligibility determination or an algorithmic information availability determination on the basis of an individual's or class of individuals' actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals.”

Covered entities (businesses holding the data of more than 25,000 state residents or with $15 million in annual revenue for the preceding three years) must send corresponding notices to individuals whose personal information is used and must submit annual reports to the state Attorney General documenting aspects of their algorithmic decision-making processes, such as data sources, methodologies, and potential risks.

This bill would allow civil actions, with penalties up to $10,000 per violation.

SB640

SB 640, introduced January 17, 2025, would require any corporation, organization, or individual engaging in business of any kind and using an AI chatbot or other similar technology in a manner that may mislead or deceive a reasonable person into believing they are engaging with a human to first disclose to the consumer that the consumer is interacting with a chatbot. The disclosure must be clear and conspicuous.

This bill would authorize private rights of action and statutory penalties.

Failed

HB1607

Introduced January 17, 2024, HB 1607 (Senate version SB 2524) would have prohibited a covered entity, such as an individual, firm, corporation, partnership, or other commercial entity, from making an “algorithmic eligibility determination” or an “algorithmic information availability determination” on the basis of class, race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, wealth, or disability in a discriminatory manner. An “algorithmic eligibility determination” is an AI-generated determination, in whole or in part, regarding a person’s eligibility for, or opportunity to access, important life opportunities. An “algorithmic information availability determination” is an AI-generated determination about a person’s ability to receive advertising, marketing, solicitations, or other offers for an important life opportunity. Failure to comply would have constituted an unlawful discriminatory practice.

HB1734

HB 1734, introduced January 18, 2024, would have required any AI-generated political advertisement containing an “image, video, footage, or audio recording” to include a “clear and conspicuous statement” disclosing the use of AI in creating the content. Depending on the medium, the disclosure would have had to be readable, follow specified procedures, and be intelligible.

SB974

Introduced on January 20, 2023, SB974, the Hawaii Consumer Data Protection Act, would establish a framework to regulate controllers' and processors' access to personal consumer data and would introduce penalties, as well as a new consumer privacy special fund.

The bill also provides consumers the option to opt out of the processing of their personal data for the purposes of “profiling in furtherance of decisions made by the controller that results in the provision or denial by the controller of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, health care services, or access to basic necessities, including food and water.” “Profiling” is defined as any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

The bill further requires covered entities to conduct a data protection assessment when they process personal data for purposes of profiling and the profiling presents “a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical intrusion or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers[.]” The law goes into effect July 1, 2050, as currently drafted. The bill stalled in 2023 but was picked back up and carried over to the 2024 regular legislative session.

SB1110

Introduced on January 20, 2023, SB1110, an alternate version of the Hawaii Consumer Data Protection Act, would impose obligations with respect to “profiling” materially similar to those of SB974. The bill stalled in 2023 but was picked back up and carried over to the 2024 regular legislative session.

SB2524

SB 2524, introduced January 19, 2024, would have prevented a covered entity, including an individual, firm, corporation, legal entity, or other commercial entity, from making an algorithmic eligibility determination or an algorithmic information availability determination on the basis of class, race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, wealth, or disability. “Algorithmic eligibility determination” is a determination about a person’s eligibility for important life opportunities based in whole or part on an algorithmic process using AI, machine learning, or similar technologies. “Algorithmic information availability determination” is an AI-generated determination of a person’s receipt of advertising, marketing, solicitations, or other information about important life opportunities. A violation shall be deemed an unlawful discriminatory practice.

SB2572

Introduced January 19, 2024, SB 2572 (Assembly version A2176) would have prohibited a person from deploying AI-generated products in Hawaii without submitting proof of the product’s safety to the office regulating AI. Violations would have been subject to a monetary fine for each offense.

Proposed

H127

H127 was introduced on February 4, 2025 and aims to enhance transparency and consumer protection in AI-driven interactions. The bill establishes disclosure requirements for artificial intelligence communications. It prohibits businesses from using AI chatbots, avatars, or similar technologies to engage in textual or aural conversations without providing a clear and conspicuous notice that the consumer is interacting with AI. The bill applies in situations where a consumer might reasonably believe they are communicating with a human.

The bill includes a private right of action for the greater of actual damages or $1,000.

H203

Introduced February 10, 2025, H203 would amend and add to existing law to address monopsonies and to establish provisions prohibiting pricing algorithms, making it unlawful to monopolize, attempt to monopolize, or conspire to monopolize any line of Idaho commerce.

Private right of action: any person may bring an action for injunctive relief and/or damages. If the violation is found to be a per se or intentional violation, recovery may be increased to an amount not exceeding three times the damages sustained.

Effective date if passed: July 01, 2025.

SB1067

SB 1067 was introduced on February 7, 2025. The bill prohibits any governmental entity in Idaho, including state agencies and political subdivisions, from enacting or enforcing laws that constrain the development, training, deployment, or consumer use of artificial intelligence. It also bars regulation of AI systems’ underlying algorithms or decision-making processes. AI technologies are classified as general-purpose technology for regulatory purposes, and the bill frames AI systems as a form of personal expression protected under free speech. If enacted, the bill would take effect on July 1, 2025.

Enacted

Illinois AI Video Interview Act

In 2019, Illinois became the first state to enact restrictions on the use of AI in hiring. The Illinois AI Video Interview Act was amended in 2021 and went into effect in 2022, and now requires employers using AI-enabled assessments to:

  • Notify applicants of AI use;
  • Explain how the AI works and the “general types of characteristics” it uses to evaluate applicants;
  • Obtain their consent;
  • Share any applicant videos only with service providers engaged in evaluating the applicant;
  • Upon an applicant’s request, destroy all copies of the applicant’s videos and instruct service providers to do so as well; and
  • Report annually, after use of AI, a demographic breakdown of the applicants offered an interview, those not offered one, and those hired.

HB3773

Introduced February 17, 2023, and signed into law August 12, 2024, HB 3773 amends the Human Rights Act to provide that an employer using predictive data analytics in its employment decisions may not consider an applicant’s protected class information, or a ZIP code used as a proxy for protected classes, in making certain employment-related decisions. Namely, it shall be a civil rights violation (1) for an employer to use artificial intelligence to make decisions with respect to recruitment, hiring, promotion, renewal of employment, training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, where the artificial intelligence has the effect of subjecting employees to discrimination on the basis of protected classes identified under the Article, or to use ZIP codes as a proxy for protected classes; or (2) for an employer to fail to provide notice to an employee that the employer is using artificial intelligence.

The law defines “artificial intelligence” to mean a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments,” and expressly includes generative artificial intelligence. “Generative artificial intelligence” is defined to mean an “automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content,” including text, images, multimedia, and other content that would otherwise be produced by human means.

The Department of Human Rights is tasked with adopting implementing rules, including those relating to notice. The new amendments to the Human Rights Act will be codified at 775 ILCS 5/2-101 and 775 ILCS 5/2-102. The law goes into force on January 1, 2026.

Proposed

HB35

Introduced January 09, 2025, HB 35 would create the Artificial Intelligence Systems Use in Health Insurance Act. The bill provides that the Department of Insurance's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers, and that any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive models and the outcomes from their use. Such an insurer shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of insurance plans or benefits that results solely from any AI system or predictive model.

Enforced by the Department of Insurance.

Effective date: none stated.

HB1594

Introduced January 22, 2025, HB 1594 would amend the Illinois Human Rights Act to provide that it is a civil rights violation for an employer, employment agency, or labor organization to take certain employment-related actions on the basis of an individual's weight and size. Notice must be given when AI is used in making employment decisions.

Enforced by the Department of Human Rights (under 775 ILCS 5/1-103), which may issue an employer a 30-day notice to correct a violation; if the violation is not corrected, a charge of a civil rights violation may be brought by the Attorney General.

Effective date: none stated.

HB3021

Introduced February 06, 2025, HB 3021 would amend the Consumer Fraud and Deceptive Business Practice Act to provide that it is an unlawful practice for any person to engage in a commercial transaction or trade practice with a consumer in which: (1) the consumer is communicating or otherwise interacting with a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation; (2) the communication may mislead or deceive a reasonable consumer to believe that the consumer was communicating with a human representative; and (3) the consumers are not notified in a clear and conspicuous manner that they are communicating with an artificial intelligence system and not a human representative.

Under the Illinois Consumer Fraud and Deceptive Business Practices Act (815 ILCS 505/1), the Attorney General or State’s Attorney may maintain an action for injunctive relief and seek civil penalties not to exceed $50,000.

Private Right of Action: Any person may file a civil action only if the Attorney General or State’s Attorney fails to bring an enforcement action, and may maintain an action for injunctive relief, for compensatory damages to recover prohibited fees, or for additional relief to deter, prevent, or compensate for the violation, of up to 3 times the amount of the prohibited fees or a minimum of $1,000 in punitive damages.

Effective date: January 1, 2026.

HB3041

Introduced February 06, 2025, HB 3041 would create the Illinois Data Privacy and Protection Act that provides that a covered entity (any entity or any person, other than an individual acting in a non-commercial context, that alone or jointly with others determines the purposes and means of collecting, processing, or transferring covered data) may not collect, process, or transfer covered data unless the collection, processing or transfer is limited to what is reasonably necessary and proportionate; provides that a service provider shall establish, implement, and maintain reasonable policies, practices, and procedures concerning the collection and processing of covered data. AI is included under the definition of “Covered Algorithm.”

The Attorney General or State’s Attorney may bring a civil action to enjoin a violating practice, enforce compliance, obtain damages, and/or civil penalties and restrictions.

The bill creates a private right of action such that any person subject to a violation of the Act may bring a civil action in which the court may award an amount equal to the sum of any compensatory, liquidated, or punitive damages and/or injunctive or declaratory relief. Small businesses (those whose gross income did not exceed $41,000,000 for the previous 3 years) are excluded from the private right of action.

Effective date: 180 days after becoming law.

HB3506

Introduced February 07, 2025, HB 3506 would create the Artificial Intelligence Safety and Security Protocol Act, which provides that a developer shall produce, implement, follow, and conspicuously publish a safety and security protocol that includes specified information. No less than every 90 days, a developer must produce and publicly publish a risk assessment report assessing whether the developer has complied with the protocol. The bill also sets forth provisions on the redaction of sensitive information and whistleblower protections.

Provides for civil penalties of up to $1,000,000, and injunctive or declaratory relief, for actions brought by the Attorney General. No private right of action.

Effective date: none stated.

HB3838

Introduced February 07, 2025, HB 3838 would amend the Ticket Sale and Resale Act to provide that a ticket seller, ticket reseller, and ticket broker shall display the full price of a ticket, including all assessed fees, when the price is first shown to a purchaser, and shall not increase that price during the transaction. The use of dynamic pricing in the course of selling a ticket would be a violation of the provision. “Dynamic pricing” is defined to include adjusting prices using AI-enabled technologies.

Enforcement mechanism not stated.

Effective date: none stated.

SB1425

Introduced January 31, 2025, SB 1425 would create the Artificial Intelligence Systems Use in Health Insurance Act to provide that an insurer authorized to do business in Illinois shall not issue an adverse consumer outcome, with regard to the denial, reduction, or termination of insurance plans or benefits, that results solely from the use or application of any AI system or predictive model.

Enforced by the Illinois Department of Insurance.

Effective date: none stated.

SB1792

Introduced February 06, 2025, SB 1792 would amend the Consumer Fraud and Deceptive Business Practices Act to provide that the owner, licensee, or operator of a generative artificial intelligence system shall conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative AI system may be inaccurate or inappropriate.

Under the Illinois Consumer Fraud and Deceptive Business Practices Act (815 ILCS 505/1), the Attorney General or State’s Attorney may maintain an action for injunctive relief and seek civil penalties not to exceed $50,000.

Private Right of Action: Any person may file a civil action only if the Attorney General or State’s Attorney fails to bring an enforcement action, and may maintain an action for injunctive relief, for compensatory damages to recover prohibited fees, or for additional relief to deter, prevent, or compensate for the violation, of up to 3 times the amount of the prohibited fees or a minimum of $1,000 in punitive damages.

Effective date: none stated.

SB1929

Introduced February 06, 2025, SB 1929 would create the Provenance Data Requirement Act, which provides that a generative artificial intelligence tool provider shall apply provenance data, either directly or through the use of third-party technology, to wholly generated synthetic content produced by the provider's generative artificial intelligence tool. The bill sets forth additional requirements for generative artificial intelligence tool providers, large online platforms, and manufacturers of capture devices.

Enforcement mechanism not stated.

Effective date: none stated.

SB2203

Introduced February 07, 2025, SB 2203 would create the Preventing Algorithmic Discrimination Act, which provides that a deployer of an automated decision tool shall perform an annual impact assessment, including specified information, for any automated decision tool the deployer uses or designs, codes, or produces. A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or will be a controlling factor in making, the consequential decision, and provide specified information.

Violations result in an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General.

Effective date: January 01, 2027.

SB2259

Introduced February 07, 2025, SB 2259 would amend the Medical Practice Act of 1987 to provide that a health facility, clinic, physician's office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that the communications meet certain criteria, including the display of a disclaimer that indicates to the patient that the communication was generated by generative AI.

Communications generated by generative artificial intelligence and read and reviewed by a human licensed or certified health care provider are exempted.

A violation of the amendatory provisions by a licensed health facility or a licensed clinic is subject to penalties as implemented by the Department of Financial and Professional Regulation by rule. A violation of the amendatory provisions by a physician is subject to penalties as determined by the Illinois State Medical Board.

Effective date: None stated.

SB2398

Introduced February 07, 2025, SB 2398 would amend the Sports Wagering Act to prohibit a sports wagering licensee from using artificial intelligence to: (1) track the sports wagers of an individual; (2) create an offer or promotion targeting a specific individual; or (3) create a gambling product.

Enforced by the Illinois Gaming Board under the Sports Wagering Act (230 ILCS 45/25-10).

Effective date: None stated.

Failed

HB1002

Introduced December 19, 2022, HB 1002 would amend the University of Illinois Hospital Act and the Hospital Licensing Act to require that, before using any diagnostic algorithm to diagnose a patient, a hospital must first confirm that the diagnostic algorithm has been certified by the Department of Public Health and the Department of Innovation and Technology, has been shown to achieve diagnostic results at least as accurate as other diagnostic means, and is not the only method of diagnosis available to a patient.

HB3880

Introduced February 17, 2023, HB 3880 would create the Children’s Privacy Protection and Parental Empowerment Act and require that a business providing an online service to children not profile a child by default, unless the profiling is necessary to provide the online service, and then only with respect to the aspect of the online service with which the child is actively and knowingly engaged and where the business can demonstrate a compelling reason that profiling is in the best interest of children. Profiling is defined as any form of automated processing of personal information that uses personal information to evaluate certain aspects relating to a natural person, including analyzing or predicting aspects concerning a natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

HB3943

Introduced February 17, 2023, HB 3943 would create the Social Media Content Moderation Act and require that a social media company post terms of service for each social media platform owned or operated by the company, in a manner reasonably designed to inform all users of the existence and contents of the terms of service, and submit a terms of service report to the Attorney General on a semi-annual basis that includes a detailed description of content moderation systems and information on content that was flagged and how it was flagged, including whether the content was flagged and actioned by AI software.

HB4869

Introduced February 06, 2024, HB 4869 would amend the Consumer Fraud and Deceptive Business Practices Act to provide that any person who, for any commercial purpose, makes, publishes, disseminates, airs, circulates, or places before the public an advertisement for goods or services that the person knows or should have known contains synthetic media, or causes such an advertisement to be placed before the public directly or indirectly, shall disclose in the advertisement that it contains synthetic media. If synthetic media used in any such advertisement depicts a person engaged in any action or expression in which the person did not actually engage, the advertisement must include a disclaimer that clearly and conspicuously states that the likeness featured in the advertisement is synthetic, does not depict an actual person, and is generated to create a human likeness. A violation of these provisions constitutes an unlawful practice within the meaning of the Act.

HB5116

Introduced February 08, 2024, HB 5116 would create the Automated Decision Tools Act; provides that, on or before a specified date, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses or designs, codes or produces that includes specified information; provides that a deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision.

HB5321

Introduced February 09, 2024, HB 5321 would amend the Consumer Fraud and Deceptive Business Practices Act; provides that each generative artificial intelligence system and artificial intelligence system that, using any means or facility of interstate or foreign commerce, produces image, video, audio or multimedia AI-generated content shall include on the AI-generated content a clear and conspicuous disclosure that satisfies specified criteria.

HB5322

Introduced February 09, 2024, HB 5322 would create the Commercial Algorithmic Impact Assessments Act; defines algorithmic discrimination, artificial intelligence, consequential decision, deployer, developer, and other terms; requires that, by a specified date and annually thereafter, a deployer of an automated decision tool complete and document an assessment that summarizes the nature and extent of that tool, how it is used, and an assessment of its risks, among other things.

HB5591

Introduced February 09, 2024, HB 5591 would create the Bolstering Online Transparency Act; provides that a person shall not use an automated online account, or bot, to communicate or interact with another person in this state online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election, unless the person makes a specified disclosure.

HB5649

Introduced February 09, 2024, HB 5649 would amend the Consumer Fraud and Deceptive Business Practices Act; provides that it is an unlawful practice within the meaning of the act for a licensed mental health professional to provide mental health services to a patient through the use of artificial intelligence without first obtaining informed consent from the patient for the use of artificial intelligence tools and disclosing the use of artificial intelligence tools to the patient before providing services through the use of artificial intelligence.

Enacted

SB5

Introduced on January 9, 2023, SB5 creates an omnibus consumer privacy law along the lines of the Virginia Consumer Data Protection Act and the Colorado Privacy Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making, and enables individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]” Controllers must also perform a data protection impact assessment for high-risk profiling activities. The bill was enrolled as Public Law 94 on May 1, 2023.

Proposed

HB1620

Introduced January 21, 2025, HB 1620 would amend Indiana Code Title 16 (Health) to require health care providers and insurers to disclose the use of AI technologies to a patient when AI is used to make decisions regarding health care or to generate any part of a communication regarding health care.

Effective date if passed: July 01, 2025

SB0480

Introduced February 03, 2025, SB 0480 would place a limitation on the use of artificial intelligence or other electronic means to determine medical necessity. A utilization review entity could conduct an initial review of a request for prior authorization and issue a decision using AI for not more than 2% of prescription drugs that are subject to a prior authorization requirement and have an annualized net price between $100 and $5,000. If the entity exceeds 2% during a calendar year, it may not require prior authorization during the next calendar year.

Effective date if passed: July 01, 2025.

Proposed

HF406

HF406 was introduced on February 13, 2025. The bill restricts how artificial intelligence (AI) can be used in smart devices and establishes a civil cause of action for violations. It requires device companies to present an agreement to users when initializing a smart device, including a clear statement of purpose that explains all types of private data the device or application is designed to access. Companies must update this agreement if data access practices change. The bill prohibits companies from using private data in ways not disclosed in the agreement. It also defines key terms such as “smart device,” “private data,” and “artificial intelligence.” If enacted, the bill would apply to smart devices sold in Iowa on or after July 1, 2025.

This bill includes a private right of action for actual and punitive damages. Punitive damages are not to exceed $250,000 per violation.

HSB294

HSB294 was introduced on March 4, 2025 and is an omnibus AI bill.

The bill addresses the use of artificial intelligence in election-related materials and interactions with AI systems. It requires that any published material generated using AI that expressly advocates for or against a candidate or ballot issue must include a clear disclosure stating: “this material was generated using artificial intelligence.” The bill also includes provisions for protections in interactions with AI systems and establishes penalties for violations.

Developers must use reasonable care to protect individuals from known or reasonably foreseeable risks of discrimination arising from the use of the developer’s high-risk AI system. A developer must provide a deployer with a summary of the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system. Businesses must implement a risk management system to ensure the use of a high-risk AI system does not result in algorithmic discrimination.

The bill further requires a deployer to disclose to an individual that they are interacting with an AI system where a reasonable person would not otherwise realize they are interacting with an AI system rather than a human.

There is no private right of action.

SF143

Introduced January 30, 2025, SF143 is an act relating to consumer data protection; AI systems are covered under the definition of “profiling.” Consumers must be notified of, or given the chance to opt out of, profiling in furtherance of a decision that produces legal or similarly significant effects concerning a consumer, meaning a decision made by a controller that affects a person's ability to access financial and lending services, housing, insurance, education, criminal justice services, employment opportunities, or health care services.

Enforcement mechanism is unclear.

This act would apply retroactively to January 01, 2025.

Failed

SB266

Introduced February 23, 2024, and engrossed March 25, 2024, SB 266 would prohibit an automated online account, or bot, from communicating or interacting with another person in Kentucky online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction. A violation of the act would be a deceptive act or practice in the conduct of trade or commerce; the bill prohibits a private right of action.

Proposed

HB114

Introduced on March 25, 2025, HB114 would regulate the use of artificial intelligence by healthcare providers. As amended by the House Committee on Health and Welfare, the bill prohibits healthcare providers from using AI to (1) make treatment or diagnosis decisions, (2) interact directly with patients regarding treatment or diagnosis, or (3) generate therapeutic recommendations or treatment plans, unless such use is reviewed and approved by a licensed healthcare professional. The bill would establish a civil penalty of up to $10,000 per violation and would direct the Louisiana Department of Health to investigate complaints and refer violations to the appropriate licensing board.

Failed

HB673

Introduced March 11, 2024, HB 673 would provide for consumer protection from unfair discrimination using AI with respect to insurance practices.

SB118

Introduced March 11, 2024, SB 118 would provide for the registration of artificial intelligence foundation models in the private sector.

Proposed

HP596 (reintroduced as an amendment on January 1, 2025)

Introduced March 02, 2023, HP 596, An Act To Protect Workers From Employer Surveillance, would require an employer to disclose, upon an employee's request, whether employee data interacts with an automated decision system. The bill was amended by H-173 and H-575.

SP807 (reintroduced as an amendment on January 4, 2025)

Originally introduced May 18, 2023 as LD 1973, SP807 would enact the Maine Consumer Privacy Act, aimed at protecting consumer data. Section 9604 would give consumers the option to opt out of (1) the use of their data for targeted advertising, (2) the sale of their personal data, and (3) the processing of their personal data for profiling in furtherance of solely automated decisions that produce legal or similarly significant effects. Consumers may also request access to their data, correct inaccuracies in their data, request a copy of their data, and request the deletion of their collected data. Section 9608 requires a controller to conduct a data protection assessment if it processes personal data for the purpose of profiling and the profiling presents a reasonably foreseeable risk to the consumer.

Enforcement of the bill would begin on July 1, 2025. Businesses that maintain the data of at least 100,000 consumers (beyond the data needed for payment transactions), and businesses that maintain the data of at least 25,000 consumers and derive 25% of their gross revenue from selling data, are in scope.
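
As a purely illustrative sketch of the scope thresholds just described (the function and parameter names are our own assumptions, not bill text):

    def in_scope(consumers_with_data, revenue_share_from_data_sales):
        # In scope: data on at least 100,000 consumers (beyond payment
        # transaction data), or data on at least 25,000 consumers where
        # 25% of gross revenue comes from selling data.
        return (consumers_with_data >= 100000 or
                (consumers_with_data >= 25000 and
                 revenue_share_from_data_sales >= 0.25))

    print(in_scope(120000, 0.00))  # True
    print(in_scope(30000, 0.30))   # True
    print(in_scope(30000, 0.10))   # False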

Failed

HP1270

Introduced on May 23, 2023, the Data Privacy and Protection Act, HP 1270, is a comprehensive bill aimed at protecting consumer data. The Act includes retention limits, use restrictions, and reporting requirements. Section 9615 specifically governs the use of algorithms: covered entities using covered algorithms (broadly defined to include machine learning, AI, and natural language processing tools) to collect, process, or transfer data “in a manner that poses a consequential risk of harm” must complete an impact assessment of the algorithm. The impact assessment must be submitted to the Attorney General’s office within 30 days of completion and must include a publicly available and easily accessible summary.

In addition to an impact assessment, the Act requires covered entities to create a design evaluation prior to deploying a covered algorithm. The design evaluation must include the design, structure, and inputs of the covered algorithm.

This bill includes a private right of action and allows for the recovery of punitive damages.

Enacted

HB820

Introduced on January 29, 2025, HB 820 requires certain carriers, pharmacy benefits managers, and private review agents to ensure that AI, algorithms, or other software tools are used in an equitable and non-discriminatory way when used for conducting utilization review. Effective October 1, 2025.

HB1202

Maryland law, HB 1202, prohibits an employer from using a facial recognition service for the purpose of creating a facial template during an applicant’s pre-employment interview, unless the applicant consents by signing a specified waiver.  This workplace AI law went into force on October 1, 2020.

Proposed

HB589

Introduced on January 23, 2025, HB 589 provides that a person who intentionally, knowingly, or negligently designs or creates AI software able to cause physical injury or death is strictly liable for damages and subject to a civil penalty if the AI software is used to cause personal injury. The bill also generally prohibits someone from intentionally, knowingly, or negligently creating AI software able to cause injury or death. Effective October 1, 2025.

HB740

Introduced on January 27, 2025, HB 740 requires persons who publish, distribute, or disseminate certain campaign materials that use or contain synthetic media (generated by AI) to include the following disclosure: “THIS IMAGE HAS BEEN ALTERED OR MODIFIED THROUGH THE USE OF COMPUTER PROGRAMS TO DISPLAY AN EVENT OR IMAGE THAT DID NOT OCCUR.” Effective October 1, 2025.

HB823

Introduced on January 29, 2025, HB 823 aims to ensure transparency in the data used to train generative AI systems by requiring developers to publish documentation about the training data. The bill applies to generative AI systems released on or after January 1, 2022. Under this bill, developers must post on their website documentation detailing the data used to train the system, including: 1) sources or owners of the data, 2) description of how the data furthers the AI system, 3) number of data points in static datasets, 4) types of labels used in the dataset, 5) whether the data are in the public domain or protected by copyright/trademark/patent rights, and more. Effective October 1, 2025.

HB1314

Introduced on February 7, 2025, HB 1314 prohibits certain insurers, nonprofit health service plans, and health organizations from using AI to automatically deny prior authorizations. It also prohibits healthcare providers from charging a fee to obtain a prior authorization from a carrier or managed care organization. Effective January 1, 2026.

HB1331

Introduced on February 7, 2025, HB 1331 requires a developer that sells certain AI systems to provide certain information and make disclosures about its AI use, requires a deployer to implement a certain risk management policy and take certain precautions to protect consumers from AI risks, and requires deployers to complete a certain impact assessment, among other requirements. Effective October 1, 2025.

HB1477

Introduced on February 7, 2025, HB 1477 establishes requirements for consumer reporting agencies that use algorithmic systems to assemble or evaluate consumer credit information for the purpose of providing consumer reports to third parties. It requires the Commissioner of Financial Regulation to establish certain assessment thresholds for algorithms, mandate regular training for reviewers, and implement a whistle-blower protection program.

Effective on October 1, 2025.

SB936

Introduced on February 5, 2025, SB 936 requires developers of high-risk AI systems to use reasonable care to protect consumers from known and reasonably foreseeable risks of certain algorithmic discrimination, prohibits a developer from providing a high-risk AI system to a deployer unless certain disclosures are provided, requires developers to conduct impact assessments and maintain a risk management policy, and requires disclosure to the consumer regarding the deployment of, and decisions made by, a high-risk AI system.

Effective October 1, 2025.

Failed

HB697

Introduced on January 24, 2025, HB 697 would have required a health insurance carrier to submit quarterly reports to the Maryland Insurance Commissioner on information related to the carrier’s use of AI or automated decision-making systems. The bill also would have changed the information related to adverse insurance decisions and grievances that carriers are generally required to report to the Commissioner. It would have been effective October 1, 2025.

HB1240

Introduced on February 7, 2025, HB 1240 would have prohibited health care providers and carriers from using AI designed only to reduce costs for a healthcare provider at the expense of reducing the quality of care, delaying care, or denying coverage for care. It also would have required providers using AI for healthcare decisions to annually publish certain key data about those decisions on the provider’s website and to undergo a third-party audit. It would have been effective October 1, 2025.

HB1255

Introduced March 11, 2024, HB 1255 would restrict an employer from using an automated employment decision tool to make certain employment decisions. The bill would require an employer, under certain circumstances, to notify an applicant for employment of the employer's use of an automated employment decision tool within a certain time period and generally relating to automated employment decision tools.

Proposed

H81

H.81, introduced on February 27, 2025, would establish the Massachusetts Artificial Intelligence Disclosure Act. It requires any generative AI system used to create or modify audio, video, text, or print content within the state to include a clear and conspicuous disclosure identifying the content as AI-generated. This disclosure must be appropriate to the medium and, where technically feasible, permanent or difficult to remove. The bill also mandates metadata that includes the identity of the AI system and the date and time of content creation. Violations are punishable by an initial fine of $500 and fines of $1,000 for subsequent offenses.

H83

Introduced on February 16, 2023, H.83 would create an omnibus consumer privacy law called the Massachusetts Data Privacy Protection Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for the use of automated decision-making technologies that would require a covered entity using such technologies (Covered Algorithms) to conduct an impact assessment and evaluate any training data used to develop the Covered Algorithm to reduce the risk of potential harms from the use of such technologies.

H90

H.90, introduced on February 27, 2025, establishes new transparency requirements for AI-generated content. It mandates that large online platforms retain any available provenance data for content posted on their platforms and make that data, or a clear indicator of its availability, accessible to users. The bill also requires generative AI providers to embed metadata in AI-generated content and offer public tools to read this provenance information. Additionally, it calls for capture devices like phones and cameras to include features enabling users to embed provenance data in their content.

H97

H.97, introduced on February 27, 2025, and titled An Act Protecting Consumers in Interactions with Artificial Intelligence Systems, establishes safeguards for consumers interacting with AI technologies, ensuring transparency, accountability, and ethical use of AI in consumer-facing applications. A developer of a high-risk artificial intelligence system must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system. Developers must give deployers a summary that includes the known and foreseeable risks of using the AI to make consequential decisions. Deployers must also develop a risk management strategy to protect users from algorithmic discrimination in consequential decisions made by AI in high-risk areas. The developer must also post easy-to-read consumer notices on its website about the use of AI.

This bill does not apply to deployers with fewer than 50 employees. There is no private right of action. The bill becomes effective 6 months after passage.

H94

H.94, introduced on February 27, 2025, is a bill titled the Act to Ensure Accountability and Transparency in Artificial Intelligence Systems. The bill prohibits algorithmic discrimination, with a particular emphasis on consequential decisions in high-risk areas like education, employment, finance, housing, healthcare, insurance, and legal/government services. It requires entities using high-risk AI to influence consequential decisions to disclose the system’s purpose, how it influences those decisions, and the involvement of any third-party entities. Companies using high-risk AI to make consequential decisions must disclose this on their websites. There is no private right of action.

H495

Introduced on February 27, 2025, H.495, the Act Reducing Emissions from Artificial Intelligence, prohibits covered entities from operating search engines that automatically return results using AI unless users have given affirmative consent. Individuals must be informed of the use of generative AI before the result is shown and must be shown the affirmative consent notice for each new search. The affirmative consent notice must be clear and easy to use, including for users with disabilities. The bill requires the Executive Office of Technology Services and Security to recognize centralized mechanisms for users to manage this consent. Any company that generated over $10 million in revenue in the past three years must produce an environmental impact report on its use of generative AI in search engine content. Failure to submit the report can result in a fine of $20,000 for each year the entity does not report.

There is no private right of action.

H1873

Introduced on February 16, 2023, H1873, An Act Preventing A Dystopian Work Environment, would require that employers provide employees and independent contractors (collectively, “workers”) with a particularized notice prior to the use of an Automated Decision System (ADS) and the right to request information, including, among other things, whether their data is being used as an input for the ADS and what ADS output is generated based on that data. “Automated Decision System (ADS)” or “algorithm” is defined as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes or assists an employment-related decision.” The bill further requires that employers review and adjust as appropriate any employment-related decisions or ADS outputs that were partially or solely based on inaccurate data, and inform the worker of the adjustment. Employers and vendors acting on behalf of an employer must maintain an updated list of all ADS currently in use and must submit this list to the department of labor on or before January 31 of each year. The bill also prohibits the use of ADSs in certain circumstances and requires the performance of algorithmic impact assessments.

HD4788

Introduced on January 11, 2024, HD4788, the Artificial Intelligence Disclosure Act, would require that any generative artificial intelligence system used to create audio, video, text, or print AI-generated content within Massachusetts include on or within such content a clear and conspicuous disclosure that meets the following criteria: (i) a clear and conspicuous notice, as appropriate for the medium of the content, that identifies the content as AI-generated content and that is, to the extent technically feasible, permanent or not easily removed by subsequent users; and (ii) metadata information that includes an identification of the content as AI-generated content, the identity of the system, tool, or platform used to create the content, and the date and time the content was created.

HD1861

Introduced Jan. 15, 2025, HD1861 would require all public-facing social media platforms, content-sharing platforms, messaging platforms, advertising networks, and standalone search engines that distribute content to users to provide users with the context (called “provenance data”) of an AI image. This provenance data includes information about the origins of the content and its history of modifications. Generative AI providers must also facilitate users’ ability to access provenance data by providing users adequate tools to read this data.

HD4053

Introduced Jan. 17, 2025, HD4053 (substantively the same as HD396) is designed to protect consumers from discrimination by high-risk AI systems. High-risk AI systems are defined as those that make consequential decisions without human review or interference. A consequential decision is one that affects education enrollment or an education opportunity, employment, lending, essential government services, health care services, housing, insurance, or legal services.

Section 2 of the bill would require a developer of a high-risk artificial intelligence system to use reasonable care to protect consumers from any known risks of algorithmic discrimination from a high-risk AI system. The bill further requires the developer to conduct a risk assessment addressing the nature and scope of the use of high-risk AI and the sensitivity and volume of the data processed, as well as an impact assessment. The documentation must include the type of data used, the foreseeable risk of discrimination, and the purpose of the AI system, among other things spelled out in Section 3.

The bill would be effective six months after passage and does not have a private right of action.

S46

S.46, introduced on February 27, 2025, addresses the use of artificial intelligence for utilization review and management in healthcare decision-making. It is broad, prohibiting AI from being used in a discriminatory manner, in contravention of state or local law, or to fully supplant human decision-making. The bill also limits the use of patient data to utilization management.

S264

S.264, introduced on February 27, 2025, would require commercial entities using chatbots to clearly and conspicuously disclose to users that they are interacting with a chatbot and not a human. The bill also establishes that any representations made by a chatbot carry the same legal weight as those made by a human representative of the business. Violations would be considered unfair or deceptive practices under Massachusetts consumer protection law (Chapter 93A).

SD745 and HD2281

Introduced on January 18 and 19, 2023, the Massachusetts Data Privacy Protection Act (MDPPA) was filed in both the Senate (SD 745) and the House (HD 2281). The bill is based on the federal American Data Privacy and Protection Act, with additional provisions relating to workplace surveillance. The MDPPA would require companies to conduct impact assessments if they use a “covered algorithm” in a way that poses a consequential risk of harm to individuals. “Covered algorithm” is defined as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including determining the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.”

SD1313

Introduced Jan. 16, 2025, SD1313 sets transparency requirements for covered platforms, defined as online, mobile, or internet applications or services that do business in the state and either maintain the data of at least 100,000 consumers (beyond the data needed for payment transactions) or maintain the data of at least 25,000 consumers and derive 25% of their gross revenue from selling data.

Before Jan. 1 each year, covered platforms must register with the office of social media transparency and accountability. Third-party auditors will then assess the platform’s algorithmic risk of harm to children, as established by an Advisory Council of mental health experts. Beginning Jan. 1, 2026, and continuing annually, platforms must submit a transparency report detailing the number of users reasonably believed to be minors, the amount of time spent on the platform, the features the platform uses to increase, sustain, or extend use of the platform, and descriptions of the personal data the platform collects, with justifications. By Jan. 1, 2027, the platform must report each instance of specified harms on its service.

There is no private right of action, and each violation carries a fine of up to $500,000.

SD1971 and HD3263

Introduced on January 20, 2023, in both the Senate (SD 1971, assigned SB227) and the House (HD 3263), the Massachusetts Information Privacy and Security Act (MIPSA) creates various rights for individuals regarding the processing of their personal information, including the right to a privacy notice at or before the point of collection of an individual's personal information, the right to opt out of the processing of an individual's personal information for the purposes of sale and targeted advertising, rights to access and transport, delete, and correct personal information, and the right to revoke consent. Additionally, large data holders are required to perform risk assessments where the processing is based in whole or in part on an algorithmic computational process. A “large data holder” is a controller that, in a calendar year: (1) has annual global gross revenues in excess of $1,000,000,000; and (2) determines the purposes and means of processing of the personal information of not less than 200,000 individuals, excluding personal information processed solely for the purpose of completing a payment-only credit, check, or cash transaction where no personal information is retained about the individual entering into the transaction.

SD2223

Introduced on Jan. 17, 2025, SD2223 (substantively the same as SD2592) categorizes as an unfair and deceptive trade practice a commercial transaction in which an individual interacts or communicates with a bot and reasonably could have believed they were engaging with a human. A bot in this bill is an automated online account, including an AI agent. Commercial entities can avoid liability by clearly and conspicuously disclosing that the entity is a computer, not a human.

S2539

Introduced on December 28, 2023, S.2539 would require the development of a comprehensive set of policies designed to bring cybersecurity and AI preparedness up to the latest standards and to keep the Massachusetts government up to date as technology continues to rapidly advance.

Failed

HB1974

Introduced on February 16, 2023, HB1974 would regulate the use of artificial intelligence (AI) in providing mental health services. In particular, the bill provides that the use of AI by any licensed mental health professional in the provision of mental health services must satisfy the following conditions: (1) pre-approval from the relevant professional licensing board; (2) any AI system used must be designed to prioritize safety and must be continuously monitored by the mental health professional to ensure its safety and effectiveness; (3) patients must be informed of the use of AI in their treatment and be afforded the option to receive treatment from a licensed mental health professional; and (4) patients must provide their informed consent to receiving mental health services through the use of AI. AI is defined as “any technology that can simulate human intelligence, including but not limited to, natural language processing, training language models, reinforcement learning from human feedback and machine learning systems.”

SB31

Introduced on February 16, 2023, SB31, An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT, would require any company operating a large-scale generative artificial intelligence model to adhere to certain operating standards such as reasonable security measures to protect the data of individuals used to train the model, informed consent from individuals before collecting, using, or disclosing their data, and performance of regular risk assessments.  A “large-scale generative artificial intelligence model” is defined to mean “a machine learning model with a capacity of at least one billion parameters that generates text or other forms of output, such as ChatGPT.” The bill further requires any company operating a large-scale generative artificial intelligence model to register with the Attorney General and provide certain enumerated information regarding the model.

Enacted

HB5141

Effective Feb. 13, 2024, HB 5141 regulates the use of AI in political advertising. The law requires political ads created using AI, including prerecorded phone messages created using AI, to include a clear and conspicuous disclaimer. The law includes specific requirements for the disclaimer depending on the media form. The fine is $250 for a first violation and $1,000 for each subsequent violation.

Proposed

SB659

Introduced Nov. 9, 2023, SB 659 would enact the Michigan Personal Data Privacy Act aimed at protecting consumer data. Section 13 gives consumers the option to opt out of 1) having their data used for targeted advertising, 2) the sale of personal data, and 3) personal data processing for use in profiling for automated decisions that produce legal or similarly significant effects. Consumers may also request access to their data, correct inaccuracies in their data, request a copy of their data, and request the deletion of their collected data.

Section 21 requires a controller to obtain consent before processing sensitive data and prohibits a controller from selling sensitive data. Controllers also cannot process the data of a minor for purposes of targeted advertising.

Section 29 requires a controller to conduct a data protection assessment if processing personal data for the purpose of profiling where the profiling presents a reasonably foreseeable risk to the consumer. Section 25 requires a public-facing privacy notice explaining consumers’ rights.

Enforcement of the bill would begin one year after enactment. Businesses that maintain the data of at least 100,000 consumers (beyond the data needed for payment transactions), and businesses with at least 25,000 consumers that derive any revenue from selling data, are in scope. Violations carry a penalty of $7,500 each, plus costs.

Enacted

HF2309

Introduced on March 1, 2023, HF2309 would create an omnibus consumer privacy law based on the Colorado Privacy Act and Connecticut Data Privacy Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data privacy and protection assessment for high-risk profiling activities.

SF2915

Originally introduced on March 15, 2023 as SF2915, the Minnesota Consumer Data Privacy Act was passed as Section 325O of HF 4757, an omnibus bill dealing primarily with cannabis. Section 325O.05 gives consumers the option to opt out of 1) having their data used for targeted advertising, 2) the sale of personal data, and 3) personal data processing for use in profiling for automated decisions that produce legal or similarly significant effects. Consumers may also request access to their data, correct inaccuracies in their data, request a copy of their data, and request the deletion of their collected data. Section 325O.08 requires a controller to conduct a data privacy and protection assessment if processing personal data for the purpose of profiling where the profiling presents a reasonably foreseeable risk to the consumer.

Enforcement of the bill will begin on July 31, 2025. Businesses that maintain the data of at least 100,000 consumers (beyond the data needed for payment transactions) and businesses with at least 25,000 consumers and get 25% of their gross revenue from selling data are in scope.

Proposed

HF2500

Introduced on March 17, 2025, HF2500 proposes an amendment to Minnesota Statutes 2024, section 62A.59, to prohibit the use of algorithms or artificial intelligence in reviewing prior authorization requests for health insurance. The bill aims to ensure that such decisions are made by qualified human reviewers rather than automated systems. If enacted, the provisions would become effective on January 1, 2026, and would apply to health plans offered, sold, issued, or renewed on or after that date.

SF3098

Introduced on March 27, 2025, SF3098 proposes a new consumer protection law under Minnesota Statutes, Chapter 325F.997. The bill prohibits the use of artificial intelligence to dynamically set product prices based on real-time factors such as market demand, competitor pricing, inventory levels, or customer behavior. The Minnesota Attorney General would be authorized to enforce this prohibition. There is no effective date stated.

SF1886

Introduced on February 27, 2025, SF1886 proposes a new consumer protection law under Minnesota Statutes, Chapter 325M.40. The bill would require businesses using artificial intelligence in customer interactions to disclose that the individual is communicating with AI. It also mandates that consumers be given the option to interact with a human instead of an AI system. Violation of this act is treated as an unfair and deceptive trade practice.

The bill provides a private right of action, and violations are penalized with fines of $1,000.

Proposed

SB2642

Introduced on Jan. 20, 2025, SB 2642 would regulate the use of AI in political advertisements. The law would require political ads created using AI, including prerecorded phone messages created using AI, to include a clear and conspicuous disclaimer. The law includes specific requirements for the disclaimer depending on the media form. The attorney general or an injured candidate for office may bring suit.

Proposed

HB673

Introduced on Dec. 23, 2024, HB 673 regulates the use of generative AI in political advertisements. Per the bill, any political advertisement for a candidate or ballot measure that was created using generative AI must clearly and conspicuously state that it was made by AI. This applies whether AI created the whole advertisement or just a part of it, regardless of which component of the ad the generative AI created. The bill would be effective August 28, 2025.

HB1462

HB1462 was introduced on February 25, 2025. This bill establishes the “AI Non-Sentience and Responsibility Act.” It assigns liability for harm caused by AI systems to the human developers, owners, or users responsible for their deployment or misuse and allows courts to pierce the corporate veil in cases where corporate structures are used to evade responsibility for AI-related harm. If enacted, the bill would apply on or after August 28, 2025.

SB509

Similar to HB 673, SB 509, which was introduced on Dec. 18, 2024, seeks to regulate the use of generative AI in political advertising and electioneering. This bill requires a disclaimer on the ad if generative AI was used to create it and the ad 1) appears to depict a real person doing something they did not do, 2) manipulates the voice or actions of a real candidate to depict them saying or doing something they did not, or 3) was created to injure a candidate or deceive voters. The bill provides language for the disclaimer and specific requirements for displaying it on the ad, depending on the medium. The bill would be effective August 28, 2025.

Enacted

SB384

Introduced on February 16, 2023, SB384, An Act Establishing the Consumer Data Privacy Act, would create an omnibus consumer privacy law to regulate, among other data uses, the collection and processing of personal information, and profiling and automated decision-making. Specifically, the bill creates certain transparency requirements around profiling and enables individuals to opt out of “profiling in furtherance of automated decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.

Failed

SB212

Introduced on Feb. 4, 2025, SB 212 sets requirements for critical infrastructure (defined by Montana Code 82-1-601 as an asset critical to US “security, national economic security, national public health or safety”) controlled by AI. All systems of critical infrastructure run in whole or in part by AI must have a shutoff mechanism the developer can deploy to regain human control of the system. To this end, the entity using the AI system must implement, annually review, and test a risk management program to ensure manual takeover is possible. If passed, the act would be effective on the date of passage.

SB452

SB 452 was introduced on February 24, 2025, to require manufacturers of publicly distributed online media to disclose their use of artificial intelligence (AI) systems and provide identifiable markers for AI-generated content. The bill excludes government entities from these obligations.

Enacted

LB504

Introduced Jan. 21, 2025, LB 504, the Nebraska Age-Appropriate Online Design Code Act, is designed to prevent compulsive usage of social media platforms by minors. The bill requires taking reasonable care in the use of personal data and the design of the platform, minimizing the harms of compulsive usage, severe psychological harm, emotional distress, intrusions on privacy, identity theft, discrimination, and physical or financial harm. With respect to AI, the bill requires platforms to prevent AI from using users’ personal data to communicate or interact with the user. Users must also be able to opt out of design features, defined in the bill as features designed to expand a user’s frequency or time spent using the app, or activity on the platform.

The bill prohibits the profiling of a minor on the app, targeting ads at minors, and using the personal data collected for any reason other than its initial collection purpose. The bill also has transparency requirements and requires covered entities to issue an annual public report describing, in part, the type of personal data collected, the reason for collecting it, and how the platform uses algorithms. A violation results in a fine of up to $50,000, and there is no private right of action.

Although the act takes effect on January 1, 2026, the Attorney General is restricted from initiating any action to recover a civil penalty under the act until July 1, 2026.

Proposed

LB642

Introduced Jan. 22, 2025, LB 642, the Artificial Intelligence Consumer Protection Act, is designed to protect consumers from discrimination by high-risk AI systems. High-risk AI systems are defined as those that make consequential decisions without human review or interference. A consequential decision is one that affects education enrollment or an education opportunity, employment, lending, essential government services, health care services, housing, insurance, legal services, or pardon, parole, probation, or release decisions.

Section 3 of the bill would require a developer of a high-risk artificial intelligence system to use reasonable care to protect consumers from any known risks of algorithmic discrimination from a high-risk AI system. The bill further requires the developer to provide certain documentation both for subsequent developers and for consumers. The documentation must include the type of data used, the foreseeable risk of discrimination, and the purpose of the AI system, among other things spelled out in Section 3.

The bill would be effective Feb. 1, 2026, and does not provide a private right of action.

Failed

LB1203

Introduced January 17, 2024, LB1203 would regulate the use of AI in political advertising. The bill would require all covered advertisements created in whole or in part by AI to display a clear and conspicuous disclaimer of the use of AI to create the ad. Specific requirements for the disclaimer vary depending on the medium and are articulated in the bill. There is no private right of action.

Proposed

AB73

Introduced on Nov. 21, 2024, AB73 would regulate the use of AI in political advertising. The bill requires political ads created using AI, including audio messages created using AI, to include a clear and conspicuous disclaimer. The bill includes specific requirements for the disclaimer depending on the media form. A copy of the ad must be furnished to the Nevada Secretary of State in accordance with other Nevada election advertising laws.

If enacted, the law would become effective Jan. 1, 2026, and does not provide a private right of action.

Failed

SB186

Introduced on February 3, 2025, SB186 would govern the use of artificial intelligence in healthcare. Any patient communication generated using AI must contain a disclaimer that the communication was generated using AI and provide the patient a way to contact a human.

Enacted

NHDPA

On January 1, 2025, the New Hampshire Data Privacy Act ("NHDPA") came into effect. The NHDPA is an omnibus consumer privacy law that also sets out rules for profiling and automated decision-making.  Specifically, the law enables individuals to opt-out of "profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer.” Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  Controllers must also perform a data protection assessment for high-risk profiling activities.


Enacted

S332

Initially introduced on January 11, 2022, S332 (the “Act”) creates an omnibus consumer privacy law along the lines of the Washington Privacy Act. Among other things, the Act requires companies to conduct data protection assessments of “processing that presents a heightened risk of harm to a consumer” before conducting such processing. Such “heightened risk” results from activities such as profiling. “Profiling” means any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements. Consumers are also afforded the right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.

The bill was signed into law on January 16, 2024, and took effect January 15, 2025.

Proposed

A3854

Introduced February 22, 2024, A3854, which is similar to A4030, would make it unlawful to sell, develop, deploy, use, or offer for sale an automated employment decision tool unless (1) a bias audit has assessed the tool within the year prior to its sale or offer for sale; (2) the tool includes, at no additional cost, this annual bias audit service; (3) the tool is developed, sold, deployed, used, or offered for sale with a notice stating the tool is subject to this bill; and (4) the tool’s developer has implemented the recommendations of the most recent bias audit conducted and has issued a press release stating so. “Employer” includes an “individual, partnership, association, corporation,” and other business entities. “Automated employment decision tool” is a “machine-based system that can, for a set of human-defined objectives provided by an employer or an individual acting on behalf of an employer, make predictions, recommendations, or decisions influencing recruitment, workforce, or employment decisions.” A “bias audit” would be an “impartial evaluation conducted by an independent auditor.”

A3911

S3015, introduced April 8, 2024 (Assembly version A3911), would require an employer located in New Jersey, including a person, firm, business, educational institution, nonprofit, corporation, LLC, or other entity, that asks applicants to record video interviews and uses AI analysis of the videos to notify the applicant that AI may be used to analyze their video, to provide the applicant with information before the interview as to how the AI works and evaluates applicants, and to obtain written consent before the interview to the application being evaluated by AI. If an applicant has not consented, the employer cannot use AI for analysis. Additionally, the bill would require an employer using AI analysis to determine applicant fitness to collect and report the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview, as well as of applicants who are offered a position or hired. This data must be reported annually to the Department of Labor and Workforce Development. Violation of this bill would result in a civil penalty of $500 for the first offense and $1,000 for any subsequent offense.

A3912

Introduced February 27, 2024, A3912 would expand the definition of “identity theft” to include impersonation or false depictions of a person generated entirely by or substantially manipulated by computer technology, or AI-generated speech, speech transcription, or text. To constitute criminal activity, a person must reasonably believe the AI-generated content accurately exhibits the activity of the person depicted, the content must have been produced without the person’s consent, and the exhibition must be “substantially likely” to create perceptible individual or societal harm. This act would take effect immediately.

A4030

A4030, introduced March 7, 2024, would prohibit the sale or offer for sale in New Jersey of an automated employment decision tool unless (1) a bias audit has been performed on the tool in the year prior to sale; (2) the sale includes, at no additional fee, the annual bias audit service; and (3) the tool is sold or offered with a notice stating it is subject to these provisions. “Automated employment decision tool” is “any system” governed by “statistical theory” or other methodologies that filters candidates for hire automatically in a way that establishes a preferred candidate or candidates. “Bias audit” is an “impartial evaluation” of the automated employment decision tool to assess its compliance with anti-discrimination laws. A violation of this bill would result in a civil penalty of not more than $500 for the first violation and up to $1,500 for subsequent violations.

A4909

Introduced on December 5, 2022, A4909 would regulate the “use of automated tools in hiring decisions to minimize discrimination in employment.” The bill imposes limitations on the sale of automated employment decision tools (AEDTs), including mandated bias audits, and requires that candidates be notified that an AEDT was used in connection with an application for employment within 30 days of the use of the tool.

A5164

Introduced on January 14, 2025, A5164 provides that “any news media or other entity disseminating news or purporting to disseminate news within the State may permit the use of artificial intelligence to assist its professionals and staff in investigating, researching, and reporting information, but shall be prohibited from using artificial intelligence in lieu of professionals and staff.” In addition, “[a]ny news media or other entity disseminating news or purporting to disseminate news within the State that uses generative AI content, regardless of what entity or mechanism produced it, shall disclose the following: (1) a prominently displayed label indicating that the content is generative AI; (2) credit to any source used to produce the content; and (3) a disclaimer that the content may not accurately reflect the source material from which it was produced.”

Penalties for non-compliance: civil penalties for violations start at $10,000 for the first violation and increase for subsequent violations. There is no private right of action.

Effective date: If passed, the law will take effect on the first day of the seventh month next following the date of enactment.

AR141

Introduced June 6, 2024, AR141 encourages platforms that generate and disseminate deepfake and cheapfake media “to voluntarily commit to prevent and remove harmful content.” “Deepfake” and “cheapfake” media include video recordings, motion picture films, sound recordings, electronic images, photographs, or other technological representations of speech or conduct that depict speech or conduct in which the person depicted would not normally engage. These media are AI-produced content that can “manipulate public understandings of evidence and truth.”

S1588

Introduced on January 9, 2024, S1588 regulates the use of automated employment decision tools during the hiring process to minimize employment discrimination that may result from the use of the tools. The bill would prohibit the sale of automated employment decision tools unless certain requirements are met, including a prior bias audit, a no-cost yearly bias audit, and a notice that the tool is subject to the bill. Additionally, the bill has specific employee notification requirements for companies that use these tools.

S2964 (A3855)

Introduced March 18, 2024, S2964 (Assembly version A3855) establishes standards for independent bias auditing of automated employment decision tools (“AEDT”). This bill would apply to employers, including employment agencies, individuals, partnerships, associations, corporations, and other entities employing any person. An “independent auditor” would be a person or group capable of exercising objective judgment on all issues within the scope of a bias audit of an AEDT. “AEDT” is a system governed by statistical theory or related methodologies, including learning algorithms, that automatically filter candidates for hire for any term, condition, or privilege of employment in a way that “establishes a preferred candidate or candidates.” A “bias audit” would be an “impartial evaluation, including but not limited to testing, of an automated employment decision tool to assess its predicted compliance” with anti-discrimination laws.

S3046

Introduced April 8, 2024, S3046 would provide corporation business tax and gross income tax credits for employing persons who have experienced job loss because of automation. The corporation tax credit would be equal to 10 percent of the salary and wages paid to each person employed by the corporation who experienced termination because of automation. To qualify, the corporation as a taxpayer must employ the person for at least seven months of the privilege period for which the taxpayer claims the credit. The credit, however, cannot exceed $2,500 per employee per privilege period. “Automation” is defined as a “device, process, or system that functions without continuous input from a human operator.” This bill would take effect immediately and would apply to privilege periods and taxable years beginning on or after January 1 of the year following enactment.

S3225

S3225, introduced May 13, 2024, would require a business entity, such as a business corporation, professional services corporation, LLC, partnership, limited partnership, business trust, association, or any other legal commercial entity organized under New Jersey law, that uses a text-based chat to offer a transcript of the chat to the consumer. “Chat” includes any tool used by the entity “to provide real-time, text-based communication with a consumer.” “Transcript” is a “typed or printed verbatim record of a chat.” Additionally, the entity must provide “clear and conspicuous notice to the consumer at the outset of any interaction, informing the consumer of the option to receive a transcript of the chat.” Failure to comply would be unlawful. The bill would take effect immediately.

S3298

Introduced May 20, 2024, S3298 (Assembly version A3858) would require insurance carriers to disclose in a “clear and conspicuous” location on the website whether the carrier uses an “automated utilization management system” and the number of claims reviewed using this system in the previous year. An “automated utilization management system” is a system used for reviewing the “appropriate and efficient allocation of health care services under a health benefits plan according to specified guidelines” to recommend or determine if and to what extent a health care service should be given or proposed to a covered person. The automated utilization management system may use AI or other software. This bill, if enacted, would take effect on the first day of the 13th month following the date of enactment.

S3742

Introduced on October 7, 2024, S3742 would require AI technology companies to perform annual safety tests on all AI technologies sold, developed, deployed, used, or offered for sale in New Jersey. Safety tests must assess potential biases, inaccuracies, and cybersecurity threats. In-scope entities must submit an annual report to the Office of Information Technology (OIT) detailing:

  • A list of all AI technologies tested.
  • Descriptions of the safety tests conducted.
  • Lists of third parties involved in conducting the tests, if any.
  • Results of the safety tests.

Effective date: The law would take effect on the first day of the sixth month next following enactment.

Failed

A537

Introduced on January 1, 2022, A537 would require an automobile insurer using an automated or predictive underwriting system to annually provide documentation and analysis to the Department of Banking and Insurance to demonstrate that there is no discriminatory outcome in pricing, on the basis of race, ethnicity, sexual orientation, or religion, that is determined by the use of the insurer's automated or predictive underwriting system. Under this bill, "automated or predictive underwriting system" is defined to mean a computer-generated process that is used to evaluate the risk of a policyholder and to determine an insurance rate. An automated or predictive underwriting system may include, but is not limited to, the use of robotic process automation, artificial intelligence, or other specialized technology in its underwriting process.

S1402

Introduced on February 10, 2022, S1402 provides that it is unlawful discrimination and a violation of the law against discrimination for an automated decision system (ADS) to discriminate against any person or group of persons who is a member of a protected class in: (1) the granting, withholding, extending, modifying, renewing, or purchasing, or in the fixing of the rates, terms, conditions or provisions of any loan, extension of credit or financial assistance; (2) refusing to insure or continuing to insure, limiting the amount, extent or kind of insurance coverage, or charging a different rate for the same insurance coverage provided to persons who are not members of the protected class; or (3) the provision of health care services. Under the bill, ADS means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making.

An ADS is discriminatory if the system selects individuals who are members of a protected class for participation or eligibility for services at a rate that is disproportionate to the rate at which the system selects individuals who are not members of the protected class.  If passed, the law would take effect on the first day of the third month next following enactment.

Enacted

HB182

N.M. Stat. Ann. § 1-19-26.4 (proposed as HB 182) outlines regulations regarding advertisements containing AI-generated media. If someone creates, produces, or purchases an advertisement with deceptive media, they must include a clear disclaimer stating, "This [image/video/audio] has been manipulated or generated by artificial intelligence," depending on the type of media used. The disclaimer must be easily readable or audible, depending on the media type, and must be present throughout the duration of the media or at specific intervals. These regulations became effective on May 15, 2024.

Proposed

HB60

Introduced on January 17, 2025, HB 60 (Artificial Intelligence Act) focuses on regulating the use of AI to ensure transparency, fairness, and accountability. It outlines the responsibilities of an AI developer: 1) a duty of care to protect consumers from known or foreseeable risks of algorithmic discrimination, 2) a duty to provide summaries and documentation on AI systems’ uses, data, performance, and risks, and 3) a duty to disclose and report incidents of algorithmic discrimination to the State Department of Justice within 90 days of the incident. It also outlines responsibilities for deployers of AI systems: 1) a duty to implement risk management policies; 2) a duty to conduct impact assessments annually and within 90 days of substantial changes to the AI systems; 3) a duty to provide consumers with notice and information about AI use in important decisions, including opportunities to correct data and appeal adverse decisions. Individual consumers can bring a civil action in district court against a developer or deployer for declaratory or injunctive relief and attorney fees for a violation of this act. Otherwise, the State Department of Justice can enforce the act. It would take effect on July 1, 2026. As of February 25, 2025, the bill was reported by committee with a Do Not Pass recommendation but a Do Pass recommendation on committee substitution.

HB215

Introduced on January 29, 2025, HB 215 prohibits the use of AI to manipulate rent and provides a private right of action to someone who is injured by unlawful actions pursuant to the Uniform Owner-Resident Relations Act. The litigant can sue in the county in the state where the defendant resides, is found, or has an agent, or where service can be obtained. As of February 18, 2025, the bill was reported by committee with a Do Not Pass recommendation but a Do Pass recommendation on committee substitution.

HB307

Introduced on February 10, 2025, HB 307 would enact the Internet Privacy and Safety Act, which defines profiling as “automated processing of personal data that uses personal data to evaluate certain aspects relating to a consumer, including analyzing or predicting aspects concerning the consumer's behavior, economic situation, health, interests, location, movement, performance at work, personal preferences or reliability. "Profiling" does not include the processing of data that does not result in an assessment or judgment about a consumer.”

HB401

Introduced on February 12, 2025, HB401, also known as the Artificial Intelligence Synthetic Content Accountability Act, provides for civil and criminal enforcement for the use of synthetic content created by AI, and provides penalties for violations of this act.

SB420

Introduced on February 17, 2025, SB420, also known as the Community Privacy and Safety Act, prohibits covered entities (service providers, businesses that offer online features, etc.) from profiling a consumer by default, unless profiling is necessary to provide the online features, product or service requested.

Failed

SB68

Introduced on January 17, 2024, SB 68, the Age-Appropriate Design Code Act applies to “a sole proprietorship, partnership, limited liability company, corporation, association, affiliate or other legal entity that is organized or operated for the profit or financial benefit of the entity's shareholders or other owners and that offers online products, services or features to individuals in New Mexico and processes children's personal data.”

The Act would prohibit a covered entity from “profiling” a child under 18 unless:

(1) the covered entity can demonstrate that the covered entity has appropriate safeguards in place to ensure that profiling is consistent with the best interest of children reasonably likely to access the online product, service or feature; and

(2) profiling is necessary to provide the online product, service or feature requested, and only with respect to the aspects of the online product, service or feature with which the child is actively and knowingly engaged; or

(3) the covered entity can demonstrate a compelling reason that profiling is in the best interest of children.

"Profiling" means automated processing of personal data that uses personal data to evaluate certain aspects relating to a natural person, including analyzing or predicting aspects concerning a natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements. "Profiling" does not include the processing of data that does not result in an assessment or judgment about a natural person.

For the most part, SB 68 is the same as SB 319, which was introduced on February 2, 2023, and failed to pass.

Enacted

Local Law 144

In December 2021, New York City passed Local Law 144, the first law in the United States requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. The law imposes notice and reporting obligations.

Specifically, employers who utilize automated employment decision tools (AEDTs) must:

  1. Subject AEDTs to a bias audit, conducted by an independent auditor, within one year of their use;
  2. Ensure that the date of the most recent bias audit and a “summary of the results”, along with the distribution date of the AEDT, are publicly available on the career or jobs section of the employer’s or employment agency’s website;
  3. Provide each resident of NYC who has applied for a position (internal or external) with a notice that discloses that their application will be subject to an automated tool, identifies the specific job qualifications and characteristics that the tool will use in making its assessment, and informs candidates of their right to request an alternative selection process or accommodation (the notice shall be issued on an individual basis at least 10 business days before the use of a tool); and
  4. Allow candidates or employees to request alternative evaluation processes as an accommodation.

While enforcement of the law has been delayed multiple times pending finalization of the law’s implementing rules, on April 6, 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law is now in effect, and enforcement began on July 5, 2023.

Proposed

A00222

Introduced on January 9, 2025, A00222 requires owners of chatbot systems to provide clear, conspicuous, and explicit notice to users that they are interacting with an artificial intelligence chatbot program rather than a human representative. It also provides that no liability shall be imposed where the proprietor has corrected the information and substantially or completely cured the harm to the user within thirty days of notice of such harm.

A00768

Introduced on January 8, 2025, A00768 (Senate version S01962) would enact the "New York Artificial Intelligence Consumer Protection Act" to prevent the use of artificial intelligence algorithms to discriminate against protected classes.

A00773

Introduced on January 8, 2025, A00773 relates to the use of automated decision tools by banks for the purpose of making lending decisions and allows loan applicants to consent to or opt out of such use.

A01342

Introduced on January 9, 2025, A01342 would require operators of certain generative or surveillance advanced artificial intelligence systems to collect oaths of responsible use from users of such systems and to transmit those oaths to the attorney general.

A01456

Introduced on January 9, 2025, A01456 provides for notice requirements where an insurer authorized to write accident and health insurance in New York, or a health maintenance organization, uses artificial intelligence-based algorithms in the utilization review process.

A01952

Introduced on January 14, 2025, A01952 would require employers or employment agencies to notify each candidate of the use of automated employment decision tools and allow such candidate to request an alternative selection process or accommodation.

A03125

Introduced on January 23, 2025, A03125 relates to the use of automated decision tools to make housing decisions. The bill requires an annual disparate impact analysis to assess the impact of any automated decision tool used by any landlord to select applicants for housing and requires the landlord to notify each such applicant of such use. The bill allows the attorney general to initiate an investigation if a preponderance of the evidence establishes a suspicion of a violation.

A03327

Introduced on January 27, 2025, A03327 would require any political communication, whether made by phone call, email, or other message-based communication, that utilizes an artificial intelligence system to engage in human-like conversation with another to apprise the person, by reasonable means, of the fact that they are communicating with an artificial intelligence system.

A03991

Introduced on January 30, 2025, A03991 would require that a health care plan or specialized health care service plan that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, or that contracts with or otherwise works through an entity that uses such tools, comply with certain requirements.

The bill requires, among other things, that the use of the artificial intelligence, algorithm, or other software tool (i) does not adversely discriminate, directly or indirectly, against an individual on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions, (ii) is fairly and equitably applied, (iii) is open to inspection. The bill further requires that disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the covered entity’s written policies and procedures.

A03930

Introduced on January 30, 2025, A03930 would regulate the use of artificial intelligence in aiding decisions on rental housing and loans and would require a study on the impact of artificial intelligence and machine learning on housing discrimination and redlining.

Among other things, the bill provides that it shall be unlawful for a landlord to implement or use an automated decision tool that fails to comply with the following provisions:

(a) No less than annually, a disparate impact analysis shall be conducted to assess the actual impact of any automated decision tool used by any landlord to select applicants for housing within the state. Such disparate impact analysis shall be provided to the landlord.

(b) A summary of the most recent disparate impact analysis of such tool, as well as the distribution date of the tool to which the analysis applies, shall be made publicly available on the website of the landlord prior to the implementation or use of such tool. Such summary shall also be made accessible through any listing for housing on a digital platform for which the landlord intends to use an automated decision tool to screen applicants for housing.

A04947

Introduced on February 10, 2025, A04947 would establish the "New York privacy act," which would regulate, among other things, the automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

A216

Introduced on January 4, 2023, A216 would require advertisements to disclose the use of synthetic media. Synthetic media is defined as “a computer-generated voice, photograph, image, or likeness created or modified through the use of artificial intelligence and intended to produce or reproduce a human voice, photograph, image, or likeness, or a video created or modified through an artificial intelligence algorithm that is created to produce or reproduce a human likeness.” Violators would be subject to a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.

A768

A768, introduced on January 1, 2025, and S1962, introduced on January 14, 2025, would enact the New York Artificial Intelligence Consumer Protection Act to prevent the use of artificial intelligence algorithms to discriminate against protected classes. The act would not cover: (i) the offer, license, or use of a high-risk artificial intelligence decision system by a developer or deployer for the sole purpose of (A) the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law; or (B) expanding an applicant, customer, or participant pool to increase diversity or redress historic discrimination.

A "high-risk artificial intelligence decision system" refers to AI systems that shall mean any artificial intelligence decision system that, when deployed, makes, or is a substantial factor in making a consequential decision. It excludes systems designed for narrow tasks, pattern detection without human review, anti-fraud technology without facial recognition, AI-enabled video games, cybersecurity tools, internal management technologies, and consumer communication tools that follow non-discriminatory policies.

Starting January 1, 2027, businesses in the state must disclose to consumers when they are interacting with an AI decision system, unless it is obvious to a reasonable person. Exceptions include compliance with laws, cooperation with law enforcement, protecting safety, and conducting research. Developers and deployers are not required to disclose if doing so violates legal privileges or affects rights like freedom of speech.

There is no private right of action. The act shall take effect nine months after enactment.

A773

Introduced on January 8, 2025, A773 relates to the use of automated lending decision-making tools by banks, excluding national banks and federal financial institutions, for the purposes of making lending decisions. It requires each covered entity that uses such tools to conduct an annual impact assessment, including testing for accuracy, fairness, bias, and discrimination, and an assessment of whether such tool produces discriminatory results on the basis of a consumer's actual or perceived race, religion, national origin, gender, gender identity, familial status, age, or biometric data, among other characteristics.

Covered entities must conduct annual impact assessments to evaluate the fairness, accuracy, and bias in their automated lending tools, and post a summary on their website.

If the tool is found to produce discriminatory or biased outcomes, the entity must report the findings to the regulatory department, which may require immediate cessation of the tool’s use. Borrowers must be notified when automated lending tools are used to assess loan applications, including information about criteria, data sources, and reasons for denial.

Applicants who are denied loans based on incorrect personal information have thirty days to appeal. Information collected for impact assessments must be retained for seven years, and regulations are designed to ensure transparency, fairness, and accountability in automated lending practices.

This bill takes effect 90 days after passage.

A1952

Introduced on January 14, 2025, A1952 requires employers and employment agencies to notify candidates for employment if machine learning technology is used to make hiring decisions. Employers must notify candidates at least ten business days before use, providing details about the tool’s criteria, data sources, and data retention policies. The bill allows such candidate to request an alternative selection process or accommodation and provides that such provisions do not limit any candidate's right to bring a civil action in any court of competent jurisdiction.

This bill takes effect on the January 1 immediately following passage.

A3125

Introduced on January 23, 2025, A3125 relates to the use of automated decision-making tools to make housing decisions. It provides that a disparate impact analysis shall be conducted annually to assess the actual impact of any automated housing tool used by any landlord to select applicants for housing. It requires the landlord to notify each applicant of such use and allows the attorney general to initiate an investigation if a preponderance of the evidence establishes a violation.

This bill takes effect 90 days after passage.

A3265

Introduced on January 27, 2025, A3265 enacts the "New York artificial intelligence bill of rights." It provides that any New York resident affected by a system making decisions without human intervention is entitled to certain rights and protections to ensure that the systems impacting their lives do so lawfully, properly, and with meaningful oversight. The act also provides for the right to have agency over one's data and for access to critical resources and services that are fundamental to the well-being, security, and equitable participation of New York residents in society.

The rights and protections for individuals are as follows:

    • Right to safe and effective systems.
    • Protections against algorithmic discrimination.
    • Protections against abusive data practices.
    • Right to have agency over one's data.
    • Right to know when an automated system is being used.
    • Right to understand how and why an automated system contributed to outcomes.
    • Right to opt out of an automated system.
    • Right to work with a human instead of an automated system.

Companies must ensure their automated systems are safe, non-discriminatory, respect data privacy, provide clear notice and explanations, offer human alternatives, and allow timely human consideration and remedies for errors or appeals. They must also comply with reporting requirements and avoid practices that obscure user choice or burden users with privacy-invasive default settings.

If an operator of an automated system violates any rights stated in the article, they are liable for a penalty of at least three times the damages caused. This penalty can be recovered through an action brought by the attorney general. There is no private right of action.

This bill takes effect 90 days after passage.

A3411

Introduced on January 27, 2025, A3411 requires the owner, licensee or operator of a generative artificial intelligence system to conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.

Violations of this bill incur a $1,000 fine per violation. This bill takes effect 90 days after passage.

A3911

Introduced on January 30, 2025, A3911 requires that a health care plan or specialized health care service plan that uses artificial intelligence, an algorithm, or another software tool for the purpose of utilization review or utilization management functions comply with the following requirements:

  • AI must base its determinations on the enrollee's medical or dental history, individual clinical circumstances, and other relevant clinical information.
  • AI cannot replace health care provider decision making.
  • AI must not discriminate based on race, color, religion, national origin, ancestry, age, sex, gender identity, sexual orientation, disability, or other health conditions.
  • AI must be applied fairly and equitably.
  • AI must be open to inspection.
  • Policies and procedures must disclose the use and oversight of AI.
  • AI performance, use, and outcomes must be periodically reviewed and revised.
  • Patient data must not be used beyond its intended purpose, in compliance with state laws and HIPAA.
  • AI must not cause harm to enrollees.

This act shall take effect immediately upon passage.

A3924

Introduced on January 30, 2025, A3924 relates to privacy rights involving digitization and provides that a person, firm, or corporation that uses for advertising purposes, or for the purposes of trade, the name, portrait, picture, likeness, or voice created or altered by digitization, without having first obtained the written consent of such person (or, if a minor, of such minor's parent or guardian), is guilty of a misdemeanor.

For purposes of this section, "digitization" means the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including adapting, modifying, manipulating, or altering a realistic depiction.

This act shall take effect immediately upon passage.

A5309

Introduced on March 7, 2023, A5309 would amend state finance law to require that, where state units purchase a product or service that is or contains an algorithmic decision system, such product or service adhere to responsible artificial intelligence standards. The bill requires the commissioner of taxation and finance to adopt regulations in support of the law.

A6540

Introduced on March 5, 2025, A6540 applies to synthetic content creation system providers, synthetic content creation system hosting platforms, social media platforms, and state agencies that produce or distribute AI generated content for users in New York.

It requires AI-generated content to include provenance data identifying its origin, including the provider's name, the time of creation, and any modifications made. Hosting platforms cannot distribute synthetic content unless this data is properly applied, and social media platforms must preserve it when users upload content.

This bill does not cover all digital content, but only synthetic content generated or modified by AI systems. It does not apply to simple editing tools, as it focuses on AI models that heavily alter content.

The Attorney General can enforce violations through injunctions and penalties of up to $25,000 per infraction.

This bill takes effect 6 months after passage.

A6543A

Introduced on January 8, 2025, A6543A, the RAISE Act, seeks to mitigate the potential harms of frontier AI models. In-scope developers are those that have trained at least one frontier model with a compute cost exceeding five million dollars and that have spent over one hundred million dollars in aggregate compute costs training frontier models.

The bill requires developers of a frontier model to have a written safety and security protocol, revised annually and submitted to the Division of Homeland Security and Emergency Services. Developers must also ensure the frontier model will not cause critical harm, defined as the death or serious injury of one hundred or more people or at least one billion dollars in damages to rights in money or property. Finally, the bill requires developers to submit to third-party audits of their frontier models.

The bill includes whistleblower protections for employees of large developers and gives employees facing retaliation a right of action. There is no broad private right of action; violations carry a $10 million fine for a first violation, climbing to $30 million for subsequent violations.

The bill takes effect 90 days after passage.

A6545

Introduced on May 6, 2025, A6545 imposes liability for damages caused by a chatbot impersonating certain licensed professionals. It requires a proprietor to provide clear, conspicuous notice to consumers that they are interacting with a non-human chatbot program. The bill includes a private right of action for actual damages.

This bill takes effect 90 days after passage.

A8098

A8098 (Senate version S7922), reintroduced in 2025 as A1509 (Senate version S1815), would require publishers of books created wholly or partially with the use of generative artificial intelligence to disclose such use before the completion of the sale. The bill applies to all printed and digital books consisting of text, pictures, audio, puzzles, games, or any combination thereof.

A8129

A8129 (Senate version S8209), reintroduced in 2025, would create the New York Artificial Intelligence Bill of Rights. Where a New York resident is affected by any system making decisions without human intervention, under the AI Bill of Rights they would be afforded the following rights and protections: (i) the right to safe and effective systems; (ii) protections against algorithmic discrimination; (iii) protections against abusive data practices; (iv) the right to have agency over one's data; (v) the right to know when an automated system is being used; (vi) the right to understand how and why an automated system contributed to outcomes that impact the individual; (vii) the right to opt out of an automated system; and (viii) the right to work with a human in the place of an automated system.

A8195

A8195, reintroduced in 2025 as A3356, the Advanced Artificial Intelligence Licensing Act, requires the registration and licensing of high-risk advanced artificial intelligence systems, establishes the advanced artificial intelligence ethical code of conduct, and prohibits the development and operation of certain artificial intelligence systems.

A8546

Introduced on May 20, 2025, and referred to the judiciary, A8546 relates to the disclosure of the use of generative artificial intelligence in a civil action. It requires certification of filings produced using generative artificial intelligence, including that the brief of an appellant contain an affidavit disclosing the use of AI and certifying that a human has reviewed and verified the content.

If no generative AI was used in drafting a document, no disclosure is required.

The act will take effect 90 days after enactment.

A10374

A10374 (Senate version S9439), introduced May 21, 2024, and reintroduced in 2025 as S3133, would amend the general business law to prohibit robots and uncrewed aircraft equipped or mounted with weapons. “Robotic device” is a “mechanical device capable of locomotion, navigation, or movement on the ground and that operates at a distance from its operator or supervisor, based on commands or in response to sensor data, artificial intelligence, or a combination.” The bill would make it unlawful for any person to use a robotic device or uncrewed aircraft to commit the crime of menacing; criminally harass another person; or use the device to physically restrain or attempt to restrain a human being. A knowing violation of this law would result in a civil penalty. This bill would not apply to a defense industrial company if the company were acting within its contract with the U.S. Dept. of Defense; a manufacturer or developer who modifies or operates these devices for the purpose of developing technology intended to detect the unauthorized weaponization of a robotic device or uncrewed aircraft; or government officials acting within the scope of their duties.

A6765

Introduced on March 12, 2025, A6765 regulates the use of algorithmic pricing. The bill requires any site using an algorithm to set individualized prices based on consumer data to clearly and conspicuously state that the price was generated using AI. The bill prohibits the use of protected class data in algorithmic pricing.

The bill includes a private right of action and becomes effective 60 days after passage.

S934

Introduced on January 8, 2025, S934 (Assembly version A3411) requires the owner, licensee, or operator of a generative artificial intelligence system to conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.

S1169

Introduced on January 8, 2025, S1169 establishes the “New York Artificial Intelligence Act” to regulate the development and use of certain artificial intelligence systems to prevent algorithmic discrimination. The bill requires independent audits of high-risk AI systems and provides for enforcement by the attorney general as well as a private right of action.

S1854

Introduced on January 14, 2025, S1854 establishes the New York Workforce Stabilization Act. It requires employers with more than 100 employees to conduct artificial intelligence impact assessments prior to using AI. The bill also establishes a surcharge on corporations that use artificial intelligence or data mining or that have a specified number of employees displaced by artificial intelligence. The threshold number of displaced employees is relative to the number of employees at the company and is detailed in the bill. Furthermore, it provides that such impact assessments shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system.

This act shall take effect immediately; provided, however, that the referenced tax law amendment (section three of the bill) takes effect on January 1, 2026.

S2277

S2277 (Assembly version A3308), originally introduced January 19, 2023, and reintroduced in 2025 as S4276, would require business entities in New York that hold the personal information of at least 500 individuals to give notice about the entity’s use of that personal information. The bill also would create anti-discrimination practices for the entity to follow regarding the use of AI.

S5668

Introduced on February 27, 2025, S5668 imposes liability on proprietors of chatbots for user interactions in which a chatbot provides misleading, incorrect, contradictory, or harmful information that results in financial loss or other demonstrable harm. The bill has additional protections for minors, requiring age verification and parental consent before access to companion chatbots.

Liability is imposed only in specific situations, including when a chatbot provides materially misleading, incorrect, contradictory, or harmful information that leads to financial loss or bodily harm, when a companion chatbot fails to prevent or adequately respond to self-harm risks, and when a chatbot proprietor fails to meet accuracy, disclosure, or intervention requirements outlined in the law. Proprietors cannot disclaim liability simply by notifying users they are interacting with AI.

This bill takes effect one year after passage.

S6638

S6638 (Assembly version A7106), reintroduced in 2025 as S2414, the Political Artificial Intelligence Disclaimer (PAID) Act, would amend election and legislative law in relation to the use and disclosure of synthetic media. The act would add a subdivision to the election law requiring that any political communication produced with synthetic media be disclosed via printed or digital communications. The disclosure must read “This political communication was created with the assistance of artificial intelligence.” If passed, the act would take effect on January 1, 2024.

S6955

Introduced on March 27, 2025, S6955 establishes the Artificial Intelligence Training Data Transparency Act. It requires the developer of a generative artificial intelligence model or service to post on the developer’s website documentation regarding the data used to train the model or service, including, but not limited to, the sources and owners of the datasets and the number of data points in such datasets, which may be given in general ranges with estimated figures.

Developers must post documentation on their website about the data used to train generative AI models, including:

  • Sources and owners of datasets.
  • Purpose and description of datasets.
  • Number and types of data points.
  • Copyright status and whether datasets include personal information.
  • Any modifications made to the datasets.
  • Time periods of data collection and usage.

This act shall take effect immediately upon passage.

S7623

Originally introduced on August 4, 2023, as S7623 (reprinted as S7623C on May 31, 2024) (Assembly version A9315), and reintroduced as S185 (Assembly bill A3779), this bill would impose statewide requirements regulating tools that incorporate artificial intelligence to assist in employee monitoring and the employment decision-making process. In particular, the bill (1) defines a narrow set of allowable purposes for the use of electronic monitoring tools (EMTs), (2) requires that the EMT be “strictly necessary” and the “least invasive means” of accomplishing those goals, and (3) requires that the EMT collect as little data as possible on as few employees as possible to accomplish the goal. The bill also requires that employers exercise “meaningful human oversight” of the decisions of automated tools, conduct and publicly post the results of an independent bias audit, and notify candidates that a tool is in use.

S8331

Introduced on June 3, 2025, S8331 enacts the "New York artificial intelligence transparency for journalism act.” It requires developers of generative artificial intelligence systems or services to post information on the developer's website regarding video, audio, text and data from a covered publication used to train the generative artificial intelligence system or service.

The information to be posted includes:

  • The URLs or URIs accessed by crawlers.
  • A description of the video, audio, text, and data used, including type, source, and how it was obtained.
  • Whether any source identifiers, terms, or copyright notices were removed.
  • The timeframe during which the data was collected.

Developers need not post this information if there is a written agreement with the content provider allowing access and providing that such details will not be posted.

There is a private right of action authorizing penalties of up to $10,000 per violation. The act shall take effect immediately upon passage.

SB365

Introduced on January 4, 2023, SB 365, the New York Privacy Act, would be the state’s first comprehensive privacy law. The law would require companies to disclose their use of automated decision-making that could have a “materially detrimental effect” on consumers, such as a denial of financial services, housing, public accommodation, health care services, insurance, or access to basic necessities; or could produce legal or similarly significant effects. Companies must provide a mechanism for a consumer to formally contest a negative automated decision and obtain a human review of the decision, and must conduct an annual impact assessment of their automated decision-making practices to avoid bias, discrimination, unfairness or inaccuracies.

The law would also permit consumers to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” Profiling is defined as “any type of automated processing performed on personal data to evaluate, analyze, or predict personal aspects” such as “economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Finally, the law would mandate that companies conduct a data protection assessment on their profiling activities, since profiling would be considered a processing activity with a heightened risk of harm to the consumer.

SB5641

Introduced on March 10, 2023, SB 5641A (Assembly version A567) would amend labor law to establish criteria for the use of automated employment decision tools (AEDTs). The proposed bill mirrors NYC’s Local Law 144 in many ways. In particular, employers who utilize AEDTs must: (1) obtain from the seller of the AEDT a disparate impact analysis, not less than annually; (2) ensure that the date of the most recent disparate impact analysis and a summary of the results, along with the distribution date of the AEDT, are publicly available on the employer’s or employment agency’s website prior to the implementation or use of such tool; and (3) annually provide the labor department a summary of the most recent disparate impact analysis.

S6748

Introduced on March 21, 2025, and referred to Consumer Protection, S6748 requires that every newspaper, magazine, or other publication printed or electronically published in this state that contains content created using generative artificial intelligence or other information communication technology identify that certain parts of such newspaper, magazine, or publication were composed through the use of artificial intelligence or other information communication technology.

This bill is effective 60 days after passage.

S8206

S8206 (assembly version A8105), reintroduced in 2025 as A1342, requires that every operator of a generative or surveillance advanced artificial intelligence system that is accessible to residents of the state require a user to create an account prior to utilizing such service. Prior to each user creating an account, such operator must present the user with a conspicuous digital or physical document that the user must affirm under penalty of perjury prior to the creation or continued use of such account.  Such document shall state the following:

“I, ________ RESIDING AT ________, DO AFFIRM UNDER PENALTY OF PERJURY THAT I HAVE NOT USED, AM NOT USING, DO NOT INTEND TO USE, AND WILL NOT USE THE SERVICES PROVIDED BY THIS ADVANCED ARTIFICIAL INTELLIGENCE SYSTEM IN A MANNER THAT VIOLATED OR VIOLATES ANY OF THE FOLLOWING AFFIRMATIONS:

  1. I WILL NOT USE THE PLATFORM TO CREATE OR DISSEMINATE CONTENT THAT CAN FORESEEABLY CAUSE INJURY TO ANOTHER IN VIOLATION OF APPLICABLE LAWS;
  2. I WILL NOT USE THE PLATFORM TO AID, ENCOURAGE, OR IN ANY WAY PROMOTE ANY FORM OF ILLEGAL ACTIVITY IN VIOLATION OF APPLICABLE LAWS;
  3. I WILL NOT USE THE PLATFORM TO DISSEMINATE CONTENT THAT IS DEFAMATORY, OFFENSIVE, HARASSING, VIOLENT, DISCRIMINATORY, OR OTHERWISE HARMFUL IN VIOLATION OF APPLICABLE LAWS;
  4. I WILL NOT USE THE PLATFORM TO CREATE AND DISSEMINATE CONTENT RELATED TO AN INDIVIDUAL, GROUP OF INDIVIDUALS, ORGANIZATION, OR CURRENT, PAST, OR FUTURE EVENTS THAT ARE OF THE PUBLIC INTEREST WHICH I KNOW TO BE FALSE AND WHICH I INTEND TO USE FOR THE PURPOSE OF MISLEADING THE PUBLIC OR CAUSING PANIC."

Failed

A3593

Introduced February 3, 2023, and referred to the Consumer Affairs and Protection Committee, A3593 would amend general business law to require companies to follow a host of guidelines centered on protecting consumer privacy. In regard to AI, the bill would apply to a “controller,” or “the person who, alone or jointly with others, determines the purposes and means of the processing of personal data.” The bill defines “automated decision-making” as a process derived from machine learning, AI, or an automated process involving personal data that results in a decision affecting consumers. If a “controller makes an automated decision involving solely automated processing that materially contributes to a denial of financial or lending services, housing, public accommodation, insurance, health care services, or access to basic needs,” the controller would need to (1) disclose that an automated process made the decision; (2) provide an avenue for consumers to appeal the decision; and (3) explain the process to appeal the decision. In addition, a controller or processor engaged in this automated decision-making must annually conduct an “impact assessment” describing the automated decision-making process and assessing whether the process produces any discriminatory results. An independent auditor must assess the impact assessment results. This bill would take effect immediately.

A7859

A7859, introduced July 7, 2023, and referred to the Labor Committee, would amend labor law to require an employer or employment agency using an “automated employment decision tool” to screen candidates who have applied for a position to notify each candidate that the tool has been used to assess or evaluate the candidate, the job qualifications and characteristics the tool uses, and the type of data the tool collects. “Automated employment decision tool” is any computational process that uses “machine learning, statistical modeling, data analytics, or artificial intelligence” to substantially assist or replace discretionary decision making for employment decisions. This bill would take effect on January 1 following enactment.

A8158

Introduced on October 16, 2023, A8158 (Senate version S7847) requires that every newspaper, magazine, or other publication printed or electronically published in this state that contains content created using generative artificial intelligence or other information communication technology identify that certain parts of such newspaper, magazine, or publication were composed through the use of artificial intelligence or other information communication technology.

A8179

A8179, introduced October 27, 2023, and referred to the Ways and Means Committee, would tax certain corporations that have displaced people from their employment because of AI technologies, including machinery, AI algorithms, or computer applications. This bill would apply to corporations doing business in New York that have met specified requirements, such as having less than one million dollars but at least ten thousand dollars of receipts in New York. This act would take effect immediately upon enactment and apply to the next taxable year.

A8195

A8195, introduced October 27, 2023, and referred to the Assembly Science and Technology Committee, would, among other things, establish an AI ethical code of conduct and require registration and licensing of “high-risk advanced artificial intelligence systems.” A “high-risk” advanced AI system is a system that “possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment.” This bill would apply to operators who distribute and have control over the development of a high-risk AI system.

A8369

A8369, introduced December 13, 2023, would amend insurance law to prohibit insurers from using AI, an algorithm, or a predictive model that incorporates external consumer data and information sources in a way that “unfairly discriminate[s]” on the basis of “race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.” The bill includes certain requirements that the insurer must follow, such as providing information to the superintendent, in order to avoid unfairly discriminating against people. “External consumer data and information source” includes data used by an insurer to establish lifestyle indicators in “marketing, underwriting, pricing, utilization management, reimbursement methodologies, and claims management” practices.

A9028

Introduced February 5, 2024, and referred to the Assembly Election Law Committee, A9028 would amend election law to, as is relevant, require disclosure of any political communication covered by the bill and made by AI or artificial media. The bill would apply to printed or digital political communications, including “brochures, flyers, posters, mailings, electronic mailings, or internet advertising.” The disclosure must state the communication was “created by or with the assistance of artificial intelligence.” The disclosure must be readable, clear, and conspicuous. If a person has an intent to damage a candidate or deceive with the political communication, then a violation can amount to a criminal charge.

A9054

A9054, introduced February 5, 2024, and referred to the Assembly Election Law Committee, would amend election law to prohibit entities from using generative AI in whole or in part to create a political communication that contains “any realistic photo, video, or audio depiction of a candidate, or person interacting with a candidate.” AI includes “any technology that engages in its own learning and decision-making to generate new data.” If passed, this bill would take effect immediately.

A9103

A9103, introduced February 7, 2024, and referred to the Assembly Election Law Committee, would amend election law to include a notification requirement. The bill would require “any political communication made by phone call, email, or other message-based communication” that uses AI to create a human-like conversation to reasonably inform the person that they are communicating with AI. If passed, this bill would take effect immediately.

A9149

A9149, introduced February 8, 2024, and referred to the Assembly Insurance Committee, would amend insurance law to require insurers to notify insureds about the use or non-use of AI-based algorithms in utilization review. This bill would broadly apply to insurers who are authorized to write accident and health insurance in New York, clinical peer reviewers who participate in a utilization review process for insurers, corporations organized under New York law, and health maintenance organizations. The department would certify that the AI-based algorithms and trainings being used have minimized the risk of bias regarding a “covered person’s race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability” and “adhere to evidence-based clinical guidelines.” In addition, the bill would require documentation of “the utilization review of the individual clinical records or data prior to issuing an adverse determination.” A violation can result in a license suspension or revocation; refusal, for a maximum of one year, to issue a new license; a maximum fine of $5,000 per violation; or a maximum fine of $10,000 for each willful violation.

A9314

A9314, introduced February 24, 2024, and referred to the Labor Committee, would create criteria for the use of an “automated employment decision tool.” This is a system used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers. This includes personality tests, cognitive ability tests, resume scoring systems, and other systems governed by statistical theory or specified methodologies. “Automated employment decision tool” does not include a tool that “does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons.” Under the bill, employers would be required to conduct a disparate impact analysis to assess the impact of their use of an automated employment decision tool, write a summary of the most recent disparate impact analysis, and provide this summary to the department. This act would take effect immediately.

S2477

S2477 (Assembly version A5631), introduced January 20, 2023, and most recently amended on April 15, 2024, would revise the New York State Fashion Workers Act to require model management companies to obtain “clear written consent for the creation or use of a model’s digital replica, detailing the scope, purpose, rate of pay, and duration of such use.” The bill would prohibit model management companies from creating, altering, or manipulating a model’s digital replica using AI without written consent from the model. “Digital replica” is a “significant, computer-generated or artificial intelligence-enhanced representation of a model’s likeness.”

S6685

S6685 (Assembly version A843), introduced May 4, 2023, would prohibit motor vehicle insurers from using AI-generated algorithms to construct coverage terms, premiums and rates, and actuarial tables that discriminate based on age, marital status, sex, sexual orientation, educational background or education level attained, employment status or occupation, wealth, consumer credit information, ownership or interest in real property, and other characteristics.

S7422 and A7634

S7422, introduced on May 24, 2023, and A7634, introduced on May 25, 2023, would prohibit film production companies that apply for the Empire State film production credit from using synthetic media in any component of production that would displace a natural person from that role. This includes any form of media, such as text, image, video, or sound, that is created or modified by use of artificial intelligence. Compliance with this act would be a condition for granting of the credit. If passed, the act would take effect immediately.

S7592

Introduced on July 7, 2023, S7592 (Assembly version A7904) would amend election law to require that any political communication that uses an image or video footage generated in whole or in part with the use of artificial intelligence disclose that artificial intelligence was used in such communication.

S7735

Introduced on November 3, 2023, S7735 (Assembly version A7906) provides that it shall be unlawful for a landlord to implement or use an automated decision tool unless the landlord: (1) no less than annually, conducts a disparate impact analysis to assess the actual impact of any automated decision tool and publicly files the assessment; and (2) notifies all applicants that an automated decision tool will be used and provides the applicant with certain disclosures related to the automated decision tool. If passed, the law would go into immediate effect.

S8214

Introduced on January 12, 2024, S8214 requires the registration with the Department of State of certain companies (i) whose primary business purpose is related to artificial intelligence, as evidenced by a North American Industry Classification System (NAICS) Code of 541512, 334220, or 511210, and (ii) who reside in New York or sell their products or services in New York. The fee for registration is $200. Failure to register can result in a fine of up to ten thousand dollars. Companies that knowingly fail to register may be barred from operating or selling their AI products or services in the state for a period of up to ten years.

S8755

S8755, introduced March 7, 2024, establishes the New York artificial intelligence ethics commission, which would promulgate rules regulating AI use by business entities as well as other regulations. This bill also specifies that no entity doing business in New York shall use AI systems that discriminate based on race, gender, sexuality, disability, or other protected characteristics; create or disseminate false or misleading information created by AI to deceive the public; participate in the unlawful collection, processing, or dissemination of personal information by an AI system without consent; participate in the unauthorized use or reproduction of IP through AI; fail to have safeguards to prevent harm or material loss through AI; conduct AI research that is harmful or without the subjects’ consent; intentionally disrupt, damage, or subvert an AI system to undermine its integrity or performance; or participate in the unauthorized use of a person’s personal identity or data by AI to commit fraud or theft. The commission can impose penalties for any violation. This act would take effect immediately.

S9381

S9381 (Assembly version A10494), introduced May 14, 2024, would amend the general business law to add liability to proprietors for chatbot responses. “Proprietors” includes any person or business entity with more than 20 employees that owns, operates, or deploys a chatbot system that interacts with users. This would not include third-party developers that license their chatbot technology to the proprietor. “Chatbot” is an AI system, software program, or technological application that creates “human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users.” The proprietor is responsible for “ensuring such chatbot accurately provides information aligned with the formal policies, product details, disclosures and terms of service offered to users.” This liability cannot be waived through disclosure to users. Additionally, proprietors would have to provide “clear, conspicuous, and explicit notice to users that they are interacting” with AI, rather than a human representative.

S9401

S9401, introduced May 15, 2024, would amend the labor law to prohibit an employer from using or applying AI unless the employer has conducted an impact assessment of the AI’s impact and use. This assessment must be done at least once every two years and before any material change to the AI. The impact assessment must include: a description of the AI’s objectives; an evaluation of the ability of the AI to achieve its objectives; a summary of the underlying AI tools being used; the design and training data used to develop the AI process; the extent to which the AI requires input of sensitive and personal data, how that data is used and stored, and any control users may have over this data; an estimated number of employees who have already been displaced by AI; and an estimated number of employees expected to be displaced by AI. “Employer” includes a business that resides in New York, is not a small business, and employs more than 100 people.

S9434

S9434 (Assembly version A9472), introduced May 15, 2024, would prohibit landlords from using an algorithmic device to set the amount of a residential tenant’s rent. “Algorithmic device” includes “a device that uses one or more algorithms to perform calculations of data, including data concerning local or statewide rent amounts being charged to tenants by landlords.” This also would include a product that incorporates an algorithmic device. A violation would result in monetary penalty.

S9542

S9542, introduced May 16, 2024, would amend general business law by prohibiting the publication of a “digital or physical newspaper, magazine, or periodical which was wholly or partially produced or edited through the use of artificial intelligence without significant human oversight.” AI includes the “use of machine learning technology, software, automation, and algorithms to perform tasks, to make rules and/or predictions based on existing data sets and instructions.”

S9450

S9450 (Assembly version A10103), introduced May 15, 2024, would amend general business law to require an owner, licensee, or operator of “generative artificial intelligence” to “conspicuously” display a warning on the user’s interface informing the user that the outputs may be inaccurate and/or inappropriate. An entity that fails to do so must pay a civil penalty of $25 per user of such system or $100,000.

S9609

S9609, introduced May 16, 2024, would make it unlawful for a rental property owner, or any agent or subcontractor thereof, to collect information on historical or contemporaneous prices, supply levels, contract information, or renewal dates using a system, software, or process driven by an algorithm. “Rental property owner” includes individuals as well as business entities. The rental property owner also cannot exchange for value the services of a coordinator, which is any person that operates software or data analytics services.

Proposed

H375

Introduced on March 11, 2025, H375, also known as the "Artificial Intelligence and Synthetic Media Act," regulates the use of deepfake ads in elections and protects minors and the public from misuse of artificial intelligence and synthetic media. A political candidate who has been affected by a deepfake in violation of the bill can seek injunctive relief and reasonable attorneys’ fees. The bill also provides a private cause of action for those affected by the disclosure of fabricated intimate images and imposes criminal liability if the defendant commits the offense with the aid of generative AI.

H462

Introduced on March 19, 2025, H462 provides consumers the right to opt out of the processing of their personal data for profiling in furtherance of solely automated decisions that produce significant effects concerning the consumer. This bill applies to anyone conducting business in North Carolina and allows the NC Department of Justice to investigate and enforce violations of the bill.

H970

Introduced on April 10, 2025, H970, titled "Preventing Algorithmic Rent Fixing," seeks to regulate the residential rental housing market by outlawing practices that facilitate rent collusion among real estate lessors. The bill defines "coordinating functions" that service providers might offer, such as collecting and analyzing competitor data or recommending rental prices using algorithms trained on nonpublic competitor information. It explicitly prohibits real estate lessors from using these coordinating functions and forbids service providers from facilitating agreements not to compete.

Furthermore, the legislation ensures that aggrieved parties can pursue legal action, classifying violations as unfair or deceptive trade practices, and nullifies pre-dispute arbitration agreements or joint-action waivers in cases related to these violations.

The act would go into force on October 1, 2025.

S287

Introduced on March 17, 2025, S287 outlines proposed changes to healthcare insurance utilization review processes. The bill, titled "Safeguard Health Ins. Utilization Reviews," primarily focuses on restricting the sole reliance on artificial intelligence for making determinations about the medical necessity of healthcare services. It mandates that licensed and qualified human healthcare providers must ultimately make these decisions, preventing AI algorithms from being the only basis for denials, delays, or modifications of services. Furthermore, the legislation extends these requirements to the North Carolina State Health Plan for Teachers and State Employees, ensuring similar safeguards are implemented across all its review practices and third-party contracts.

The act would go into force 30 days after it becomes law.

S514

Introduced on March 25, 2025, S514 prohibits the use of a North Carolina minor’s data for advertising or algorithmic recommendations, defined as a computational process that uses machine learning, artificial intelligence, generative AI, or similar techniques to make decisions or facilitate human decision-making using user data. The bill makes a covered platform’s violation an unfair or deceptive act or practice and provides a private right of action for minors affected by a covered platform, entitling them to compensatory, punitive, injunctive, and declaratory relief and attorneys’ fees.

S624

Introduced on March 25, 2025, S624, titled "AI Chatbots - Licensing/Safety/Privacy," aims to regulate artificial intelligence chatbots. The bill is divided into two main parts: Chatbot Licensing, and Safety and Privacy. The licensing section focuses on health information chatbots, requiring them to obtain a license from the North Carolina Department of Justice, demonstrating adherence to strict technical, security, and privacy standards, including comprehensive audits and regular reporting. The safety and privacy section establishes a "Duty of Loyalty" for all covered chatbot platforms, mandating transparent identification of chatbots as non-human entities, explicit user consent for interactions and data collection, and robust data privacy measures like de-identification and transport encryption for user-related and sensitive personal information. The bill also includes provisions for enforcement, penalties, and individual civil actions against platforms that violate its requirements.

The act becomes effective on January 1, 2026.

S722

Introduced on March 26, 2025, S722 imposes a duty of care on covered platforms and requires them to utilize the highest privacy settings by default for all users reasonably likely to be children and to establish data minimization principles, including prohibiting profiling and behavioral advertising targeting children.

Failed

HB1320

Introduced on January 21, 2025, HB 1320 prohibits the use of deepfake videos and images, which are defined as any media digitally altered or created by AI with the intent to deceive.

Proposed

SB79

Introduced on February 4, 2025, SB79 regulates the use of pricing algorithms, defined as any computational process, including those derived from artificial intelligence techniques, that processes data to recommend or set a price or other commercial provision affecting commerce in the state. The bill prohibits the use or distribution of a pricing algorithm that uses or incorporates nonpublic competitor data. Each violation is considered a conspiracy against trade. As of March 4, 2025, the bill has entered its first hearing with the Senate Financial Institutions, Insurance and Technology Committee.

SB163

Introduced on April 1, 2025, SB163 would require AI-generated products to have a watermark, prohibit simulated child pornography generated by AI, and prohibit identity fraud using a replica of a person. As of May 7, 2025, the bill has entered its second hearing with the Senate Judiciary Committee.

SB164

Introduced on April 1, 2025, SB164 seeks to regulate the use of artificial intelligence by health insurers within the state. The bill introduces new reporting requirements for health plan issuers regarding their use of AI-based algorithms in utilization review processes, mandating annual submissions to the superintendent of insurance, which will then be publicly available. Crucially, the legislation prohibits health plan issuers from making decisions about patient care, including denials or delays, based solely on AI-derived results, emphasizing that human physicians or qualified providers must make medical necessity determinations considering individual clinical circumstances. It further requires a plain language explanation for any decision influenced by an AI algorithm, and grants the superintendent of insurance the authority to audit health plan issuers' AI usage.

If passed, Section 3902.80 will apply to health benefit plans that are issued, amended, or renewed on or after the effective date of this section.

SB217

Introduced on January 24, 2024, SB 217 would require AI-generated products to have a watermark, prohibit removing such a watermark, prohibit simulated child pornography, and prohibit identity fraud using a replica of a person. The bill provides for injunctive relief and, for unauthorized removal of an AI watermark, a civil penalty of up to $10,000.

SB328

Introduced on November 14, 2024, SB 328 (reintroduced as SB79 on February 6, 2025) amends sections of the Revised Code to regulate the use of pricing algorithms, where "pricing algorithm" means any computational process, including ones derived from machine learning or AI techniques, that processes data to recommend or set a price or commercial term. The bill specifies no effective date.

Proposed

HB1537

Introduced on January 16, 2025, HB 1537 provides that a sports wagering operator cannot use AI to track the wagers of an individual, create an offer or promotion targeting a specific person, or create a gambling product like a microbet. As of February 5, 2025, the bill has been referred to the Rules Committee.

HB1762

Introduced on February 3, 2025, HB1762 provides that a covered entity that provides an online product or service cannot process a child’s personal data in a way that the entity knows is inconsistent with the best interest of the child, and prohibits the entity from profiling a child by default unless it can demonstrate that it has the proper safeguards in place to ensure that the profiling is consistent with the best interest of the child, or that profiling is necessary to provide the online product or service.

HB1915

Introduced on February 4, 2025, HB 1915 mandates that AI devices in healthcare be deployed and utilized in accordance with certain regulations, requires exclusive use by a qualified end user, and directs deployers to implement a quality assurance program, among other requirements. Effective on November 1, 2025. As of February 4, 2025, the bill was referred for a second reading to the Rules Committee.

HB1916

Introduced on February 3, 2025, HB 1916 creates the Responsible Deployment of AI Systems Act, which 1) directs AI systems to comply with existing laws, 2) requires deployers to classify AI systems, 3) requires deployers to conduct assessments of AI systems, 4) requires that individuals be notified when a high-risk AI system influences certain decisions, 5) requires implementation of protocols, 6) requires an annual performance report, 7) directs the AI Council to analyze feedback and make annual recommendations, and 8) provides for penalties. The act would become effective on November 1, 2025.

As of February 4, 2025, the bill has been referred for a second reading to the Rules Committee.

HB1917 and HB1899

Both introduced on February 3, 2025, HB 1917 and HB 1899 create the Artificial Intelligence Act of 2025. The bill does not modify existing statutes but creates a standalone piece of legislation. It does not contain any details about how the bill will regulate AI but serves as a preliminary legislative action to recognize and regulate AI technologies at the state level. Effective on November 1, 2025. As of February 4, 2025, both bills have been referred for a second reading to the Rules Committee.

SB546

Introduced on January 13, 2025, SB 546 allows a consumer to exercise their consumer rights by opting out of the processing of their personal data for the purpose of profiling in furtherance of a decision affecting the consumer. As of April 24, 2025, the bill has passed the Commerce and Economic Development Oversight Committee.

SB611

Introduced on January 14, 2025, SB 611 amends the current Oklahoma Statutes to mandate that neither government nor businesses may use AI and biotech applications to determine who will live or die in any situation, who will receive medical care, or who will receive insurance coverage and the amount of such coverage. Effective upon its passage and approval. As of February 4, 2025, the bill has been referred for a second reading to the Judiciary Committee.

SB885

Introduced on January 16, 2025, SB 885 prohibits social media platforms from using an algorithm, AI, machine learning, or other technology to select, recommend, rank, or personalize content for a minor user based on the user’s profile or other data from their online activity. Effective on November 1, 2025. As of March 4, 2025, the bill has been placed on General Order.

SB894

Introduced on January 16, 2025, SB 894 prohibits using a known deepfake of a candidate or political party within 90 days of an election, unless there is a conspicuous disclaimer stating that the image has been generated by AI. Effective on November 1, 2025. As of March 4, 2025, the bill has been placed on General Order.

Failed

HB3453

Introduced on February 5, 2024, HB 3453, the Oklahoma Artificial Intelligence Bill of Rights would give Oklahoma residents the following rights:

  1. The right to know when they are interacting with an artificial intelligence engine rather than a real person;
  2. The right to know when their data is being used in an artificial intelligence model and the right to opt-out;
  3. The right to know when contracts and other documents that they are relying on were generated by an artificial intelligence engine rather than a real person;
  4. The right to know when they are consuming images or text that were generated entirely by an artificial intelligence engine and not reviewed by a human;
  5. The right to be able to rely on a watermark or some other form of content credentials to verify the authenticity of creative products they generate or consume. Specifically, it shall not be permissible for any websites, social media platforms, search engines, and the like, to remove a watermark or content credential without inserting an updated credential that indicates that the original was removed or altered.
  6. The right to know that any company which includes any of their data in an artificial intelligence model has implemented industry best practice security measures for data privacy, and conducts at least annual risk assessments to assess design, operational and discrimination harm.
  7. The right to approve any derivative media that is generated by an artificial intelligence engine and uses audio recordings of their voice or images of them to recreate their likeness.
  8. The right to not be subject to algorithmic or model bias which discriminates based on age, race, national origin, sex, disability, pregnancy, religious beliefs, veteran status, or any other legally protected classification.

If passed, the act would take effect November 1, 2024.

HB3577

Introduced on February 5, 2024, HB3577, the Artificial Intelligence Utilization Review Act, would:

  • Require health insurers to disclose the use of AI algorithms;
  • Require health insurers to submit AI systems to the Oklahoma Department of Insurance for review.

A violation would be deemed an unfair method of competition and an unfair or deceptive act or practice, with civil penalties between $5,000 and $10,000.

If passed, the act would take effect November 1, 2024.

HB3835

Introduced on February 5, 2024, HB 3835, the Ethical Artificial Intelligence Act, would:

  • direct deployers of automated decision tools to complete and document certain impact assessments;
  • direct developers of automated decision tools to complete and document certain impact assessments;
  • direct deployers and developers to make impact assessments of certain updates;
  • mandate that developers and deployers provide certain impact assessments to the office of the attorney general;
  • require developers to provide certain documentation to deployers;
  • require developers to make certain information publicly available;
  • prohibit deployers from engaging in algorithmic discrimination.

The act would be enforced by the attorney general. A violation of the act would be an unfair or deceptive act in trade or commerce for the purpose of applying the Oklahoma Consumer Protection Act. Harmed parties may bring a civil action.

If passed, the act would take effect November 1, 2024.

Enacted

SB619

On August 1, 2023, Oregon passed SB619, the state’s first omnibus consumer privacy law.  The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making.  Specifically, the bill enables individuals to opt-out of processing for the purpose of “profiling the consumer to support decisions that produce legal effects or effects of similar significance.”  Profiling is defined as “an automated processing of personal data for the purpose of evaluating, analyzing or predicting an identified or identifiable consumer’s economic circumstances, health, personal preferences, interests, reliability, behavior, location or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities. The law went into effect on January 1, 2024.

Proposed

HB3899

Introduced on February 27, 2025, HB 3899 amends existing laws to prohibit controllers from processing sensitive data for the purposes of targeted advertising or profiling a consumer to make decisions that produce legal or similarly significant effects. The bill also prohibits a controller from selling sensitive data for any reason. The bill applies to any person who conducts business in the state and processes or controls the personal data of at least 35,000 consumers, or at least 10,000 consumers if the person derives more than 20% of annual gross revenue from selling personal data.

Proposed

HB78

Introduced on January 14, 2025, HB 78 would enact the Consumer Data Privacy Act and give consumers the right to opt out of the processing of their personal data for the purpose of profiling. As of April 23, 2025, the bill has been re-sent to the Appropriations Committee.

HB95

Under HB 95, introduced on January 14, 2025, it is considered an unfair or deceptive act to knowingly or recklessly create, distribute, or publish AI-generated content without a clear and conspicuous disclosure. The disclosure must state that the content was generated using AI and be presented in a way that is understandable and noticeable to the average consumer. The disclosure must be 1) displayed in the first instance when the content is presented to the consumer, 2) presented in the same medium as the content, 3) readily noticeable and understandable, and 4) not contradictory or inconsistent. Would be effective 60 days after passage. As of January 14, 2025, it has been referred to the Communications and Technology Committee.

HB431

Under HB 431, introduced on January 31, 2025, an individual is guilty of unauthorized dissemination if they knowingly or recklessly distribute an artificially generated impersonation of an individual without their consent. This offense is a first-degree misdemeanor and is elevated to a third-degree felony if committed with the intent to defraud or injure the other person. Dissemination made with the depicted person's consent, and dissemination by law enforcement officers performing their official duties, are exempt from prosecution under this bill. This bill is applicable if the victim or the offender is located within Pennsylvania. Would be effective 90 days after passage. As of January 31, 2025, it has been referred to the Communications and Technology Committee.

HB518

HB 518, introduced on February 5, 2025, aims to modify the definitions of “unfair methods of competition” and “unfair or deceptive acts or practices” to include failing to comply with the terms of a written guarantee, warranty, or policy generated by consumer-facing AI used by a business, and also engaging in any other fraudulent or deceptive behavior that creates a likelihood of confusion or misunderstanding. Would be effective 60 days after passage.

HB2660

Under both HB 2660 and HB 317, individuals who create or distribute AI-generated content must place a watermark of 50% opacity over 30% of the content. The watermark must include the statement: “Artificial Intelligence Generated Material.” Film or television productions are exempt if AI is used for visual effects without involving the use of an individual or if the individual has provided written consent for their likeness to be used. Violating this requirement is a second-degree misdemeanor; a first offense will result in a $1,000 fine, while a second or subsequent offense within five years will result in a $10,000 fine. As of January 27, 2025, the bill has been referred to the Communications and Technology Committee.
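For a concrete sense of what a requirement like this could mean in practice, the sketch below overlays a 50%-opacity band carrying the required statement across roughly 30% of an image using the Pillow library. It is an illustrative sketch only, not a compliance implementation: the band placement, sizing heuristic, font, and file paths are all our assumptions rather than anything specified in the bills.

```python
# Illustrative only: one way a creator might apply a watermark at 50% opacity
# covering roughly 30% of an AI-generated image. All layout choices here are
# assumptions; the bills specify only opacity, coverage, and the statement.
from PIL import Image, ImageDraw, ImageFont

STATEMENT = "Artificial Intelligence Generated Material"

def watermark(path_in: str, path_out: str) -> None:
    base = Image.open(path_in).convert("RGBA")
    w, h = base.size
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    band_h = int(h * 0.30)   # a band covering ~30% of the image area
    band_top = (h - band_h) // 2
    # 50% opacity corresponds to an alpha value of 128 out of 255.
    draw.rectangle([0, band_top, w, band_top + band_h], fill=(255, 255, 255, 128))
    font = ImageFont.load_default()
    draw.text((10, band_top + band_h // 2), STATEMENT, fill=(0, 0, 0, 128), font=font)
    Image.alpha_composite(base, overlay).convert("RGB").save(path_out)

watermark("generated.png", "generated_watermarked.png")
```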

SB806

Introduced on June 3, 2025, SB806 would amend the Unfair Trade Practices and Consumer Protection Law to include “knowingly or recklessly creating, distributing, or publishing any content generated by artificial intelligence without clear and conspicuous disclosure…” as a part of its definition of “unfair methods of competition” and “unfair or deceptive acts or practices.” As of June 3, 2025, it has been referred to the Communications and Technology Committee.

Failed

HB49

Introduced on March 7, 2023, HB49 would direct the Department of State to establish a registry of businesses operating artificial intelligence systems in the State.  The registry would include (1) The name of the business operating artificial intelligence systems; (2) The IP address of the business; (3) The type of code the business is utilizing for artificial intelligence; (4) The intent of the software being utilized; (5) The personal information and first and last name of a contact person at the business; (6) The address, electronic email address and ten-digit telephone number of the contact person; and (7) A signed statement indicating that the business operating an artificial intelligence system has agreed for the Department of State to store the business's information on the registry. There has been no further action on HB49 since March 7, 2023.

HB708

Introduced on March 27, 2023, HB708 would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia. Among its requirements, the bill provides consumers with the right to opt out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.” Profiling is defined as a “form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of: (i) discriminatory, unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (ii) financial, physical or reputational injury to consumers; (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (iv) other substantial injury to consumers.”

If passed, the act would go into effect in 18 months. There has been no further action taken on HB708 since March 27, 2023.

HB1201

Introduced on December 13, 2023, HB 1201 appears similar to HB 708 (above) in that it would establish an omnibus consumer privacy law. It provides consumers with the right to “Opt out of the processing of the consumer's personal data for the purpose of any of the following: (i) Targeted advertising; (ii) The sale of personal data, except as provided under section 5(b); and (iii) Profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer.” “Profiling” is defined as “Any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual's economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill would mandate data protection impact assessments where “the profiling presents a reasonably foreseeable risk of any of the following: (i) Unfair or deceptive treatment of, or an unlawful disparate impact on, a consumer. (ii) Financial, physical or reputational injury to a consumer. (iii) A physical or other intrusion upon the solitude or seclusion of a consumer or the private affairs or concerns of a consumer where the intrusion would be offensive to a reasonable person. (iv) Any other substantial injury to a consumer.”

If passed, the act would take effect in 6 months.

HB1598

Introduced on August 7, 2023, HB 1598 would amend the Unfair Trade Practices and Consumer Protection Law to expand the definition of an unfair trade practice to include “creating, distributing or publishing any content generated by artificial intelligence without clear and conspicuous disclosure, including written text, images, audio and video content and other forms of media.”

If passed, the act would take effect in 60 days.

HB1663

Introduced on September 7, 2023, HB 1663 would require disclosure by health insurers of the use of artificial intelligence-based algorithms in the utilization review process. Requirements would include:

  • Disclose to clinicians, subscribers, and the public that claims evaluations use AI algorithms;
  • Define “algorithms used in claims review” as clinical review criteria, thereby ensuring they are subject to existing laws and regulations requiring that such criteria be grounded in clinical evidence;
  • Require specialized health care professionals who review claims for health insurance companies and rely on initial AI algorithms for such reviews to individually open each clinical record or clinical data, examine this information, and document both their own review and the reason for denial before any decision to deny a claim is conveyed to a subscriber or health care provider; and
  • Require health insurance companies to submit their AI-based algorithms and training datasets to the Pennsylvania Department of Insurance for transparency, and require the Department of Insurance to certify that said algorithms and training datasets have minimized the risk of bias based on categories outlined in the Human Relations Act and other anti-discrimination statutes as applicable to health insurance in Pennsylvania and adhere to evidence-based clinical guidelines.

If passed, the act would take effect in 60 days. No further action has been taken on HB 1663 since September 7, 2023.

HB1947

Introduced on January 9, 2024, HB 1947 appears similar to HB 708 and HB 1201 (above) in that it would establish an omnibus consumer privacy law.  It provides consumers with the right to “Decline or opt out of the processing of the consumer's personal information for the purpose of any of the following: (i) Targeted advertising. (ii) The sale of personal information. (iii) Profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” “Profiling” is defined as “A form of automated processing of personal information to evaluate, analyze or predict personal aspects concerning an identified individual or identifiable individual, including the individual's economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” A Data Protection Impact Assessment is not specifically mentioned in this bill.

If passed, the act would take effect in 1 year.

SB1044

PA SB1044, introduced May 16, 2024, proposes amendments to the Unfair Trade Practices and Consumer Protection Law that address the creation, distribution, or publication of AI-generated content. A disclosure would be required that clearly states that the content was AI-generated. The amendments would exempt owners, agents, or employees of radio or television stations, ISPs, newspapers, and other publications that, in good faith, acted without knowledge that the content was AI-generated.

Proposed

H5224

Introduced on February 7, 2025, H5224 would create a cause of action in AI cases: developers of covered models or covered model derivatives would be strictly liable for all injuries to a non-user of the covered model that satisfy the harm element of a negligence claim, provided the causation and foreseeability elements are met. Effective upon passage. As of February 11, 2025, the bill was held for further study on the recommendation of the Committee.

S13 and H5172

Introduced on January 23 and 24, 2025, S13 and H5172 would both amend Title 27 of the General Laws to add the Transparency and Accountability in Artificial Intelligence Use by Health Insurers to Manage Coverage and Claims Act. The bills aim to regulate the use of AI by health insurers to ensure transparency, accountability, and compliance with state and federal requirements for claims and coverage management. Under the bills, insurers must publicly disclose how they use AI to manage claims and coverage, and maintain documentation of AI decisions for at least five years. Additionally, enrollees and healthcare providers must receive notice when AI is used to issue an adverse determination, along with a clear process for appealing such determinations. To increase accountability in the use of AI in insurance coverage and claims management, the bills also provide that insurers cannot rely exclusively on AI to deny, reduce, or alter coverage or claims for medically necessary care, and that adverse determinations must be reviewed by human healthcare professionals. Effective upon passage.

S627

Introduced on March 7, 2025, S627 provides that a developer of a high-risk artificial intelligence system must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination from using the high-risk AI system. As of May 12, 2025, the bill has been recommended by the Committee to be held for further study.

S903

Introduced on March 27, 2025, S903 prohibits a covered entity that provides an online service, product, or feature from profiling a known child by default unless it can demonstrate that it has appropriate safeguards in place to ensure that profiling is consistent with reasonable care or that profiling is necessary to provide the online service. As of April 1, 2025, the bill has been recommended by the Committee to be held for further study.

Failed

H7521

RI H7521, introduced February 7, 2024, seeks to regulate automated decision tools and artificial intelligence by requiring regular impact assessments to measure the purpose, outputs, safeguards, and adverse impacts of such technologies. The bill would require that individuals subject to such automated decisions be notified that the consequential decisions were made using automated tools and/or AI. It also prohibits discrimination and allows civil actions against developers and deployers for such discrimination.

Committee recommended that the measure be held for further study.

HB6236

Introduced on March 30, 2023, HB6236, the Rhode Island Data Transparency And Privacy Protection Act, would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia.  Among its requirements, the bill provides consumers with the right to opt-out of the processing of their personal data for purposes of “profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the customer.”  Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual's economic situation, health, personal preferences, interests, reliability, behavior, location or movements.”  The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of unfair or deceptive treatment of, or unlawful disparate impact on, customers, financial, physical or reputational injury to customers, a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of customers, where such intrusion would be offensive to a reasonable person, or other substantial injury to customers[.]” The law was not accepted prior to the end of the legislative session in June 2023.

H6286

Introduced on April 19, 2023, H6286 would regulate companies’ uses of generative artificial intelligence models. Any company using large-scale generative AI may not use AI for discriminatory practices. The AI model must be programmed to generate text with a distinctive watermark to prevent plagiarism. The company must implement reasonable security measures to protect the data of individuals used to train the model, and the company must obtain informed consent from these individuals before using their data. The company must also conduct regular assessments of the potential risks and harms related to its services. Within 90 days of the effective date of this act, any company using large-scale generative AI must register the name of the company, a description of the AI model, and information on the company’s data-gathering practices with the attorney general.

S2888

S2888, entitled “Automated Decision Tools” and introduced on March 22, 2024, would require companies developing or deploying high-risk AI systems to conduct impact assessments and adopt risk management programs. Deployers would be required to implement and maintain risk management programs that identify, mitigate, and document risks associated with “consequential artificial intelligence decision systems” (CAIDS) before deployment. Developers would be obligated to provide deployers with information related to impact assessments, including the capabilities and limitations of CAIDS.

Committee recommended that the measure be held for further study.

SB146

Introduced on February 1, 2023, SB146 would prohibit certain uses of automated decision systems and algorithmic operations in connection with video-lottery terminals and sports betting applications.  The law would take effect upon passage. The law was not accepted prior to the end of the legislative session in June 2023.

Proposed

HB3401

Introduced on December 5, 2024, HB3401 provides consumers with the right to opt out of the processing of their personal data for the purposes of profiling in furtherance of a decision that produces a legal or similarly significant effect concerning a consumer. As of January 14, 2025, it was referred to the Committee on the Judiciary.

HB3402

Introduced on December 5, 2024, HB 3402 provides that a covered entity that provides an online service, product, or feature reasonably likely to be accessed by children cannot profile a child by default unless it can demonstrate that appropriate safeguards are in place, profiling is necessary, and a compelling reason exists for the profiling. Effective upon approval by the Governor. As of January 14, 2025, the bill was referred to the Committee on the Judiciary.

SB268

Introduced on January 28, 2025, SB 268 aims to protect minors by regulating how online services handle minors’ personal data and design features. Under this bill, online services must exercise care in using minors’ personal data and designing features to prevent harms such as compulsive usage, psychological harm, and identity theft. Online services may only collect the minimum amount of personal data necessary and must restrict its use to that specific purpose. The bill also prohibits sending push notifications to minors during specific hours and restricts profiling of minors unless necessary and subject to appropriate safeguards. The bill provides minors with tools to limit communication, control data visibility, opt out of features, and manage in-app purchases, and offers parents tools to manage account settings, restrict purchases, and monitor their children’s usage. The state’s Attorney General is responsible for enforcing this bill, and online services may be liable for financial damages for violations, while officers and employees may be held personally liable for willful violations. Effective upon approval by the Governor. As of May 1, 2025, the bill has been referred to the Committee on the Judiciary.

Failed

H4660

Introduced on January 9, 2024, H4660 would require that “a person, corporation, committee, or other entity shall not, within ninety days of an election at which a candidate for elective office will appear on the ballot, distribute a synthetic media message that the person, corporation, committee, or other entity knows or should have known is a deceptive and fraudulent deepfake of a candidate on the ballot.”

If passed, the act would take effect immediately.

H4696

Introduced on January 9, 2024, H4696 would create a consumer data privacy law in South Carolina similar to those in states like Virginia. Among other requirements, controllers must honor verifiable consumer requests to opt out of “profiling in furtherance of a decision that produces a legal or similarly significant effect concerning a consumer.” Controllers also must conduct a data protection impact assessment for “the processing of personal data for purposes of profiling if the profiling presents a reasonably foreseeable risk of: (a) unfair or deceptive treatment of or unlawful disparate impact on consumers; (b) financial, physical, or reputational injury to consumers; (c) a physical or other intrusion on the solitude or seclusion, or the private affairs or concerns, of consumers, if the intrusion would be offensive to a reasonable person; or (d) other substantial injury to consumers.”

"Profiling" means “any form of solely automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  If passed, the act would take effect immediately.

H4842

Introduced on January 16, 2024, H4842, the South Carolina Age-Appropriate Design Code Act would apply to any business operating in South Carolina that either: “(i) has annual gross revenues more than twenty-five million dollars, as adjusted every odd-numbered year to reflect the Consumer Price Index;  (ii) alone or in combination, annually buys, receives for the covered entity's commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal data of fifty thousand or more consumers, households, or devices; or (iii) derives fifty percent or more of its annual revenues from selling consumers' personal data.”

Covered entities would be prohibited from “profiling” children under age 18 by default unless both of the following criteria are met: “ (a) the covered entity can demonstrate it has appropriate safeguards in place to ensure that profiling is consistent with the best interests of children reasonably likely to access the online service, product, or feature; and (b) either of the following is true: (i) profiling is necessary to provide the online service, product, or feature requested and only with respect to the aspects of the online service, product, or feature with which a child is actively and knowingly engaged; or (ii) the covered entity can demonstrate a compelling reason that profiling is in the best interests of children.”

“Profiling” means “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. ‘Profiling’ does not include the processing of information that does not result in an assessment or judgment about a natural person.”

SB404

Introduced on January 18, 2023, SB404, would prohibit any operator of a website, an online service, or an online or mobile application, including any social media platform, from utilizing an automated decision system (ADS) for content placement, including feeds, posts, advertisements, or product offerings, for a user under the age of eighteen.  In addition, an operator that utilizes an ADS for content placement for residents of South Carolina who are eighteen years or older shall perform an age verification through an independent, third-party age-verification service, unless the operator employs the bill’s prescribed protections to ensure age verification. The bill includes a private right of action.

Enacted

SB164

Enacted on March 31, 2025, SB 164 prohibits the use of a deepfake to influence an election within 90 days of an election if the person knows or reasonably should know the item being disseminated is a deepfake and does not include the required disclosure.

Under SB 164, a deepfake includes any image, audio or video recording created with the use of artificial intelligence or other digital technology that is so realistic that a reasonable person would believe it depicts an individual who did not actually engage in the speech or conduct depicted.

Deepfakes must include a disclosure that states: “This (image/video/audio) has been manipulated or generated by artificial intelligence.” For an image or video recording, the text of the disclosure must appear in a size that is easily readable by the average viewer and no smaller than the largest font size of other text appearing in the image or video recording. The disclosure must be superimposed over each deepfake. For an audio recording, the disclosure must be read in a clearly spoken manner and in a pitch that is easily heard by the average listener at the beginning and end of the audio recording.

This act does not apply to (1) satirical or parodic uses of a deepfake, (2) radio or television broadcast systems or a cable or satellite television operator, programmer or producer that broadcasts a deepfake as part of bona fide news if the broadcast clearly acknowledges that there are questions about the authenticity of the deepfake or is paid to broadcast a deepfake, (3) an internet website or regularly published newspaper, magazine or other periodical that routinely carries news and is paid to disseminate a deepfake, or (4) an internet computer service, internet service provider, domain provider, cloud service provider, or other similar provider that features a deepfake, to the extent that the provider acts in a merely technical, automatic, or intermediate nature.

The state Attorney General, a candidate, or the individual depicted in a deepfake may seek injunctive or other equitable relief for a deepfake disseminated in violation of this act. Violators may also be liable for damages, reasonable costs and attorney fees, and other relief as deemed appropriate by the court.

Enacted

HB1181

Effective July 1, 2025, HB1181, the Tennessee Information Protection Act, establishes an omnibus consumer privacy law along the lines of those enacted in states like Virginia.  Among its requirements, the bill mandates the performance of data protection assessments in connection with “profiling” where the profiling presents a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers.  “Profiling” is defined as “a form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]”  The law gives the Tennessee Attorney General’s Office authority to impose civil penalties on companies who violate the law.

ELVIS Act

The Ensuring Likeness Voice and Image Security Act (“ELVIS Act”) was signed into law on March 21, 2024. The Act protects voices of songwriters, performers, and celebrities from artificial intelligence and deepfakes by prohibiting the use of AI to mimic a person’s voice without their permission, and treats violations as Class A misdemeanors. The Act also authorizes civil action against any person who violates the law. The Act becomes effective July 1, 2024.

Enacted

HB4

Introduced on February 16, 2023, HB4, the Texas Data Privacy and Security Act, is based on the Virginia Consumer Data Protection Act.  Once effective, the law will create similar requirements enabling individuals to opt-out of “profiling” that produces a legal or similarly significant effect concerning the individual.  Controllers must also perform a data protection assessment for high-risk profiling activities.  The Act goes into force on July 1, 2024.

HB1709

HB1709, the Texas Responsible AI Governance Act (“TRAIGA”) was introduced by Rep. Capriglione on December 23, 2024. Rep. Capriglione has had prior success with privacy-related bills in Texas, such as the Texas Data Privacy and Security Act, and worked with industry stakeholders to draft TRAIGA. If passed, TRAIGA would amend the Texas Data Privacy and Security Act to establish risk-based obligations in connection with the use of AI systems.

A “high-risk artificial intelligence system” is defined as any AI system that, when deployed, makes, or contributes to making, a consequential decision. “Consequential decisions” are decisions that have a material legal or similarly significant effect on the consumer, such as those relating to criminal case assessments, education enrollment, financial services, electricity services, food, healthcare services, housing, and other similarly important considerations. “Algorithmic discrimination” is defined as any unlawful differential treatment or impact that disfavors an individual or group based on their actual or perceived age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected classifications. TRAIGA would also establish an AI Council in Texas.

TRAIGA would require developers of “high-risk artificial intelligence systems” to exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system. Developers would also be required to provide a risk assessment to any deployers of the system, describing how the system should be used, any known limitations, and any reasonably foreseeable risks associated with its use, among other factors. Deployers of these systems would also be required to independently prepare a separate risk assessment of the system.

TRAIGA would impose additional obligations, including requirements to: (i) limit the risk that a high-risk AI system could be used to circumvent informed decision-making; (ii) prohibit the use of the system for social scoring; (iii) prohibit the collection of biometric identifiers in certain instances; (iv) prohibit emotion recognition without a consumer’s consent; and (v) prohibit the development of sexually explicit media.

Consumers would need to be notified about the use of high-risk AI systems, both by the developer and the deployer, depending on the circumstances and the consumer's relationship with the deployer. In general, consumers must be notified of the system's use prior to interacting with it. This notification must include a description of the system's purpose, the fact that the system may or will make a consequential decision affecting the consumer, the nature of any consequential decisions in which the system may be a contributing factor, the factors used in making those decisions, contact information for the deployer, a statement regarding any human or automated components of the system, and a declaration of consumer rights under Section 551.107 (e.g., the right to seek declaratory or injunctive relief, as described below).

The deployer or developer would also be required to notify relevant state regulators (e.g., the state Attorney General, the AI Council created under TRAIGA, or the relevant state regulator for the industry) and affected consumers “as soon as practicable, but no later than the 10th day” after discovering that a high-risk AI system has caused or is likely to cause: (1) algorithmic discrimination of an individual or group, or (2) an inappropriate or discriminatory consequential decision. If the deployer discovers or is made aware that a deployed high-risk AI system is using inputs or producing outputs that violate TRAIGA, the deployer must cease operating the system as soon as technically feasible and notify the AI Council and the Texas Attorney General as soon as practicable, but no later than 10 days after discovering the violation.

Enforcement would be handled by the Texas Attorney General, who would be authorized to bring civil actions against the developer and/or deployer to recover reasonable attorney’s fees and other reasonable expenses. The Attorney General could also impose a fine of between $5,000 and $10,000 per uncured violation. If a violation cannot be cured, the Attorney General may impose an administrative fine of between $40,000 and $100,000 per violation. As currently drafted, there would be a 30-day cure period from the notification of any alleged violation of the Act. Any developer or deployer who continues to operate in violation of the Act would be subject to a fine of $1,000 to $20,000 per day.

TRAIGA also authorizes consumers to seek declaratory relief (with the ability to recover reasonable attorneys’ fees) or injunctive relief against any deployer or developer who violates the Act.

TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption from TRAIGA’s general restrictions. Additionally, there is an exemption for AI developers who release their systems under a free and open-source license in certain circumstances.

Proposed

HB149

HB 149, the Texas Responsible AI Governance Act (“TRAIGA”), was introduced on March 14, 2025, passed by the legislature on June 1, 2025, and is awaiting the Governor’s signature. TRAIGA will take effect January 1, 2026. TRAIGA amends the Texas Data Privacy and Security Act to establish risk-based obligations in connection with the use of AI systems.

TRAIGA requires government agencies that deploy AI systems that interact with consumers to disclose to each consumer (before or at the time of interaction) that the consumer is interacting with an AI system. A person must make this disclosure regardless of whether it would be obvious to a reasonable consumer that they are interacting with an AI system. The disclosure may be provided using a hyperlink to a separate Internet page, but it must be clear and conspicuous, written plainly, and may not use a dark pattern.

If an AI system is used in the provision of a healthcare service or treatment, the provider shall provide the disclosure no later than the date the service or treatment is first provided.
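As a rough illustration of how a deployer might operationalize this kind of point-of-interaction disclosure, the sketch below prepends a notice to the first response in each conversation. It is a minimal pattern under our own assumptions (the disclosure wording, the session tracking, and the generate_reply() placeholder are all hypothetical), not a compliance implementation of HB 149.

```python
# Minimal sketch: surface an AI disclosure before or at the time of the first
# interaction in each session. Wording and structure are assumptions.

DISCLOSURE = ("You are interacting with an artificial intelligence "
              "system, not a human.")

class DisclosingChatbot:
    def __init__(self) -> None:
        self.disclosed_sessions: set[str] = set()

    def reply(self, session_id: str, message: str) -> str:
        answer = self.generate_reply(message)
        # Disclose on the first exchange, even if the AI nature of the
        # system would be obvious to a reasonable consumer.
        if session_id not in self.disclosed_sessions:
            self.disclosed_sessions.add(session_id)
            return f"{DISCLOSURE}\n\n{answer}"
        return answer

    def generate_reply(self, message: str) -> str:
        return "..."  # placeholder for the underlying model call
```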

Further, TRAIGA prohibits the deployment of an AI system in a manner that intentionally seeks to incite or encourage a person to (1) commit self-harm, (2) harm another person, or (3) engage in criminal activity.  

TRAIGA also prohibits government entities from using or deploying AI systems that evaluate or classify a person or group of people based on social behavior or personal characteristics with the intent to calculate or assign a social score or similar categorical estimation of the person or group of people (i.e., social scoring) that results or may result in discriminatory, detrimental, unconstitutional, or disproportionate  treatment against a person or group of people.

Non-government entities or individuals are similarly prohibited from developing or deploying an AI system with the sole intent to infringe, restrict, or otherwise impair an individual’s Constitutional rights.

Enforcement will be handled by the Texas Attorney General and state agencies. The Attorney General will create an online mechanism where consumers may submit a complaint under the Act to the Attorney General. If the Attorney General determines that a person is in violation, the Attorney General shall notify the person in writing and must provide the person a 60-day period to cure before bringing an action. The Attorney General may impose civil penalties of between $10,000 and $12,000 per uncured violation. If a violation cannot be cured, the Attorney General may impose civil penalties between $80,000 and $200,000 per violation. Continued violations of the Act would be subject to a fine of $2,000 to $40,000 per day. The Attorney General may bring an action to collect civil penalties, seek injunctive relief, and recover attorney’s fees and reasonable costs, but the Attorney General may not bring an action against a person for an AI system that has not been deployed.

A state agency may impose sanctions (e.g., suspension, probation, revocation of a license, or a monetary penalty up to $100,000) against a person that is licensed, registered, or certified by that agency for a violation of the Act, if the person has been found in violation and the Attorney General has recommended additional enforcement by the applicable agency.

TRAIGA also establishes an “AI Council” and an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption from TRAIGA’s general restrictions. Additionally, there is an exemption for AI developers who release their systems under a free and open-source license in certain circumstances. The AI Council will be responsible for ensuring that AI systems in Texas are ethical, developed in the public’s best interest, and do not harm public safety or infringe people’s rights and freedoms, among other responsibilities aimed at promoting the ethical deployment of AI systems without impeding innovation in the state.

HB366

Introduced on February 27, 2025, TX HB 366 would require disclosures on certain political advertisements that contain altered media. Under HB 366, a person may not publish, distribute, or broadcast political advertising that includes an image, audio, or video recording (including where such media has been altered by generative artificial intelligence technology) of an officeholder or candidate that falsely depicts their appearance, speech, or conduct with the intent to influence an election. In such cases, the political advertising must include a disclosure indicating that the image, audio, or video did not occur in reality.

The requirements under HB 366 do not apply to an interactive computer service, an internet service provider, cloud service provider, communication service provider or similar provider, a radio or television broadcaster, or the owner or operator of a commercial sign.

A violation of this act is a Class A misdemeanor. If enacted, HB 366 will take effect on September 1, 2025.

SB2567

Introduced on March 13, 2025, TX SB 2567 would make it a deceptive trade practice to fail to disclose information regarding the use of an artificial intelligence system or algorithmic pricing systems to set prices. An “algorithmic pricing system” is any condition in which an AI system, when deployed, generates recommendations on pricing. If enacted, SB 2567 would take effect on September 1, 2025.

SB2991

SB 2991, introduced on March 14, 2025, would regulate the use of an automated employment decision tool to assess a job candidate’s fitness for a position without disclosure and consent. An “automated employment decision tool” is a computational process or software application that uses algorithms, machine learning, statistical modelling, data analytics, or an AI system to assess an applicant’s fitness for a position.

An employer must (1) notify the applicant that an automated employment decision tool may be used, (2) provide the applicant with information describing how the tool will be used to assess the applicant’s fitness for the position, and (3) obtain the applicant’s written consent for such use before using the tool.

SB 2991 would also prohibit employers from sharing an assessment of an applicant made by an automated employment decision tool with any person other than the person whose knowledge and skill is necessary to ensure the tool is correctly processing the applicant’s data.

After 30 days, the employer must destroy any hard copies and erase any electronic data files of the assessment. If instructed by an applicant, the employer must destroy copies of the assessment as soon as practicable.
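To make the retention mechanics concrete, here is a minimal sketch of a record-keeping routine consistent with a 30-day destruction rule plus on-request deletion. The storage shape, identifiers, and method names are hypothetical assumptions, not anything prescribed by the bill (and the bill's hard-copy destruction obligation has no software analogue).

```python
# Hypothetical sketch of a 30-day retention rule with on-request deletion.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

class AssessmentStore:
    def __init__(self) -> None:
        self._records: dict[str, tuple[datetime, dict]] = {}

    def save(self, applicant_id: str, assessment: dict) -> None:
        self._records[applicant_id] = (datetime.now(timezone.utc), assessment)

    def purge_expired(self) -> None:
        # Run periodically (e.g., a daily job) to erase assessments older
        # than the 30-day retention window.
        now = datetime.now(timezone.utc)
        self._records = {
            k: v for k, v in self._records.items() if now - v[0] < RETENTION
        }

    def delete_on_request(self, applicant_id: str) -> None:
        # Destroy an applicant's assessment as soon as practicable on request.
        self._records.pop(applicant_id, None)
```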

Applicants may file a complaint with the commission under this act, and the commission will investigate the complaint. The commission may also impose administrative penalties between $2,500 and $7,500 against an employer for each violation. If enacted, SB 2991 will take effect September 1, 2025.

Failed

HB4695

Introduced on March 10, 2023, HB4695 would prohibit the use of artificial intelligence technology to provide counseling, therapy, or other mental health services unless (1) the artificial intelligence technology application through which the services are provided is an application approved by the commission; and (2) the person providing the services is a licensed mental health professional or a person that makes a licensed mental health professional available at all times to each person who receives services through the artificial intelligence technology.  The artificial intelligence technology must undergo testing and approval by the Texas Health and Human Services Commission, the results of which will be made publicly available.  If passed, the law would take effect September 1, 2023.

Enacted

HB 452

Effective May 7, 2025, HB 452 (“Artificial Intelligence Applications Relating to Mental Health”) regulates the use of mental health chatbots that employ artificial intelligence (AI) technology. A "Mental health chatbot" means an AI technology that: (i) uses generative AI to engage in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health therapist; and (ii) a supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions. Where a supplier uses a mental health chatbot, it must cause the mental health chatbot to clearly and conspicuously disclose to users that the mental health chatbot is an AI technology and not a human. Such disclosures must be made before the Utah user can access the mental health chatbot features, at the beginning of any interaction with a Utah user if the user has not used the chatbot within the past seven days, and any time the user asks the chatbot whether AI is being used.

Under the act, suppliers of mental health chatbots are prohibited from: (1) selling to or sharing with any third party any: (a) individually identifiable health information of a Utah user or (b) user input of a Utah user; (2) using a mental health chatbot to advertise a specific product or service to a Utah user in a conversation between the Utah user and the mental health chatbot without disclosing clearly and conspicuously: (a) the advertisement as an advertisement and (b) any: (i) sponsorship, (ii) business affiliation, or (iii) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service; and (3) using a Utah user’s input to: (a) determine whether to display an advertisement to the user, unless the advertisement is for the mental health chatbot itself, (b) determine a product, service, or category of product or service to advertise to the user, or (c) customize how an advertisement is presented to the user. Despite the foregoing restrictions, mental health chatbots may recommend that a user seek counseling, therapy, or other assistance from a licensed professional. A “supplier” is anyone who is a seller, lessor, assignor, offeror, broker, or other person who regularly solicits, engages in, or enforces consumer transactions, whether or not he deals directly with the consumer.

Utah’s Division of Consumer Protection may enforce the provisions of this act, and may do so in conjunction with the Utah Attorney General. Violators of the act may face administrative fines of up to $2,500 for each violation and the division may bring suit, in which the court may (1) issue an injunction for a violation, (2) order disgorgement of money received in violation, (3) order payment of disgorged money to an injured purchaser or consumer, (4) impose a fine of up to $2,500 for each violation, (5) award other relief that the court determines reasonable and necessary. Additionally, a court may impose a civil penalty of no more than $5,000 for each violation of an administrative or court order issued for a violation of this chapter.

Note that it is an affirmative defense to liability in an action if the supplier can show that it: (1) created, maintained, and implemented a policy; (2) maintains documentation regarding the development and implementation of the mental health chatbot that describes (a) the models and data used to train and develop the chatbot, (b) the supplier’s compliance with federal health privacy regulations, (c) user data collection and sharing practices, and (d) the supplier’s ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) filed the policy with the division; and (4) complied with all requirements of the filed policy at the time of the alleged violation. The policy must be in writing and must: (1) state the intended purposes of the mental health chatbot; (2) state the abilities and limitations of the mental health chatbot; and (3) describe the procedures by which the supplier ensures the mental health chatbot is properly developed in accordance with industry best practices, that the chatbot avoids reasonably foreseeable adverse outcomes and potentially harmful interactions, and that the supplier implements measures to prevent discrimination, among other requirements.

SB149

Signed into law on March 13, 2024, the Artificial Intelligence Policy Act (SB149) requires covered businesses to “clearly and conspicuously” disclose to a consumer that they are interacting with “generative artificial intelligence and not a human.” “Generative artificial intelligence” refers to “an artificial system that: i) is trained on data; ii) interacts with a person using text, audio, or visual communication; and iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.” The Act applies to any “person” in a “regulated occupation” (i.e., occupations that require a person to obtain a license or state certificate), and any person regulated by the Utah Consumer Protection Division that uses generative artificial intelligence (though the disclosure obligations differ).

The Act requires that individuals be informed about the use of generative artificial intelligence in the following instances: when an individual interacts with a generative artificial intelligence system provided by a “person” in a “regulated occupation”; and “if asked or prompted by the person” interacting with the system for acts governed by Utah Code 13-2-1, which include sales, charitable activities, interactions with consumers, online activities, and similar commercial activities. Put another way, persons in a regulated occupation are required to disclose the use of a generative artificial intelligence system in most instances where a third person interacts with the system, while other persons regulated by the Consumer Protection Division are only required to disclose the use of such a system when asked by a third person whether the system is a human or not.

Such disclosures must be provided verbally, at the start of an oral exchange or conversation, and through electronic messaging before a written exchange.

The Act also creates the Office of Artificial Intelligence Policy to propose implementing rules and administer the Artificial Intelligence Learning Laboratory Program. Participants in the Learning Laboratory must meet certain eligibility criteria (e.g., technical expertise, financial resources, and effective plans to monitor and mitigate risks associated with generative artificial intelligence) and will work with the Office and will be able to apply for “regulatory mitigation” under an agreement with the Office to govern their use of artificial intelligence. These mitigations include, for instance, reduced civil fines during participation in the program, limits on when restitution may be required to the individual interacting with the system, and terms and conditions related to any cure period before penalties may be assessed.

Failure to comply with the bill would result in a civil penalty of no more than $2,500 per violation. The Utah Attorney General may also seek $5,000 per violation from any person who violates an administrative or court order relating to the use of generative artificial intelligence. Note that it is not a defense that the generative artificial intelligence committed the violation, such as by making a violative statement, taking a violative act, or acting in furtherance of any other violation of the Act.

The Act became effective May 1, 2024.  SB 332 delays the AI Policy Act’s automatic repeal date from May 2025 to July 2027, providing two additional years of effect.

SB 226

Effective May 7, 2025, SB 226 amends the Artificial Intelligence Policy Act. If a supplier uses generative AI to interact with an individual in connection with a consumer transaction and the individual clearly and unambiguously asks the supplier whether AI is being used, the supplier must disclose to the individual at the start of the interaction that the individual is interacting with generative AI and not a human.

If an individual is providing services in a regulated occupation (i.e., an occupation that requires an individual to obtain a license), it shall: (1) prominently disclose when an individual receiving services is interacting with generative AI if the use constitutes a high-risk AI interaction and (2) comply with all requirements of the regulated occupation when providing services through generative AI. A “high-risk AI interaction” is an interaction with generative AI that involves: (1) the collection of sensitive personal information (e.g., health data); (2) the provision of personalized recommendations, advice, or information that could reasonably be relied upon to make significant personal decisions (e.g., legal advice); and (3) other applications as defined by division rule.

Utah’s Division of Consumer Protection may enforce the provisions of this act, and may do so in conjunction with the Utah Attorney General. Violators of the act may face administrative fines of up to $2,500 for each violation and the division may bring suit, in which the court may (1) issue an injunction for a violation, (2) order disgorgement of money received in violation, (3) order payment of disgorged money to an injured purchaser or consumer, (4) impose a fine of up to $2,500 for each violation, (5) award other relief that the court determines reasonable and necessary. Additionally, a court may impose a civil penalty of no more than $5,000 for each violation of an administrative or court order issued for a violation of this chapter.

However, a person is not subject to an enforcement action if the generative AI clearly and conspicuously discloses, at the outset of and throughout any interaction with an individual in connection with a consumer transaction or the provision of regulated services, that it is generative AI, not a human, or is an AI assistant.

SB 271

Effective May 7, 2025, SB 271 prohibits using a person’s identity to falsely convey an endorsement. The act prohibits the abuse of an individual’s personal identity by (1) using content containing the personal identity of an individual for advertising, solicitation, or other commercial purposes in which the use: (a) expressly or impliedly conveys that the individual approves, endorses, has endorsed, or will endorse the specific subject matter; (b) creates a likelihood of confusion as to the participation, association, or connection of the individual; or (c) creates a false impression that the individual participated in or approved the use, and the use of the individual’s personal identity was done without his or her consent; and (2) knowingly distributing, selling, or licensing any technology, software, or tool whose intended primary purpose is the unauthorized creation or modification of content that includes an individual's personal identity for commercial purposes. One’s “personal identity” includes any simulation, reproduction, or artificial recreation of an individual’s name, image, likeness, picture, portrait, video likeness, voice, or audiovisual appearance, whether created through generative artificial intelligence (AI), computer animation, digital manipulation, or other technological means.

The act provides exceptions for the use of one’s personal identity in connection with: (1) a news, public affairs, or sports broadcast, (2) works of art (e.g., a play, book, magazine, newspaper, musical composition, visual work of art, etc.), (3) works of political, public interest, or newsworthy value, or (4) an advertisement or commercial announcement for any of the foregoing uses.

An individual whose personal identity has been abused may bring an action against a person who published the advertisement or content: (1) if the advertisement or content, on its face is such that a reasonable person would conclude that it is unlikely that an individual would consent to such use; and (2) the publisher did not take reasonable steps to assure that consent was obtained. The plaintiff will be entitled to injunctive relief, damages alleged and proved, exemplary damages, and reasonable attorney's fees and costs.

SB332

UT SB 332, enacted on March 25, 2025, extends the repeal date of Utah’s Artificial Intelligence Policy Act from May 1, 2025, to July 1, 2027.

Proposed

H710

Introduced on January 9, 2024, H.710 would put in place certain obligations for both developers and deployers of “high-risk artificial intelligence systems.” For developers, these obligations would include, among others, using reasonable care to avoid any risk of algorithmic discrimination that is a reasonably foreseeable consequence of developing or modifying a high-risk system to make consequential decisions. Developers would also be required to provide disclosures relating to the system, such as disclosures about the known limitations of the system and foreseeable risks of algorithmic discrimination, a summary of the type of data to be processed, the purpose of processing, mitigation measures put in place to limit identified risks, and other similar information necessary to conduct a risk assessment. Similar obligations would apply to developers of generative artificial intelligence.

Deployers would be required to use reasonable care to avoid any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using a high-risk artificial intelligence system. High-risk systems may only be used to the extent that the deployer has already implemented a risk management policy that is at least as stringent as the Artificial Intelligence Risk Management Framework published by NIST and the deployer has conducted a risk assessment for the system.

Search engines and social media platforms knowingly using, or reasonably believing that they are using, synthetic digital content would also be required to provide consumers with a signal indicating that the content was produced, or is reasonably believed to have been produced, by generative artificial intelligence.

Failure to comply with the Act would be treated as an unfair and deceptive act in trade and commerce in violation of 9 VSA 2453. The Attorney General may provide a cure period at its discretion. The Act would take effect on July 1, 2024.

H711

Introduced on January 9, 2024, H.711 would create an oversight and enforcement agency to collect and review risk assessments taken in connection with the use of high-risk artificial intelligence systems. The Act would require each deployer of “inherently dangerous artificial intelligence systems” to submit a risk assessment prior to deploying such a system and every two years thereafter, as well as submit a new risk assessment if material and substantial changes are made to the system. Deployers would also be required to submit 1-, 6-, and 12-month testing results to the Division of Artificial Intelligence showing the reliability of the results generated by the systems, as well as variances and mitigation measures put in place to limit risks posed by the use of such systems.

The Act would also create a duty for deployers and developers to meet a certain standard of care for the use of any inherently dangerous artificial intelligence systems that “could be reasonably expected to impact consumers.” The Act would also prohibit the deployment of inherently dangerous artificial intelligence systems that pose disproportionate risks unless those risks are evaluated and validated against the Artificial Intelligence Risk Management Framework published by NIST.

Violations of the Act would be treated as an unfair practice in commerce. The Act would also create a private right of action for consumers harmed by a violation of the chapter. The Act would take effect July 1, 2024.

HB208

Introduced on February 12, 2025, VT HB 208 proposes to establish the Vermont Data Privacy and Online Surveillance Act, which would protect consumers’ personal data and provide additional control over how their information is collected, used, and shared online.

The Act would apply to persons who conduct business in Vermont or who produce products or services that are targeted to residents of Vermont and that control or process the personal data of at least 25,000 consumers (excluding personal data processed solely for the purpose of completing a payment transaction) or that control or process the personal data of at least 12,500 consumers and derive more than 25% of gross revenue from the sale of personal data.

Among several data protection rights that the Act aims to provide to consumers in Vermont, the Act particularly enables consumers to know whether their personal data is or will be used in any artificial intelligence system and for what purpose. The Act also imposes several duties on controllers of personal data, including providing consumers with a reasonably accessible, clear, and meaningful privacy notice that describes any collection, processing, selling, or sharing of personal data for training or use of artificial intelligence systems, among other requirements.

The Attorney General has authority to enforce the Act. Prior to bringing an action, the Attorney General may issue a notice of violation extending a 60-day cure period to the controller or processor in violation. Individual consumers may also bring suit to recover the greater of $5,000 or actual damages, injunctive relief, punitive damages, reasonable costs and attorney's fees, and any other relief deemed appropriate by the court.

If enacted, the Vermont Data Privacy and Online Surveillance Act would take effect in phases, with Section 1 of the Act taking effect on July 1, 2025, and the final section, Section 4, taking effect on July 1, 2028.

HB262

Introduced on February 19, 2025, VT HB 262 would restrict electronic monitoring of employees and the use of employment-related automated decision systems. If passed, the act would take effect on July 1, 2025.

Under HB 262, employers may not electronically monitor employees unless (1) certain requirements are met; (2) the form of electronic monitoring is (a) necessary to accomplish the purpose, (b) the least invasive means of accomplishing that purpose, and (c) used with the smallest number of employees and collects the smallest amount of data necessary; and (3) the employer ensures only authorized persons have access to any data produced through the electronic monitoring.

Employers must also provide employees with a notice at least 15 calendar days prior to commencing any form of electronic monitoring, which must include a description of the technologies that will be used, among other information related to the electronic monitoring.

Additionally, HB 262 restricts employers from using automated decision systems in a manner that violates laws, makes predictions about an employee’s behavior that are unrelated to the employee’s essential job functions, or uses customer or client data as an input. An employer may not rely on outputs from an automated decision system when making employment-related decisions unless (1) such outputs are corroborated by human oversight of the employee, (2) the employer has conducted an impact assessment of the automated decision system, and (3) the employer provides a notice that explains the nature and scope for which the automated decision system will be used, the logic of the automated decision system, the specific categories and sources of employee data that the system will use, and any performance metric the employer will consider using with the system, among other requirements.

Employees would also have the right to access and correct data that relates to the employee and that was produced or used by the employer in electronic monitoring or an automated decision system. Employers would be required to correct potential errors within 7 days of receiving the request to correct.

HB365

Introduced on February 26, 2025, VT HB 365 would require providers of social media platforms and artificial intelligence systems to register annually with the Secretary of State and to agree to product safety and privacy terms.

As part of registration, a provider of an AI system must pay a fee of $100 and provide the following information: (1) name, email, and internet address; (2) the most recent version of the privacy policy and terms and conditions used by the AI system; and (3) the data collection, storage, and security practices of the AI system.  Additionally, providers must provide a description of the AI model and agree to the product safety and privacy terms set forth in the act.

Providers of AI systems who fail to register and provide the required information would face civil penalties of $50 for each day of noncompliance, not to exceed a total of $10,000 for each year in which the provider fails to register. The Attorney General may bring an action to recover the penalties imposed and to seek injunctive relief.

In terms of product safety and privacy, the provider of an AI system would (1) have a duty to exercise reasonable care to protect consumers from foreseeable risks of algorithmic discrimination, (2) disclose to consumers that they are interacting with an AI system, (3) obtain informed consent from consumers before collecting or using their data, (4) obtain separate informed consent before sharing or selling a consumer’s data, and (5) implement reasonable security measures to protect consumer data used to train the model.

If enacted, the act would take effect on July 1, 2025.

HB371

Introduced on February 26, 2025, VT HB 371 aims to regulate the use of dynamic pricing by retail establishments. “Dynamic pricing” is the use of artificial intelligence to adjust the prices on the electronic shelf labels of consumer commodities (e.g., food, drugs) at any given moment.

Under HB 371, retailers would be prohibited from using dynamic pricing to alter the retail or unit price of a consumer commodity while the retailer is open to the public.

If enacted, HB 371 would take effect on January 16, 2027.

HB389

VT HB 389, introduced on February 26, 2025, aims to restrict the use of artificial intelligence to influence the price and supply of rental housing.

SB23

Introduced on January 22, 2025, VT SB 23 would require disclosures for synthetic media (image, audio, or video) that creates a false representation of an individual with the intent to injure the reputation of a candidate, to influence the outcome of an election, or to otherwise deceive a voter within 90 days of an election. The act would take effect upon enactment.

Under SB 23, a disclosure must be provided with the synthetic media stating “This media has been created or intentionally manipulated by digital technology or artificial intelligence.” For images and video, the disclosure shall appear in a size that is easily readable and not smaller than the largest font size of other text appearing in the media, and the disclosure shall appear for the full duration of the video. For audio recordings, the disclosure must be read in a clearly spoken manner, and in a pitch and pace that can be easily heard at the beginning and end of the recording.

SB 23 shall not apply to (1) a radio or television broadcasting station, including a cable or satellite operator, programmer, or product, or to a website, streaming platform, or mobile application that broadcasts deceptive and fraudulent synthetic media as part of bona fide news, so long as the broadcast contains the required disclosure; (2) a website or regularly published newspaper that routinely carries news and that publishes deceptive and fraudulent synthetic media with the required disclosure; (3) a person that produces deceptive and fraudulent synthetic media that is satire or parody; (4) a provider of telecommunication or information services; or (5) a provider of an interactive computer service.

A person that knowingly and intentionally violates this act may face fines up to $1,000. A candidate who is misrepresented through the use of deceptive and fraudulent synthetic media may seek injunctive or other equitable relief against the publisher of such media. Additionally, the Attorney General may bring an action to seek injunctive relief against violators.

Failed

H114

Introduced on January 25, 2023, H114 would restrict the use of electronic monitoring of employees and the use of automated decision systems (ADSs) for employment-related decisions. Electronic monitoring of employees could be conducted only when, for example, the monitoring is used to ensure compliance with applicable employment or labor laws or to protect employee safety, and certain notice is given to employees 15 days prior to commencement of the monitoring. ADSs would also have to meet a number of requirements, including corroboration of system outputs by human oversight of the employee and creation of a written impact assessment prior to using the ADS.  The bill did not pass before the end of the legislative session in May 2023.

Enacted

VA321127

VA ST § 32.1-127, effective July 1, 2025, requires each hospital, nursing home, and certified nursing facility to establish and implement policies to ensure the permissible access to and use of an “intelligent personal assistant” provided by a patient, in accordance with applicable regulations, while the patient receives inpatient services. Such policies must ensure protection of health information in accordance with the requirements of the federal Health Insurance Portability and Accountability Act of 1996.

Under this section, an “intelligent personal assistant” means a combination of an electronic device and a specialized software application designed to assist users with basic tasks using a combination of natural language processing and artificial intelligence, including such combinations known as “digital assistants” or “virtual assistants”.

VCDPA

The Virginia Consumer Data Protection Act (VCDPA), which took effect on January 1, 2023, sets out rules for profiling and automated decision-making.  Specifically, the VCDPA enables individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer, which is generally defined as “the denial and/or provision of financial and lending services, housing, insurance, education enrollment or opportunities, criminal justice, employment opportunities, healthcare services, or access to basic necessities.”  Controllers must also perform a data protection impact assessment for high-risk profiling activities.

Failed

HB747

Introduced on January 10, 2024, HB 747, the Artificial Intelligence Developer Act, would prohibit developers of “high-risk artificial intelligence systems” from offering, selling, leasing, giving, or otherwise providing such a system to a third party for deployment unless they provide the deployer with sufficient information to perform a risk assessment on the use of the system, such as through a document detailing the potential risks and benefits of using the system, as well as a description of the intended uses of that system. Similar obligations would apply to developers of generative artificial intelligence.

The Act would also require deployers of artificial intelligence to take reasonable care to avoid any risk of reasonably foreseeable “algorithmic discrimination” and would permit them to use a high-risk artificial intelligence system to make “consequential decisions” only if the deployer has designed and implemented a risk management policy for the use of that program. The Act also specifies the elements that must be included in a risk assessment, which include, among other considerations, the purpose of processing, a description of transparency measures taken concerning the system, a description of the data used to train the algorithm, and other information.

Failure to comply with the Act would result in civil penalties not to exceed $1,000, plus reasonable attorney fees, expenses, and court costs; willful violations could result in civil penalties between $1,000 and $10,000. The law would take effect July 1, 2026.

HB2094

Passed on February 20, 2025, HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, prohibits developers of “high-risk artificial intelligence systems” from offering, selling, leasing, giving, or otherwise providing such a system to a third party for deployment unless they provide the deployer with sufficient information to perform a risk assessment on the use of the system, through documentation detailing the potential risks and benefits of using the system, the purpose of the system, how the system was evaluated for performance, and the measures taken to mitigate reasonably foreseeable risks of algorithmic discrimination that the developer knows arise from use of such system, as well as a statement disclosing the intended uses of that system. Similar obligations apply to developers of generative artificial intelligence.

The Act also requires deployers of artificial intelligence to take reasonable care to avoid any risk of reasonably foreseeable “algorithmic discrimination” and permits them to use a high-risk artificial intelligence system to make “consequential decisions” only if the deployer has designed and implemented a risk management policy and an impact assessment for the use of that program. The Act also specifies the elements that must be included in an impact assessment, which include, among other considerations, the purpose and intended uses of the system and whether the deployment of the system poses any known or foreseeable risks of algorithmic discrimination, including the nature of such algorithmic discrimination and the steps taken to mitigate the risk. Developers and deployers of such systems must keep records of impact assessments for at least three years following the final deployment of the system.

Moreover, where a deployer or developer has deployed a high-risk artificial intelligence system to interact with consumers, it must disclose to the consumer that the consumer is interacting with an artificial intelligence system, in addition to the purpose and nature of the system, the nature of the consequential decision, and descriptions of the characteristics and attributes that the system measures or assesses, among other requirements. Where the consequential decision is adverse to the consumer and based on personal data beyond what the consumer directly provided to the deployer, the deployer must provide a statement of the reasons for the decision, an opportunity to correct any inaccuracies in the consumer’s personal data, and an opportunity to appeal the decision.

Failure to comply with the Act can result in civil penalties not to exceed $1,000, plus reasonable attorney fees, expenses, and court costs; willful violations may result in civil penalties between $1,000 and $10,000. If enacted, the law would take effect July 1, 2026.

Proposed

HB1168

WA HB 1168, introduced on February 25, 2025, aims to increase transparency in artificial intelligence. HB 1168 would require developers of a generative AI system or service to post on their website documentation regarding the data used to train the generative AI, including a high-level summary of the datasets used in the development of the generative AI system. The summary must include the sources or owners of the datasets, a description of how the data furthers the intended purpose of the generative AI system, and a description of the types of data points within the datasets, among other information regarding the development of the generative AI system.

The Attorney General would handle enforcement, and a developer found in violation would be liable for a civil penalty of $5,000 per day. Before bringing an action against a violator, the Attorney General must notify the violator and provide a 45-day period to cure. If the violator fails to cure, the Attorney General may bring a civil action without further notice.

HB1170

WA HB 1170, introduced on January 13, 2025, would require covered providers to make publicly available an artificial intelligence detection tool to users that allows users to assess whether the image, video, or audio content was created or altered by the covered provider’s AI system. A “covered provider” is a person that creates, codes, or otherwise produces a generative AI system that has over 1,000,000 monthly visitors or users and is publicly accessible within the state.

The covered provider must also ensure that the tool allows a user to upload content or provide a uniform resource locator linking to online content and that the tool supports an application programming interface that allows a user to invoke the tool without visiting the covered provider’s internet website.

Additionally, a covered provider must offer the user the option to include a disclosure in image, video, or audio content that the content is created or altered by the covered provider’s generative AI system. The disclosure must (1) identify the content as AI-generated content, (2) be clear and conspicuous, and (3) be permanent or extraordinarily difficult to remove.

HB 1170 would not apply to any product, service, internet website, or application that exclusively provides video game, television, streaming, movie, or interactive experiences.

The Attorney General may enforce this act, but prior to bringing an action, the Attorney General must issue a notice of violation to the covered provider and provide a 45-day cure period.

HB1671

WA HB 1671, introduced on January 28, 2025, aims to enhance personal data privacy for Washington state consumers. The bill outlines consumer rights, including the rights to access, correct, and delete personal data and to opt out of data processing for targeted advertising, and imposes duties on data controllers and processors to ensure data security and transparency.

With respect to the use of artificial intelligence, the bill would provide consumers the right to (1) confirm whether a controller is collecting or processing personal data of the consumer, (2) access such personal data, and (3) confirm whether the consumer’s personal data is used to profile the consumer for the purpose of automated decision making. Consumers could also opt out of the processing of their personal data for purposes of profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer. Controllers would be required to establish, and describe in a privacy notice, a means by which consumers may submit a request to exercise their consumer rights under the act, such as a link on the controller’s website that enables the consumer to opt out of profiling in furtherance of solely automated decisions that produce significant effects concerning the consumer.

The Attorney General may bring an action to enforce the requirements under the act; however, the Attorney General must first notify the violator and provide a 30-day period to cure.

If enacted, HB 1671 would take effect on August 1, 2026.

HB1672

WA HB 1672, introduced on January 28, 2025, would regulate the use of electronic monitoring and automated decision systems in the workplace. Under HB 1672, employers may not utilize electronic monitoring of employees unless (1) it is for a permitted purpose as prescribed under the act, (2) the electronic monitoring is necessary to accomplish such purpose, (3) the form of electronic monitoring is the least invasive means, (4) the form of electronic monitoring is used with the smallest number of employees, and (5) the employer ensures that only authorized persons have access to any data produced through the electronic monitoring.

Additionally, at least 15 calendar days prior to commencing any electronic monitoring, employers must provide notice to each employee who will be subject to it and such notice must explain the form of electronic monitoring and its purpose along with a description of how any data generated will be used. The act also outlines prohibited uses of electronic monitoring in the workplace, such as for audio-visual monitoring of private areas of the workplace.

Further, HB 1672 outlines the ways in which an employer may employ an automated decision system. Such a system may not be used to violate laws, to profile or predict the likelihood that an employee will exercise their legal rights, to use customer or client data as an input, or to make predictions about an employee’s behavior that are unrelated to the employee’s essential job functions.

Prior to using an automated decision system, employers would need to create a written impact assessment that describes the system and its purpose, the data the system will use and how that data will be used, and a detailed assessment of the potential risks of the system.

Employers, anyone who develops, operates, or maintains electronic monitoring or automated decision systems on behalf of an employer, and any person who collects, stores, or analyzes data produced or used by electronic monitoring or an automated decision system would be required to implement reasonable security measures to protect employees’ personal data.

If the department determines that an employer has violated the act, the department may impose civil penalties of at least $1,000.  The department may also bring an action to enforce a violation under the act or to collect civil penalties.

If enacted, HB 1672 would be effective July 1, 2026.

SB5469

Introduced on January 23, 2025, WA SB 5469 would prohibit algorithmic rent fixing and noncompete agreements in the rental housing market.

Under SB 5469, it would be a violation for a service provider to coordinate two or more landlords. “Coordinate” means the act of a service provider that (1) collects historical or contemporary prices, supply levels, occupancy rates, or lease termination and renewal dates of residential dwelling units from multiple landlords, private databases, or public databases, and (2) analyzes or processes the information through the use of a system, software, algorithm, or other automated process to provide recommendations about the rental housing market to multiple landlords.

The Attorney General may bring an action on behalf of the state to enforce the act, and any person injured by a violation of the act may bring a civil action to recover damages.

Failed

HB1951

Introduced on December 14, 2023, HB1951 provides that, by January 1, 2025, and annually thereafter, developers and deployers of automated decision tools must complete and document an impact assessment for any automated decision tool the deployer uses, or the developer develops, as specified.  “Automated decision tool” means a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions. Upon the request of the Office of the Attorney General, a developer or deployer must provide any impact assessment that it performed pursuant to this section.  The bill requires certain other public disclosures and also prohibits the use of an automated decision tool that results in algorithmic discrimination.

SB5643

Introduced on January 31, 2023, and reintroduced on January 8, 2024, SB5643 and its companion HB1616, the People’s Privacy Act, would prohibit a covered entity or Washington governmental entity from operating, installing, or commissioning the operation or installation of equipment incorporating “artificial intelligence-enabled profiling” in any place of public resort, accommodation, assemblage, or amusement, or to use artificial intelligence-enabled profiling to make decisions that produce legal effects (e.g., denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water) or similarly significant effects concerning individuals. "Artificial intelligence-enabled profiling" is defined as the “automated or semiautomated process by which the external or internal characteristics of an individual are analyzed to determine, infer, or characterize an individual's state of mind, character, propensities, protected class status, political affiliation, religious beliefs or religious affiliation, immigration status, or employability.”  The bill also bans the use of “face recognition” in any place of public resort, accommodation, assemblage, or amusement.  “Face recognition” is defined as “(i) An automated or semiautomated process by which an individual is identified or attempted to be identified based on the characteristics of the individual's face; or (ii) an automated or semiautomated process by which the characteristics of an individual's face are analyzed to determine the individual's sentiment, state of mind, or other propensities including, but not limited to, the person's level of dangerousness[.]”

SB6299

Introduced on January 24, 2024, SB6299 would make it unlawful for any employer to utilize artificial intelligence or generative artificial intelligence to evaluate or otherwise make employment decisions regarding current employees without written disclosure of the employer's use of such technology at the time of the employee's initial hire or within 30 calendar days of the employer starting to use such technology for that purpose.

Failed

HB3498

Introduced on February 14, 2023, HB3498, the Consumer Data Protection Act, would create an omnibus consumer privacy law.  The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making.  Specifically, the bill enables individuals to opt-out of the processing of their personal data for the purpose of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.”  Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  Controllers must also perform a data protection assessment for high-risk profiling activities.

Enacted

AB664

WI AB 664, enacted on March 22, 2024, requires audio and video communications for political advertisements that contain synthetic media to include a disclosure stating “Contains content generated by AI.” For audio communications, the disclosure must be included at the beginning and at the end of the communication. For video communications, the disclosure must be displayed in writing throughout the duration of each portion of the communication, stating “This video/audio content generated by AI” or, for audiovisual media, “This content generated by AI.”

Violators may face penalties up to $1,000 for each violation.

Proposed

SB142

WI SB 142, introduced on March 21, 2025, would prohibit the use of algorithmic software in setting rental rates or occupancy levels for residential dwelling units and prohibits the sale, license, or provision of algorithmic software to residential landlords. “Algorithmic software” means software, including revenue management software, that uses an algorithm to perform calculations on nonpublic competitor data regarding rent or occupancy levels in Wisconsin for the purposes of informing a landlord’s decision on setting rental prices or occupancy rates. The foregoing includes a product or device that incorporates algorithmic software.

The Attorney General or a District Attorney may bring an action to seek injunctive relief or to recover a civil penalty of up to $1,000 per violation, plus attorney’s fees and court costs. Each month in which a violation persists constitutes a separate offense, and each dwelling unit for which a person has used algorithmic software constitutes a separate offense. Tenants may also bring a civil action to recover the greater of actual damages or $1,000, injunctive relief, or both.


This material is not comprehensive, is for informational purposes only, and is not legal advice. Your use or receipt of this material does not create an attorney-client relationship between us. If you require legal advice, you should consult an attorney regarding your particular circumstances. The choice of a lawyer is an important decision and should not be based solely upon advertisements. This material may be “Attorney Advertising” under the ethics and professional rules of certain jurisdictions. For advertising purposes, St. Louis, Missouri, is designated BCLP’s principal office and Kathrine Dixon (kathrine.dixon@bclplaw.com) as the responsible attorney.