SB892
SB 892, introduced on January 1, 2024, would affect businesses entering into contracts with state agencies to provide artificial intelligence services by prohibiting such contracts unless the business meets safety, privacy, and nondiscrimination standards for artificial intelligence services set by California's Department of Technology. To date, the Department of Technology has not promulgated these standards.
SB970
Introduced on January 25, 2024, SB 970 would require any person or entity that sells or provides access to any artificial intelligence technology designed to create content to provide a consumer warning that misuse of the technology may result in civil or criminal liability for the user. The bill would require the Department of Consumer Affairs to specify the form and content of the consumer warning. Failure to comply with the consumer warning requirement would be punishable by a civil penalty not to exceed twenty-five thousand dollars ($25,000) for each day that the technology is provided or offered to the public without a consumer warning.
SB1047 (Vetoed by Governor Newsom)
The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, SB 1047, introduced February 7, 2024, would in general authorize an AI developer of a nonderivative covered model to determine whether the model qualifies for a limited duty exemption before training of the model begins. The “limited duty exemption” would apply to a covered AI model, as defined by the bill, for which the developer can provide reasonable assurance that the model does not, and will not, possess a hazardous capability. A “hazardous capability” exists where the model creates or uses a “chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties”; causes at least $500,000,000 “of damages through cyberattacks on critical infrastructure via a single incident” or related incidents; causes at least $500,000,000 of damages by engaging in bodily harm to another human or theft of, or harm to, property with the requisite mental state; or poses other comparable “grave threats in severity to public safety and security.” Before starting training, and until the model qualifies for the limited duty exemption, the developer must meet specified requirements, such as the capability to promptly shut the model down. If an incident occurs, the developer must report each AI safety incident to the Frontier Model Division, a subdivision of the Department of Technology.
SB1229
Introduced February 15, 2024, SB 1229 would require property and casualty insurers to disclose, until January 1, 2030, whether they have used artificial intelligence to make decisions affecting applications and claims review, as specified.