CPPA publishes new draft regulations addressing AI, risk assessments, cyber audits | Consumer Finance Monitor

By Philip N. Yannella, Gregory P. Szewczyk & Timothy Dickens on September 7, 2023
POSTED IN ARTIFICIAL INTELLIGENCE, CYBERSECURITY, PRIVACY, REGULATORY AND ENFORCEMENT
The California Privacy Protection Agency (CPPA) recently published two new sets of draft regulations addressing a range of cutting-edge data protection issues. Although the CPPA has not officially started the formal rulemaking process, the Draft Cybersecurity Audit Regulations and the Draft Risk Assessment Regulations will serve as the foundation for the process moving forward. Discussion of the draft regulations will be a central topic of the CPPA’s upcoming September 8th meeting.

Among the noteworthy aspects of the draft regulations are (1) a proposed definition of “artificial intelligence” that differentiates the technology from automated decision-making; (2) transparency obligations for companies that train AI to be used by consumers or other businesses; and (3) a significant list of potential harms to be considered by businesses when conducting risk assessments.

The Draft Cybersecurity Audit Regulations make both modifications and additions to the existing California Consumer Privacy Act (“CCPA”) regulations. At a high level, the draft regulations:

Outline the requirement for annual cybersecurity audits for businesses “whose processing of consumers’ personal information presents significant risk to consumers’ security”;
Outline potential standards used to determine when processing poses a “significant risk”;
Propose options specifying the scope and requirements of cybersecurity audits; and
Propose new mandatory contractual terms for inclusion in Service Provider data protection agreements.
Similarly, the Draft Risk Assessment Regulations propose both modifications and additions to the existing CCPA regulations. The draft regulations:

Propose new and distinct definitions for Artificial Intelligence and Automated Decision-making technologies;
Identify specific processing activities that present a “significant” risk of harm to consumers and therefore require a risk assessment. These activities include:
Selling or sharing personal information;
Processing sensitive personal information (outside of the traditional employment context);
Using automated decision-making technologies;
Processing the personal information of children under the age of 16;
Using technology to monitor the activity of employees, contractors, job applicants, or students; or
Processing personal information of consumers in publicly accessible places using technology to monitor behavior, location, movements, or actions.
Propose standards for stakeholder involvement in risk assessments;
Propose risk assessment content and review requirements;
Require that businesses that train AI for use by consumers or other businesses conduct a risk assessment and include with the software a plain statement of the appropriate uses of the AI; and
Outline new disclosure requirements for businesses that implement automated decision-making technologies.