The Computer & Communications Industry Association (CCIA) is presenting its concerns today before the California Assembly Privacy and Consumer Protection Committee regarding SB 243. The bill aims to protect children from deceptive chatbot interactions but could impose significant requirements on AI tools not intended to simulate human companionship.
SB 243 could classify AI models used for tasks like tutoring, mock interviews, or customer service as "companion chatbots," subjecting them to new rules such as repeated pop-up disclosures, mandatory audits, and detailed reporting requirements. Existing California law under SB 1001 already requires bots to identify themselves so that users are not misled about a bot's artificial identity in online interactions.
CCIA argues that layering additional obligations on low-risk AI tools would create compliance confusion without delivering meaningful safety benefits. The bill would also allow private lawsuits over minor violations, such as brief notification delays. CCIA instead recommends a balanced enforcement approach with centralized oversight by the Attorney General, which would promote consistency and allow businesses to demonstrate good-faith compliance.
Aodhan Downey, State Policy Manager for CCIA, stated: “We agree with California’s leadership that children’s online safety is of the utmost importance, and our members prioritize advanced tools that reflect that priority. But SB 243 casts too wide a net, applying strict rules to everyday AI tools that were never intended to act like human companions. Requiring repeated notices, age verification, and audits would impose significant costs without providing meaningful new protections. We urge lawmakers to narrow the scope of this bill and move toward a more targeted, consistent approach that supports both user safety and responsible innovation.”