California Courier
State

CA’s Chatbot Law Could Fragment the Web Nationwide, Kill Innovation, Critics Warn

“AI is too promising of a technology to be badly regulated,” reads a tech coalition letter opposing recent overbroad chatbot regulations.

Last fall, California’s legislature passed what is arguably among the most consequential artificial intelligence laws in the nation yet. Senate Bill 243, signed by Gov. Gavin Newsom in October, puts limits on how “companion” AI systems, or chatbots, operate. Critics warn that the seemingly well-intentioned law results not in greater safety but in stifled innovation and aggressive compliance costs that will drive tech giants out of the Golden State. 

They also warn it sets the stage for a fragmented legal landscape in which developers must navigate 50 different state regimes.

Lawmakers say that SB 243’s goal was to rein in emotionally manipulative bots aimed at children, but the definition is elastic enough to pull in far more than that. Many everyday tools—website chat widgets that remember users, virtual shopping assistants that personalize recommendations, financial wellness bots that encourage users, or education platforms with “study buddy” features—could plausibly fall within the law’s scope, even if their primary purpose is benign. There are carve-outs for customer service and technical support, but even they come with ambiguities: if a chatbot does more than answer basic questions—if it feels friendly, remembers prior interactions, or offers encouragement—it may no longer qualify for those exclusions.

And naturally, the bill imposes new disclosure and reporting obligations. It requires companies to police certain conversations, and—perhaps most significantly—it arguably shifts enormous legal risk onto companies that never set out to build “companion” bots at all. 

However, any semblance of nuance in the critique of the measure was lost when the bill’s proponents made the classic “think of the children” appeal. That framing helped SB 243 pass with uncharacteristically bipartisan support in both chambers. In the Senate, where it was introduced, the opposition was led by Republicans: namely, Senators Alvarado-Gil (R-Jackson), Choi (R-Irvine), and Strickland (R-Huntington Beach). In the Assembly, it had proponents and opponents on both sides of the aisle.

Fluffy articles tout the new legislation as being common sense, long overdue, and “deceptively simple.” After all, what’s so controversial about forcing companies to disclose that their AI programs are AI? The bill’s author, Senator Steve Padilla (D-San Diego) said in his own press release on the bill’s passage that it is a “first-of-its-kind in the nation” bill which brings “critical, reasonable, and attainable safeguards.”

But being first or fast is far less important than being thorough, accurate, and just. 

It is “our responsibility to ensure [tech innovation] doesn’t come at the expense of our children’s health,” Padilla argues. But, similarly, it’s also Sacramento’s responsibility to ensure the legislation is clearly defined, narrowly tailored, enforceable, and fair. The bill’s critics argue that’s simply not the case.

A coalition of trade organizations including TechNet and the Computer & Communications Industry Association issued a letter of opposition, saying the law and its definitions are, among other things, “vague, undefined” and “overbroad.” 

“For example, what does it mean to ‘meet a user’s social needs?’” the letter reads. “Would a model that provides responses as part of a mock interview be meeting a user’s social needs? Similarly, is a model that can draw upon previous queries or interactions ‘able to sustain a relationship across multiple interactions?’ We appreciate the attempt to narrow the scope of the bill but believe more work needs to be done to match the legislative intent.”

The coalition also took issue with the fact that SB 243 authorizes a private right of action for violations of its provisions, which they believe is an “overly punitive method of enforcement” that “exposes operators to liability for trivial violations” such as glitches. 

Furthermore, its implementation is not going to be cheap. A fiscal summary of the bill upon its third reading says that initial costs to the California Department of Public Health (CDPH) for the Office of Suicide Prevention to collect and publish the required data are “absorbable.” But litigation is a different story. While the exact cost pressures are unknown, the summary admits the expense to taxpayers could be a “significant amount” for “courts to adjudicate cases filed under the new cause of action created by this bill.”

Legal challengers will undoubtedly argue that the law is shortsighted because it treats chatbots as if their use can easily be restricted to one state. 

Companies build websites and AI tools—at great expense—to serve all users, not just Californians. Under this law, a single chatbot might need different rules or functionality depending on the user’s state. 

It’s often said that what happens in California doesn’t stay in California. State policy has a long history of becoming a de facto national blueprint, especially on tech. Indeed, SB 243 has laid the groundwork for copycats to follow suit. New York has done just that with the passage of the AI Companion Models Law.

Expecting companies to navigate 50 different sets of regulations—each with its own definitions, liability rules, and reporting requirements—is a tall order. And it’s not inconceivable that it could force firms to limit services, fragment platforms, or avoid offering certain features altogether in states where said rules are cumbersome. That would ultimately harm Californians and the economic engine that supports the nation’s most populous state.

President Trump weighed in on the matter in a Truth Social post last month, stating that if there were 50 conflicting state policies, AI would be “destroyed in its infancy.”

“You can’t expect a company to get 50 Approvals every time they want to do something,” said Trump. “THAT WILL NEVER WORK!”

Mere days after SB 243 cleared the Assembly Appropriations Committee and headed to the Assembly floor—where passage looked all but assured—members of Congress introduced the federal CHAT Act, a bill built on the same emotional appeals to impose broad new restrictions on chatbots nationwide. While it comes closer to addressing the concern about letting states set their own chatbot policies, it carries its own myriad issues, the kind inherent to overregulating a budding, multibillion-dollar industry.

Another coalition of tech policy organizations and think tanks—including a number of those who signed on to the SB 243 opposition letter—warned that, despite its “noble intentions,” the CHAT Act would do the opposite of its stated goal and instead “endanger the privacy and data security of children and families nationwide.” 

“AI is too promising of a technology to be badly regulated,” the opposition letter to the CHAT Act states. “America is a free country, and its freedom has made it the world’s leading economy and given it the world’s leading technology sector… Lawmakers should avoid imposing policies that would force users to submit to cybersecurity and privacy dangers as a precondition of using everyday digital services.”
