California SB 1047: Guard dog or lap dog?

Rob Eleveld and Jai Jaisimha are the founders of the Transparency Coalition, a nonprofit organization dedicated to creating AI safeguards for the public good.

When California State Senator Scott Wiener introduced SB 1047 a little over six months ago, we at the Transparency Coalition viewed it as a fair attempt to provide safeguards for the world’s largest AI models. Today, we view SB 1047 as a cautionary tale: This is what happens when the tech lobby pulls the teeth out of a bill and turns a watchdog into a lapdog.

In its original form in February, Wiener’s bill did not address the issues around training data and AI transparency—our group’s focus—but we still admired its core aims.

The proposal required developers to ensure that their AI models could not cause critical harm, introducing a duty of care within the AI industry for the first time. And the bill had teeth. A new state agency (the Frontier Model Division) would enforce the legal requirements of SB 1047. The state’s attorney general would have the ability to sue negligent companies.

Earlier this month, as the bill was approaching a final vote in Sacramento, pressure from the tech and business lobby forced Senator Wiener to effectively eliminate SB 1047’s enforcement mechanisms. The Frontier Model Division disappeared. Requirements became suggestions.

Most importantly, SB 1047 no longer allows the attorney general to sue AI developers for negligent safety practices before something goes wrong. The attorney general can only sue a developer after a model or service causes harm. That’s like holding an offshore oil drilling company accountable for negligent practices only after a catastrophic spill; it does nothing to ensure safe drilling or prevent the spill in the first place.

Despite our serious concerns, we hope that SB 1047, if passed, will lay an initial foundation upon which California policymakers will continue to build, creating an enabling environment for the most innovative, transparent, and ethical AI industry in the world. However, we fear that passing SB 1047 in its watered-down form will only allow lawmakers in Sacramento to ignore the urgent need for AI regulation in future sessions in the false belief that SB 1047 “takes care of everything.”

Other AI-focused bills (such as AB 2013, which we support and which is also pending a vote in the California Senate) build on existing statutory provisions in California law regulating AI by adding transparency requirements for training data used to develop AI models. Industry practices in collecting and using training data have been linked to harms such as hallucinations, mis/disinformation, and violations of user privacy.

There’s an old saying in sports: when it comes to defending against outstanding athletes, you can’t stop them; you can only hope to contain them. By stripping SB 1047 of its requirements and enforcement mechanisms, the tech lobby hasn’t killed the bill outright, but it has significantly weakened it. And that, unfortunately, may be enough to limit its ability to have a real impact on AI safety.
