A new bill in the US state of California that proposes to regulate large frontier AI models has met with stiff resistance from various stakeholders in the tech industry, including startup founders, investors, AI researchers, and organizations that advocate for open source software. The bill, SB 1047, was introduced by California State Senator Scott Wiener.
According to Wiener, the bill requires developers of large and powerful AI systems to comply with common-sense safety standards. Opponents of the legislation, however, argue that it would stifle innovation and doom the entire AI industry.
In May this year, the California State Senate passed the controversial bill, which is currently being advanced through Assembly committees. Following a final vote in August, the bill could be sent to Gavin Newsom, the governor of the state, to sign into law. If that happens, SB 1047 would become the country’s first major law regulating AI, passed in a state where many of the biggest tech companies are located.
What does the bill propose?
Also known as the ‘Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act’, SB 1047 seeks to hold top AI companies such as Meta, OpenAI, Anthropic, and Mistral responsible for the potentially catastrophic harms associated with the rapidly advancing technology.
The bill mainly applies to entities rolling out large frontier AI models, with “large” being defined in the bill as AI systems trained using computing power greater than 10^26 floating-point operations (FLOP), with the training process costing more than USD 100 million (Rs 834 crore, approximately). AI models that have been fine-tuned using computing power greater than 3 x 10^25 FLOP also come under the bill.
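Read together, these thresholds amount to two independent triggers: very large training runs (measured by both compute and cost) and very large fine-tuning runs. The short Python sketch below is a rough illustration of that logic as described in this article; the function and threshold names are hypothetical and are not drawn from the bill text.

```python
# Illustrative sketch of the coverage thresholds described above.
# Names and structure are hypothetical; they are not taken from the bill text.

TRAINING_FLOP_THRESHOLD = 1e26        # total training compute, in floating-point operations
TRAINING_COST_THRESHOLD_USD = 100e6   # training cost threshold (USD 100 million)
FINE_TUNE_FLOP_THRESHOLD = 3e25       # fine-tuning compute threshold

def is_covered_model(training_flop: float,
                     training_cost_usd: float,
                     fine_tune_flop: float = 0.0) -> bool:
    """Return True if a model would fall under the bill, per the article's description."""
    trained_large = (training_flop > TRAINING_FLOP_THRESHOLD
                     and training_cost_usd > TRAINING_COST_THRESHOLD_USD)
    fine_tuned_large = fine_tune_flop > FINE_TUNE_FLOP_THRESHOLD
    return trained_large or fine_tuned_large

# Example: a model trained with 2e26 FLOP at a cost of USD 150 million would be covered.
print(is_covered_model(training_flop=2e26, training_cost_usd=150e6))  # True
```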
“If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities,” the bill reads.
As per the latest draft of the bill, developers behind large frontier AI models can be held liable for “critical harms”. These include using AI to create chemical or nuclear weapons and launching cyberattacks on critical infrastructure. The bill also covers crimes committed by AI models acting with limited human oversight that result in death, bodily injury, or property damage.
However, developers cannot be held responsible if the AI-generated output that leads to death or injury is based on information publicly available elsewhere. Interestingly, the bill also requires AI models to have a built-in kill switch for emergencies. Developers are also barred from launching large frontier AI models that pose an unreasonable risk of causing or enabling critical harm.
To ensure compliance, AI models are required to undergo independent audits by third-party auditors. Developers who violate the bill’s provisions could face legal action from California’s attorney general. They would also have to comply with safety standards recommended by a new AI certifying body, the ‘Frontier Model Division’, which the bill envisages the California government setting up.
Why has the bill sparked uproar?
Essentially, the draft legislation encapsulates the perspectives voiced by AI doomers. It has been backed by pioneering AI researchers such as Geoffrey Hinton and Yoshua Bengio, who broadly believe that AI could end humanity and must, therefore, be regulated. One of the sponsors of the bill is the Center for AI Safety, which published an open letter stating that mitigating the risk of extinction from AI should be treated as a global priority alongside societal-scale risks such as pandemics and nuclear war.
While the bill is receiving support from these quarters, it has been heavily criticized by almost everyone else. One of the main arguments against the bill is that it would effectively eliminate open source AI models.
When AI models are open source, their inner workings can be freely accessed or modified by anyone, which improves transparency and security. But the proposed California bill could discourage companies like Meta from making their AI models open source, since they could be held responsible for other developers misusing the technology.
Experts have also pointed out that preventing AI systems from misbehaving is more complicated than it seems. Placing the regulatory burden solely on AI companies, they argue, is not entirely fair, especially since the safety standards prescribed in the bill are not flexible enough for a fast-evolving technology like AI.