Silicon Valley comes out in force against an AI-Safety Bill

Since the start of the AI boom, attention to this technology has focused not only on its world-changing potential, but also on fears of how it could go wrong. A set of so-called AI doomers have suggested that artificial intelligence could grow powerful enough to incite nuclear war or enable large-scale cyberattacks. Even top leaders in the AI industry have said the technology is so dangerous that it needs to be heavily regulated.

A high-profile bill in California is now trying to do just that. The proposed law, Senate Bill 1047, introduced by State Senator Scott Wiener in February, aims to prevent the worst possible effects of AI by requiring companies to take certain safety measures. Wiener objects to any characterization of it as a doomer bill. “AI has the potential to make the world a better place,” he told me yesterday. “But as with any powerful technology, it brings benefits as well as risks.”

SB 1047 subjects any AI model that costs more than $100 million to train to a number of safety regulations. Under the proposed law, the companies that make such models would have to submit a plan outlining their protocols for managing risk, agree to annual third-party audits, and be able to turn off the technology at any time — essentially installing a kill switch. AI companies could face fines if their technology causes “critical harm.”

The bill, which must be voted on in the coming days, has met with strong opposition. Tech companies including Meta, Google, and OpenAI have raised concerns. Opponents argue the bill will stifle innovation, hold developers liable for user abuse, and drive the AI business out of California. Last week, eight Democratic members of Congress wrote a letter to Governor Gavin Newsom, noting that while it is “somewhat unusual” for them to weigh in on state legislation, they felt compelled to do so. In the letter, the members worry that the bill focuses too much on the most dire effects of AI and “creates unnecessary risks to California’s economy with very little public safety benefit.” They urged Newsom to veto it, should it pass. To top it all off, Nancy Pelosi weighed in separately on Friday, calling the bill “well-intentioned but ill-informed.”

In part, the debate over the bill comes down to a core question about AI: Will this technology end the world, or have people just seen too much sci-fi? At the center of it all is Wiener. Because so many AI companies are based in California, the bill, if passed, could have major implications nationwide. I caught up with the state senator yesterday to discuss what he describes as the “hardball politics” of this bill — and whether he actually believes AI is capable of going rogue and firing nuclear weapons.

Our conversation has been condensed and edited for clarity.


Caroline Mimbs Nyce: How did this bill become so controversial?

Scott Wiener: Anytime you try to regulate any industry in any way, even in a light-touch way — which this legislation is — you’re going to get pushback. And especially with the tech industry. This is an industry that has become very, very used to not being regulated in the public interest. And I say this as someone who has been a supporter of the technology industry in San Francisco for many years; I am not anti-tech by any means. But we also have to take the public interest into account.

It is not at all surprising that there was pushback. And I respect the pushback. That is democracy. I do not respect some of the fear mongering and misinformation that Andreessen Horowitz and others have spread. [Editor’s note: Andreessen Horowitz, also known as a16z, did not respond to a request for comment.]

Nyce: What in particular is grinding your gears?

Wiener: People told start-up founders that SB 1047 would send them to prison if their model caused unforeseen harm, which was completely false and made up. The fact is, the bill doesn’t apply to start-ups — you’d have to spend over $100 million training a model for the bill to even apply to you — and the bill won’t send anyone to jail. There have also been some inaccurate statements about open source.

These are just a few examples. It’s just a lot of inaccuracies, exaggerations, and, sometimes, misrepresentations about the bill. Listen: I’m not naive. I come from San Francisco politics. I’m used to hardball politics. And this is hardball politics.

Nyce: You’ve also taken flak from politicians at the national level. What did you make of the letter from the eight members of Congress?

Wiener: As much as I respect the signers of the letter, I respectfully and strongly disagree with them.

In an ideal world, all of this should be handled at the federal level. All of it. When I wrote California’s net neutrality law in 2018, I was very clear that I would be happy to close up shop if Congress passed a strong net neutrality law. We passed that law in California, and here we are six years later; Congress has yet to enact a net neutrality law.

If Congress is able to pass a strong federal AI safety law, that’s fantastic. But I’m not holding my breath, given its track record.

Nyce: Let’s run through some of the popular criticisms of this bill. The first is that it takes a doomer perspective. Do you really believe that AI could be involved in the “creation and use” of nuclear weapons?

Wiener: Just to be clear, this is not a doomer bill. The opposition claims that the bill is focused on “science-fiction risks.” They’re trying to say that anyone who supports this bill is a doomer and is crazy. This bill is not about the Terminator risk. This bill is about enormous harms that are quite tangible.

If we’re talking about an AI model that shuts down the electric grid or disrupts the banking system in a significant way — or makes it much easier for bad actors to do those things — these are massive harms. We know there are people today who try to do that, and sometimes succeed, in limited ways. Imagine if it becomes profoundly easier and more efficient.

As for chemical, biological, radiological, and nuclear weapons, we’re not talking about what you can learn on Google. We’re talking about when it becomes much, much easier and more efficient to do that with an AI.

Nyce: The next criticism of your bill concerns harm — that it doesn’t address the real harms of AI, such as job losses and biased systems.

Wiener: It’s classic whataboutism. There are many risks from AI: deepfakes, algorithmic discrimination, job loss, misinformation. These are all harms that we must address and try to prevent from happening. We have bills moving forward to do that. But in addition, we must try to get ahead of these catastrophic risks to reduce the likelihood that they will happen.

Nyce: This is one of the first major AI regulatory bills to receive national attention. I’d be curious to know what your experience has been — and what you’ve learned.

Wiener: I’ve definitely learned a lot about the AI factions, for lack of a better term — the effective altruists and the effective accelerationists. It’s like the Jets and the Sharks.

As is human nature, both sides caricature and try to demonize each other. The effective accelerationists will paint the effective altruists as insane doomsayers. Some of the effective altruists will paint all effective accelerationists as extreme libertarians. Of course, as with all human opinion, it’s a spectrum.

Nyce: You don’t sound too frustrated, all things considered.

Wiener: This legislative process — even though I get frustrated with some of the inaccurate statements being made about the bill — has actually been very thoughtful in many ways, with a lot of people holding really considered views, whether I agree or disagree with them. I’m honored to be part of a legislative process where so many people care, because the issue actually matters.

When the opposition dismisses the risks of AI as “science fiction,” well, we know that’s not true, because if they really thought the risk was science fiction, they wouldn’t oppose the bill. They wouldn’t care, right? Because it would all be made up. But it’s not made-up science fiction. It’s real.
