
    The debate behind SB 53, the California bill trying to prevent AI from building nukes

September 12, 2025
On the subject of AI, as California goes, so goes the nation. The largest US state by population is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection rules, and more recently, AI as well. Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the nation’s AI laws.

This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful, “frontier” AI models. The models targeted represent the cutting edge of AI: extremely adept generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill, which has already passed the state Senate, must pass the California State Assembly before it goes to the governor to either be vetoed or signed into law.

AI can offer tremendous benefits, but as the bill is meant to address, it is not without risks. And while there is no shortage of present-day risks from issues like job displacement and bias, SB 53 focuses on possible “catastrophic risks” from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. Catastrophic risks represent widespread disasters that could plausibly threaten human civilization at local, national, and global levels. They represent the kind of AI-driven disasters that have not yet occurred, rather than already-realized, more personal harms like AI deepfakes.

Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event that causes more than 50 casualties or over $1 billion in damages, and that a frontier model plays a meaningful role in contributing to. How fault is determined in practice will be up to the courts to interpret. It is hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us defend against both near- and long-term consequences.

On its own, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.

SB 53 is the third state-level bill to specifically focus on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.

SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and it establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with a financial penalty of up to $1 million per violation.

In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener. Both cover large models trained at 10^26 FLOPS, a measure of immense computing power used in a variety of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention. While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
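Taken together, the bill’s scope comes down to a few bright-line numbers. As a rough illustration only, here is a minimal Python sketch of those thresholds as this article describes them; the variable names, the data structure, and the comparisons are my own simplification, not statutory language, and the statute’s actual text would control:

    from dataclasses import dataclass

    # Thresholds as reported in this article; the statute's actual text controls.
    COMPUTE_THRESHOLD_FLOPS = 1e26       # training compute that makes a model "frontier"
    REVENUE_THRESHOLD_USD = 500_000_000  # gross revenue that makes a developer a heavy hitter

    @dataclass
    class Developer:
        training_compute_flops: float
        gross_revenue_usd: float

    def covered_by_sb53(dev: Developer) -> bool:
        """True if a developer falls under SB 53's transparency rules, per the article's summary."""
        return (dev.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
                and dev.gross_revenue_usd >= REVENUE_THRESHOLD_USD)

    def meets_catastrophic_proxy(casualties: int, damages_usd: float) -> bool:
        """The bill's proxy for 'catastrophic': more than 50 casualties or over $1 billion in damages."""
        return casualties > 50 or damages_usd > 1_000_000_000

What no threshold check of this kind can capture is the other half of the definition: whether a frontier model played a “meaningful role” in the event, which the bill leaves to the courts.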
“The science of how to make AI safe is rapidly evolving, and it is currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light-touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”

Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.

Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he commissioned a working group focusing solely on frontier models. The resulting report by the group provided the foundation for SB 53. “I would bet, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, told Transformer.

But several industry organizations have rallied in opposition, arguing that additional compliance regulation would be costly, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against it, and the technology trade group Chamber of Progress argues that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.

“These compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they can use to design a compliance industrial complex.”

By contrast, Anthropic enthusiastically endorsed the bill in its current state on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)

The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But because the overwhelming majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.

“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with one another, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I like that the bill has a provision that would allow companies to defer to a future alternative federal standard.”

“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my opinion, the jury is out on that, but the likelihood is far greater than some suggest. It’s been less than 3 years since ChatGPT was released. That’s hardly a lifetime in public policy.”

But in a time of federal gridlock, frontier AI developments won’t wait for Washington.

The catastrophic risk divide

The bill’s focus on, and framing of, catastrophic risks is not without controversy.

The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity’s actual survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.

But if existential risks are clear (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They are often mainly concerned by risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with present-day risks, like climate change, mosquito vector-borne disease, or algorithmic bias. These camps can blend into one another (neartermists would also like to avoid getting hit by asteroids that could wipe out a city, and longtermists don’t dismiss risks like climate change), and the best way to think about them is as two ends of a spectrum rather than a strict binary.

You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of the ways the technology is deployed in the present, including concerns like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, there are interpersonal conflicts leading these two factions to work against each other, much of which has to do with emphasis. (AI ethics people argue that catastrophic risk concerns over-hype AI capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)

But behind the question of near versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?

SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties, similar to New York’s RAISE Act, before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from participating in discussions about suicidal ideation or sexually explicit material.)

SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.

“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. “We do look at, like, AI-enabled or AI potentially [caused] or correlated suicide. I think that’s, like, a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”

Transparency is helpful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a particular outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.

“These risks are coming and we should be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when those things aren’t happening at a large scale, it makes sense to be kind of focused on transparency.”

Still, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.

“Maybe four years ago, if we had passed some kind of transparency legislation like SB 53 but focused on those harms, we would have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to sort of correct that mistake on these issues and get some kind of forward-facing information about what’s happening before things get crazy, basically.”

SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don’t know what we don’t know.

It’s also entirely possible that models trained below 10^26 FLOPS, which aren’t covered by SB 53, have the potential to cause catastrophic harm under the bill’s definition. The EU AI Act sets the threshold for “systemic risk” at the smaller 10^25 FLOPS, and there’s disagreement about the utility of computational power as a regulatory standard at all, especially as models become more efficient.
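To make that coverage gap concrete, here is a toy comparison in Python. The training-compute figure and variable names are mine, purely for illustration; only the two thresholds come from the legislation discussed above:

    # Compute thresholds discussed above; the assumed model sits between them.
    SB53_FRONTIER_FLOPS = 1e26       # California SB 53's frontier-model line
    EU_SYSTEMIC_RISK_FLOPS = 1e25    # EU AI Act's "systemic risk" line

    model_training_flops = 3e25      # hypothetical model's training compute

    print(model_training_flops >= SB53_FRONTIER_FLOPS)     # False: outside SB 53's scope
    print(model_training_flops >= EU_SYSTEMIC_RISK_FLOPS)  # True: systemic risk under the EU AI Act

A model in that band would carry systemic-risk obligations in Europe while escaping SB 53 entirely, which is exactly the mismatch the paragraph above describes.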
As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from the real near-term benefits and problems, like AI’s potential to accelerate the pace of scientific research or to create nonconsensual deepfake imagery, respectively.

That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that isn’t necessarily a bad thing,” he told me.

It could be that the ideological debate around what qualifies as a catastrophic risk, and whether that’s worthy of our legislative attention, is just noise. The bill is meant to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the likelihood of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” closer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.

If SB 53 passes the legislature and gets signed into law by Gov. Newsom, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.

How we think about risk matters because it determines where we focus our efforts on prevention. I’m a firm believer in the value of defining your terms, in law and debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
