{"id":320,"date":"2024-02-16T13:00:00","date_gmt":"2024-02-16T14:00:00","guid":{"rendered":"https:\/\/reshebniki-online.com\/?p=320"},"modified":"2024-02-22T15:37:32","modified_gmt":"2024-02-22T15:37:32","slug":"can-california-show-the-way-forward-on-ai-safety","status":"publish","type":"post","link":"https:\/\/reshebniki-online.com\/index.php\/2024\/02\/16\/can-california-show-the-way-forward-on-ai-safety\/","title":{"rendered":"Can California show the way forward on AI safety?"},"content":{"rendered":"
\n
\n<figure><figcaption>Moor Studio\/Getty Images<\/figcaption><\/figure>\n

A new state bill aims to protect us from the most powerful and dangerous AI models.<\/p>\n

Last week, California state Senator Scott Wiener (D-San Francisco) introduced a landmark new piece of AI legislation aimed at \u201cestablishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.\u201d <\/p>\n

It\u2019s a well-written, politically astute approach to regulating AI, narrowly focused on the companies building the biggest-scale models and the possibility that those massive efforts could cause mass harm.<\/p>\n

As it has in fields from car emissions to climate change, California\u2019s legislation could provide a model for national regulation, which looks likely to take much longer. But whether or not Wiener\u2019s bill makes it through the statehouse in its current form, its existence reflects the fact that politicians are starting to take tech leaders seriously when they claim they intend to build radical, world-transforming technologies that pose significant safety risks \u2014 and ceasing to take them seriously when they claim, as some do, that they should be able to do so with absolutely no oversight.<\/p>\n

<h3>What the California AI bill gets right<\/h3>\n

One challenge of regulating powerful AI systems is defining just what you mean by \u201cpowerful AI systems.\u201d We\u2019re smack in the middle of an AI hype cycle, and every company in Silicon Valley claims to be using AI, whether that means building customer service chatbots, day-trading algorithms, general intelligences capable of convincingly mimicking humans, or even literal killer robots. <\/p>\n

Getting this definition right is vital, because AI has enormous economic potential, and clumsy, excessively stringent regulations that crack down on beneficial systems could do enormous economic damage while doing surprisingly little about the very real safety concerns.<\/p>\n

The California bill attempts to avoid this problem in a straightforward way: it concerns itself only with so-called \u201cfrontier\u201d models, those \u201csubstantially more powerful than any system that exists today.\u201d Wiener\u2019s team argues that a model that meets the threshold the bill sets would cost at least $100 million to build, which means that any company that can afford to build one can definitely afford to comply with some safety regulations. <\/p>\n

Even for such powerful models, the requirements aren\u2019t overly onerous: The bill requires that companies developing such models prevent unauthorized access, be capable of shutting down copies of their AI in the case of a safety incident (though not other copies \u2014 more on that later), and notify the state of California of how they plan to do all this. Companies must demonstrate that their model complies with applicable regulation (for example, from the federal government \u2014 though such regulations don\u2019t exist yet, they may at some point). And they have to describe the safeguards they\u2019re employing for their AI and why they are sufficient to prevent \u201ccritical harms,\u201d defined as mass casualties and\/or more than $500 million in damages. <\/p>\n

The California bill was developed in significant consultation with leading, highly respected AI scientists, and released with endorsements from prominent AI researchers, tech industry leaders, and advocates for responsible AI alike. It\u2019s a reminder that despite vociferous, heated online disagreement, there\u2019s actually a great deal these various groups agree on. <\/p>\n

\u201cAI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,\u201d Yoshua Bengio, considered one of the godfathers of modern AI and a leading AI researcher, said of the proposed law. \u201cTherefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I\u2019ve recommended to legislators.\u201d<\/p>\n

Of course, that\u2019s not to say that everyone loves the bill.<\/p>\n

<h3>What the California AI bill doesn\u2019t do<\/h3>\n

Some critics worry that the bill, while a step forward, will be toothless in the case of a truly dangerous AI system. For one thing, if there\u2019s a safety incident requiring a \u201cfull shutdown\u201d of an AI system, the law doesn\u2019t require you to retain the capability to shut down copies of your AI that have been released publicly or are owned by other companies or other actors. That makes the proposed regulations easier to comply with, but because AI, like any computer program, is easy to copy, it also means that in the event of a serious safety incident it wouldn\u2019t actually be possible to just pull the plug.<\/p>\n

\u201cWhen we really need a full shutdown, this definition won\u2019t work,\u201d analyst Zvi Mowshowitz writes. \u201cThe whole point of a shutdown is that it happens everywhere whether you control it or not.\u201d<\/p>\n

There are also many concerns about AI that can\u2019t be addressed by this particular bill. Researchers working on AI anticipate that it will change our society in many ways (for better and for worse) and cause a wide range of harms: mass unemployment, cyberwarfare, AI-enabled fraud and scams, algorithmic codification of biased and unfair procedures, and many more. <\/p>\n

To date, most public policy on AI has tried to target all of those at once: Biden\u2019s executive order on AI last fall mentions all of these concerns. These problems, though, will require very different solutions, including some we have yet to imagine. <\/p>\n

But existential risks, by definition, have to be solved to preserve a world in which we can make progress on all the others \u2014 and AI researchers take seriously the possibility that the most powerful AI systems will eventually pose a catastrophic risk to humanity. Regulation addressing that possibility should therefore be focused on the most powerful models, and on our ability to prevent mass casualty events they could precipitate.<\/p>\n

At the same time, a model does not have to be extremely powerful to pose serious questions of algorithmic bias or discrimination \u2014 that harm can come from an extremely simple model that predicts recidivism or eligibility for a mortgage on the basis of data that reflects decades of past discriminatory practices. Tackling those issues will require a different approach, one less focused on powerful frontier models and mass casualty incidents and more on our ability to understand and predict even simple AI systems.<\/p>\n

No one law could possibly solve every challenge we\u2019ll face as AI becomes a bigger and bigger part of modern life. But it\u2019s worth keeping in mind that \u201cdon\u2019t release an AI that will predictably cause a mass casualty event,\u201d while a crucial element of ensuring that powerful AI development proceeds safely, is also a ridiculously low bar. Helping this technology reach its full potential for humanity \u2014 and ensuring that its development goes well \u2014 will require a lot of smart and informed policymaking. What California is attempting is just the beginning.<\/p>\n

<em>A version of this story originally appeared in the <strong>Future Perfect<\/strong> newsletter. <strong>Sign up here!<\/strong><\/em><\/p>\n

\n","protected":false},"excerpt":{"rendered":"

Moor Studio\/Getty Images A new state bill aims to protect us from the most powerful and dangerous AI models. Last week, California state Senator Scott Wiener (D-San Francisco) introduced a landmark new piece of AI legislation aimed at \u201cestablishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.\u201d It\u2019s […]<\/p>\n","protected":false},"author":1,"featured_media":322,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[11],"tags":[],"_links":{"self":[{"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/posts\/320"}],"collection":[{"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/comments?post=320"}],"version-history":[{"count":2,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/posts\/320\/revisions"}],"predecessor-version":[{"id":323,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/posts\/320\/revisions\/323"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/media\/322"}],"wp:attachment":[{"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/media?parent=320"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/categories?post=320"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/reshebniki-online.com\/index.php\/wp-json\/wp\/v2\/tags?post=320"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}