
Anthropic’s ‘responsible scaling’ policy outlines a framework for safe AI development


Anthropic, the artificial intelligence research company behind the chatbot Claude, this week unveiled a comprehensive Responsible Scaling Policy (RSP) aimed at mitigating the anticipated risks associated with increasingly capable AI systems.

Borrowing from the US government’s biosafety level standards, the RSP introduces an AI Safety Levels (ASL) framework. This system sets safety, security, and operational standards corresponding to each model’s catastrophic risk potential. Higher ASL levels would require progressively more stringent safety demonstrations: ASL-1 covers systems with no meaningful catastrophic risk, while ASL-4 and above would address systems far beyond current capabilities.

The ASL system is intended to incentivize progress in safety measures by temporarily halting the training of more powerful models if AI capabilities scale faster than the corresponding safety procedures. This measured approach aligns with the broader international call for responsible AI development and use, a sentiment echoed by U.S. President Joe Biden in a recent address to the United Nations.
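To make the structure of the framework more concrete, here is a minimal, purely illustrative Python sketch. The level names and the pause rule follow the article’s description; the specific fields, thresholds, and function names are invented for illustration and do not represent Anthropic’s actual implementation.

```python
from dataclasses import dataclass
from enum import IntEnum


class ASL(IntEnum):
    """AI Safety Levels, loosely modeled on biosafety levels (BSL)."""
    ASL_1 = 1  # no meaningful catastrophic risk
    ASL_2 = 2  # early signs of potentially dangerous capabilities
    ASL_3 = 3  # substantially higher risk; stricter security and deployment controls
    ASL_4 = 4  # speculative systems far beyond current capabilities


@dataclass
class ModelAssessment:
    name: str
    risk_level: ASL         # level implied by the model's evaluated capabilities
    safeguards_level: ASL   # level of safety/security measures actually in place


def may_continue_scaling(assessment: ModelAssessment) -> bool:
    """Illustrative pause rule: scaling to more powerful models proceeds only
    while demonstrated safeguards keep pace with the assessed risk level."""
    return assessment.safeguards_level >= assessment.risk_level


# Example: a model whose assessed risk outpaces its safeguards triggers a pause.
candidate = ModelAssessment("hypothetical-model", ASL.ASL_3, ASL.ASL_2)
if not may_continue_scaling(candidate):
    print(f"Pause scaling of {candidate.name} until "
          f"ASL-{candidate.risk_level.value} safeguards are demonstrated.")
```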

Anthropic’s RSP seeks to assure existing users that these measures will not disrupt the availability of its products. Drawing parallels with pre-market testing and safety design practices in the automotive and aviation industries, the company aims to rigorously establish the safety of a product before its release.

While this policy has been approved by Anthropic’s board, any changes must be ratified by the board following consultations with the Long Term Benefit Trust, which was established to balance public interests against those of Anthropic’s stockholders. The Trust comprises five Trustees with experience in AI safety, national security, public policy, and social enterprise.

Ahead of the game

Throughout 2023, discussion of artificial intelligence (AI) regulation has intensified around the globe, as most nations are only beginning to grapple with the issue. AI regulation was brought to the forefront during a Senate hearing in May, when OpenAI CEO Sam Altman called for increased government oversight, drawing a parallel with the international regulation of nuclear weapons.

Outside of the U.S., the U.K. government proposed objectives for its AI Safety Summit in November, aiming to build international consensus on AI safety. Meanwhile, in the European Union, tech companies lobbied for open-source support in the EU’s upcoming AI regulations.

China also introduced first-of-their-kind generative AI regulations, stipulating that generative AI services respect the values of socialism and implement adequate safeguards. These regulatory attempts underscore a broader trend, suggesting that nations are just beginning to understand and address the complexities of regulating AI.
