December 28, 2024

AI companies voluntarily commit to better safety and transparency but can ignore pledges if they want


Today, the Biden administration announced that it had secured voluntary commitments from leading AI firms to manage the risks posed by artificial intelligence.

The seven companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) all agreed to improve the safety, security, and transparency of their systems, including allowing reviews of their models by third-party experts.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” a White House statement sent to TechRadar said. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

The seven companies agreed, effective immediately, to several specific commitments addressing concerns surrounding the rollout of AI.

First, the companies committed to internal and external security testing of their AI systems before they are released to the public, as well as sharing information with relevant industry players, governments, academia, and the public to help manage AI risks.

The companies also committed to cybersecurity investment and insider threat controls to “protect proprietary and unreleased model weights”, which are critical to the operation of the models that power generative AI. They also agreed to facilitate third-party discovery and reporting of any security gaps in their systems.

The companies also agreed to measures to improve public trust in their systems, including developing a way to ensure that people know when they are seeing AI-generated content, such as watermarking or other measures. The companies will also prioritize research into the societal risks AI models pose, including racial and other forms of bias that can lead to discrimination, as well as “protecting privacy”.

But it is clear from the announcement that these measures remain strictly voluntary, and that while some companies may follow through on their commitments, enforcement rests entirely with the companies themselves for the time being.

New AI rules could be on the way

Voluntary commitments are no substitute for enforceable regulations with real consequences for violations, and today’s agreement includes none. However, a source close to the matter told TechRadar that new AI rules aren’t just on the table but are actively being pursued.

“We’re certainly coordinating with Congress on AI quite a bit, in the big picture,” they said. “I think we know that legislation is going to be critical to establish the legal and regulatory regime to make sure these technologies are safe.”

The source also signaled that executive action on AI is forthcoming, though they could not detail exactly what it will entail.

“I think the focus here is on what the companies are doing,” they said. “But I think it’s fair to say that this is the next step in our process. It is voluntary commitment, [but] we’re going to take executive action, and developing executive action now where you’ll see more of a government role.”