Seven leading AI companies in the United States have agreed to voluntary safeguards for the development of the technology, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.
The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — will formally announce their commitment to new standards of safety, security and trust during a meeting with President Biden at the White House on Friday afternoon.
The announcement comes as the companies race to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human intervention. But the technological leaps have led to fears of disinformation spreading and dire warnings of a “risk of extinction” as self-aware computers evolve.
The voluntary safeguards are only an early, tentative step as Washington and governments around the world scramble to put in place legal and regulatory frameworks for the development of artificial intelligence. They reflect the urgency among the Biden administration and lawmakers to respond to the rapidly evolving technology, even as Congress has struggled to regulate social media and other technologies.
The White House did not provide details about an upcoming presidential executive order that will address a larger issue: how to control the ability of China and other competitors to obtain the new artificial intelligence programs, or the components used to develop them.
That means new restrictions on advanced semiconductors and on the export of large language models. Those are hard to control – much of the software can fit, compressed, on a USB stick.
An executive order could provoke more industry opposition than Friday’s voluntary commitments, which experts say were already reflected in the practices of the companies involved. The promises will not restrain the companies’ plans or slow the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.
“We are excited to make these voluntary commitments along with others in the industry,” Nick Clegg, the president of global affairs at Meta, Facebook’s parent company, said in a statement. “They are an important first step in ensuring that responsible guardrails are put in place for AI and they create a model for other governments to follow.”
As part of the safeguards, the companies agreed to:
Test the security of their AI products, in part with independent experts, and share information about their products with governments and others trying to manage the technology’s risks.
Ensure consumers can recognize AI-generated material by deploying watermarks or other means of identifying such content.
Report publicly and regularly on the capabilities and limitations of their systems, including security risks and evidence of bias.
Use advanced artificial intelligence tools to tackle society’s biggest challenges, such as curing cancer and fighting climate change.
Research the risks of bias, discrimination and invasion of privacy from the proliferation of AI tools.
In a statement announcing the agreements, the Biden administration said the companies must ensure that “innovation does not come at the expense of the rights and safety of Americans.”
“Companies developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement.
Brad Smith, the president of Microsoft and one of the executives attending the White House meeting, said his company endorsed the voluntary safeguards.
“By acting quickly, the White House commitments create a foundation to ensure the promise of AI stays ahead of the risks,” Mr. Smith said.
Anna Makanju, the vice president of global affairs at OpenAI, described the announcement as “part of our ongoing partnership with governments, civil society organizations and others around the world to advance AI governance.”
For the companies, the standards described Friday serve two purposes: as an attempt to forestall legislative and regulatory moves through self-policing, and as a signal that they are approaching the new technology thoughtfully and proactively.
But the rules they have agreed on are largely the lowest common denominator and can be interpreted differently by each company. For example, the companies committed to strict cybersecurity around the data and code used to create the language models on which generative AI programs are built. But there is no specificity about what that means – and the companies would have an interest in protecting their intellectual property anyway.
And even the most careful companies are vulnerable. Microsoft, one of the firms joining Mr. Biden at the White House event, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials who deal with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails – one of the company’s most closely guarded pieces of code.
As a result, the agreement is unlikely to slow down efforts to pass legislation and impose regulations on the emerging technology.
Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, said more needed to be done to guard against the dangers that artificial intelligence poses to society.
“The voluntary commitments announced today are not enforceable. Therefore, it is vital that Congress, along with the White House, quickly enact legislation requiring transparency and privacy protections, and increase research on the broad range of risks posed by generative AI,” Mr. Barrett said in a statement.
European regulators are poised to adopt AI laws later this year, which has prompted many of the companies to push for US regulation. Several lawmakers have introduced bills that would require licensing for AI companies to release their technologies, create a federal agency to oversee the industry and impose data privacy requirements. But members of Congress are still far from agreement on rules and are racing to educate themselves on the technology.
Lawmakers have struggled with how to address the rise of AI technology, with some focused on the risks to consumers and others acutely concerned about falling behind adversaries, particularly China, in the race for dominance in the field.
This week, the Select Committee on Strategic Competition with China sent bipartisan letters to US-based venture capital firms demanding a reckoning over investments they had made in Chinese AI and semiconductor companies. The letters come after months in which a variety of House and Senate panels have polled the AI industry’s most influential entrepreneurs and critics to determine what kinds of guardrails and incentives Congress should explore.
Many of those witnesses, including Sam Altman of the San Francisco start-up OpenAI, have implored lawmakers to regulate the AI industry, pointing to the new technology’s potential to cause undue harm. But such regulation has been slow to advance in Congress, where many lawmakers are still struggling to grasp what exactly AI technology is.
In an effort to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York and the majority leader, began a series of listening sessions this summer for lawmakers to hear from government officials and experts about the merits and dangers of artificial intelligence across a number of fields.
Mr. Schumer also prepared an amendment to this year’s Senate version of the defense authorization bill to encourage Pentagon employees to report potential problems with AI tools through a “bug bounty” program, to commission a Pentagon report on how to improve AI data sharing, and to improve reporting on AI in the financial services industry.
Karoun Demirjian contributed reporting from Washington.