Leading AI firms pledge external testing of AI systems and other safety commitments
The White House announced Friday that Microsoft, Google, and other leading artificial intelligence firms have agreed to thoroughly test new AI systems before releasing them to the general public and to clearly label AI-generated content. While Congress and the White House develop more comprehensive rules for the fast-growing industry, the administration and seven major AI companies,
including Amazon, Meta, OpenAI, Anthropic, and Inflection, reached agreement on a set of voluntary commitments aimed at making AI systems and products safer and more trustworthy. Top executives from the seven companies met with the president at the White House on Friday. The companies' pledges, Biden said, are "real and concrete," and they will help the companies fulfill their "fundamental obligations to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values and our shared values."
Biden made the remarks in a speech on Friday. "In the next ten years, or even the next few years, there will be more technological development than there has been in the previous fifty years. It is an astounding fact," Biden said. White House officials said that while some of the companies have already partially implemented some of the promises, the commitments as a whole would raise "the standards for safety, security, and trust of AI" and serve as a "bridge to regulation."
"It is a first step and a bridge to where we need to go," White House deputy chief of staff Bruce Reed, who has led the AI policy process, said in an interview. "It will help build industry and government capabilities to guarantee the safety and security of AI. We felt the need to move rapidly because of how quickly and how far this technology is developing."
While most of the companies already conduct internal "red-teaming" exercises, this marks the first time they have all agreed to let outside experts examine their systems before they are made publicly accessible. Red-team exercises simulate what could go wrong with a given technology, such as a cyberattack or misuse by hostile actors, helping firms proactively uncover flaws and avoid unintended consequences.
According to Reed, the external red-teaming "will help pave the way for government oversight and regulation," potentially laying the groundwork for similar external testing to be carried out by a government regulator or licensing body. The agreements could also lead to widespread watermarking of AI-generated audio and visual content in order to combat fraud and misinformation.
The companies also agreed to invest in cybersecurity and "insider threat safeguards," particularly to protect AI model weights, which are essentially the knowledge base on which AI systems rely, and to develop and deploy AI systems "to help address society's greatest challenges," according to the White House. They also agreed to prioritize research on the societal risks of AI. In response to a question from CNN's Jake Tapper on Friday, Brad Smith, vice chair and president of Microsoft, pointed to "what people, bad actors, individuals or countries will do" with the technology.
"That they'll try to hack into our computer networks using it and use it to sabotage our elections. They'll use it to undermine the stability of our jobs, as you already know," he said. But, according to Smith, the best way to deal with these problems is to focus on them, understand them, bring people together, and come up with answers.
"And what's intriguing about AI, in my opinion, is that when we take those steps and are dedicated to doing so, we can use AI to fend off these problems much more effectively than we can now." Asked by Tapper about the concerns over artificial intelligence and pay raised in a recent letter signed by many authors, Smith said, "I don't want technology to undercut anybody's capacity to make a living by creating, by writing. That is the balance that every one of us should aim for."
White House officials acknowledged that all of the commitments are voluntary, that some of them are ambiguous, and that there is no enforcement mechanism to ensure the companies follow them. Common Sense Media, an organization that advocates for children's online safety, applauded the White House for taking steps to establish AI safeguards, but it also issued a warning: "History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations."
"If we've learned anything from the last decade and the complete mismanagement of social media governance, it's that many companies offer a lot of lip service. Then, after putting their financial interests first, they refuse to take responsibility for the impact their products have on the American public, especially on families and children," said James Steyer, CEO of Common Sense Media.
The federal government's failure to regulate social media companies when they first emerged, and the pushback from those companies, have shaped White House officials' work as they contemplate future AI laws and executive measures. "In our discussions with the companies, we underlined the need to make this as comprehensive as we could," Reed said.
"Ten years ago, the tech sector made the mistake of opposing oversight, legislation, and regulation. I believe that AI is moving even more swiftly than that, making it vital for this bridge to regulation to be solid." The commitments grew out of a months-long conversation between the White House and the AI companies, which began after a group of AI CEOs visited the White House in May to meet with Vice President Kamala Harris, President Joe Biden, and White House officials.
The White House also sought input from AI safety and ethics experts outside the industry. In an effort to go beyond voluntary promises, White House officials are assembling a series of executive measures, the first of which is expected to be unveiled later this summer. Officials are also working with lawmakers on Capitol Hill to draft more comprehensive legislation to regulate AI.
"This is a serious responsibility. It must be done properly. There is also a great, huge potential upside," Biden said. White House officials stated that the companies will "immediately" begin putting the voluntary commitments into practice and that they anticipate other companies will follow suit. "We expect that other companies will understand that they have a duty to uphold the standards of safety, security, and trust. We would welcome them if they choose to join these agreements," a White House official said.