
The White House’s Voluntary Framework for Ensuring Safe, Secure, and Trustworthy AI — AI: The Washington Report


Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies. The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have sharply increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of these Washington-focused legislative, executive, and regulatory activities.

In this issue, we discuss the White House’s July 21, 2023 announcement of a voluntary framework for ensuring safe, secure, and trustworthy AI (“July 21 Framework”). Our key takeaways are:

  1. Leaders of seven major technology companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) met with President Biden to affirm their commitment to the July 21 Framework.
  2. The July 21 Framework consists of eight concrete commitments, each falling under one of three guiding principles: safety, security, and trust.
  3. The announcement of this framework reaffirms the key role of the executive branch in guiding the development of comprehensive AI regulation. The Biden administration intends to issue an executive order on AI in the near future.

The White House’s Voluntary Framework for “Ensuring Safe, Secure, and Trustworthy AI”

On July 21, 2023, President Biden met with the leaders of seven major technology companies to secure their commitment to a voluntary framework for responsible AI innovation. “These commitments are real,” asserted Biden, “and they’re concrete.” The unveiling of this framework (“July 21 Framework”), together with the announcement of a forthcoming executive order on AI, signals the White House’s resolve to maintain the strong role the executive branch has played in guiding the development of American AI regulation. Absent substantive AI legislation from Congress, the frameworks and initiatives on autonomous systems released by the executive branch are, and will continue to be, the federal government’s most concrete guidance to businesses on AI development.

Companies at the cutting edge of AI development (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed to adopt the July 21 Framework for AI risk management. Through these commitments, the White House aims to “make the most of AI’s potential” by “encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.” This strategy of establishing guardrails to encourage AI development within circumscribed boundaries is consistent with executive branch actions dating back to the Obama administration, as well as with nascent efforts under discussion in Congress.

The voluntary framework consists of eight commitments, each falling under one of three guiding principles: safety, security, and trust.

Safety

  1. Rigorously test major AI models to ensure that they are resilient, do not promote bias, and do not pose an existential risk to society.[1]
  2. Promote the establishment of and participate in a forum or information-sharing mechanism through which companies can “develop, advance, and adopt shared standards and best practices for frontier AI safety.”[2]

Security

  1. Designate unreleased model weights, or parameters that AI models use to make decisions, as core intellectual property protected by cybersecurity and insider threat detection systems.
  2. Establish systems that incentivize third parties to discover and report vulnerabilities in AI systems.

Trust

  1. Commit to labeling AI-generated content, except content that is “readily distinguishable from reality or that is designed to be readily recognizable as generated by a company’s AI system.”
  2. Report the capabilities, limitations, and risks of AI models to users.
  3. Support research on countering the risks posed by AI systems.
  4. Develop cutting-edge AI systems to help address major social issues, such as climate change and cancer detection.

Context of and Reaction to the July 21 Framework

As discussed in the inaugural issue of this newsletter, since the closing months of the Obama administration, regulatory efforts on AI in the United States have largely been the prerogative of the executive branch.

However, the past few months have seen Congress wade into the question of AI regulation, with members of the House and Senate releasing targeted regulatory proposals and bills that would establish study groups on AI regulation. Senate Majority Leader Chuck Schumer’s (D-NY) SAFE Innovation Framework and Representative Ted Lieu’s (D-CA-36) National AI Commission Act each promise to lay the groundwork for robust, comprehensive AI legislation. Despite this flurry of legislative activity, even the most optimistic estimates place the passage of comprehensive AI legislation at next year, if not the next session of Congress.

Given the rapid pace of technological development, and the level of alarm expressed by experts about AI’s potential to disrupt the economy, perpetuate bias, and create broader societal risks, business leaders have been calling for immediate guidance on AI development. Absent comprehensive AI legislation from Congress, they have turned to executive branch efforts on AI. On July 19, top AI firms and research institutions published an open letter to Congressional leadership urging the funding of the National AI Research Resource (“NAIRR”), arguing that the resource “would transform the U.S. research ecosystem and facilitate the partnerships needed to address societal-level problems.”

Within this context, the White House’s July 21 Framework can be seen as an attempt to provide AI developers with concrete guidance in the absence of enacted AI regulation. In a speech delivered following the announcement of the framework, President Biden asserted that the framework’s commitments are “going to help…the industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values and our shared values.”

Leaders from the seven technology companies that publicly committed to the July 21 Framework lauded the administration’s efforts to guide AI development. Microsoft President Brad Smith asserted that “the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices…that will propel the whole ecosystem forward.” Kent Walker, President of Global Affairs at Google and Alphabet, hailed the July 21 Framework as “a milestone in bringing the industry together to ensure that AI helps everyone.”

But despite the enthusiasm for this framework from key industry leaders, President Biden has indicated that this and other non-binding initiatives are insufficient, and must be followed up by enforceable AI legislation. “Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight,” said Biden. To ease the pathway for these “new laws” on AI, the White House has announced the future release of an executive order “that will ensure the federal government is doing everything in its power to advance safe, secure, and trustworthy AI and manage its risks to individuals and society.”

Conclusion: Reasserting the Executive Branch’s Role in Developing AI Regulation

As the 118th Congress has rapidly produced a slate of AI proposals, experts surveying the field of AI regulation have understandably turned their attention toward the legislative branch. The announcement of the July 21 Framework, along with the open letter on the NAIRR, is a reminder to all those interested in the development of AI regulation in the United States to pay attention to executive branch efforts as well.

As we have discussed in previous editions of this newsletter, the executive branch has accumulated experience in developing AI risk-managing frameworks, establishing bodies to oversee AI development, and collaborating with industry and academia to build regulatory competency regarding autonomous systems. Any comprehensive regulation on AI is likely to draw from the executive branch’s work on AI regulation, including the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and the forthcoming National Priorities for Artificial Intelligence.

As the executive and legislative branches work toward comprehensive AI regulation, we will continue to monitor developments, analyze them, and issue reports.

 

Endnotes

[1] Specifically, this commitment entails the use of “red-teaming,” a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making its systems more resilient to actual adversaries.
[2] On July 26, 2023, four of the companies that agreed to the July 21 Framework (Microsoft, Anthropic, Google, and OpenAI) announced the launch of such a body. The “Frontier Model Forum” is “a new industry body focused on ensuring safe and responsible development of frontier AI models.”

 
