Sweeping White House executive order takes aim at AI’s toughest challenges

The Biden administration on Monday unveiled its ambitious next steps in addressing and regulating artificial intelligence development. Its sweeping new executive order seeks to establish more protections for the public as well as improve best practices for federal agencies and their contractors.

"The President had instructed his team several months ago to take every step," a senior administration official told reporters on a recent press call. "That’s what this order does, bringing the power of the federal government to bear on broad areas to manage the risks of AI and harness its benefits… It stands up for consumers and workers, fosters innovation and competition, Drives American leadership forward in the world and, like all executive orders, has the force of law."

These actions will be rolled out over the next year, with smaller safety and security changes arriving over approximately 90 days and more involved reporting and data transparency plans taking nine to 12 months to fully implement. The administration is also creating an "AI Council," chaired by White House Deputy Chief of Staff Bruce Reed, who will meet with federal agency heads to ensure that tasks are being executed on schedule.

Bruce Reed, assistant to the president and deputy chief of staff, walks to Marine One behind President Joe Biden in Washington, July 6, 2022. (AP Photo/Patrick Semansky)

Public safety and security

"In response to the President’s leadership on this topic, 15 major US technology companies have launched their own voluntary commitments to ensure that AI technology is safe, secure, and trusted before it is released to the public." Senior administrative officer said. "That is not enough."

The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models could impact national or economic security. Those requirements will also apply to the development of AI tools that autonomously apply security fixes to critical software infrastructure.

Leveraging the Defense Production Act, the EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," according to a White House press release. That information must be shared before the models are made available to the public, which could help slow the rate at which companies release half-baked and potentially dangerous machine learning products.

In addition to sharing red-team test results, the EO also requires disclosure of a system's training runs (essentially, its iterative development history). "What it does is it creates a space before the release… to verify that the system is secure," officials said.

Administration officials were quick to point out that this reporting requirement will not affect any AI models currently on the market, nor will it affect independent or small- to medium-sized AI companies going forward, because the threshold that triggers it is set quite high. It is designed specifically for the next generation of AI systems that companies like Google, Meta and OpenAI are already working on, kicking in at roughly 10^26 floating-point operations of training compute, a scale currently beyond the range of existing AI models. "It's not going to catch AI systems trained by graduate students, or even professors," the administration official said.
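For a rough sense of where that compute threshold sits, a commonly used back-of-the-envelope approximation puts transformer training compute at about 6 × (parameter count) × (training tokens) floating-point operations. The sketch below uses that approximation with made-up model sizes purely for illustration; none of the figures come from the executive order or from any vendor.

```python
# Back-of-the-envelope check against the 1e26-operation reporting threshold
# discussed above. Uses the common approximation:
#   training FLOPs ~= 6 * parameters * tokens
# All model figures below are illustrative assumptions, not official numbers.

REPORTING_THRESHOLD_FLOPS = 1e26


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6.0 * parameters * tokens


hypothetical_runs = {
    "grad-student scale (1B params, 20B tokens)": (1e9, 2e10),
    "frontier scale (1T params, 20T tokens)": (1e12, 2e13),
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimated_training_flops(params, tokens)
    verdict = "would trigger" if flops > REPORTING_THRESHOLD_FLOPS else "stays under"
    print(f"{name}: ~{flops:.1e} FLOPs -> {verdict} the reporting threshold")
```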

Additionally, the EO will encourage the Departments of Energy and Homeland Security to combat AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks," per the release. "Agencies funding life-science projects would establish these standards as a condition of federal funding, creating powerful incentives to ensure proper screening and manage the risks potentially exacerbated by AI." In short, any developers found in violation of the EO can expect a quick and unpleasant visit from the DOE, FDA, EPA, or another applicable regulatory agency, regardless of the age or processing speed of their AI models.

In an effort to proactively address the deteriorating state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, building on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns about misbehaving models that SEC chief Gary Gensler recently raised.

AI watermarking and cryptographic verification

We're already seeing the normalization of deepfake trickery and AI-empowered propaganda on the campaign trail, so the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. White House officials on the press call argued that the public should be able to easily verify whether the content they see is AI-generated.

An AI-generated image of a penguin in the desert, with a Content Credentials information window open in the upper right corner. (Adobe)

The Commerce Department is leading that effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. "Our goal is to support and facilitate and help standardize that work [by the C2PA]," administration officials said. "We see ourselves as being included in that ecosystem."

Officials further explained that the government is supporting the underlying technical standards and practices that will lead to widespread adoption of digital watermarking, similar to the work it did to develop the HTTPS ecosystem and drive its adoption among developers and the public alike. This will help federal officials achieve their other goal: ensuring that the government's official messages can be trusted.
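C2PA Content Credentials are embedded in the media file itself as a signed manifest. The snippet below is only a naive presence check that scans a file's raw bytes for the standard "c2pa" manifest label; it is a rough heuristic for illustration, not real cryptographic verification, which requires dedicated C2PA tooling. The file name is hypothetical.

```python
# Naive check for an embedded C2PA Content Credentials manifest.
# This only scans the file's raw bytes for the standard "c2pa" manifest label;
# it does NOT validate the signature chain, so treat it as a rough heuristic.

from pathlib import Path


def appears_to_have_content_credentials(path: str) -> bool:
    """Return True if the file's bytes contain the C2PA manifest label."""
    try:
        data = Path(path).read_bytes()
    except FileNotFoundError:
        return False
    return b"c2pa" in data  # crude; false positives and negatives are possible


if __name__ == "__main__":
    sample = "penguin_in_desert.jpg"  # hypothetical file name for this example
    found = appears_to_have_content_credentials(sample)
    print(f"{sample}: Content Credentials marker {'found' if found else 'not found'}")
```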

Civil rights and consumer protection

The White House unveiled its first Blueprint for an AI Bill of Rights last October, the administration official noted, which directed agencies to "combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety." "But there is still more to do," the official added.

The new EO will require that guidance be provided to "landlords, federal benefits programs, and federal contractors" to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Justice Department to develop best practices for investigating and prosecuting AI-related civil rights violations, as well as, per the announcement, address "the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."

Additionally, the EO calls for prioritizing federal support to accelerate the development of privacy-preserving technologies that would enable future LLMs to be trained on large datasets without the existing risk of leaking the personal details those datasets contain. According to the White House release, these solutions could include "cryptographic tools that protect individuals' privacy," developed with assistance from the Research Coordination Network and the National Science Foundation. The executive order also reiterates its call for bipartisan legislation from Congress to address the broader privacy issues that AI systems present for consumers.
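The release does not name specific techniques, but differential privacy is one widely cited example of a privacy-preserving approach. The sketch below is a minimal, hypothetical illustration of its core idea (adding calibrated noise so that no single person's record meaningfully changes a published statistic); it is not drawn from the executive order or any program it funds.

```python
# Minimal illustration of the idea behind differential privacy: publish an
# aggregate statistic with calibrated Laplace noise so adding or removing any
# one person's record barely changes the output. A teaching sketch only,
# not a production privacy mechanism.

import random


def dp_count(records: list, epsilon: float) -> float:
    """Noisy count of True entries; Laplace(0, 1/epsilon) noise gives
    epsilon-DP for a counting query, whose sensitivity is 1."""
    true_count = sum(bool(r) for r in records)
    # Difference of two Exponential(rate=epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


if __name__ == "__main__":
    # Fictional dataset: whether each (made-up) person has some sensitive attribute.
    data = [random.random() < 0.3 for _ in range(10_000)]
    print("true count:", sum(data))
    print("noisy count (epsilon=0.5):", round(dp_count(data, epsilon=0.5), 1))
```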

In the context of health care, the EO says the Department of Health and Human Services will establish a safety program to monitor and remediate unsafe AI-based medical practices. Teachers will also get support from the federal government in using AI-based educational tools like personalized chatbot tutoring.

Worker safety

The Biden administration recognizes that while the AI revolution is a definite boon for business, its capabilities also make it a threat to workers through job displacement and intrusive workplace surveillance. The EO seeks to address those issues with "the development of principles and employer best practices that minimize the harms and maximize the benefits of AI for workers," an administration official said. "We encourage federal agencies to adopt these guidelines in the administration of their programs."

Protesters gather outside the Paramount Pictures studios in Los Angeles on September 13, 2023, amid negotiations between actors and Hollywood studios. (Richard Shotwell/Invision/AP)

The EO will also direct the Labor Department and the Council of Economic Advisers to study how AI could impact the labor market and how the federal government can better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI could bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There are a lot of opportunities here, but we have to ensure responsible government development and deployment of AI,” an administration official said.

To that end, the administration is launching a new federal jobs portal, AI.gov, on Monday, which will provide information and guidance on available fellowship programs for people looking for work with the federal government. "We're trying to bring in more AI talent across the board," an administration official said. "Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs are doing as much as they can to bring talent to the table." The White House is also considering expanding existing immigration rules to streamline visa criteria, interviews and reviews for people trying to move to the US to work in these advanced industries.

The White House reportedly did not give industry a preview of this particular slate of radical policy changes, though administration officials noted that they were already collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event on Capitol Hill last week, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak, on Tuesday.

Senate Majority Leader Charles Schumer (D-NY) speaks to reporters after the weekly Senate Democratic policy luncheon at the U.S. Capitol on September 12, 2023, in Washington, DC. (Chip Somodevilla via Getty Images)

During a Washington Post program on Thursday, Senate Majority Leader Charles Schumer (D-NY) was already arguing that the executive order does not go far enough and cannot be considered an effective replacement for congressional action, which has so far been slow in coming.

"There are probably limits to what you can do by executive order," Schumer told WaPo. "They [the Biden administration] are concerned, and they are doing a lot regulatorily, but everyone recognizes that the only real answer is legislative."

This article was originally published on Engadget
