Biden issues sweeping executive order that touches AI risk, deepfakes, privacy

Key Takeaways:

– President Joe Biden issued an executive order on AI that includes regulations on generative AI systems.
– The order mandates testing of advanced AI models to prevent their use in creating weapons.
– The order suggests watermarking AI-generated media to address concerns about deepfakes and disinformation.
– Developers of powerful AI systems that pose risks will be required to notify the federal government and share safety test results.
– The National Institute of Standards and Technology and the Department of Homeland Security will develop standards for testing AI systems for safety and security.
– The order directs several agencies to establish safety standards for AI use and study AI’s impact on the job market and potential job displacement.
– The studies aim to inform future policy decisions to support workers affected by AI advancements.

Ars Technica:


On Monday, President Joe Biden issued an executive order on AI that outlines the federal government’s first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can’t be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.

In the United States, an executive order allows the president to direct the operations of the federal government. Using his authority to set terms for government contracts, Biden aims to influence AI standards by stipulating that federal agencies must only enter into contracts with companies that comply with the government’s newly outlined AI regulations. This approach uses the federal government’s purchasing power to drive compliance with the new standards.

As of press time Monday, the White House had not yet released the full text of the executive order, but from the Fact Sheet authored by the administration and from reporting on drafts of the order by Politico and The New York Times, we can piece together a picture of its content. Some parts of the order reflect positions first specified in Biden’s 2022 “AI Bill of Rights” guidelines, which we covered last October.

Amid fears of existential AI harms that made big news earlier this year, the executive order includes a notable focus on AI safety and security. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health will be required to notify the federal government when training a model. They will also have to share safety test results and other critical information with the US government in accordance with the Defense Production Act before making them public.

Moreover, the National Institute of Standards and Technology (NIST) and the Department of Homeland Security will develop and implement standards for “red team” testing, aimed at ensuring that AI systems are safe and secure before public release. Implementing those efforts is likely easier said than done because what constitutes a “foundation model” or a “risk” could be subject to vague interpretation.

The order also suggests, but doesn’t mandate, the watermarking of photos, videos, and audio produced by AI. This reflects growing concerns about the potential for AI-generated deepfakes and disinformation, particularly in the context of the upcoming 2024 presidential campaign. To ensure accurate communications that are free of AI meddling, the Fact Sheet says federal agencies will develop and use tools to “make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

Under the order, several agencies are directed to establish clear safety standards for the use of AI. For instance, the Department of Health and Human Services is tasked with creating safety standards, while the Department of Labor and the National Economic Council are to study AI’s impact on the job market and potential job displacement. While the order itself can’t prevent job losses due to AI advancements, the administration appears to be taking initial steps to understand and possibly mitigate the socioeconomic impact of AI adoption. According to the Fact Sheet, these studies aim to inform future policy decisions that could offer a safety net for workers in industries most likely to be affected by AI.

