“Catastrophic” AI harms among warnings in declaration signed by 28 nations

Key Takeaways:

– The UK hosted an AI Safety Summit attended by 28 countries, including the US and China, to address potential risks posed by advanced AI systems.
– “The Bletchley Declaration” was signed at the summit, warning of potential harm from advanced AI and calling for international cooperation for responsible AI deployment.
– Rapid advancements in machine learning have led governments to consider regulating AI, prompting the meeting.
– Representatives from major tech companies and civil society groups participated in the summit.
– UK government representatives played a leading role in the event, aiming to position the UK as a leader in the AI space.
– UK Prime Minister Rishi Sunak applauded the declaration and emphasized the need to identify potential threats related to advanced AI systems.
– Elon Musk, who attended the summit, expressed concerns about the control of AI systems surpassing human intelligence.
– Some AI experts argue that the risks of AI are overhyped, while others focus on current perceived harms from AI.
– The summit initiated an international dialogue on AI safety but did not set specific policy goals.
– There is a lack of consensus on global AI regulations and who should be responsible for drafting them.
– The summit may have leaned more toward posturing and symbolism, according to some analysts.

Ars Technica:

UK Technology Secretary Michelle Donelan (front row center) is joined by international counterparts for a group photo at the AI Safety Summit at Bletchley Park in Milton Keynes, Buckinghamshire, on November 1, 2023.

On Wednesday, the UK hosted an AI Safety Summit at which 28 countries, including the US and China, gathered to address potential risks posed by advanced AI systems, reports The New York Times. The event included the signing of “The Bletchley Declaration,” which warns of potential harm from advanced AI and calls for international cooperation to ensure responsible AI deployment.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” reads the declaration, named after Bletchley Park, the site of the summit and a historic World War II location linked to Alan Turing. Turing wrote influential early speculation about thinking machines.

Rapid advancements in machine learning, including the appearance of chatbots like ChatGPT, have prompted governments worldwide to consider regulating AI. Their concerns led to the meeting, which has drawn criticism for its invitation list. In the tech world, representatives from major companies included those from Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent. Civil society groups, like Britain’s Ada Lovelace Institute and the Algorithmic Justice League in Massachusetts, also sent representatives.

Political summit representatives from the US included Vice President Kamala Harris and Gina Raimondo, the secretary of commerce. China’s vice minister of science and technology, Wu Zhaohui, expressed Beijing’s willingness to “enhance dialogue and communication” on AI safety. UK government representatives like Technology Secretary Michelle Donelan played a starring role in the event, hoping to place the UK front and center in the AI space.

According to The Guardian, UK Prime Minister Rishi Sunak applauded the Bletchley Declaration and emphasized the need to identify potential threats related to advanced AI systems that may eventually surpass human intelligence. Along that line of thinking, Elon Musk, who attended the summit, said, “For the first time, we have a situation where there’s something that is going to be far smarter than the smartest human… It’s not clear to me we can actually control such a thing.”

Musk has been a prominent voice in a movement warning about the hypothetical existential risks of AI. However, some AI experts, like Google Brain co-founder Andrew Ng and Meta Chief AI Scientist Yann LeCun, call those risks overhyped. “The opinion of the *vast* majority of AI scientists and engineers (me included) is that the whole debate around existential risk is wildly overblown and highly premature,” LeCun wrote on X (formerly Twitter) on October 9. Other critics of AI technology prefer to focus on current perceived harms from AI, including environmental, privacy, ethics, and bias issues, instead of hypothetical threats.

While the summit began an international dialogue on AI safety, it stopped short of setting specific policy goals, according to The Times. That may have something to do with the nebulous nature of “AI” itself. “Artificial intelligence” is a very broad term with a fuzzy definition that can encompass many technologies, ranging from chess-playing computer programs to large language models that can code in Python. In particular, the declaration refers to what it calls “frontier AI,” which the document vaguely defines as “highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models.”

Similarly, there remains a lack of consensus on what global AI regulations should entail or who should be responsible for drafting them. The US, for its part, has announced a separate American AI Safety Institute, and President Biden recently issued an executive order on AI. The European Union is working on an AI bill to establish regulatory principles and guidelines for specific AI technologies.

While the summit signifies a move toward international cooperation on AI safety, some analysts believe it leaned more toward posturing and symbolism. Just before the summit, Prime Minister Sunak announced a live tie-in interview with Musk, to take place Thursday on Musk’s social media platform, X.

