– A world-first declaration on AI, called the Bletchley Declaration, was signed by 28 countries and the EU at the UK’s AI Safety Summit.
– The declaration establishes a shared understanding of AI’s dangers and opportunities and calls for international cooperation.
– It specifically focuses on “frontier AI,” which includes advanced models like OpenAI’s ChatGPT that pose significant risks.
– Critics argue that the fears around frontier AI have been exaggerated and influenced by big tech companies.
– Researchers argue that the focus should be on more urgent concerns like job automation, discrimination, and environmental impacts.
– The declaration lacks detail and does not propose any specific rules, roadmap, or ethical principles for regulating AI.
– There are already existing policies and agreements, such as the EU’s AI Act and the G7 International Code of Conduct for AI, that contain more substance.
– The declaration is seen as more of a symbolic gesture and a signal of willingness to cooperate rather than a meaningful action.
– South Korea and France will host future AI summits, but domestic legislation may hinder significant international agreements.
The Next Web:
A world-first declaration on AI that was agreed on Wednesday will have no real impact and has been manipulated by big tech, critics say.
The statement was signed by 28 countries — and the EU — who collectively cover six continents. They unveiled their pact at the UK’s AI Safety Summit in Bletchley Park, where codebreakers cracked Nazi Germany’s Enigma machine during World War Two.
The new agreement takes its name from the site. Known as the “Bletchley Declaration,” the communiqué establishes a shared understanding of AI’s dangers and opportunities.
“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” the declaration said.
The statement also calls for international action on “frontier AI.” A favoured buzzword at the summit, frontier AI encompasses advanced, general-purpose models such as OpenAI’s ChatGPT. According to the British government, these are the systems that pose the most dangerous and urgent risks.
Signatories of the declaration agreed that “substantial risks” could arise from frontier AI. In some cases, they warned, frontier AI could cause “serious, even catastrophic, harm, either deliberate or unintentional.” But critics argue that such fears have been deliberately overblown.
Lewis Liu, the CEO of machine learning startup Eigen Technologies, is among the most vociferous critics. The apocalyptic warning, he said, “is overly influenced by a deeply flawed analysis and an agenda set by those big tech companies seeking to dominate the policy-making process.”
“This kind of doom-mongering echoes the words of OpenAI and its peers, who have been among the most influential corporate lobbyists in the run-up to the Summit,” he added.
“There is real fear in the startup community that this will be a forum where big tech takes control of the steering wheel, to try and regulate away open-source AI systems, set the terms of debate, and in doing so, freeze out the competition.”
28 countries & the EU have signed The Bletchley Declaration at the #AISafetySummit agreeing to:
⚠️ identify the key opportunities & risks of AI
🌍 build a global understanding of Frontier AI risks
🔬 collaborate on AI scientific research
— Department for Science, Innovation and Technology (@SciTechgovuk) November 1, 2023
The focus on frontier AI has also incensed researchers. Sandra Wachter, a professor of technology and regulation at Oxford University, argues that job automation, discrimination, and environmental impacts are more urgent concerns.
“Unfortunately, this is out of scope for this Summit and the predominant focus is on the ‘risk of losing control’ of AI, in the sense that AI develops a ‘will of its own’ and poses an ‘existential’ risk to humanity,” she said.
“Yet, there is no scientific evidence that we are on such a path, or that such a path even exists. But it distracts from the actual and already existing existential risks.”
Style over substance?
While the declaration issues bold warnings, it’s very light on detail. What’s more notable is the coalition of nations that has backed the pact.
The signatories include the US and China, who made a rare agreement on the world stage.
In a further show of unity, Gina Raimondo, the US Commerce Secretary, and Wu Zhaohui, the Chinese vice minister of science and technology, were seated side-by-side onstage at one session, where each of them delivered speeches on AI.
Their collaboration, however, will be extremely limited in practice. The statement calls for “international cooperation” and “inclusive global dialogue,” but doesn’t propose any specific rules, roadmap, or ethical principles.
“This declaration isn’t going to have any real impact on how AI is regulated,” said Martha Bennett, VP principal analyst at business advisory firm Forrester.
Bennett notes that there are already various policies that contain far more substance, among them the EU’s AI Act, the White House’s Executive Order on AI, and the G7 “International Code of Conduct” for AI.
“Moreover,” Bennett added, “the countries and entities represented at the AI Summit would not have agreed to the text of the Bletchley Declaration if it contained any meaningful detail on how AI should be regulated.”
This landmark declaration marks the start of a new global effort to build public trust in AI by making sure it’s safe 👇 https://t.co/EHACt7kRId
— Rishi Sunak (@RishiSunak) November 1, 2023
Despite her doubts about the real-world impacts, Bennett believes the agreement can serve a useful purpose.
“The Summit and the Bletchley Declaration are more about [sending] signals and demonstrating willingness to cooperate, and that’s important,” she said. “We’ll have to wait and see whether good intentions are followed by meaningful action.”
We might not have to wait long. During the Bletchley Park event, it was announced that South Korea will host a second summit in six months. Another one will then take place in France. As things stand, however, it appears domestic legislation will obstruct any significant international deal.
AI Eclipse TLDR:
Critics argue that a world-first declaration on artificial intelligence (AI) agreed upon by 28 countries and the EU will have no real impact and has been manipulated by big tech. The statement, known as the “Bletchley Declaration,” was unveiled at the UK’s AI Safety Summit and aims to establish a shared understanding of AI’s dangers and opportunities. It calls for international action on “frontier AI,” encompassing advanced, general-purpose models that pose the most dangerous risks. However, critics claim that fears of AI have been exaggerated by big tech companies seeking to dominate the policy-making process. They argue that job automation, discrimination, and environmental impacts are more urgent concerns. The declaration has also been criticized for lacking substance, as it does not propose any specific rules, roadmap, or ethical principles. Despite doubts about its real-world impact, the agreement is seen as a signal of willingness to cooperate. South Korea and France will host future summits, but domestic legislation may obstruct any significant international deal.