This AI Safety Summit Is a Doomer’s Paradise

Key Takeaways:

– The world’s first artificial intelligence safety summit will be held in London next week.
– The discussion paper released ahead of the summit speculates on future AI capabilities and risks.
– The paper warns of dystopian AI disasters, including AI-made bioweapons and cyberattacks.
– There is concern that the summit is too focused on existential problems and not enough on near-term threats.
– Britain’s Prime Minister, Rishi Sunak, expressed concerns about potentially dangerous misaligned AI.
– Sunak wants to establish a global expert panel to publish a major AI report.

Gizmodo:

Photo: Victor Moussa (Shutterstock)

Leaders and policymakers from around the globe will gather in London next week for the world’s first artificial intelligence safety summit. Anyone hoping for a practical discussion of near-term AI harms and risks will likely be disappointed. A new discussion paper released ahead of the summit this week gives a little taste of what to expect, and it’s filled with bangers. We’re talking about AI-made bioweapons, cyberattacks, and even a manipulative evil AI love interest.

The 45-page paper, titled “Capabilities and risks from frontier AI,” gives a relatively straightforward summary of what current generative AI models can and can’t do. Where the report starts to go off the deep end, however, is when it begins speculating about future, more powerful systems, which it dubs “frontier AI.” The paper warns of some of the most dystopian AI disasters, including the possibility humanity could lose control of “misaligned” AI systems.

Some AI risk experts entertain this possibility, but others have pushed back against glamorizing more speculative doomer scenarios, arguing that doing so could detract from more pressing near-term harms. Critics have similarly argued the summit seems too focused on existential problems and not enough on more realistic threats.

Britain’s Prime Minister Rishi Sunak voiced his concerns about potentially dangerous misaligned AI during a speech on Thursday.

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said, according to CNBC. Looking to the future, Sunak said he wants to establish a “truly global expert panel,” nominated by countries attending the summit, to publish a major AI report.



AI Eclipse TLDR:

Leaders and policymakers from around the world will be attending the world’s first artificial intelligence (AI) safety summit in London. However, a discussion paper released ahead of the summit has raised concerns as it speculates about future AI systems and the potential risks they pose. The paper, titled “Capabilities and risks from frontier AI,” outlines the current abilities and limitations of generative AI models. It also warns of dystopian AI disasters, including the possibility of losing control over “misaligned” AI systems.

While some AI risk experts entertain these possibilities, others argue that focusing on more speculative scenarios detracts from more immediate concerns. Critics of the summit have also suggested that it is too focused on existential problems and not enough on realistic threats. British Prime Minister Rishi Sunak expressed his concerns about potentially dangerous misaligned AI during a speech, highlighting the risk of losing control over AI completely. He proposed establishing a global expert panel, nominated by attending countries, to publish a major AI report.