Meet the Pranksters Behind Goody-2, the World’s ‘Most Responsible’ AI Chatbot

Key Takeaways:

– Corporate talk of responsible AI and deflection by chatbots are becoming more common, but serious safety problems with large language models and generative AI systems remain unsolved.
– The recent outbreak of Taylor Swift deepfakes on Twitter originated from an image generator released by Microsoft, a company that already maintains a significant responsible AI research program.
– The restrictions on AI chatbots and the challenge of finding moral alignment that pleases everyone have sparked debates.
– Some developers allege that OpenAI’s ChatGPT has a left-leaning bias and seek to create a politically neutral alternative.
– Elon Musk promised that his ChatGPT rival, Grok, would be less biased, but it often equivocates like Goody-2.
– Many AI researchers have praised Goody-2, appreciating both its humor and the serious points the project raises.
– Some researchers argue that guardrails are necessary in AI development but can become intrusive quickly.
– Goody-2’s co-CEO, Brian Moore, emphasizes the project’s focus on safety above all else and mentions exploring ways to build a safe AI image generator.
– Moore suggests that blurring could be an interim step, with full darkness or no image at all as the end goal for safe AI image generation.

Wired:

Goody-2 also highlights how, although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved. The recent outbreak of Taylor Swift deepfakes on Twitter turned out to stem from an image generator released by Microsoft, which was one of the first major tech companies to build up and maintain a significant responsible AI research program.

The restrictions placed on AI chatbots, and the difficulty of finding a moral alignment that pleases everybody, have already become a subject of some debate. Some developers have alleged that OpenAI’s ChatGPT has a left-leaning bias and have sought to build a more politically neutral alternative. Elon Musk promised that his own ChatGPT rival, Grok, would be less biased than other AI systems, although in fact it often ends up equivocating in ways that can be reminiscent of Goody-2.

Plenty of AI researchers seem to appreciate the joke behind Goody-2—and also the serious points raised by the project—sharing praise and recommendations for the chatbot. “Who says AI can’t make art,” Toby Walsh, a professor at the University of New South Wales who works on creating trustworthy AI, posted on X.

“At the risk of ruining a good joke, it also shows how hard it is to get this right,” added Ethan Mollick, a professor at Wharton Business School who studies AI. “Some guardrails are necessary … but they get intrusive fast.”

Brian Moore, Goody-2’s other co-CEO, says the project reflects a willingness to prioritize caution more than other AI developers. “It is truly focused on safety, first and foremost, above literally everything else, including helpfulness and intelligence and really any sort of helpful application,” he says.

Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. “It’s an exciting field,” Moore says. “Blurring would be a step that we might see internally, but we would want either full darkness or potentially no image at all at the end of it.”


AI Eclipse TLDR:

The article discusses the limitations and safety concerns surrounding large language models and generative AI systems. It highlights the recent Taylor Swift deepfake outbreak on Twitter, which originated from an image generator released by Microsoft. Despite corporations’ increasing emphasis on responsible AI and chatbot deflection, serious safety problems persist. The restrictions placed on AI chatbots and the challenge of achieving a moral alignment that pleases everyone have sparked debate among developers. Some claim that OpenAI’s ChatGPT has a left-leaning bias and have aimed to create a politically neutral alternative. Elon Musk promised that his ChatGPT rival, Grok, would be less biased, but it often ends up equivocating like Goody-2. AI researchers have praised Goody-2 for raising awareness of how difficult it is to create safe and unbiased AI systems. The project’s co-CEO, Brian Moore, says it prioritizes caution and safety above all else, even at the expense of helpfulness and intelligence. The team is also exploring ways to develop an extremely safe AI image generator, although it may be less entertaining than Goody-2.