As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Key Takeaways:

– OpenAI is taking steps to prevent the misuse of its AI technologies during the 2024 elections.
– The company is focused on transparency, accuracy, and access to reliable voting information.
– Initiatives include preventing abuse through deepfakes and bots, refining usage policies, and launching a reporting system for potential abuses.
– OpenAI regularly updates its Usage Policies to prevent misuse, especially in the context of elections.
– OpenAI says it red-teams new systems, gathers feedback from users and partners, and implements safety mitigations to guard against misuse.
– OpenAI is working on classifying image provenance and plans to embed digital credentials into its AI-generated images.
– The company is partnering with the National Association of Secretaries of State to provide verified voting information.
– OpenAI aims to build, deploy, and use AI systems safely and will continue to evolve its approach as it learns how its tools are used.

Ars Technica:

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency in AI-generated content and improved access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to the company, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced bias, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards developed by the Coalition for Content Provenance and Authenticity (C2PA). Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.
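OpenAI has not published implementation details, but the C2PA approach it references works by binding a cryptographically signed provenance manifest to the image data. The sketch below is a minimal illustration of that idea, not OpenAI's implementation or the C2PA specification: the manifest fields, function names, and use of the third-party `cryptography` package are assumptions chosen for demonstration.

```python
# Illustrative sketch of content credentials in the spirit of C2PA.
# NOT OpenAI's implementation or the C2PA spec; field and function
# names are hypothetical. Requires: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding metadata to the image's hash."""
    return {
        "claim_generator": generator,  # e.g. the tool that made the image
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the canonical JSON form of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)

def verify(image_bytes: bytes, manifest: dict,
           signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Check the signature, then check the image still matches its hash."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    return hashlib.sha256(image_bytes).hexdigest() == manifest["image_sha256"]

# Usage: the generator signs at creation time; anyone can verify later.
key = Ed25519PrivateKey.generate()
image = b"...raw PNG bytes..."
manifest = make_manifest(image, "DALL-E 3")
sig = sign_manifest(manifest, key)
print(verify(image, manifest, sig, key.public_key()))                 # True
print(verify(image + b"tampered", manifest, sig, key.public_key()))   # False
```

In an actual C2PA workflow, the signed manifest is embedded in the image file's metadata and signed with a certificate chain that identifies the issuer, which is also why such credentials can be lost if the metadata is stripped, for example by re-encoding or screenshotting the image.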

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

AI Eclipse TLDR:

OpenAI, the creator of ChatGPT, has outlined its plans to prevent the misuse of its AI technologies during the upcoming 2024 elections. The company aims to ensure transparency in AI-generated content and improve access to reliable voting information. OpenAI’s approach involves policy enforcement, collaboration with partners, and the development of new tools to classify AI-generated media.

To prevent abuse, OpenAI proposes initiatives such as preventing deepfakes and bots from imitating candidates, refining usage policies, and implementing a reporting system for the public to flag potential abuses. One of OpenAI’s tools, DALL-E 3, includes filters that reject requests to create images of real people, including politicians. The company has been working on tools to improve factual accuracy, reduce bias, and decline certain requests.

OpenAI regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, particularly in the context of elections. The organization has restricted the use of its technologies for political campaigning and lobbying until it gains a better understanding of personalized persuasion. OpenAI also prohibits the creation of chatbots that impersonate real individuals or institutions, as well as applications that discourage participation in democratic processes. Users can report violations of these rules.

OpenAI emphasizes its proactive engagement in strategies to safeguard its technologies against misuse. This includes anticipating challenges through red-teaming new systems, gathering feedback from users and partners, and implementing robust safety measures. The company considers these efforts integral to its mission of continuously refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

In terms of transparency, OpenAI is working on classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards developed by the Coalition for Content Provenance and Authenticity (C2PA). OpenAI is also testing a tool to identify DALL-E-generated images.

To connect users with authoritative information, particularly about voting procedures, OpenAI has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

OpenAI is committed to building, deploying, and using AI systems safely. The company acknowledges the benefits and challenges of these tools and emphasizes its willingness to continually evolve its approach as it gains more insights into how its tools are used.