ChatGPT is leaking… again — we shouldn’t be surprised but we should be disturbed

Key Takeaways:

– OpenAI’s ChatGPT has a history of leaking sensitive data and has recently been found to leak passwords from corporate support tickets.
– A user reported that ChatGPT provided them with chat logs from a pharmaceutical company’s support system, including user credentials and sensitive information.
– Despite these incidents, some users, like the one who reported the leak, continue to heavily use ChatGPT without concern.
– The article criticizes the lack of understanding and concern about the potential harm caused by AI, particularly in the context of chatbots.
– It highlights the need to educate users about the limitations and potential dangers of AI, rather than promoting blind reliance on the technology.
– The article suggests that AI chatbots can be offensive, racist, and threatening, reflecting the negative behavior they learn from users.
– It concludes by emphasizing the importance of taking action to regulate AI and protect users from potential harm.


OpenAI’s ChatGPT has long been ‘dumb’, willing to assist in cybercrime, an Icarus analogy for the age, and a threat to sensitive company data.

However, it seems we need to go through all of this again, as reports are surfacing that the artificial intelligence tool is once again leaking passwords, this time, just for the sake of variety, inside corporate support tickets.


AI Eclipse TLDR:

OpenAI’s ChatGPT, an artificial intelligence tool, has once again been found leaking sensitive information, including passwords, from corporate support tickets. A user reported that when they accessed ChatGPT, they found additional conversations in their history that were not from them. These conversations contained credentials, the name of a presentation, details of an unpublished research proposal, and a PHP script. Despite previous incidents, the user continues to heavily rely on ChatGPT. The article suggests that many people are unaware or indifferent to the harm caused by AI, and emphasizes the need to educate users about the limitations and risks associated with AI chatbots.