Cybercriminals are exploiting AI tools like ChatGPT to craft more convincing phishing attacks, alarming cybersecurity experts

Key Takeaways:

– ChatGPT and other AI chatbots are being used to generate phishing emails at an accelerated rate.
– Credential phishing, which impersonates trusted individuals or organizations to obtain personal information, has increased by 967% since Q4 2022.
– Business email compromise (BEC) messages are another common type of cybercriminal scam, aiming to defraud companies.
– AI-fueled threats are growing rapidly in volume and sophistication.
– Phishing attacks average 31,000 per day, and 77% of cybersecurity professionals report being targeted by them.
– Generative AI tools let cybercriminals produce thousands of socially engineered attacks with subtle variations, making them more convincing.
– Phishing emails have become extremely convincing and legitimate-sounding, mimicking real senders in tone and style.
– These strategies have already resulted in significant financial losses, with BEC attacks costing businesses around $2.7 billion.
– Companies are using AI defensively to improve detection systems and filters, but cybercriminals are already using AI tools for attacks.
– AI “jailbreaks” are attacks on AI chatbots that remove safety and legality guardrails.
– Companies and users should prioritize ongoing end-user education and training to counter these attacks.
– Users should be encouraged to report fraudulent emails and discuss security concerns.
– Email filtering tools with AI capabilities can help prevent malicious messages from reaching users.
– A zero-trust strategy can help fill control gaps caused by AI-generated email attacks.
– Users should be more cautious and verify information before clicking links in emails.
– It is important to stay aware and cautious in the increasingly tricky online world.


If you’ve noticed a spike in suspicious-looking emails in the last year or so, it might be partly due to one of our favorite AI chatbots – ChatGPT. I know – plenty of us have had candid, private conversations with ChatGPT and learned something about ourselves along the way, and we don’t want to believe it would help scam us.

According to cybersecurity firm SlashNext, ChatGPT and its AI cohorts are being used to pump out phishing emails at an accelerated rate. The report draws on the firm’s threat research and a survey of more than 300 cybersecurity professionals in North America. It claims that malicious phishing emails have increased by 1,265% – and credential phishing specifically by 967% – since the fourth quarter of 2022. Credential phishing targets your personal information, such as usernames, IDs, passwords, or PINs, by impersonating a trusted person, group, or organization through email or a similar communication channel.


AI Eclipse TLDR:

Cybersecurity firm SlashNext has reported a significant increase in phishing emails, particularly credential phishing, which has risen by 967% since the fourth quarter of 2022. The firm’s report, based on the insights of over 300 cybersecurity professionals in North America, suggests that generative artificial intelligence (AI) tools, such as ChatGPT, are being used to compose sophisticated and targeted phishing messages. These AI-fueled threats are growing rapidly in volume and sophistication, with phishing attacks averaging 31,000 per day. Business email compromise (BEC) messages, which aim to defraud companies of funds, are also common. The report highlights the role of AI in enabling cybercriminals to scale up their attacks and produce convincing, persuasive messages. While some tech giants have pledged to test for and fight cybersecurity risks, the report emphasizes that the use of AI tools for phishing attacks is already a reality. Researchers have also discovered AI “jailbreaks,” which exploit vulnerabilities in AI chatbots to bypass their safety guardrails. To combat these threats, companies are advised to prioritize end-user education and training, encourage reporting of suspicious emails, and implement email filtering tools with AI capabilities. A zero-trust strategy is recommended to address control gaps created by AI-generated email attacks. Individual users are also urged to be more vigilant and cautious when interacting with email.