States could already produce AI malware that evades detection

Key Takeaways:

– AI-generated malware that can avoid detection may already be available to nation states, according to the UK’s cybersecurity agency.
– To create such software, threat actors would need to train an AI model on “quality exploit data,” enabling it to generate code that evades current security measures.
– The National Cyber Security Centre (NCSC) warns there is a realistic possibility that highly capable states have repositories of malware large enough to effectively train an AI model.
– AI is expected to heighten the global ransomware threat and improve the targeting of victims, making it easier for cybercriminals to enter the field.
– Generative AI is particularly useful for social engineering techniques, such as convincing interactions with victims and creating lure documents.
– Nation states are best positioned to harness AI for advanced cyber operations, giving them the most powerful capabilities.
– In the near term, AI is expected to enhance existing threats rather than transform the risk landscape, with ransomware being a significant national security concern.
– The NCSC warns that advancements in AI and the exploitation of this technology by cybercriminals will likely increase the ransomware threat in the coming years.

The Next Web:

AI-generated malware that avoids detection could already be available to nation states, according to the UK’s cybersecurity agency.

To produce such powerful software, threat actors need to train an AI model on “quality exploit data,” the National Cyber Security Centre (NCSC) said today. The resulting system would create new code that evades current security measures.

“There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose,” the NCSC warned.

As for what a “realistic possibility” actually means, the agency’s “probability yardstick” offers some clarity.