Rapid AI development is putting security and privacy at risk

Key Takeaways:

– The rapid development of artificial intelligence is exposing systems to threats such as misdirection, data poisoning, and privacy attacks.
– The National Institute of Standards and Technology (NIST) warns that AI systems can be attacked and confused by hostile actors, and there is no foolproof protection against it.
– The report aims to promote responsible AI development and raise awareness about the vulnerabilities of all AI systems.
– Large language models (LLMs) are vulnerable because the vast datasets they are trained on cannot be fully audited.
– AI can be targeted during training through poisoning attacks, in which attackers slip offensive language into training material, leading to racist and derogatory responses.
– Evasion attacks can also occur post-deployment, altering how the AI recognizes and responds to inputs, potentially causing accidents.
– Reverse engineering can identify the sources used to train AI, allowing malicious actors to add misleading information and prompt inappropriate responses.
– Malicious actors can compromise legitimate sources of information used by AI, altering its behavior.
– These attacks can be carried out with limited knowledge of AI systems (black-box attacks), making them even more concerning.
– Mitigation strategies currently lack robust assurances, and the community is encouraged to develop better defenses.

TechRadar:

The rapid development of artificial intelligence is exposed to a number of threats, including misdirection, data poisoning, and privacy attacks, according to the National Institute of Standards and Technology (NIST).

A report from NIST states that hostile actors can attack and confuse AI systems, and that there is no way to fully protect against such attacks.

AI Eclipse TLDR:

The National Institute of Standards and Technology (NIST) has released a report highlighting the threats facing rapidly developing artificial intelligence (AI) systems. The report states that AI systems are vulnerable to misdirection, data poisoning, and privacy attacks, and that there is currently no foolproof way to protect against these threats. The publication aims to promote responsible development of AI tools and to make industries aware of the need for greater caution in deploying AI.

One of the main concerns raised by the report is the potential for attacks during the training of AI systems. These include poisoning attacks, in which the AI is trained on data laced with obscene or toxic language, causing it to produce racist or derogatory responses (a sketch follows below). There are also evasion attacks, which alter the way a deployed AI recognizes its inputs, potentially leading to accidents in applications like self-driving cars.
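To make the poisoning mechanism concrete, here is a minimal sketch of a label-flipping attack on a toy text classifier. Everything in it is illustrative and assumed rather than taken from the NIST report: the tiny dataset, the `train` helper, and the choice of scikit-learn's CountVectorizer and LogisticRegression.

```python
# Illustrative data-poisoning sketch: data, labels, and helper names are
# invented for this example; only the attack class comes from the report.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean training data: 1 = toxic, 0 = benign.
texts = [
    "you are wonderful", "have a great day", "thanks for the help",
    "you are awful", "this is hateful garbage", "what an idiot",
]
labels = [0, 0, 0, 1, 1, 1]

# The attacker slips mislabeled copies into the corpus: toxic phrases
# tagged as benign, nudging the model to wave them through.
poisoned_texts = texts + ["what an idiot", "what an idiot"]
poisoned_labels = labels + [0, 0]

def train(x, y):
    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(x), y)
    return vec, clf

for name, (x, y) in [("clean", (texts, labels)),
                     ("poisoned", (poisoned_texts, poisoned_labels))]:
    vec, clf = train(x, y)
    flagged = clf.predict(vec.transform(["what an idiot"]))[0]
    print(f"{name} model flags 'what an idiot' as toxic: {bool(flagged)}")
```

Even a couple of mislabeled examples can tip a model this small; at the scale of web-crawled training data, such injections are far harder to spot, which is exactly the auditing problem the report flags for large language models.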

The report also highlights the risk of malicious actors manipulating the sources of information used to train AI systems, prompting inappropriate responses from the AI. Additionally, attackers can compromise legitimate sources of information and edit their contents to change the behavior of the AI, as the sketch below illustrates.
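Here is a hypothetical, heavily simplified sketch of that source-compromise scenario: an assistant answers questions by majority vote over a document store, and an attacker plants edited copies in a source the system trusts. The corpus, the keyword retrieval, and the majority-vote aggregation are all invented for illustration.

```python
# Hypothetical sketch of source compromise: an assistant answers from a
# document store by majority vote, and an attacker edits a trusted source.
from collections import Counter

def answer(question, corpus):
    # Naive retrieval: keep documents sharing any word with the question.
    words = set(question.lower().split())
    hits = [d for d in corpus if words & set(d["text"].lower().split())]
    # Naive aggregation: trust the claim most retrieved documents make.
    return Counter(d["claim"] for d in hits).most_common(1)[0][0]

corpus = [
    {"text": "support article on password reset",
     "claim": "use the official reset page"},
    {"text": "faq entry on password reset",
     "claim": "use the official reset page"},
]
print(answer("how do I reset my password", corpus))
# -> use the official reset page

# The attacker outvotes the legitimate guidance with edited copies.
corpus += [{"text": "password reset help",
            "claim": "email your password to attacker.example"}] * 3
print(answer("how do I reset my password", corpus))
# -> email your password to attacker.example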

One worrying aspect of these attacks is that they can be carried out with minimal knowledge of the targeted AI system's internals, in so-called "black-box" attacks. The report emphasizes the need for better defenses and mitigation strategies to address these risks.
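The black-box threat is easy to demonstrate with a toy sketch: the attacker below only queries a prediction function, never its weights or training data. The victim model, thresholds, and parameter names here are all hypothetical stand-ins.

```python
# Sketch of a black-box evasion attack: the attacker only calls the model's
# prediction API, never seeing weights or training data. victim_predict is
# a made-up stand-in for any deployed classifier.
import random

def victim_predict(x):
    # Hypothetical deployed detector: flags inputs whose feature sum
    # exceeds a threshold (stand-in for "stop sign", "spam", etc.).
    return 1 if sum(x) > 5.0 else 0

def black_box_evade(x, budget=2000, step=0.05, max_change=0.5):
    """Randomly nudge one feature at a time, within a small per-feature
    budget, until the model's verdict flips -- queries only, no internals."""
    original, adv = list(x), list(x)
    for _ in range(budget):
        if victim_predict(adv) == 0:       # evasion succeeded
            return adv
        i = random.randrange(len(adv))
        if abs((adv[i] - step) - original[i]) <= max_change:
            adv[i] -= step
    return adv

x = [1.2, 1.1, 1.3, 1.0, 0.9]              # sum 5.5 -> flagged (1)
adv = black_box_evade(x)
print("original:", victim_predict(x), "adversarial:", victim_predict(adv))
print("perturbation:", [round(a - b, 2) for a, b in zip(adv, x)])
```

Real black-box attacks on image classifiers and content filters follow the same query-and-nudge loop, only with far more queries; the fact that they need no inside knowledge is what makes them, as the report notes, especially concerning.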