Enterprise AI applications are creating new security risks

Key Takeaways:

– AI applications like ChatGPT and Google Bard are being widely used in the enterprise space to enhance productivity and decision-making.
– The popularity of AI applications is growing rapidly, with a projected doubling in popularity by 2024.
– The use of generative AI tools like ChatGPT in the workplace has led to an increase in security risks, with employees sharing sensitive information on these platforms.
– Malicious actors are targeting the hype surrounding AI applications to exploit vulnerabilities and achieve their own malicious goals.
– Business leaders are struggling to find ways to use third-party AI apps safely and securely, with some companies blocking access to these apps or advising employees not to share confidential information.
– Sensitive information like source code is commonly uploaded to generative AI applications, posing a risk of exposing trade secrets.
– Removing generative AI from company networks can lead to the use of unauthorized third-party applications, which can be exploited in phishing and malware campaigns.
– To secure the workplace, data loss prevention policies and tools should be implemented, along with user coaching to notify employees of potential policy breaches.
– Scanning website traffic and URLs can help identify and prevent cloud- and AI-app-themed attacks.
– Regular monitoring of AI app activity and trends can help identify critical vulnerabilities and implement effective security measures.
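The data loss prevention and user-coaching measures above can be sketched as a minimal outbound-prompt filter. This is an illustrative sketch under assumed rule names and patterns, not any vendor's implementation; a real deployment would run in a secure web gateway or CASB with far richer detectors.

```python
import re

# Hypothetical rule set illustrating basic DLP patterns; the names and
# regexes here are assumptions for the sketch, not production detectors.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"\b(?:def |class |import |#include\b|public static)"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of DLP rules the outbound prompt violates."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def coach_user(text: str) -> bool:
    """Notify the employee and block the upload if any rule matches."""
    violations = check_prompt(text)
    if violations:
        print(f"Policy notice: prompt appears to contain {', '.join(violations)}; "
              "please remove confidential content before sending.")
        return False
    return True
```

The same check could equally run server-side on proxied traffic; the point of the sketch is only that coaching happens in real time, before the sensitive text leaves the network.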


Over the past year, AI has emerged as a transformational productivity tool, potentially revolutionizing industries across the board. AI applications, such as ChatGPT and Google Bard, are becoming common tools within the enterprise space to streamline operations and enhance decision-making. However, AI’s sharp rise in popularity brings with it a new set of security risks that organizations must grapple with to avoid costly data breaches.

Generative AI’s rapid uptake

Just two months after its public launch, ChatGPT became the fastest-growing consumer-focused application in history, using generative AI technology to answer prompts and meet user needs. With an array of benefits that streamline processes for the individual – suggesting recipes, writing birthday inscriptions, and acting as a go-to knowledge encyclopedia – ChatGPT’s wider application and benefit to the workplace was quickly recognized. Today, many employees in offices worldwide rely on generative AI systems to help draft emails, propose calls to action, and summarize documents. Netskope’s recent Cloud and Threat Report found that AI app use is increasing exponentially within enterprises across the globe, growing by 22.5% over May and June 2023. At the current growth rate, the popularity of these applications will double by 2024.

The hacker’s honeypot

Ray Canzanese is the Director of Netskope Threat Labs.

ChatGPT has gained significant traction in the workplace, with 28% of US workers regularly using it. Unfortunately, employees often cut and paste confidential company content into the platform, making it an exposure point for sensitive information: studies show that a quarter of the information shared on ChatGPT is considered sensitive. This popularity has also attracted malicious actors, who exploit the hype around these platforms for their own ends. Business leaders are struggling to find ways to use third-party AI apps securely, with some companies blocking access to ChatGPT outright and others prohibiting the sharing of confidential information. Uploading sensitive data such as source code to generative AI platforms is a particularly high-risk activity that can expose trade secrets.

To address these security concerns, organizations should implement data loss prevention policies and real-time user coaching, and scan website traffic to detect potential threats. With proper security measures in place, AI can continue to benefit enterprises while minimizing risk.
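The traffic- and URL-scanning advice can be illustrated with a toy lookalike-domain check: flag requests to hosts that imitate a sanctioned AI app without actually being one. The allowlist and lure keywords below are assumptions for the sketch; a production gateway would use categorized threat feeds instead.

```python
from urllib.parse import urlparse

# Illustrative allowlist of sanctioned AI app hosts (an assumption for
# this sketch; real deployments would use a managed category list).
SANCTIONED_AI_DOMAINS = {"chat.openai.com", "bard.google.com"}

def is_suspicious_ai_url(url: str) -> bool:
    """Flag URLs whose host imitates a sanctioned AI app without matching it."""
    host = urlparse(url).hostname or ""
    if host in SANCTIONED_AI_DOMAINS:
        return False
    # Heuristic: an AI-app brand name inside an unsanctioned host is a
    # common phishing lure (e.g. "chatgpt-free-login.example").
    lures = ("chatgpt", "openai", "bard")
    return any(lure in host for lure in lures)
```

A check like this is deliberately crude; its value is catching the AI-themed phishing and malware lures described above before an employee ever reaches the page.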