By Rodman Ramezanian - Enterprise Cloud Security Advisor
February 24, 2025 7 Minute Read
AI-powered chatbots have seamlessly integrated into nearly every aspect of our lives, from answering questions and assisting with customer service to managing personal tasks and making transactions. With their ability to provide instant, personalized interactions, we freely share sensitive data, often without a second thought. However, when a chatbot service experiences a data leak, it doesn’t just compromise individual privacy – it strikes at the very trust that users have placed in these technologies. The sheer volume of personal, financial, and conversational data exchanged with these AI systems means that a breach could have far-reaching consequences, affecting everything from personal security to the reliability of the AI services we’ve come to depend on.
A hacker has reportedly claimed to have infiltrated OmniGPT, a widely used AI chatbot and productivity platform, according to information posted on the notorious Breach Forums. The breach allegedly exposed the personal data of 30,000 users, including emails, phone numbers, and more than 34 million lines of conversation logs. In addition to user chats, the exposed data also includes links to uploaded files, some of which contain sensitive information such as credentials, billing details, and API keys.
Truth be told, it’s hard not to be at least a little alarmed by an AI chatbot service facing a data leak, especially given how quickly these tools have skyrocketed in popularity. We rely on them for a range of capabilities – from creative writing and brainstorming to research and even business automation.
The accessibility and convenience of AI-powered chatbots and creativity services have made them an integral part of our daily lives, but what’s most concerning here is that the breach happened on the service’s back end. This means it’s entirely outside of anything users or consumers could have done to prevent it.
There’s a common tendency to treat AI chatbots as a casual repository for any and all information. In reality, however, these platforms operate as ‘black boxes,’ where the user has virtually no insight into how their data is handled, stored, or protected. This disconnect between perception and reality can have devastating consequences when sensitive data, such as uploaded documents, API keys, and personal details, is leaked, as seen in this case.
Equally important is ongoing security awareness training for users. Many people may still not be fully aware of the risks associated with entering sensitive information into AI chatbot services or cloud platforms. Educating users about potential data misuse and emphasizing caution when sharing personal or confidential data can go a long way in reducing exposure.
The allure of powerful, often free, AI-enriched services will only continue to grow, making a traditional “whack-a-mole” approach to security increasingly futile. With hundreds (if not thousands) of new AI services emerging weekly, cybersecurity teams are already overwhelmed and can’t possibly keep pace.
“We’ve had so much spare time lately” – said no cybersecurity team, ever!
Instead of chasing every single AI tool, the focus must shift to the underlying risks. This means prioritizing the protection of sensitive data, regardless of the service it’s being shared with. Whether it’s a seemingly legitimate AI platform or any other shadow IT resource, limiting data interactions and implementing robust security measures will prove far more effective than trying to police an ever-expanding landscape of AI tools. As recent events demonstrate, even robust-looking services can become targets for sophisticated threat actors.
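To make that data-first approach concrete, consider a control that inspects outbound content for sensitive patterns before it reaches any external service, regardless of which AI tool happens to be the destination. The sketch below is purely illustrative and makes assumptions: the regex patterns, category names, and functions (scan_outbound_text, allow_upload) are simplified examples, not a reference to any specific product or detection engine.

```python
import re

# Illustrative patterns for data that should never leave the organization.
# These regexes are simplified examples, not production-grade detectors.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9_]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the categories of sensitive data found in outbound content."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def allow_upload(text: str) -> bool:
    """Block the interaction if any sensitive category is detected,
    no matter which AI tool or shadow IT service is the destination."""
    findings = scan_outbound_text(text)
    if findings:
        print(f"Blocked: sensitive data detected ({', '.join(findings)})")
        return False
    return True

# The same check applies whether the destination is a sanctioned chatbot,
# an unknown AI tool, or any other external service.
allow_upload("Here is our billing export and the key sk_live_a1b2c3d4e5f6g7h8")
```

The point of the sketch is that the decision hinges on the data itself, not on keeping an ever-growing blocklist of individual AI tools up to date.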
AI is undeniably revolutionary, transforming industries and daily life in countless ways. From a purely technical standpoint, however, AI services, particularly those accessed via the web and cloud, are fundamentally just websites or cloud platforms. While their underlying technology is remarkably powerful and innovative, from a cybersecurity perspective, they represent another potential avenue for data leakage and unauthorized access. For security professionals tasked with protecting sensitive organizational data like citizen records, PII, and source code, AI services must be viewed through the lens of shadow IT.
And while AI drives efficiency and innovation, it also poses challenges like data breaches, compliance violations, and shadow AI usage. The rapid adoption of AI often outpaces governance, leaving organizations vulnerable to reputational, financial, and legal risks without proper security measures.
Shadow IT, by definition, encompasses any service not explicitly sanctioned or authorized for corporate use and data transactions. When considering AI services as shadow IT, the associated risks become clear. Would you allow sensitive customer data to be entered into an unapproved website? Would you permit confidential attachments to be uploaded to an unknown cloud service? Would you condone employees using platforms hosted in jurisdictions with questionable data protection practices? The answer to all of these questions should be a resounding no.
Treating AI services as shadow IT forces a necessary shift in perspective. Instead of being dazzled by the technology’s capabilities, organizations must apply the same stringent security standards they would to any other unauthorized service. This includes restricting data interactions, limiting access, and implementing robust monitoring to prevent sensitive information from being exposed to the inherent risks associated with any unmanaged, external platform, regardless of how innovative or helpful it might appear.
We absolutely need to extend our current data protection rules to cover AI apps, and ideally, we need one set of rules that works for everything – approved apps, unapproved apps, even our own internal ones.
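One way to picture that single rule set is to define the policy once and evaluate it against every destination, approved or not. The snippet below is a minimal sketch under that assumption; the destination tiers, data categories, and class name (DataProtectionPolicy) are hypothetical and not tied to any particular platform or vendor.

```python
from dataclasses import dataclass

# Hypothetical destination tiers: the policy itself does not change per tier.
APP_TIERS = {"sanctioned", "unsanctioned", "internal"}

@dataclass
class DataProtectionPolicy:
    """One policy applied uniformly to approved, unapproved, and internal apps."""
    blocked_categories: tuple = ("credentials", "api_keys", "citizen_records", "source_code")
    allow_upload_to_unsanctioned: bool = False

    def evaluate(self, destination_tier: str, data_categories: set[str]) -> str:
        if destination_tier not in APP_TIERS:
            destination_tier = "unsanctioned"  # treat unknown services as shadow IT
        if any(cat in self.blocked_categories for cat in data_categories):
            return "block"  # sensitive data never leaves, regardless of destination
        if destination_tier == "unsanctioned" and not self.allow_upload_to_unsanctioned:
            return "block"
        return "allow"

policy = DataProtectionPolicy()
print(policy.evaluate("sanctioned", {"api_keys"}))     # block: the data is what matters
print(policy.evaluate("unsanctioned", {"marketing"}))  # block: shadow IT uploads disallowed
print(policy.evaluate("internal", {"marketing"}))      # allow
```

Because the same evaluation runs for every destination, an unknown AI chatbot is simply treated as another unsanctioned service rather than a special case requiring its own rules.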
As a security practitioner, once you’ve identified the risks associated with a given service or entity, you’d ultimately want to ask three questions:
With over 11 years of extensive cybersecurity industry experience, Rodman Ramezanian is an Enterprise Cloud Security Advisor, responsible for Technical Advisory, Enablement, Solution Design and Architecture at Skyhigh Security. In this role, Rodman primarily focuses on Australian Federal Government, Defense, and Enterprise organizations.
Rodman specializes in the areas of Adversarial Threat Intelligence, Cyber Crime, Data Protection, and Cloud Security. He is an Australian Signals Directorate (ASD)-endorsed IRAP Assessor – currently holding CISSP, CCSP, CISA, CDPSE, Microsoft Azure, and MITRE ATT&CK CTI certifications.
Candidly, Rodman has a strong passion for articulating complex matters in simple terms, helping the average person and new security professionals understand the what, why, and how of cybersecurity.