
Generative artificial intelligence (AI) tools such as ChatGPT have seen widespread adoption among businesses aiming to enhance productivity and streamline operations. These AI assistants have proven invaluable in helping employees with a wide range of tasks and inquiries. A concerning trend has emerged, however: a significant number of workers are inadvertently pasting sensitive company data into these tools.
According to a research report published by cybersecurity firm Group-IB on June 20, the number of compromised ChatGPT credentials available for sale on the dark web has surpassed a staggering 100,000. The report highlights that May 2023 alone saw a record 26,802 compromised accounts, up from just 74 recorded in June 2022.

Info-stealing malware, commonly distributed through phishing campaigns, is the primary method used to compromise these accounts, enabling threat actors to harvest sensitive user information, including saved credentials, bank card details, cookies, browsing history, and even cryptocurrency wallet data. ChatGPT's default configuration, which retains every conversation, compounds the risk for compromised accounts.
Dmitry Shestakov, Head of Threat Intelligence at Group-IB, stated, “Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
The report highlights that the Asia-Pacific region has seen the highest concentration of ChatGPT credentials offered for sale over the past year. Other data available on these dark web marketplaces includes lists of domains found in the stealer logs and the compromised users' IP addresses.
This alarming rise in compromised ChatGPT credentials underscores the risks that come with integrating AI solutions into business operations. While AI bots have demonstrated their effectiveness in assisting employees, the accounts behind them are an attractive target for hackers seeking valuable data, and the unintentional exposure of sensitive company information poses a significant threat to organizational security and integrity.
To tackle this growing issue, OpenAI, the organization behind ChatGPT, must prioritize security measures that safeguard user credentials and sensitive data. These could include stronger authentication protocols, regular security audits, and user education about phishing risks and best practices for interacting with AI bots.
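To make the first of those measures concrete, the sketch below shows how a time-based one-time password (TOTP), the mechanism behind most authenticator-app prompts, is verified server-side using the open-source pyotp library. It is a generic, minimal illustration with invented names (the user and issuer are assumptions), not a description of OpenAI's actual login flow.

```python
import pyotp

# Enrollment: generate a per-user secret once and store it server-side.
# The user loads the provisioning URI into an authenticator app,
# typically by scanning a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"))

# Login: a phished password alone no longer suffices; the attacker
# would also need the current six-digit code from the user's device.
submitted_code = input("Enter the six-digit code from your app: ")
if totp.verify(submitted_code, valid_window=1):  # tolerate one step of clock drift
    print("Second factor accepted.")
else:
    print("Invalid code; login denied.")
```

Even a simple second factor like this would render most of the 100,000 leaked logins useless on their own, since the credential alone no longer opens the account.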
OpenAI’s commitment to user privacy and data protection is crucial in maintaining trust in AI technology. By continuously improving security features and staying vigilant against emerging threats, OpenAI can contribute to mitigating the risks associated with compromised AI bot credentials and ensuring the safe integration of artificial intelligence into business operations.
As reliance on AI continues to expand, businesses must also take proactive steps to educate their employees about the risks of AI bot usage and to encourage responsible handling of sensitive data. Robust cybersecurity controls and training programs, such as the kind of prompt-screening filter sketched below, can further fortify organizations against phishing attacks and data breaches.
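One practical control of that kind is a lightweight pre-submission filter that scans prompts for obvious secrets before they leave the corporate network. The Python sketch below is a hypothetical, minimal example: the patterns, function name, and redaction format are illustrative assumptions, and a production data-loss-prevention tool would detect far more than these few formats.

```python
import re

# Illustrative patterns for common secret formats; a real DLP tool
# would use far more thorough detection (entropy checks, ML, etc.).
SECRET_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "OpenAI-style API key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scrub_prompt(text: str) -> tuple[str, list[str]]:
    """Redact likely secrets from a prompt and report what was found."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    prompt = ("Debug this: client = Client(key='sk-abcdefghijklmnopqrstu'), "
              "owner jane@example.com")
    cleaned, found = scrub_prompt(prompt)
    if found:
        print(f"Redacted: {', '.join(found)}")
    print(cleaned)
```

A filter like this could sit in a browser extension or in a proxy in front of approved AI services, logging findings for the security team rather than silently discarding them.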
The emerging trend of compromised ChatGPT credentials serves as a wake-up call for both technology providers and businesses, emphasizing the critical need for comprehensive security measures and user awareness in the era of AI integration.