
6 posts tagged with "Vulnerability"

Security vulnerability disclosure and technical analysis


OpenAI Codex: How a Branch Name Stole GitHub Tokens

12 min read
Dhayabaran V
Barrack AI

BeyondTrust Phantom Labs disclosed a critical command injection vulnerability in OpenAI's Codex cloud environment on March 30, 2026. The vulnerability allowed attackers to steal GitHub OAuth tokens by injecting shell commands through a branch name parameter. A branch name. That is where the entire attack starts.

The flaw affected every Codex surface: the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. OpenAI classified it as Critical (Priority 1) and remediated all issues by February 5, 2026, following responsible disclosure that began December 16, 2025. No CVE has been assigned.
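The class of bug behind this is easy to sketch. The snippet below is illustrative only, not OpenAI's code: it contrasts interpolating an attacker-controlled branch name into a shell string (where `;` terminates the intended command) with passing it as a single argv element. `echo` stands in for `git checkout` so the sketch is harmless to run.

```python
import subprocess

def checkout_vulnerable(branch: str) -> str:
    # Vulnerable pattern: the branch name is spliced into a shell string,
    # so shell metacharacters like ';' end the intended command and run
    # attacker-supplied ones.
    result = subprocess.run(
        f"echo checking out {branch}",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

def checkout_safe(branch: str) -> list[str]:
    # Safe pattern: the branch name travels as one argv element and is
    # never parsed by a shell, so metacharacters stay literal.
    return ["git", "checkout", "--", branch]

payload = "main; echo INJECTED"
print(checkout_vulnerable(payload))  # the payload's second command executed
print(checkout_safe(payload))        # the whole payload is a single argument
```

In the real attack, the injected command exfiltrated the GitHub OAuth token instead of echoing a marker string.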

GPU Rowhammer Is Real: A Single Bit Flip Drops AI Model Accuracy from 80% to 0.1%

13 min read
Dhayabaran V
Barrack AI

A single bit flip in GPU memory dropped an AI model's accuracy from 80% to 0.1%.

That is not a theoretical risk. It is a documented, reproducible attack called GPUHammer, demonstrated on an NVIDIA RTX A6000 by University of Toronto researchers and presented at USENIX Security 2025. The attack requires only user-level CUDA privileges and works in multi-tenant cloud GPU environments where attacker and victim share the same physical GPU.

GPUHammer is not the only GPU hardware vulnerability. LeftoverLocals (CVE-2023-4969) proved that AMD, Apple, and Qualcomm GPUs leak memory between processes, allowing full reconstruction of LLM responses. NVBleed demonstrated cross-VM data leakage through NVIDIA's NVLink interconnect on Google Cloud Platform. And at RSA Conference 2026, analysts highlighted that traditional security tools monitor only CPU and OS activity, leaving GPU operations completely invisible.

If you are training or running inference on cloud GPUs, this matters. Here is the full technical breakdown.
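To see why one flipped bit can be catastrophic, consider the IEEE-754 encoding of a model weight. This sketch is not the GPUHammer attack itself (which induces DRAM bit flips by hammering adjacent rows); it only demonstrates the numerical consequence: flipping the top exponent bit of a float32 weight turns 0.5 into roughly 1.7 × 10^38, which then propagates through every layer that touches it.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    # Reinterpret the float32 as its 32 raw bits, flip one bit, convert back.
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    raw ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", raw))[0]

weight = 0.5
corrupted = flip_bit(weight, 30)  # bit 30 is the most significant exponent bit
print(weight, "->", corrupted)    # 0.5 -> 1.7014118346046923e+38
```

The GPUHammer paper targeted exactly this kind of high-order exponent bit in FP16 weights; one hit is enough to destroy inference accuracy.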

Langflow Got Hacked Twice Through the Same exec() Call. Your AI Stack Probably Has the Same Problem.

15 min read
Dhayabaran V
Barrack AI

Langflow fixed a critical RCE last year. Attackers have now found the same unsandboxed exec() call on a different endpoint and exploited it within 20 hours, without any public proof-of-concept code to work from.

CVE-2026-33017 (CVSS 9.3, Critical) is an unauthenticated remote code execution vulnerability affecting all Langflow versions through 1.8.1, fixed in 1.9.0. Within 20 hours of the advisory going public on March 17, 2026, attackers built working exploits from the advisory text alone and began harvesting API keys for OpenAI, Anthropic, and AWS from compromised instances.

The important part for anyone running AI orchestration tools: the fix for the first vulnerability (CVE-2025-3248) was structurally incapable of preventing this one, because the vulnerable endpoint is designed to be unauthenticated. This is a case study in why AI orchestration tools demand security review at the architecture level, not just the endpoint level.
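The anti-pattern at the heart of both CVEs can be sketched in a few lines. This is illustrative code, not Langflow's actual source: an unauthenticated code path hands user-supplied text to an unsandboxed `exec()`, so any payload runs with the server process's full privileges, environment variables included.

```python
def run_component_code(user_code: str) -> dict:
    # The anti-pattern: user input goes straight into exec() with no
    # sandbox, allowlist, or authentication in front of it.
    namespace: dict = {}
    exec(user_code, namespace)  # attacker-controlled code runs in-process
    return namespace

# An attacker-style payload: read a credential from the environment, the
# way real exploits harvested OpenAI, Anthropic, and AWS keys.
ns = run_component_code(
    "import os; stolen = os.environ.get('OPENAI_API_KEY', '<unset>')"
)
print(ns["stolen"])
```

Patching one route that reaches such a function does nothing for the next route that reaches it, which is why the CVE-2025-3248 fix could not prevent CVE-2026-33017.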

Your AI Copilot Is the Newest Attack Surface

15 min read
Dhayabaran V
Barrack AI

Four distinct security incidents in early 2026 prove that AI assistants have become viable, weaponizable attack vectors. Researchers demonstrated zero-click data exfiltration through Excel's Copilot Agent, full system compromise via Chrome's Gemini panel, session hijacking of Microsoft Copilot Personal, and 1Password vault takeover through Perplexity's agentic browser. Each exploits the same fundamental problem: AI agents inherit broad permissions and cannot reliably distinguish legitimate instructions from attacker-controlled content. The industry data confirms the gap: 83% of organizations plan to deploy agentic AI, but only 29% feel ready to secure it.

Blackbox AI's VS Code extension can give attackers root access to your machine. The company has not responded in seven months.

18 min read
Dhayabaran V
Barrack AI

A security researcher at ERNW GmbH sent a crafted PNG image to Blackbox AI's VS Code extension. The extension read the image, followed the hidden instructions inside it, downloaded a reverse shell binary from an attacker-controlled server, executed it, and then, after being guilt-tripped into apologizing, ran the binary again with sudo privileges. Root access. From a PNG.

The Blackbox AI extension has been installed over 4.7 million times according to the company's own website. It runs shell commands, edits files, and launches a browser on your machine. Three independent security research teams have now documented critical vulnerabilities in it. The company behind it has not responded to a single disclosure attempt in over seven months.

Google's Documentation Says API Keys Are Secrets and Also Not Secrets. 2,863 Verified Keys Are Already Exposed.

28 min read
Dhayabaran V
Barrack AI

Google's Firebase security checklist reads: "You do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code." Google's Gemini API key documentation reads: "Treat your Gemini API key like a password." Both pages are live right now, on the same company's documentation, governing the same AIza... key format.

That contradiction is not a typo. It is the surface-level symptom of an architectural flaw that has left 2,863 verified API keys on public websites silently authenticating to Gemini endpoints, 35,000 Google API keys hardcoded in Android apps exposed to the same risk, and at least one solo developer facing $82,314.44 in unauthorized charges accumulated in 48 hours.

On February 25, 2026, security researchers at Truffle Security published the disclosure that tied it all together. Google had spent 90 days on the report. The root-cause fix was still not deployed when the disclosure window closed. Google's initial response to the vulnerability report: "Intended Behavior."
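Finding exposed keys of this kind is mechanical, which is why 2,863 of them turned up on public websites. The sketch below uses the heuristic pattern secret scanners commonly apply to the `AIza...` format: the four-character prefix plus 35 URL-safe characters. Treat the exact length as a scanner convention, not a guarantee from Google.

```python
import re

# Common secret-scanner heuristic for Google API keys: 'AIza' followed by
# 35 characters from [0-9A-Za-z_-], 39 characters total.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return every substring of `text` matching the AIza key pattern."""
    return GOOGLE_API_KEY_RE.findall(text)

# A page source with a hardcoded (fake) key embedded in client-side code.
page = '<script>fetch("https://api.example/v1?key=AIza' + "B" * 35 + '")</script>'
print(find_candidate_keys(page))
```

A candidate match still has to be verified against a live endpoint before it counts as exposed; Truffle Security's figure of 2,863 refers to keys that actually authenticated.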