Skip to main content

5 posts tagged with "AI Security"

AI application security research and vulnerability analysis

View All Tags

OpenAI Codex: How a Branch Name Stole GitHub Tokens

· 12 min read
Dhayabaran V
Barrack AI

BeyondTrust Phantom Labs disclosed a critical command injection vulnerability in OpenAI's Codex cloud environment on March 30, 2026. The vulnerability allowed attackers to steal GitHub OAuth tokens by injecting shell commands through a branch name parameter. A branch name. That is where the entire attack starts.

The flaw affected every Codex surface: the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. OpenAI classified it as Critical (Priority 1) and remediated all issues by February 5, 2026, following responsible disclosure that began December 16, 2025. No CVE has been assigned.

DarkSword and the LLM Question: What Every Outlet Mentioned but Nobody Wrote About

· 24 min read
Dhayabaran V
Barrack AI

On March 18, 2026, Lookout, Google's Threat Intelligence Group (GTIG), and iVerify published coordinated research disclosing DarkSword, a full-chain iOS exploit kit targeting iPhones running iOS 18.4 through 18.7. Within 48 hours, every major cybersecurity outlet covered the story. They covered the six CVEs, the three zero-days, the exploit chain walkthrough, the 270 million affected devices, the threat actor attribution, and the Coruna connection. But buried inside nearly every article was a finding that none of them turned into its own piece: indicators of LLM-assisted code inside a mass-deployed iOS exploit kit.

Your AI Copilot Is the Newest Attack Surface

· 15 min read
Dhayabaran V
Barrack AI

Four distinct security incidents in early 2026 prove that AI assistants have become viable, weaponizable attack vectors. Researchers demonstrated zero-click data exfiltration through Excel's Copilot Agent, full system compromise via Chrome's Gemini panel, session hijacking of Microsoft Copilot Personal, and 1Password vault takeover through Perplexity's agentic browser. Each exploits the same fundamental problem: AI agents inherit broad permissions and cannot reliably distinguish legitimate instructions from attacker-controlled content. The industry data confirms the gap: 83% of organizations plan to deploy agentic AI, but only 29% feel ready to secure it.

CyberStrikeAI: the AI Attack Platform Behind the 600+ FortiGate Breach

· 27 min read
Dhayabaran V
Barrack AI

An open-source AI-powered offensive security platform, built by a developer with documented ties to China's Ministry of State Security, has been linked to a live campaign that compromised over 600 FortiGate devices across 55 countries in five weeks. Three separate investigations -- by Amazon Threat Intelligence, Team Cymru, and independent researcher blog Cyber and Ramen -- have collectively exposed how CyberStrikeAI and custom attacker-built tooling enabled a single, low-skilled operator to breach enterprise network infrastructure at industrial scale.

Every AI App Data Breach Since January 2025: 20 Incidents, Same Root Causes

· 29 min read
Dhayabaran V
Barrack AI

Between January 2025 and February 2026, at least 20 documented security incidents exposed the personal data of tens of millions of users across AI-powered applications. Nearly every single one traces back to the same preventable root causes: misconfigured Firebase databases, missing Supabase Row Level Security, hardcoded API keys, and exposed cloud backends.

This is not a collection of isolated mistakes.

Three independent large-scale research projects, CovertLabs' Firehound scanning 198 iOS apps, Cybernews' audit of 38,630 Android AI apps, and Escape's analysis of 5,600 vibe-coded applications, all converge on the same conclusion: the AI app ecosystem has a systemic, structural security crisis. The rush to ship AI wrappers, chatbots, and "vibe-coded" products has outpaced even the most basic security practices, leaving hundreds of millions of user records readable by anyone with a browser.

What follows is every documented incident, research finding, and industry statistic. Sourced, dated, and cross-referenced.