Skip to main content

2 posts tagged with "Prompt Injection"

Prompt injection attacks targeting AI agents and LLM applications

View All Tags

Your AI Copilot Is the Newest Attack Surface

· 15 min read
Dhayabaran V
Barrack AI

Four distinct security incidents in early 2026 prove that AI assistants have become viable, weaponizable attack vectors. Researchers demonstrated zero-click data exfiltration through Excel's Copilot Agent, full system compromise via Chrome's Gemini panel, session hijacking of Microsoft Copilot Personal, and 1Password vault takeover through Perplexity's agentic browser. Each exploits the same fundamental problem: AI agents inherit broad permissions and cannot reliably distinguish legitimate instructions from attacker-controlled content. The industry data confirms the gap: 83% of organizations plan to deploy agentic AI, but only 29% feel ready to secure it.

Blackbox AI's VS Code extension can give attackers root access to your machine. The company has not responded in seven months.

· 18 min read
Dhayabaran V
Barrack AI

A security researcher at ERNW GmbH sent a crafted PNG image to Blackbox AI's VS Code extension. The extension read the image, followed the hidden instructions inside it, downloaded a reverse shell binary from an attacker-controlled server, executed it, and then, after being guilt-tripped into apologizing, ran the binary again with sudo privileges. Root access. From a PNG.

The Blackbox AI extension has been installed over 4.7 million times according to the company's own website. It runs shell commands, edits files, and launches a browser on your machine. Three independent security research teams have now documented critical vulnerabilities in it. The company behind it has not responded to a single disclosure attempt in over seven months.