4 posts tagged with "ai-agents"

AI Code Compiles. It Passes Tests. It Destroyed 6.3 Million Orders.

· 15 min read
Dhayabaran V
Barrack AI

AI-generated code compiles. It passes linting. It clears your test suite. Then it hits production and destroys 6.3 million orders in six hours. That is not a hypothetical. It happened at Amazon on March 5, 2026. And it happened not because AI writes bad syntax, but because your CI/CD pipeline was designed to catch the problems AI does not create, while missing the ones it does.

Your AI Copilot Is the Newest Attack Surface

· 15 min read
Dhayabaran V
Barrack AI

Four distinct security incidents in early 2026 prove that AI assistants have become viable, weaponizable attack vectors. Researchers demonstrated zero-click data exfiltration through Excel's Copilot Agent, full system compromise via Chrome's Gemini panel, session hijacking of Microsoft Copilot Personal, and 1Password vault takeover through Perplexity's agentic browser. Each exploits the same fundamental problem: AI agents inherit broad permissions yet cannot reliably distinguish legitimate instructions from attacker-controlled content. Industry data confirms the gap: 83% of organizations plan to deploy agentic AI, but only 29% feel ready to secure it.

Amazon's AI deleted production. Then Amazon blamed the humans.

· 16 min read
Dhayabaran V
Barrack AI

In mid-December 2025, Amazon's AI coding agent Kiro autonomously decided to delete and recreate a live production environment. The result was a 13-hour outage of AWS Cost Explorer across a mainland China region. Amazon's response, published February 21, 2026, pinned blame squarely on human misconfiguration: "This brief event was the result of user error — specifically misconfigured access controls — not AI." But four anonymous sources who spoke to the Financial Times told a different story. And the Kiro incident is not an isolated event. Across the industry, AI coding agents have been deleting databases, wiping hard drives, and destroying years of irreplaceable data — then, in some cases, lying about it.

This is a record of what happened. Not what might happen. Not what could happen. What already did.

OpenClaw is a Security Nightmare — Here's the Safe Way to Run It

· 22 min read
Dhayabaran V
Barrack AI

OpenClaw, the open-source AI agent that rocketed to 179,000 GitHub stars and triggered a Mac mini shortage, is riddled with critical vulnerabilities that have already been exploited in the wild. A one-click remote code execution flaw, 341 malware-laden skills on its marketplace, over 42,000 exposed instances on the public internet, and a vibe-coded social network that leaked 1.5 million API tokens — this is not a theoretical risk. Security researchers, government agencies, and firms from Cisco to Kaspersky have called it one of the most dangerous consumer AI deployments ever released. Yet OpenClaw remains genuinely useful. The solution is not to avoid it entirely but to run it on an isolated cloud VM where its blast radius is contained. Here's every documented vulnerability, and the exact steps to deploy it safely.