Show HN: ACE – A dynamic benchmark measuring the cost to break AI agents

Show HN (score: 7)
Found: April 05, 2026
ID: 4044

Description

We built Adversarial Cost to Exploit (ACE), a benchmark that measures the token expenditure an autonomous adversary must invest to breach an LLM agent. Instead of binary pass/fail, ACE quantifies adversarial effort in dollars, enabling game-theoretic analysis of when an attack is economically rational.
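The core metric can be sketched in a few lines: convert the attacker's token spend into dollars at the model's prices, then compare that cost against the attack's expected payoff. This is a minimal illustration of the idea as described above; the function names, prices, and payoff figures are hypothetical, not ACE's actual implementation.

```python
# Sketch of the dollar-denominated adversarial-cost idea.
# All names and rates below are illustrative assumptions.

def adversarial_cost_usd(input_tokens, output_tokens,
                         price_in_per_mtok, price_out_per_mtok):
    """Dollar cost of the attacker's total token expenditure."""
    return (input_tokens * price_in_per_mtok +
            output_tokens * price_out_per_mtok) / 1_000_000

def attack_is_rational(cost_usd, expected_payoff_usd):
    """An attack is economically rational when the expected payoff
    from a breach exceeds the cost of mounting it."""
    return expected_payoff_usd > cost_usd

# Example: 2M input / 500k output tokens at hypothetical rates of
# $0.30 / $1.20 per million tokens.
cost = adversarial_cost_usd(2_000_000, 500_000, 0.30, 1.20)
print(round(cost, 2))  # 1.2
```

Framed this way, a per-model mean adversarial cost becomes a floor on the payoff an attacker needs before targeting that agent is worth it.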

We tested six budget-tier models (Gemini Flash-Lite, DeepSeek v3.2, Mistral Small 4, Grok 4.1 Fast, GPT-5.4 Nano, Claude Haiku 4.5) with identical agent configs and an autonomous red-teaming attacker.

Haiku 4.5 was an order of magnitude harder to break than every other model: $10.21 mean adversarial cost versus $1.15 for the next most resistant (GPT-5.4 Nano). The remaining four all fell below $1.

This is early work and the methodology will keep evolving. We would love feedback from the community as we iterate.
