Show HN: Stanford's ACE paper was just open sourced

Show HN (score: 5)
Found: December 03, 2025
ID: 2562

Description

Other
Show HN: Stanford's ACE paper was just open sourced

Last month, the SambaNova team, in partnership with Stanford and UC Berkeley, introduced the viral paper Agentic Context Engineering (ACE), a framework for building evolving contexts that enable self-improving language models and agents. Today, the team has released the full ACE implementation on GitHub, including the complete system architecture, the modular components (Generator, Reflector, Curator), and ready-to-run scripts for both the Finance and AppWorld benchmarks. The repository provides everything needed to reproduce the paper's results, extend ACE to new domains, and experiment with evolving playbooks in your own applications.
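
To give a feel for how the three modules fit together before opening the repository, here is a minimal, hypothetical sketch of one Generator → Reflector → Curator cycle. The class and function names, and the idea of treating the evolving context as a plain list of playbook bullets, are illustrative assumptions rather than the released API; llm stands in for any prompt-in, text-out model call.

    # Hypothetical sketch of one ACE cycle: Generator -> Reflector -> Curator.
    # Names are illustrative; the released repository defines its own API.
    from dataclasses import dataclass, field

    @dataclass
    class Playbook:
        """The evolving context: an append-only list of strategy bullets."""
        bullets: list[str] = field(default_factory=list)

        def render(self) -> str:
            return "\n".join(f"- {b}" for b in self.bullets)

    def ace_step(task: dict, playbook: Playbook, llm) -> str:
        # Generator: attempt the task with the current playbook as context.
        answer = llm(f"Playbook:\n{playbook.render()}\n\nTask:\n{task['question']}")

        # Reflector: distill concrete lessons from the attempt and its feedback.
        lessons = llm(
            f"Task:\n{task['question']}\nAttempt:\n{answer}\n"
            f"Feedback:\n{task['feedback']}\n"
            "Return short, reusable lessons, one per line."
        )

        # Curator: merge lessons as incremental deltas instead of rewriting
        # the whole context, so the playbook grows rather than being overwritten.
        for lesson in lessons.splitlines():
            lesson = lesson.strip(" -")
            if lesson and lesson not in playbook.bullets:
                playbook.bullets.append(lesson)
        return answer

Any callable from prompt string to completion string will drive this loop; per the post, the released scripts additionally wire the real modules to the Finance and AppWorld benchmark harnesses.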

More from Show

Show HN: On the edge of Apple Silicon memory speeds

Show HN: On the edge of Apple Silicon memory speeds

I have developed an open-source CLI tool for Apple Silicon macOS. It measures memory speeds in several different ways, as well as latency. On a base M4 it reaches up to 96-97% of the advertised 120 GB/s on read speed. All memory operations are written in assembly.

I would really appreciate results from other CPUs to see how the benchmark behaves on them; so far I have only been able to test on M1 and M4.

Command: 'memory_benchmark -non-cacheable -count 5 -output results.JSON' (close all applications before running)

This generates a JSON file with copy_gb_s, read_gb_s, and write_gb_s statistics sections.

Example from an M4 with 10 loops:

"copy_gb_s": {
  "statistics": {
    "average": 106.65421233311835, "max": 106.70240696071005,
    "median": 106.65069297260811, "min": 106.6336774994254,
    "p90": 106.66606919223108, "p95": 106.68423807647056,
    "p99": 106.69877318386216, "stddev": 0.01930653530818627
  },
  "values": [
    106.70240696071005, 106.66203166240008, 106.64410802226159, 106.65831409449595,
    106.64148106986977, 106.6482935780762, 106.63974821679058, 106.65896986001393,
    106.6336774994254, 106.65309236714002
  ]
},
"read_gb_s": {
  "statistics": {
    "average": 115.83111228356601, "max": 116.11098114619033,
    "median": 115.84480882265643, "min": 115.56959026587722,
    "p90": 115.99667266786554, "p95": 116.05382690702793,
    "p99": 116.09955029835784, "stddev": 0.1768243167963439
  },
  "values": [
    115.79154681380165, 115.56959026587722, 115.60574235736468, 115.72112860271632,
    115.72147129262802, 115.89807083151123, 115.95527337086908, 115.95334642887214,
    115.98397172582945, 116.11098114619033
  ]
},
"write_gb_s": {
  "statistics": {
    "average": 65.55966046805113, "max": 65.59040040480241,
    "median": 65.55933583741347, "min": 65.50911885624045,
    "p90": 65.5840272860955, "p95": 65.58721384544896,
    "p99": 65.58976309293172, "stddev": 0.02388146120866979
  }
}

The patterns benchmark also shows a bit more about memory speeds.

Command: 'memory_benchmark -patterns -non-cacheable -count 5 -output patterns.JSON'

Example from an M4 with 100 loops:

"sequential_forward": { "bandwidth": { "read_gb_s": { "statistics": {
  "average": 116.38363691482549, "max": 116.61212708384109,
  "median": 116.41264548721367, "min": 115.449510036971,
  "p90": 116.54143114134801, "p95": 116.57314206456576,
  "p99": 116.60095068065866, "stddev": 0.17026641589059727
} } } },
"strided_4096": { "bandwidth": { "read_gb_s": { "statistics": {
  "average": 26.460392735220456, "max": 27.7722419653915,
  "median": 26.457051473208285, "min": 25.519925729459107,
  "p90": 27.105171215736604, "p95": 27.190715938337473,
  "p99": 27.360449534513144, "stddev": 0.4730857335572576
} } } },
"random": { "bandwidth": { "read_gb_s": { "statistics": {
  "average": 26.71367836895143, "max": 26.966820487564327,
  "median": 26.69907406197067, "min": 26.49374804466308,
  "p90": 26.845236287807374, "p95": 26.882004355057887,
  "p99": 26.95742242818151, "stddev": 0.09600564296001704
} } } }

Thank you for reading :)
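
As a quick illustration of how one might consume these result files, here is a minimal Python sketch. It assumes only the layout shown above (top-level copy_gb_s / read_gb_s / write_gb_s sections, each with a "statistics" object) and is not part of the benchmark itself:

    # summarize_results.py -- minimal sketch for reading the benchmark's results.JSON.
    # Assumes copy_gb_s / read_gb_s / write_gb_s sit at the top level of the file,
    # each containing a "statistics" object as in the example output above.
    import json

    with open("results.JSON") as f:
        data = json.load(f)

    for section in ("copy_gb_s", "read_gb_s", "write_gb_s"):
        stats = data.get(section, {}).get("statistics")
        if not stats:
            continue
        print(f"{section:>11}: avg {stats['average']:7.2f} GB/s  "
              f"(min {stats['min']:.2f}, max {stats['max']:.2f}, "
              f"stddev {stats['stddev']:.3f})")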

Show HN: Cachekit – High performance caching policies library in Rust

Show HN: Cachekit – High performance caching policies library in Rust

Show HN: AI video generator that outputs React instead of video files

Show HN: AI video generator that outputs React instead of video files

Hey HN! This is Mayank from Outscal with a new update. Our website is now live. Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video.

Try it: https://ai.outscal.com/
Sample video: https://outscal.com/v2/video/ai-constraints-m7p3_v1/12-01-26-18-47-41

You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → scene design → React components → deployed video.

What we learned building this:

We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script: they had file tools and would wander off reading random files, exploring tangents, and producing inconsistent output.

The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent down to only what it needed and pre-fed context instead of letting agents fetch it themselves. Quality improved immediately.

We wouldn't launch the web version until this was solid. We moved to the Claude Agent SDK, kept the same constraints, and it is now fully automated.

Happy to discuss the agent architecture, why React-as-video, or anything else.
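
The pipeline stages above can be pictured as a straightforward chain. The sketch below is hypothetical (the function names, prompts, and the llm/tts callables are placeholders, not Outscal's code), but it shows the shape of the flow and the "pre-fed context, no file tools" constraint: each stage receives everything it needs as arguments.

    # Hypothetical sketch of the script -> React-video pipeline described above.
    # llm: prompt -> text, tts: text -> audio URL; both are assumed placeholders.

    def generate_video(script: str, style: str, llm, tts) -> list[str]:
        """Turn a text script into one React/TSX component per scene."""
        # 1. Scene direction: split the script into timed scenes.
        scenes = llm(
            f"Break this script into timed scenes, one per paragraph "
            f"({style} style):\n{script}"
        ).split("\n\n")

        # 2. Audio: narrate each scene (the post uses ElevenLabs for this step).
        narration = [tts(scene) for scene in scenes]

        # 3. SVG assets + 4. scene design, produced up front and pre-fed to the
        #    next stage -- agents get no file tools, only their own inputs.
        designs = [llm(f"Plan SVG assets and layout for this scene:\n{scene}")
                   for scene in scenes]

        # 5. React components: one self-contained TSX component per scene,
        #    which is then deployed and played back as the "video".
        return [
            llm(
                "Write a self-contained React/TSX component that animates this scene.\n"
                f"Scene: {scene}\nDesign: {design}\nNarration audio URL: {audio}"
            )
            for scene, design, audio in zip(scenes, designs, narration)
        ]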

Show HN: SubTrack – A SaaS tracker for devs that finds unused tools

Show HN: SubTrack – A SaaS tracker for devs that finds unused tools

Hi HN,

I built SubTrack to help teams find unused SaaS tools and cloud resources before they silently eat into budgets.

The motivation came from seeing how hard it is to answer simple questions:
– Which SaaS tools are actually used?
– Which cloud resources are idle?
– What will our end-of-month spend look like?

SubTrack connects to tools like AWS, GitHub, Vercel, and others to surface unused resources and cost signals in one place. I recently added multi-account support, currency localization, and optional AI-based insights to help interpret usage patterns.

This is an early-stage project and I'm actively iterating. I'd really appreciate feedback, especially from people managing cloud or SaaS sprawl.
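
SubTrack's actual integrations aren't detailed in the post, but the kind of "idle cloud resource" signal it describes can be sketched directly against the AWS APIs. The boto3-based approach and the thresholds below are illustrative assumptions, not SubTrack's implementation:

    # Hypothetical sketch: flag EC2 instances whose average CPU stayed under
    # 2% for the past week. Thresholds and approach are illustrative only.
    from datetime import datetime, timedelta, timezone
    import boto3

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)

    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            iid = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": iid}],
                StartTime=now - timedelta(days=7),
                EndTime=now,
                Period=86400,          # one datapoint per day
                Statistics=["Average"],
            )["Datapoints"]
            if datapoints and max(dp["Average"] for dp in datapoints) < 2.0:
                print(f"possibly idle: {iid}")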

No other tools from this source yet.