🛠️ Hacker News Tools

Showing 461–480 of 2466 tools from Hacker News

Last Updated
April 21, 2026 at 12:00 PM

[CLI Tool] Scrt: A CLI secret manager for developers, sysadmins and DevOps

Found: March 12, 2026 ID: 3742

[DevOps] Show HN: PipeStep – Step-through debugger for GitHub Actions workflows

Hey HN — I kept seeing developers describe the same frustration: the commit-push-wait-read-logs cycle when debugging CI pipelines. So I built PipeStep.

PipeStep parses your GitHub Actions YAML, spins up the right Docker container, and gives you a step-through debugger for the shell commands in your run: steps.

You can:
1. Pause before each step and inspect the container state.
2. Shell into the running container mid-pipeline (press I).
3. Set breakpoints on specific steps (press B).
4. Retry failed steps or skip past others.

It deliberately does *not* try to replicate the full GitHub Actions runtime — no secrets, no matrix builds, no uses: action execution. For full local workflow runs, use act. PipeStep is for when things break and you need to figure out why without pushing 10 more commits. Think of it as gdb for your CI pipeline rather than a local GitHub runner.

pip install pipestep (v0.1.2) · Python 3.11+ · MIT · Requires Docker

Would love feedback, especially from people who've hit the same pain point. Known limitations are documented in the README, and there are some issues in there I'd love eyeballs on!
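
The workflow above is essentially a debugger loop over pipeline steps. A toy sketch of the idea in Python (this is not PipeStep's code; it runs steps locally via the shell, whereas PipeStep runs them inside the job's Docker container, and the step list and control keys here are invented):

```python
import subprocess

# A toy job: a list of named run: steps, as a debugger might
# extract them from the workflow YAML.
steps = [
    {"name": "install", "run": "echo installing deps"},
    {"name": "build", "run": "echo building"},
    {"name": "test", "run": "echo running tests"},
]

def debug_run(steps, breakpoints=(), decide=input):
    """Execute steps in order, pausing before any breakpointed step.

    `decide` is called at a breakpoint and may return
    'c' (continue), 's' (skip this step), or 'q' (quit).
    """
    executed = []
    for step in steps:
        if step["name"] in breakpoints:
            choice = decide(f"break before {step['name']} [c/s/q]: ")
            if choice == "q":
                break
            if choice == "s":
                continue
        subprocess.run(step["run"], shell=True, check=True)
        executed.append(step["name"])
    return executed

# Non-interactive demo: always continue at breakpoints.
print(debug_run(steps, breakpoints={"test"}, decide=lambda _: "c"))
```

The real value is in what `decide` can do at the pause point: inspect state, open a shell, then resume.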

Found: March 12, 2026 ID: 3743

[Other] Show HN: Understudy – Teach a desktop agent by demonstrating a task once

I built Understudy because a lot of real work still spans native desktop apps, browser tabs, terminals, and chat tools. Most current agents live in only one of those surfaces.

Understudy is a local-first desktop agent runtime that can operate GUI apps, browsers, shell tools, files, and messaging in one session. The part I'm most interested in feedback on is teach-by-demonstration: you do a task once, the agent records screen video plus semantic events, extracts the intent rather than the coordinates, and turns it into a reusable skill.

Demo video: https://www.youtube.com/watch?v=3d5cRGnlb_0

In the demo I teach it: Google Image search -> download a photo -> remove background in Pixelmator Pro -> export -> send via Telegram. Then I ask it to do the same for Elon Musk. The replay isn't a brittle macro: the published skill stores intent steps, route options, and GUI hints only as a fallback. In this example it can also prefer faster routes when they are available instead of repeating every GUI step.

Current state: macOS only. Layers 1-2 are working today; Layers 3-4 are partial and still early.

npm install -g @understudy-ai/understudy
understudy wizard

GitHub: https://github.com/understudy-ai/understudy

Happy to answer questions about the architecture, teach-by-demonstration, or the limits of the current implementation.
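
The route-preference behavior described above might look something like this in miniature (a hypothetical skill format I made up to illustrate the idea; Understudy's real format is surely richer):

```python
# Hypothetical skill: each intent step stores route options in
# preference order, with GUI hints only as a last resort.
skill = [
    {"intent": "download image",
     "routes": [{"kind": "shell", "cmd": "curl -O <url>"},
                {"kind": "gui", "hints": ["click Save Image"]}]},
    {"intent": "remove background",
     "routes": [{"kind": "gui", "hints": ["open Pixelmator Pro"]}]},
]

def pick_route(step, available=("shell", "gui")):
    """Return the first route whose kind is currently available,
    so faster routes win and GUI replay stays a fallback."""
    for route in step["routes"]:
        if route["kind"] in available:
            return route
    raise LookupError("no usable route")

print([pick_route(s)["kind"] for s in skill])
```

Storing intent plus alternatives, rather than coordinates, is what lets a replay survive UI changes.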

Found: March 12, 2026 ID: 3741

[DevOps] Show HN: OneCLI – Vault for AI Agents in Rust

We built OneCLI because AI agents are being given raw API keys. And it's going about as well as you'd expect. We figured the answer isn't "don't give agents access," it's "give them access without giving them secrets."

OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault, and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret. It just uses CLI or MCP tools as normal.

Try it in one line:

docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli

The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. Works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).

We started with what felt most urgent: agents shouldn't be holding raw credentials. The next layer is access policies and audit: defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.

It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today.

GitHub: https://github.com/onecli/onecli
Site: https://onecli.sh
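
The swap step can be sketched in a few lines of Python (the real proxy is Rust; the vault entries, grant table, and header names here are illustrative, not OneCLI's):

```python
# Toy vault: (host, path prefix) -> real credential.
VAULT = {("api.example.com", "/v1/"): "sk-real-secret"}
# Which agents may use which vault entries.
GRANTS = {"agent-1": {("api.example.com", "/v1/")}}

def rewrite(agent, host, path, headers):
    """Match the request by host/path, check the agent's access,
    and swap the placeholder Authorization for the real credential."""
    for (h, prefix), real in VAULT.items():
        if host == h and path.startswith(prefix):
            if (h, prefix) not in GRANTS.get(agent, set()):
                raise PermissionError(f"{agent} may not call {host}")
            out = dict(headers)
            out["Authorization"] = f"Bearer {real}"  # placeholder never leaves
            return out
    return dict(headers)  # no rule matched: forward unchanged

hdrs = rewrite("agent-1", "api.example.com", "/v1/chat",
               {"Authorization": "Bearer placeholder-123"})
print(hdrs["Authorization"])
```

The key property: the agent only ever sees `placeholder-123`; the real key exists only inside the proxy.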

Found: March 12, 2026 ID: 3740

[Other] Show HN: GitClassic.com, a fast, lightweight GitHub thin client (pages <14KB)

Hey HN,

I posted GitClassic here 2 months ago; since then I've rebuilt most of it based on what people asked for.

https://gitclassic.com

What's new: Issues, PRs with full diffs, repo intelligence (health scores, dependency graphs), trending/explore, bookmarks, a comparison tool, and advanced search.

Every page is server-rendered HTML: no React, no SPA, no client bundle, and pages are under 14KB gzipped. Try loading facebook/react and compare it to GitHub load times.

Public repos work without an account; Pro adds private repo access via GitHub OAuth.

Stack: Hono on Lambda, DynamoDB, CloudFront, 500KB Node bundle, cold starts usually <500ms.

What's missing?

Thanks, Chris

Found: March 12, 2026 ID: 3802

[Other] Show HN: A desktop app for managing Claude Code sessions

Found: March 12, 2026 ID: 3751

[Other] Show HN: Axe – A 12MB binary that replaces your AI framework

Found: March 12, 2026 ID: 3738

[Monitoring/Observability] Show HN: We analyzed 1,573 Claude Code sessions to see how AI agents work

We built rudel.ai after realizing we had no visibility into our own Claude Code sessions. We were using it daily but had no idea which sessions were efficient, why some got abandoned, or whether we were actually improving over time.

So we built an analytics layer for it. After connecting our own sessions, we ended up with a dataset of 1,573 real Claude Code sessions, 15M+ tokens, and 270K+ interactions.

Some things we found that surprised us:

- Skills were only being used in 4% of our sessions
- 26% of sessions are abandoned, most within the first 60 seconds
- Session success rate varies significantly by task type (documentation scores highest, refactoring lowest)
- Error cascade patterns appear in the first 2 minutes and predict abandonment with reasonable accuracy
- There is no meaningful benchmark for 'good' agentic session performance; we are building one

The tool is free to use and fully open source. Happy to answer questions about the data or how we built it.
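
As a toy version of the error-cascade finding above: a session can be flagged when errors cluster early. The 2-minute window matches the post; the error-count threshold and event format are invented for illustration:

```python
def cascade_risk(events, window_s=120, max_errors=3):
    """events: (timestamp_seconds, kind) pairs from session start.
    Flag the session if `max_errors` or more errors land inside the
    first `window_s` seconds — the early-cascade pattern that, per
    the post, predicts abandonment."""
    early_errors = sum(1 for t, kind in events
                       if kind == "error" and t <= window_s)
    return early_errors >= max_errors

session = [(5, "prompt"), (20, "error"), (41, "error"),
           (70, "error"), (300, "prompt")]
print(cascade_risk(session))
```

A real detector would weigh error types and sequencing, but even this crude count shows why the signal is cheap to compute live.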

Found: March 12, 2026 ID: 3737

[Other] Document poisoning in RAG systems: How attackers corrupt AI's sources

I'm the author. Repo is here: https://github.com/aminrj-labs/mcp-attack-labs/tree/main/labs/04-rag-security

The lab runs entirely on LM Studio + Qwen2.5-7B-Instruct (Q4_K_M) + ChromaDB — no cloud APIs, no GPU required, no API keys.

From zero to seeing the poisoning succeed: git clone, make setup, make attack1. About 10 minutes.

Two things worth flagging upfront:

- The 95% success rate is against a 5-document corpus (best case for the attacker). In a mature collection you need proportionally more poisoned docs to dominate retrieval — but the mechanism is the same.

- Embedding anomaly detection at ingestion was the biggest surprise: 95% → 20% as a standalone control, outperforming all three generation-phase defenses combined. It runs on embeddings your pipeline already produces — no additional model.

All five layers combined: 10% residual.

Happy to discuss methodology, the PoisonedRAG comparison, or anything that looks off.
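
The ingestion-time check can be sketched in a few lines. This is an illustrative stand-in for the lab's embedding-anomaly defense, not its actual implementation: flag any incoming document whose embedding sits unusually far from the existing corpus centroid (the vectors and the 3-sigma cutoff below are made up):

```python
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_outliers(corpus_emb, incoming_emb, k=3.0):
    """Flag incoming embeddings whose distance from the corpus centroid
    exceeds mean + k*stdev of the corpus's own distances. Runs on
    embeddings the pipeline already produces — no extra model."""
    c = centroid(corpus_emb)
    dists = [dist(v, c) for v in corpus_emb]
    mu = sum(dists) / len(dists)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in dists) / len(dists))
    cutoff = mu + k * sigma
    return [dist(v, c) > cutoff for v in incoming_emb]

corpus = [[0.9, 0.1], [1.0, 0.0], [0.8, 0.2], [0.95, 0.05]]
poisoned = [5.0, -4.0]   # far from everything the corpus is about
print(flag_outliers(corpus, [poisoned, [0.9, 0.1]]))
```

Poisoned documents must be semantically unusual enough to hijack retrieval, which is exactly what makes them stand out geometrically at ingestion.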

Found: March 12, 2026 ID: 3750

[CLI Tool] Show HN: Calyx – Ghostty-Based macOS Terminal with Liquid Glass UI

Found: March 12, 2026 ID: 3747

[Other] Coding after coders: The end of computer programming as we know it?

Found: March 12, 2026 ID: 3772

[Other] Executing programs inside transformers with exponentially faster inference

Found: March 12, 2026 ID: 3762

[Other] Show HN: Autoresearch@home

autoresearch@home is a collaborative research collective where AI agents share GPU resources to collectively improve a language model. Think SETI@home, but for model training.

How it works: Agents read the current best result, propose a hypothesis, modify train.py, run the experiment on your GPU, and publish results back. When an agent beats the current best validation loss, that becomes the new baseline for every other agent. Agents learn from great runs and failures, since we're using Ensue as the collective memory layer.

This project extends Karpathy's autoresearch by adding the missing coordination layer so agents can actually build on each other's work.

To participate, you need an agent and a GPU. The agent handles everything: cloning the repo, connecting to the collective, picking experiments, running them, publishing results, and asking you to verify you're a real person via email.

Send this prompt to your agent to get started: "Read https://github.com/mutable-state-inc/autoresearch-at-home follow the instructions join autoresearch and start contributing."

This whole experiment is to prove that agents work better when they can build off other agents. The timeline is live, so you can watch experiments land in real time.
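
The coordination layer described above can be simulated in a few lines: agents publish results, and a result becomes the shared baseline only if it beats the current best validation loss (agent names and loss numbers are illustrative, not from the project):

```python
# Shared state every agent reads before picking an experiment.
best = {"val_loss": 3.50, "by": "baseline"}
log = []  # collective memory: successes and failures alike

def publish(agent, val_loss):
    """An agent reports a run; it becomes the new baseline only if
    it beats the best result every other agent builds on."""
    log.append((agent, val_loss))
    if val_loss < best["val_loss"]:
        best.update(val_loss=val_loss, by=agent)
        return True   # accepted as the new baseline
    return False      # kept only as a failure to learn from

publish("agent-a", 3.41)   # improves: new baseline
publish("agent-b", 3.60)   # worse: recorded, not adopted
publish("agent-c", 3.38)   # builds on a's result and improves again
print(best)
```

The point of keeping failed runs in `log` is the same as in the project: later agents can avoid hypotheses that already lost.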

Found: March 11, 2026 ID: 3732

[Other] Show HN: A context-aware permission guard for Claude Code

We needed something like --dangerously-skip-permissions that doesn't nuke your untracked files, exfiltrate your keys, or install malware.

Claude Code's permission system is allow-or-deny per tool, but that doesn't really scale. Deleting some files is fine sometimes. And git checkout is sometimes not fine. Even when you curate permissions, 200 IQ Opus can find a way around it. Maintaining a deny list is a fool's errand.

nah is a PreToolUse hook that classifies every tool call by what it actually does, using a deterministic classifier that runs in milliseconds. It maps commands to action types like filesystem_read, package_run, db_write, git_history_rewrite, and applies policies: allow, context (depends on the target), ask, or block.

Not everything can be classified, so you can optionally escalate ambiguous stuff to an LLM, but that's not required. Anything unresolved you can approve, and configure the taxonomy so you don't get asked again.

It works out of the box with sane defaults, no config needed. But you can customize it fully if you want to.

No dependencies, stdlib Python, MIT.

pip install nah && nah install

https://github.com/manuelschipper/nah
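
The classify-then-apply-policy core can be sketched like this. The action-type names come from the post; the prefix rules and policy table are invented for illustration and are far cruder than a real classifier:

```python
# Toy rule table: command prefix -> action type.
RULES = [
    ("rm ", "filesystem_delete"),
    ("cat ", "filesystem_read"),
    ("git push --force", "git_history_rewrite"),
    ("npx ", "package_run"),
]
# Policy per action type, in the post's vocabulary.
POLICY = {
    "filesystem_read": "allow",
    "filesystem_delete": "context",   # depends on the target
    "package_run": "ask",
    "git_history_rewrite": "block",
}

def decide(command):
    """Deterministically classify a command and look up its policy."""
    for prefix, action in RULES:
        if command.startswith(prefix):
            return action, POLICY[action]
    return "unknown", "ask"   # unresolved calls escalate to the human

print(decide("git push --force origin main"))
```

The `context` policy is where the interesting work lives: the same action type can be fine or dangerous depending on what it targets.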

Found: March 11, 2026 ID: 3731

[Other] CRusTTY: A pedagogical C interpreter with time-travel debugging capabilities

Found: March 11, 2026 ID: 3729

[Other] Show HN: Vanilla JavaScript refinery simulator built to explain my job to my kids

Hi HN, I'm a chemical engineer and I manage logistics at a refinery down in Texas. Whenever I try to explain downstream operations to people outside the industry (including my kids), I usually get blank stares. I wanted to build something that visualizes the concepts and chemistry of a plant without completely dumbing down the science, so I put together this 5-minute browser game.

Here's a simple runthrough: https://www.youtube.com/watch?v=is-moBz6upU. I pushed to get through a full product pathway to show the V-804 replay.

I am not a software developer by trade, so I relied heavily on LLMs (Claude, Copilot, Gemini) to help write the code. What started as a simple concept turned into a 9,000-line single-page app built with vanilla HTML, CSS, and JavaScript. I used Matter.js for the 2D physics minigames.

A few technical takeaways from building this as a non-dev:

* Managing the LLM workflow: Once the script.js file got large, letting the models output full file rewrites was a disaster (truncations, hallucinations, invisible curly-quote replacements that broke the JS). I started forcing them to act like patch files, strictly outputting "Find this exact block" and "Replace with this exact block." This was the only way to maintain improvements without breaking existing logic.

* Mapping physics to CSS: I wanted the minigames to visually sit inside circular CSS containers (border-radius: 50%). Matter.js doesn't natively care about your CSS. Getting the rigid body physics to respect a dynamic, responsive DOM boundary across different screen sizes required running an elliptical boundary equation, (dx * dx) / (rx * rx) + (dy * dy) / (ry * ry) > 1, on every single frame. Maybe this was overkill to handle the resizing between phones and PCs.

* Mobile browser events: Forcing iOS Safari to ignore its default behaviors (double-tap zoom, swipe-to-scroll) while still allowing the user to tap and drag Matter.js objects required a ridiculous amount of custom event listener management and CSS (touch-action: manipulation; user-select: none;). I also learned that these overrides very easily kill mouse scrolling, making things frustrating for PC users. I am hoping I hit a good middle ground.

* State management: Since I didn't use React or any frameworks, I had to rely on a global state object. Because the game jumps between different phases/minigames, I ran into massive memory leaks from old setInterval loops and Matter.js bodies stacking up. I had to build strict teardown functions to wipe the slate clean on every map transition.

The game walks through electrostatic desalting, fractional distillation, hydrotreating, catalytic cracking, and gasoline blending (hitting specific Octane and RVP specs).

It's completely free, runs client-side, and has zero ads or sign-ups. I'd appreciate any feedback on the mechanics, or let me know if you manage to break the physics engine. Happy to answer any questions about the chemical engineering side of things as well.

For some reason the URL box is not getting recognized, maybe someone can help me feel less dumb there too. https://fuelingcuriosity.com/game
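
The boundary check from the second bullet, as a standalone function. The containment test is exactly the post's equation; the clamp-back response is an assumption about how an escaped body might be handled each frame, not necessarily what the game does:

```python
import math

def outside_ellipse(x, y, cx, cy, rx, ry):
    """The per-frame containment test from the post: a point is outside
    the circular CSS container when (dx^2/rx^2) + (dy^2/ry^2) > 1."""
    dx, dy = x - cx, y - cy
    return (dx * dx) / (rx * rx) + (dy * dy) / (ry * ry) > 1

def clamp_to_ellipse(x, y, cx, cy, rx, ry):
    """One possible response: project an escaped body back onto the
    ellipse edge along its angle from the center."""
    if not outside_ellipse(x, y, cx, cy, rx, ry):
        return x, y
    angle = math.atan2((y - cy) / ry, (x - cx) / rx)
    return cx + rx * math.cos(angle), cy + ry * math.sin(angle)

print(outside_ellipse(90, 0, 0, 0, 50, 50))   # escaped the container
print(clamp_to_ellipse(90, 0, 0, 0, 50, 50))  # projected back to the rim
```

With `rx` and `ry` read from the container's live dimensions, the same test adapts automatically when the responsive layout resizes.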

Found: March 11, 2026 ID: 3768

[Monitoring/Observability] Launch HN: Sentrial (YC W26) – Catch AI agent failures before your users do

Hey HN! We're Neel and Anay, and we're building Sentrial (https://sentrial.com). It's production monitoring for AI products. We automatically detect failure patterns (loops, hallucinations, tool misuse, user frustration) the moment they happen. When issues surface, Sentrial diagnoses the root cause by analyzing conversation patterns, model outputs, and tool interactions, then recommends specific fixes.

Here's a demo if you're interested: https://www.youtube.com/watch?v=cc4DWrJF7hk

When agents fail, choose wrong tools, or blow cost budgets, there's no way to know why; usually just logs and guesswork. As agents move from demos to production with real SLAs and real users, this is not sustainable.

Neel and I lived this, building agents at SenseHQ and Accenture, where we found that debugging agents was often harder than actually building them. Agents are untrustworthy in prod because there's no good infrastructure to verify what they're actually doing.

In practice this looks like:

- A support agent that began misclassifying refund requests as product questions, which meant customers never reached the refund flow.
- A document drafting agent that would occasionally hallucinate missing sections when parsing long specs, producing confident but incorrect outputs.

There's no stack trace or 500 error, and you only figure this out when a customer is angry.

We both realized teams were flying blind in production, and that agent-native monitoring was going to be foundational infrastructure for every serious AI product. We started Sentrial as a verification layer designed to take care of this.

How it works: you wrap your client with our SDK in only a couple of lines. From there, we detect drift for you:

- Wrong tool invocations
- Misunderstood intents
- Hallucinations
- Quality regressions over time

You see it on our platform before a customer files a ticket.

There's a quick MCP setup; just give Claude Code: claude mcp add --transport http Sentrial https://www.sentrial.com/docs/mcp

We have a free tier (14 days, no credit card required). We'd love feedback from anyone running agents, whether for personal use or in a professional setting.

We'll be around in the comments!
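
As a toy illustration of one pattern in that list, here is a minimal loop detector: it flags an agent that re-issues the identical tool call several times in a row (the threshold and event shape are invented, not Sentrial's):

```python
def detect_loop(tool_calls, repeats=3):
    """Flag when the same (tool, args) call occurs `repeats` times in a
    row — the kind of stuck-agent drift worth surfacing before a user
    files a ticket."""
    run, prev = 1, None
    for call in tool_calls:
        run = run + 1 if call == prev else 1
        if run >= repeats:
            return True
        prev = call
    return False

calls = [("search", "refund"), ("search", "refund"), ("search", "refund")]
print(detect_loop(calls))
```

Production detectors have to cope with near-duplicate arguments and interleaved calls, but exact-repeat runs are the cheapest signal to check first.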

Found: March 11, 2026 ID: 3724

[Other] Show HN: I built a tool that watches webpages and exposes changes as RSS

I built Site Spy after missing a visa appointment slot because a government page changed and I didn't notice for two weeks.

It watches webpages for changes and shows the result like a diff. The part I think HN might find interesting is that it can monitor a specific element on a page, not just the whole page, and it can expose changes as RSS feeds.

So instead of tracking an entire noisy page, you can watch just a price, a stock status, a headline, or a specific content block. When it changes, you can inspect the diff, browse the snapshot history, or follow the updates in an RSS reader.

It's a Chrome/Firefox extension plus a web dashboard.

Main features:

- Element picker for tracking a specific part of a page
- Diff view plus full snapshot timeline
- RSS feeds per watch, per tag, or across all watches
- MCP server for Claude, Cursor, and other AI agents
- Browser push, Email, and Telegram notifications

Chrome: https://chromewebstore.google.com/detail/site-spy/jeapcpanagdgipcfnncmogeojgfofige
Firefox: https://addons.mozilla.org/en-GB/firefox/addon/site-spy/
Docs: https://docs.sitespy.app

I'd especially love feedback on two things:

- Is RSS actually a useful interface for this, or do most people just want direct alerts?
- Does element-level tracking feel meaningfully better than full-page monitoring?
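
Element-level tracking reduces to something like this sketch (my guess at the general mechanism, not Site Spy's code): snapshot only the picked element's text, compare hashes to detect a change, and render a diff when one fires:

```python
import difflib
import hashlib

def snapshot(element_text):
    """Hash the watched element's text so 'changed?' is one comparison."""
    return hashlib.sha256(element_text.encode()).hexdigest()

def element_diff(old, new):
    """The diff shown to the user when the hash changes."""
    return list(difflib.unified_diff(old.splitlines(), new.splitlines(),
                                     lineterm=""))

old = "Visa appointments: none available"
new = "Visa appointments: 2 slots on May 5"
changed = snapshot(old) != snapshot(new)
print(changed)
print("\n".join(element_diff(old, new)))
```

Watching one element instead of the whole page is what keeps the signal clean: edits elsewhere on a noisy page never change the watched element's hash.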

Found: March 11, 2026 ID: 3723

[API/SDK] Launch HN: Prism (YC X25) – Workspace and API to generate and edit videos

Hey HN — we're Rajit, Land, and Alex. We're building Prism (https://www.prismvideos.com), an AI video creation platform and API.

Here's a quick demo of how you can remix any video with Prism: https://youtu.be/0eez_2DnayI

Here's a quick demo of how you can automate UGC-style ads with Openclaw + Prism: https://www.youtube.com/watch?v=5dWaD23qnro

Accompanying skill.md file: https://docs.google.com/document/d/1lIskVljW1OqbkXFyXeLHRsfMictCfuxGGwczAnB1vhk

Making an AI video today usually means stitching together a dozen tools (image generation, image-to-video, upscalers, lip-sync, voiceover, and an editor). Every step turns into export/import and file juggling, so assets end up scattered across tabs and local storage, and iterating on a multi-scene video is slow.

Prism keeps the workflow in one place: you generate assets (images/video clips) and assemble them directly in a timeline editor without downloading files between tools. Practically, that means you can try different models (Kling, Veo, Sora, Hailuo, etc.) and settings for a single clip, swap it on the timeline, and keep iterating without re-exporting and rebuilding the edit elsewhere.

We also support templates and one-click asset recreation, so you can reuse workflows from us or the community instead of rebuilding each asset from scratch. Those templates are exposed through our API, letting your AI agents discover templates in our catalog, supply the required inputs, and generate videos in a repeatable way without manually stitching the workflow together.

We built Prism because we were making AI videos ourselves and were unsatisfied with the available tools. We kept losing time to repetitive "glue work" such as constantly downloading files, keeping track of prompts/versions, and stitching clips in separate video editing software. We're trying to make the boring parts of multi-step AI video creation less manual so users can generate → review → edit → assemble → export, all inside one platform.

Pricing is based on usage credits, with a free tier (100 credits/month) and free models, so you can try it without providing a credit card: https://prismvideos.com.

We'd love to hear from people who've tried making AI videos: where does your workflow break, what parts are the most tedious, and what do you wish video creation tools on the market could do?

Found: March 11, 2026 ID: 3725

[DevOps] Show HN: Klaus – OpenClaw on a VM, batteries included

We are Bailey and Robbie and we are working on Klaus (https://klausai.com/): hosted OpenClaw that is secure and powerful out of the box.

Running OpenClaw requires setting up a cloud VM or local container (a pain) or giving OpenClaw root access to your machine (insecure). Many basic integrations (e.g. Slack, Google Workspace) require you to create your own OAuth app.

We make running OpenClaw simple by giving each user their own EC2 instance, preconfigured with keys for OpenRouter, AgentMail, and Orthogonal. And we have OAuth apps to make it easy to integrate with Slack and Google Workspace.

We are both HN readers (Bailey has been on here for ~10 years) and we know OpenClaw has serious security concerns. We do a lot to make our users' instances more secure: we run on a private subnet, we automatically update the OpenClaw version our users run, and because you're on our VM, by default the only keys you leak if you get hacked belong to us. Connecting your email is still a risk. The best defense I know of is Opus 4.6 for resilience to prompt injection. If you have a better solution, we'd love to hear it!

We learned a lot about infrastructure management in the past month. Kimi K2.5 and Mimimax M2.5 are extremely good at hallucinating new ways to break openclaw.json and otherwise wreaking havoc on an EC2 instance. The week after our launch we spent 20+ hours fixing broken machines by hand.

We wrote a ton of best practices for using OpenClaw on AWS Linux into our users' AGENTS.md, got really good at un-bricking EC2 machines over SSM, added a command-and-control server to every instance to facilitate hotfixes and migrations, and set up a Klaus instance to answer FAQs on Discord.

In addition to all of this, we built ClawBert, our AI SRE for hotfixing OpenClaw instances automatically: https://www.youtube.com/watch?v=v65F6VBXqKY. ClawBert is a Claude Code instance that runs whenever a health check fails or the user triggers it in the UI. It can read that user's entries in our database and execute commands on the user's instance. We expose a log of ClawBert's runs to the user.

We know that setting up OpenClaw is easy for most HN readers, but I promise it is not for most people. Klaus has a long way to go, but it's still very rewarding to see people who've never used Claude Code get their first taste of AI agents.

We charge $19/m for a t4g.small, $49/m for a t4g.medium, and $200/m for a t4g.xlarge with priority support. You get $15 in tokens and $20 in Orthogonal credits one-time.

We want to know what you are building on OpenClaw so we can make sure we support it. We are already working with companies like Orthogonal and OpenRouter that are building things to make agents more useful, and we're sure there are more tools out there we don't know about. If you've built something agents want, please let us know. Comments welcome!

Found: March 11, 2026 ID: 3721
Page 24 of 124