Show HN: Pygantry – Why ship a whole OS when you just need a Python environment?

Show HN (score: 5)
Found: February 05, 2026
ID: 3249

Description

DevOps
Hi Hacker News, I've always found Docker to be overkill for simple Python deployments. It's heavy, complex for non-technical users, and often results in 500MB+ images for a 10KB script. That's why I built Pygantry. It's a minimalist 'container' engine based on Python venv, made portable and relocatable.

Key features:

Lightweight: A full 'shipped' app is usually < 20MB.
Zero-config: No daemon, no root, no Dockerfile complexity.
Portable: Build once, zip it, and run it anywhere with a Python interpreter.
Founder friendly: Built-in licensing and stealth modes for those building a business.

I built this to simplify my own VPS deployments. I'd love your feedback on the architecture, and to hear how you handle 'Docker fatigue' in your own workflows.
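The post doesn't show Pygantry's internals, so as a point of reference, here is a minimal sketch of the general pattern a tool like this can build on: vendor the dependencies next to the script and resolve all paths at run time, so the whole directory can be zipped and unzipped anywhere a host interpreter exists. Everything here (the `vendor/` layout, the `myapp` package, the build commands in the docstring) is an assumption for illustration, not Pygantry's actual design.

```python
"""Hypothetical sketch of the portable-environment pattern a tool like
Pygantry could use (not its actual code). Build step on the dev box:

    pip install --target dist/vendor -r requirements.txt
    cp -r myapp dist/ && cp run.py dist/

Zip dist/, unzip anywhere, and start it with any host Python:

    python run.py
"""
import runpy
import sys
from pathlib import Path

HERE = Path(__file__).resolve().parent

# Resolve everything relative to wherever the zip was extracted, so no
# absolute paths are baked in at build time (the 'relocatable' part).
sys.path.insert(0, str(HERE / "vendor"))  # bundled third-party deps
sys.path.insert(0, str(HERE))             # the app package itself

# Hand control to the bundled application package, as `python -m myapp`
# would (runpy executes myapp/__main__.py for a package).
runpy.run_module("myapp", run_name="__main__")
```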

More from Show HN

Show HN: Buquet – Durable queues and workflows using only S3

buquet (bucket queue) is a queue and workflow orchestration tool using only S3-compatible* object storage. S3 is the control plane, which makes it much simpler than the alternatives. This does come with tradeoffs (see the docs), but I do believe there is a niche it can serve well.

https://horv.co/buquet.html
https://github.com/h0rv/buquet

* see https://github.com/h0rv/buquet/blob/main/docs/guides/s3-compatibility.md
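buquet's actual protocol lives in its repo; purely as an illustration of the "S3 as control plane" idea, here is a hedged sketch of an atomic task claim built on S3 conditional writes (`If-None-Match: *`), which only some S3-compatible backends support (hence the compatibility doc above). The bucket name, key layout, and schema are made up for the example.

```python
"""Generic sketch of a queue on S3-style conditional writes,
illustrating the idea behind buquet, not its actual design."""
import json

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-queue-bucket"  # hypothetical

def enqueue(task_id: str, payload: dict) -> None:
    # A pending task is just an object under a known prefix.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"pending/{task_id}",
        Body=json.dumps(payload).encode(),
    )

def try_claim(task_id: str, worker: str) -> bool:
    """Atomically claim a task: the PUT succeeds for exactly one
    worker, because IfNoneMatch='*' rejects the write if the claim
    marker already exists."""
    try:
        s3.put_object(
            Bucket=BUCKET,
            Key=f"claims/{task_id}",
            Body=worker.encode(),
            IfNoneMatch="*",
        )
        return True
    except ClientError as e:
        if e.response.get("Error", {}).get("Code") == "PreconditionFailed":
            return False  # someone else claimed it first
        raise
```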

Show HN: Teaching AI agents to write better GraphQL

We've been seeing more and more developers use AI coding agents directly in their GraphQL workflows. The problem is the agents tend to fall back to generic or outdated GraphQL patterns.

After correcting the same issues over and over, we ended up packaging the GraphQL best practices and conventions we actually want agents to follow as reusable "Skills," and open-sourced them here: https://github.com/apollographql/skills

Install with `npx skills add apollographql/skills` and the agent starts producing named operations with variables, `[Post!]!` list patterns, and more consistent client-side behavior without having to restate those rules in every prompt.

We're hopeful agents can now write GraphQL the way we'd write it ourselves. Try out the repo and let us know what you think.
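For readers who don't live in GraphQL: the difference the skills steer agents toward looks roughly like the sketch below, a named operation with typed variables rather than an anonymous query with values inlined. The endpoint, the `posts` field, and the `Post` type are invented for the example, not taken from the skills repo.

```python
"""Sketch of the kind of request the skills aim for: a named GraphQL
operation with variables, sent over plain HTTP (stdlib only)."""
import json
from urllib.request import Request, urlopen

# Named operation + variables: cacheable, visible in server logs by
# name, and the `[Post!]!` shape means a non-null list of non-null
# posts, so clients never have to null-check each element.
QUERY = """
query GetRecentPosts($limit: Int!) {
  posts(limit: $limit) {   # assumed to return [Post!]!
    id
    title
  }
}
"""

def fetch_posts(endpoint: str, limit: int) -> dict:
    body = json.dumps({
        "query": QUERY,
        "operationName": "GetRecentPosts",
        "variables": {"limit": limit},
    }).encode()
    req = Request(endpoint, data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```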

Show HN: LUML – an open source (Apache 2.0) MLOps/LLMOps platform

Hi HN,

We built LUML (https://github.com/luml-ai/luml), an open-source (Apache 2.0) MLOps/LLMOps platform that covers experiments, registry, LLM tracing, deployments, and so on.

It separates the control plane from your data and compute. Artifacts are self-contained: each model artifact includes all of its metadata (experiment snapshots, dependencies, etc.) and stays in your storage (S3-compatible or Azure).

File transfers go directly between your machine and storage, and execution happens on compute nodes you host and connect to LUML.

We'd love you to try the platform and share your feedback!
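LUML's own client API isn't shown in the post; the "files go directly between your machine and storage" design is typically built on presigned URLs, sketched generically below with boto3. The bucket, key, and file names are hypothetical, and this is not LUML's actual code.

```python
"""Generic sketch of the direct-to-storage transfer pattern: the
control plane hands out a short-lived URL, and the artifact bytes go
straight from the client to object storage without passing through it."""
import boto3
import requests

s3 = boto3.client("s3")

def presign_upload(bucket: str, key: str, expires: int = 900) -> str:
    # Control-plane side: generate and return only the URL.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

# Client side: upload the artifact directly to storage with that URL.
url = presign_upload("my-artifact-store", "models/run-42/model.tar")
with open("model.tar", "rb") as f:
    requests.put(url, data=f)
```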

Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed

A few weeks ago I posted about GoodToGo (https://news.ycombinator.com/item?id=46656759), a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system.

I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs: research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast; I was still the bottleneck. What I really needed was an orchestrator of orchestrators: swarms of swarms of agents with deterministic quality checks.

So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR, including outside code review like CodeRabbit, Greptile, and Bugbot.

The thing that surprised me most was the design review gate. Five agents (PM, Architect, Designer, Security, CTO) review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, and security gaps.

This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics.

A few learnings:

Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. The fix was deterministic gates: BEADS, pre-push hooks, and CI jobs on top of the agent completion check. The gates block bad code whether or not the agent cooperates.

The agents are just markdown files. No custom runtime, no server, and while I built it in TypeScript, the agents are language-agnostic. You can read all of them, edit them, and add your own.

It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes.

metaswarm stands on two projects: https://github.com/steveyegge/beads by Steve Yegge (git-native task tracking and knowledge priming) and https://github.com/obra/superpowers by Jesse Vincent (disciplined agentic workflows: TDD, brainstorming, systematic debugging). Both were essential.

Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit.
I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.

$ cd my-project-name
$ npx metaswarm init

MIT licensed. IANAL. YMMV. Issues/PRs welcome!
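The knowledge-base mechanism is the part that is easiest to picture in code. As a hedged sketch (not metaswarm's actual implementation; the file path and entry schema are guesses), "agents only load entries relevant to the files they're touching" could be as simple as filtering JSONL entries by path glob:

```python
"""Hypothetical sketch of a JSONL knowledge base filtered by the files
an agent is about to touch, illustrating the idea metaswarm describes."""
import json
from fnmatch import fnmatch
from pathlib import Path

def relevant_entries(kb_path: Path, touched_files: list[str]) -> list[dict]:
    """Return only the learned entries whose path globs match a file
    the agent is working on, keeping its context small and on-topic."""
    entries = []
    with kb_path.open() as f:
        for line in f:
            # Assumed schema: {"paths": ["src/api/*.ts"], "note": "..."}
            entry = json.loads(line)
            globs = entry.get("paths", [])
            if any(fnmatch(p, g) for p in touched_files for g in globs):
                entries.append(entry)
    return entries

# Only these entries get injected into the agent's prompt context.
notes = relevant_entries(Path(".metaswarm/knowledge.jsonl"),
                         ["src/api/users.ts"])
```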
