Show HN: Is AI "good" yet? - Tracking HN sentiment on AI coding
Show HN (score: 5)
More from Show
Show HN: LUML - an open source (Apache 2.0) MLOps/LLMOps platform
Hi HN,

We built LUML (https://github.com/luml-ai/luml), an open-source (Apache 2.0) MLOps/LLMOps platform that covers experiments, a model registry, LLM tracing, deployments, and more.

It separates the control plane from your data and compute. Artifacts are self-contained: each model artifact includes all of its metadata (experiment snapshots, dependencies, and so on) and stays in your storage (S3-compatible or Azure). File transfers go directly between your machine and storage, and execution happens on compute nodes you host and connect to LUML.

We'd love for you to try the platform and share your feedback!
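A minimal sketch of the self-contained-artifact idea described above, assuming nothing about LUML's real schema or SDK: the manifest fields, bucket, endpoint, and key layout below are all hypothetical. The point is simply that the metadata travels with the artifact and both go straight to storage the user owns.

# Hypothetical illustration only: not LUML's actual schema or API.
# A "self-contained artifact": model file plus all of its metadata,
# pushed directly to a user-owned S3-compatible bucket.
import json
import boto3  # any S3-compatible endpoint works here

artifact_manifest = {
    "name": "churn-model",                      # invented example values
    "version": "0.3.1",
    "experiment_snapshot": {"run_id": "run-42", "metrics": {"auc": 0.91}},
    "dependencies": ["scikit-learn==1.4.2", "pandas==2.2.0"],
    "files": ["model.pkl"],
}

s3 = boto3.client("s3", endpoint_url="https://storage.example.com")  # your storage
s3.put_object(Bucket="ml-artifacts",
              Key="churn-model/0.3.1/manifest.json",
              Body=json.dumps(artifact_manifest).encode())
s3.upload_file("model.pkl", "ml-artifacts", "churn-model/0.3.1/model.pkl")

Since the manifest and files stay in the user's bucket, a control plane only needs references to them, which is one way to read the control-plane/data split the authors describe.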
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed
A few weeks ago I posted about GoodToGo (https://news.ycombinator.com/item?id=46656759), a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system.

I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs: research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast; I was still the bottleneck. What I really needed was an orchestrator of orchestrators: swarms of swarms of agents with deterministic quality checks.

So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR, including outside code review tools like CodeRabbit, Greptile, and Bugbot.

The thing that surprised me most was the design review gate. Five agents (PM, Architect, Designer, Security, CTO) review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp; instead it catches real design problems, dependency issues, and security gaps.

This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics.

A few learnings:

Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. The fix was deterministic gates (BEADS, pre-push hooks, CI jobs) on top of the agent's own completion check. The gates block bad code whether or not the agent cooperates.

The agents are just markdown files. No custom runtime, no server, and while I built it in TypeScript, the agents are language-agnostic. You can read all of them, edit them, and add your own.

It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes.

metaswarm stands on two projects: https://github.com/steveyegge/beads by Steve Yegge (git-native task tracking and knowledge priming) and https://github.com/obra/superpowers by Jesse Vincent (disciplined agentic workflows: TDD, brainstorming, systematic debugging). Both were essential.

Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.

$ cd my-project-name

$ npx metaswarm init

MIT licensed. IANAL. YMMV. Issues/PRs welcome!
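To make the "deterministic gate" learning concrete, here is a minimal sketch of a pre-push-style coverage check. This is not metaswarm's or BEADS's actual code; the report path and the 100% threshold are assumptions (a coverage.py JSON report and the bar mentioned in the post), but it shows the property being described: the gate fails the same way every time, regardless of what the agent claims.

#!/usr/bin/env python3
# Hypothetical pre-push gate, not metaswarm's implementation.
# Reads a coverage.py JSON report and blocks the push unless total
# coverage meets the threshold, no matter what the agent's own
# checklist says.
import json
import sys

THRESHOLD = 100.0          # illustrative; the post enforces 100% coverage
REPORT = "coverage.json"   # assumed output of `coverage json`

try:
    with open(REPORT) as f:
        total = json.load(f)["totals"]["percent_covered"]
except (FileNotFoundError, KeyError):
    print("gate: no readable coverage report; blocking push")
    sys.exit(1)

if total < THRESHOLD:
    print(f"gate: coverage {total:.1f}% < {THRESHOLD:.1f}%; blocking push")
    sys.exit(1)

print(f"gate: coverage {total:.1f}% OK")

Wired in as a pre-push hook or CI job, a check like this either passes or exits non-zero, which is what makes it a gate rather than a checklist item an agent can skip.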
Show HN: Quorum-free replicated state machine built atop S3
Hi HN,

I'm sharing the alpha release of S2C, a state machine replication system built atop S3.

The goal is to let a distributed application maintain consistent state without needing a quorum of nodes for availability or consistency.

The idea came from a side project that was using S3, where I needed strongly consistent distributed state but wanted to avoid adding a separate consensus dependency. I initially tried to use S3 directly for coordination, but it became messy. Eventually I realized I needed a replicated state machine with a deterministic log, and it ended up as a standalone project.

To mitigate S3's latency and API costs, it uses time- and size-based batching by default.

S2C supports:

- Linearizable reads and writes (with a single node)
- Exactly-once command semantics (for nodes with stable identities)
- Dynamic node joins and cold-start recovery from zero nodes
- Split-brain safety without clocks or leases
- Snapshotting, log truncation, etc.

Of course, it trades latency and S3 operation costs for operational simplicity; it is not meant to replace high-throughput Raft rings. And it is clearly only usable in architectures that already use S3 (or a compatible store with similar guarantees).

It has passed chaos/fault-injection tests so far (crashes, partitions, leader kills); formal verification is planned.

It's still alpha, but I'd love for people to try it, experiment, and provide feedback.

If you're curious, the code and an extensive deep-dive guide are here: https://github.com/io-s2c/s2c
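A rough sketch of the time- and size-based batching mentioned above, not S2C's implementation: the bucket, key scheme, thresholds, and JSON-lines encoding are all invented for illustration. The idea is to buffer commands and flush them as a single log object when either limit is hit, so one S3 PUT amortizes many commands.

# Illustrative only: not S2C's code or log format.
import json
import time
import boto3

BUCKET = "s2c-log"         # hypothetical bucket
MAX_BYTES = 64 * 1024      # flush when the batch reaches this size...
MAX_WAIT_SECS = 0.2        # ...or when the oldest buffered command is this old

s3 = boto3.client("s3")

class CommandBatcher:
    def __init__(self):
        self.buffer = []       # encoded commands awaiting flush
        self.size = 0
        self.first_at = None   # when the oldest buffered command arrived
        self.seq = 0           # monotonically increasing log object index

    def submit(self, command: dict):
        line = json.dumps(command)
        if not self.buffer:
            self.first_at = time.monotonic()
        self.buffer.append(line)
        self.size += len(line) + 1
        if self.size >= MAX_BYTES or time.monotonic() - self.first_at >= MAX_WAIT_SECS:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        self.seq += 1
        body = "\n".join(self.buffer).encode()
        # One PUT per batch: many commands share a single S3 request.
        s3.put_object(Bucket=BUCKET, Key=f"log/{self.seq:020d}.jsonl", Body=body)
        self.buffer, self.size, self.first_at = [], 0, None

A real system would also need an idle-flush timer and conditional or otherwise ordered writes to keep the log deterministic; this sketch only shows the latency/cost trade-off that batching addresses.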
Show HN: I made a dual-bootable NixBSD (NixOS and FreeBSD) image
I've been working on getting NixBSD (Nix package manager + FreeBSD) to boot alongside NixOS on a shared ZFS pool. The result is a <2GB disk image you can try in QEMU or virt-manager.

What works:

- GRUB chainloads FreeBSD's bootloader
- Both systems share a ZFS pool
- Everything is defined in a single Nix flake
- Fully reproducible builds (some dependencies are now cached on Cachix)

Planned:

- Support native compilation of NixBSD (currently cross-compiled on Linux)
- Clean up the many shortcuts taken to get this working
- Add a semi-automated installer like nixos-wizard

Try it:

qemu-system-x86_64 -enable-kvm -m 2048 \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=nixos.root.img,format=raw

Login: nixos/nixos or root/toor

The hardest parts were getting mounts working at boot, making the bootloader setup idempotent, and debugging early init. This disk image could potentially work on a USB stick with a bit more work.

This is very much experimental. My goal is to eventually produce a proper NixBSD installation ISO and consolidate all configuration into one repository while still consuming upstream NixBSD as a flake.

Download: https://github.com/jonhermansen/nixbsd-demo/releases/tag/build-1

Feel free to leave feedback here or on GitHub! Thanks!
No other tools from this source yet.