How Container Filesystem Works: Building a Docker-Like Container from Scratch

Hacker News (score: 30)
Found: September 13, 2025
ID: 1423

Description

Category: Other

More from Hacker News

Show HN: Lume 0.2 – Build and Run macOS VMs with unattended setup

Hey HN, Lume is an open-source CLI for running macOS and Linux VMs on Apple Silicon. Since launch (https://news.ycombinator.com/item?id=42908061), we've been using it to run AI agents in isolated macOS environments. We needed VMs that could set themselves up, so we built that.

Here's what's new in 0.2:

*Unattended Setup* – Go from IPSW to a fully configured VM without touching the keyboard. We built a VNC + OCR system that clicks through the macOS Setup Assistant automatically. No more manual setup before pushing to a registry:

    lume create my-vm --os macos --ipsw latest --unattended tahoe

You can write custom YAML configs to set up any macOS version your way.

*HTTP API + Daemon* – A REST API on port 7777 that runs as a background service. Your scripts and CI pipelines can manage VMs that persist even if your terminal closes:

    curl -X POST localhost:7777/lume/vms/my-vm/run -d '{"noDisplay": true}'

*MCP Server* – Native integration with Claude Desktop and AI coding agents. Claude can create, run, and execute commands in VMs directly:

    # Add to Claude Desktop config
    "lume": { "command": "lume", "args": ["serve", "--mcp"] }
    # Then just ask: "Create a sandbox VM and run my tests"

*Multi-location Storage* – macOS disk space is always tight, so based on user feedback we added support for external drives. Add an SSD and move VMs between locations:

    lume config storage add external-ssd /Volumes/ExternalSSD/lume
    lume clone my-vm backup --source-storage default --dest-storage external-ssd

*Registry Support* – Pull and push VM images from GHCR or GCS. Create a golden image once, share it across your team.

We're seeing people use Lume for:

- Running Claude Code in an isolated VM (your host stays clean; reset mistakes by cloning)
- CI/CD pipelines for Apple platform apps
- Automated UI testing across macOS versions
- Disposable sandboxes for security research

To get started:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
    lume create sandbox --os macos --ipsw latest --unattended tahoe
    lume run sandbox --shared-dir ~/my-project

Lume is MIT licensed and Apple Silicon only (M1/M2/M3/M4), since it uses Apple's native Virtualization Framework directly, with no emulation.

Lume runs on EC2 Mac instances and Scaleway if you need cloud infrastructure. We're also working on a managed cloud offering for teams that need macOS compute on demand; if you're interested, reach out.

We're actively developing this as part of Cua (https://github.com/trycua/cua), our Computer Use Agent SDK. We'd love your feedback, bug reports, or feature ideas.

GitHub: https://github.com/trycua/cua
Docs: https://cua.ai/docs/lume

We'll be here to answer questions!
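As a quick illustration of the HTTP API described above, here is a minimal Python sketch that drives the daemon from a script. The port (7777), the /lume/vms/&lt;name&gt;/run endpoint, and the noDisplay field come from the post's own curl example; the function name and error handling are illustrative, not an official Lume client.

    # Minimal sketch: managing Lume VMs over its HTTP API from Python.
    # Endpoint and payload are taken from the curl example above; this is
    # an illustrative client, not an official Lume SDK.
    import requests

    LUME_BASE = "http://localhost:7777/lume"  # daemon address from the post

    def run_vm(name: str, no_display: bool = True) -> None:
        # Equivalent to:
        #   curl -X POST localhost:7777/lume/vms/<name>/run -d '{"noDisplay": true}'
        resp = requests.post(f"{LUME_BASE}/vms/{name}/run", json={"noDisplay": no_display})
        resp.raise_for_status()

    if __name__ == "__main__":
        run_vm("my-vm")

Because the daemon outlives the terminal, the same few lines work unchanged in a CI job or a cron script.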

Show HN: Figma-use – CLI to control Figma for AI agents

I'm Dan, and I built a CLI that lets AI agents design in Figma.

What it does: 100 commands to create shapes, text, frames, and components, modify styles, and export assets. JSX importing that's ~100x faster than any plugin API import. Works with any LLM coding assistant.

Why I built it: The official Figma MCP server can only read files. I wanted AI to actually design: create buttons, build layouts, generate entire component systems. Existing solutions were either read-only or required verbose JSON schemas that burn through tokens.

Demo (45 sec): https://youtu.be/9eSYVZRle7o

Tech stack: Bun + Citty for the CLI, an Elysia WebSocket proxy, and a Figma plugin. The render command connects to Figma's internal multiplayer protocol via Chrome DevTools for extra performance when dealing with large groups of objects.

Try it: bun install -g @dannote/figma-use

Looking for feedback on CLI ergonomics, missing commands, and whether the JSX syntax feels natural.
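The tech stack above describes a common bridge pattern: a short-lived CLI process sends a command to a local WebSocket proxy, which relays it to a long-lived plugin running inside the design tool and relays the reply back. Below is a hedged Python sketch of that relay idea only; figma-use itself is Bun/Elysia, and this message format ("register-plugin", JSON replies) is hypothetical, not its actual protocol.

    # Illustrative CLI <-> plugin WebSocket relay (pip install websockets).
    # The registration message and error payload are invented for the sketch;
    # reconnection and multiple concurrent CLI clients are ignored for brevity.
    import asyncio
    import websockets

    plugin_ws = None  # the plugin keeps one long-lived connection

    async def handler(ws):
        global plugin_ws
        first = await ws.recv()
        if first == "register-plugin":          # plugin side announces itself
            plugin_ws = ws
            await ws.wait_closed()
        elif plugin_ws is None:                 # CLI side, but no plugin yet
            await ws.send('{"error": "plugin not connected"}')
        else:                                   # forward command, relay reply
            await plugin_ws.send(first)
            await ws.send(await plugin_ws.recv())

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()              # serve forever

    if __name__ == "__main__":
        asyncio.run(main())

The proxy exists because a sandboxed plugin cannot open a listening port itself, so both sides dial out to a broker they can each reach.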

Show HN: SnackBase – Open-source, GxP-compliant back end for Python teams

Hi HN, I'm the creator of SnackBase.

I built this because I work in the Healthcare and Life Sciences domain and was tired of spending months building the same "compliant" infrastructure (audit logs, row-level security, PII masking, auth) before writing any actual product code.

The Problem: Existing BaaS tools (Supabase, Appwrite) are amazing, but they are hard to validate for GxP (FDA regulations) and often force you into a JS/Go ecosystem. I wanted something native to the Python tools I already use.

The Solution: SnackBase is a self-hosted Python (FastAPI + SQLAlchemy) backend that includes:

- Compliance Core: Immutable audit logs with blockchain-style hashing (prev_hash) for integrity (see the sketch after this post).
- Native Python Hooks: You can write business logic in pure Python (no webhooks or JS runtimes required).
- Clean Architecture: Strict separation of layers. No business logic in the API routes.

The Stack:

- Python 3.12 + FastAPI
- SQLAlchemy 2.0 (Async)
- React 19 (Admin UI)

Links:

- Live Demo: https://demo.snackbase.dev
- Repo: https://github.com/lalitgehani/snackbase

The demo resets every hour. I'd love feedback on the DSL implementation or the audit logging approach.
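The "blockchain-style hashing" idea is simple to state: every audit entry stores the hash of the previous entry, so altering or deleting any row breaks every hash that follows it. Here is a minimal stdlib-only Python sketch of that prev_hash chain; the prev_hash name comes from the post, but the rest (payload shape, helper names) is illustrative, not SnackBase's actual implementation.

    # Hash-chained append-only log: tampering anywhere invalidates the chain.
    import hashlib
    import json

    def entry_hash(entry: dict) -> str:
        # Canonical JSON so the hash is stable regardless of key order.
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append_entry(log: list[dict], payload: dict) -> dict:
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {"payload": payload, "prev_hash": prev_hash}
        entry["hash"] = entry_hash({"payload": payload, "prev_hash": prev_hash})
        log.append(entry)
        return entry

    def verify_chain(log: list[dict]) -> bool:
        prev = "0" * 64
        for e in log:
            if e["prev_hash"] != prev:
                return False  # a link was removed or reordered
            if e["hash"] != entry_hash({"payload": e["payload"], "prev_hash": e["prev_hash"]}):
                return False  # a payload was edited after the fact
            prev = e["hash"]
        return True

    log: list[dict] = []
    append_entry(log, {"actor": "alice", "action": "update", "record": 42})
    append_entry(log, {"actor": "bob", "action": "delete", "record": 7})
    assert verify_chain(log)

For GxP-style integrity the chain only proves tampering happened; pairing it with append-only storage and periodic anchoring of the latest hash is what makes it hard to tamper undetectably.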

Show HN: Open-Source 8-Ch BCI Board (ESP32 and ADS1299 and OpenBCI GUI)

Hi HN, I recently shared this on r/BCI and wanted to see what the engineering community here thinks.

A while back, I got frustrated with the state of accessible BCI hardware: research gear was wildly unaffordable. So I spent a ton of time designing a custom board, firmware, and software to bridge that gap. I call it the Cerelog ESP-EEG. It is open source (firmware + schematics), and I designed it specifically to fix the signal integrity issues found in most DIY hardware.

I believe in sharing the work. You can find the schematics, firmware, and software setup on the GitHub repo: https://github.com/Cerelog-ESP-EEG/ESP-EEG

For those who don't want to deal with BGA soldering or sourcing components, I do have assembled units available: https://www.cerelog.com/eeg_researchers.html

The major features: compatibility with a forked/modified OpenBCI GUI, plus the BrainFlow API and LSL. I know a lot of us rely on the OpenBCI GUI for visualization because it just works. I didn't want to reinvent the wheel, so I ensured this board supports it natively.

It works out of the box: I maintain a forked, modified version of the GUI that connects to the board via LSL (Lab Streaming Layer). Zero coding required: you can visualize FFTs, spectrograms, and EMG widgets immediately without writing a single line of Python.

The "active bias" (why my signal is cleaner): The TI ADS1299 is the gold standard for EEG, but many dev boards implement it incorrectly. They often leave the bias feedback loop "open" (passive), which makes them terrible at rejecting 60 Hz mains hum. I simply followed the datasheet: I implemented a true closed-loop active bias (Driven Right Leg).

How it works: It measures the common-mode signal, inverts it, and actively drives it back into the body. The result: cleaner data.

Tech stack:

    ADC: TI ADS1299 (24-bit, 8-channel)
    MCU: ESP32, chosen to handle high-speed SPI and WiFi/USB streaming
    Software: BrainFlow support (Python, C++, Java, C#) for those who want to build custom ML pipelines, LSL support, and a forked version of the OpenBCI GUI

This was a huge project for me. I'm happy to geek out about getting the ESP32 to stream reliably at high sample rates, as both the software and firmware for this project proved a lot more challenging than I expected. Let me know what you think!

SAFETY NOTE: I strongly recommend running this on a LiPo battery via WiFi. If you must use USB, please use a laptop running on battery power, not one plugged into the wall.
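Since the board advertises BrainFlow support, pulling raw samples into Python for a custom ML pipeline follows BrainFlow's standard session pattern. The sketch below uses BrainFlow's built-in synthetic board as a stand-in, because the correct board ID and connection parameters for the Cerelog ESP-EEG are board-specific; check the repo's docs for the real values.

    # Standard BrainFlow acquisition loop (pip install brainflow).
    # SYNTHETIC_BOARD is a stand-in; the actual Cerelog ESP-EEG board ID
    # and connection parameters come from the project's documentation.
    import time

    from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

    params = BrainFlowInputParams()
    # For real hardware you would set e.g. params.serial_port or an IP address here.
    board = BoardShim(BoardIds.SYNTHETIC_BOARD, params)

    board.prepare_session()
    board.start_stream()
    time.sleep(5)                      # collect ~5 seconds of data
    data = board.get_board_data()      # 2D array: rows = channels, cols = samples
    board.stop_stream()
    board.release_session()

    eeg_channels = BoardShim.get_eeg_channels(BoardIds.SYNTHETIC_BOARD)
    print(f"EEG rows: {eeg_channels}, samples captured: {data.shape[1]}")

From there the EEG rows of the array feed directly into NumPy/SciPy filtering or any ML pipeline, which is the point of exposing BrainFlow alongside the zero-code GUI path.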
