Show HN: Go Command-streaming lib for distributed systems (3x faster than gRPC)
Why build around Commands? As serializable objects, they can be sent over the network and persisted. They also provide a clean way to model distributed transactions through composition, and naturally support features like Undo and Redo. These qualities make them a great fit for implementing consistency patterns like Saga in distributed systems.
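For a concrete (and purely illustrative) sense of what this means, here is a minimal Command sketch in plain Go. The interface, the Inventory receiver, and the method names are my own assumptions for the example, not cmd-stream-go's actual API:

    package main

    // Inventory is a hypothetical receiver that commands act on.
    type Inventory struct{ stock map[string]int }

    func (i *Inventory) Reserve(id string, qty int) { i.stock[id] -= qty }
    func (i *Inventory) Release(id string, qty int) { i.stock[id] += qty }

    // Command: each command executes against a receiver and knows how
    // to undo itself, which is what makes Saga-style compensation natural.
    type Command interface {
        Exec(r *Inventory) error
        Undo(r *Inventory) error
    }

    // ReserveStock is plain data, so it can be serialized, sent over
    // the network, persisted, and replayed.
    type ReserveStock struct {
        ItemID string
        Qty    int
    }

    func (c ReserveStock) Exec(r *Inventory) error { r.Reserve(c.ItemID, c.Qty); return nil }
    func (c ReserveStock) Undo(r *Inventory) error { r.Release(c.ItemID, c.Qty); return nil }

    func main() {
        inv := &Inventory{stock: map[string]int{"sku-1": 10}}
        var cmd Command = ReserveStock{ItemID: "sku-1", Qty: 3}
        _ = cmd.Exec(inv) // apply the step
        _ = cmd.Undo(inv) // compensate, e.g. when a later Saga step fails
    }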
On the performance side, sending a Command involves minimal overhead: only its type and data need to be transmitted. In benchmarks focused on raw throughput (1, 2, 4, 8, and 16 clients in a simple request/response scenario), cmd-stream with MUS serialization is about 3x faster than gRPC/Protobuf, and cmd-stream with Protobuf is about 2.8x faster; MUS is a serialization format optimized for low byte usage. This kind of speedup can make a real difference in high-throughput systems, or when you're trying to squeeze more out of limited resources.
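To make "only its type and data" concrete, a back-of-the-envelope frame for the ReserveStock command above could look like the sketch below. The layout and the type id are made up for illustration; this is not the real MUS or cmd-stream-go wire format:

    import "encoding/binary"

    // encodeReserveStock builds a hypothetical frame:
    // [1-byte command type][varint ItemID length + ItemID bytes][varint Qty].
    // For ReserveStock{"sku-1", 3} that is about 8 bytes, with no field
    // names or envelope metadata on the wire.
    func encodeReserveStock(c ReserveStock) []byte {
        buf := []byte{0x01} // made-up type id for ReserveStock
        buf = binary.AppendUvarint(buf, uint64(len(c.ItemID)))
        buf = append(buf, c.ItemID...)
        buf = binary.AppendVarint(buf, int64(c.Qty))
        return buf
    }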
By putting Commands at the transport layer, cmd-stream-go avoids the extra complexity of layering Command logic on top of generic RPC or REST.
The trade-offs: it’s currently Go-only and maintained by a single developer.
If you’re curious to explore more, you can check out the cmd-stream-go repository (https://github.com/cmd-stream/cmd-stream-go), see performance benchmarks (https://github.com/ymz-ncnk/go-client-server-benchmarks), or read the series of posts on Command Pattern and how it can be applied over the network (https://medium.com/p/f9e53442c85d).
I’d love to hear your thoughts, especially where you think this model could shine, any production concerns, and similar patterns or tools you’ve seen in practice.
Feel free to reach me as ymz-ncnk on the Gophers Slack or follow https://x.com/cmdstream_lib for project updates.