Show HN: Flyde 1.0 – Like n8n, but in your codebase
I'm excited to share Flyde 1.0, a big update to the open-source visual programming tool I launched here in March of last year (https://news.ycombinator.com/item?id=39628285).
Since Flyde’s launch, there's been a huge rise in demand for visual builders, especially for AI-heavy workflows. Visual programming shines with async and concurrency-heavy logic, which describes most LLM chains perfectly.
A few months ago, I tried to capitalize on this trend by launching a commercial version of Flyde called Flowcode (https://news.ycombinator.com/item?id=43830193). It didn't go well. I learned the hard way that Flyde’s strength wasn't just about flexibility or performance compared to tools like n8n. The real value was always how Flyde fits inside your existing codebase. The launch also helped me understand that there's still a big gap: no tool really covers the full lifecycle, from rapid prototyping to deep integration, evaluation, and iteration inside your own projects.
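To make the "fits inside your codebase" point concrete: a flow is just a file in your repo that you run from regular TypeScript. Here's a minimal sketch, assuming a README-style loadFlow API from @flyde/runtime and a made-up greet.flyde flow with a "name" input and a "greeting" output - check the docs for the exact 1.0 shape:

    // Minimal sketch: running a visual flow from regular TypeScript code.
    // Assumes a "greet.flyde" flow with a "name" input and a "greeting" output
    // (both invented for this example) and a loadFlow-style runtime API.
    import { loadFlow } from "@flyde/runtime";

    const executeGreet = loadFlow("greet.flyde");

    async function main() {
      // Inputs go in, the flow's outputs come back as a promise.
      const { result } = executeGreet({ name: "HN" });
      const { greeting } = await result;
      console.log(greeting);
    }

    main();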
So, over the last few months, I worked hard to polish Flyde:

- Cleaned up and simplified the nodes API (rough sketch of a code node below)
- Made it possible to fork any node for maximum flexibility
- Launched a new online playground for quick experimenting and sharing (https://www.flyde.dev/playground)
- Created a new CLI tool to speed up development and setup
- Fixed a ton of bugs
- Simplified the UI/UX to make it smoother and less confusing
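For reference, a custom code node stays a plain TypeScript object, roughly along these lines (a simplified sketch based on the documented CodeNode shape from @flyde/core; the "Add" node is purely illustrative and the exact fields may differ in the simplified 1.0 API):

    // Illustrative "Add" code node, sketched against the CodeNode shape
    // from @flyde/core; treat the exact field names as approximate for 1.0.
    import { CodeNode } from "@flyde/core";

    export const Add: CodeNode = {
      id: "Add",
      description: "Emits the sum of two numbers",
      inputs: {
        n1: { description: "First number" },
        n2: { description: "Second number" },
      },
      outputs: {
        sum: { description: "n1 + n2" },
      },
      // run gets the resolved input values plus emitter-style outputs.
      run: ({ n1, n2 }, { sum }) => {
        sum.next(n1 + n2);
      },
    };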
There’s still a lot missing: better templates, docs, and nodes. But I think it’s finally stable and useful enough to give it another shot.
My plan is to first make sure Flyde is usable and valuable as an open-source project, and then try to provide additional value via “Flyde Studio” - a SaaS that will help non-engineers iterate on Flyde flows from a web app, with changes landing as a PR in the host repo.
I'd really love some honest feedback, and to hear whether Flyde resonates with an existing pain or problem you have.
Check it out here:
Playground: https://www.flyde.dev/playground
GitHub: https://github.com/flydelabs/flyde
Looking forward to hearing your thoughts! - Gabriel