Show HN: I built a biological network visualization tool
Hacker News (score: 16)

Description
The tech stack combines modern frontend technologies with robust backend architecture. The frontend uses Next.js 14 with TypeScript and Cytoscape.js for the visualization engine. The backend is built with FastAPI and Python.
The featured demo showcases a Traumatic Brain Injury Nasal Spray mechanism of action visualization, demonstrating the tool's capability to handle complex biological pathway mapping.
You can explore the live demo at <https://nodes.bio> to see the TBI Nasal Spray visualization in action, along with other biological network examples.
I'd love feedback on the visualization capabilities or any suggestions for biological data integration. What do you think?
More from Hacker News
Show HN: An open source access logs analytics script to block bot attacks
This is a small PoC Python project for analyzing web server access logs to classify and dynamically block bad bots, such as L7 (application-level) DDoS bots, web scrapers, and so on.

We'd be happy to gather initial feedback on usability and features, especially from people with good or bad experience with bots.

*Requirements*

The analyzer relies on 3 Tempesta FW-specific features, which you can still get with other HTTP servers or accelerators:

1. JA5 client fingerprinting (https://tempesta-tech.com/knowledge-base/Traffic-Filtering-by-Fingerprints/). This is HTTP- and TLS-layer fingerprinting, similar to the JA4 (https://blog.foxio.io/ja4%2B-network-fingerprinting) and JA3 fingerprints. The latter is also available in Envoy (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/listener/tls_inspector/v3/tls_inspector.proto.html) or as an Nginx module (https://github.com/fooinha/nginx-ssl-ja3), so check the documentation for your web server.

2. Access logs are written directly to the ClickHouse analytics database, which can consume large data batches and quickly run analytic queries. For web proxies other than Tempesta FW, you typically need to build a custom pipeline to load access logs into ClickHouse. Such pipelines aren't so rare, though.

3. The ability to block web clients by IP or JA5 hashes. IP blocking is probably available in any HTTP proxy.

*How does it work*

This is a daemon, which:

1. Learns normal traffic profiles: means and standard deviations for client requests per second, error responses, bytes per second, and so on. It also remembers client IPs and fingerprints.

2. Watches for a spike in the z-score (https://en.wikipedia.org/wiki/Standard_score) of these traffic characteristics; it can also be triggered manually. It then goes into data model search mode.

3. For example, the first model could be the top 100 JA5 HTTP hashes producing the most error responses per second (typical for password crackers). Or it could be the top 1000 IP addresses generating the most requests per second (L7 DDoS). Next, this model is verified.

4. The daemon repeats the query over a long enough window in the past, to see whether a large fraction of clients appears in both query results. If yes, the model is bad and we go back to the previous step to try another one. If not, then we have (likely) found a representative query.

5. Transfers the IP addresses or JA5 hashes from the query results into the web proxy's blocking configuration and reloads the proxy configuration on the fly.
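A minimal sketch of the z-score spike check described in step 2, assuming a simple requests-per-second counter (the numbers and threshold here are illustrative; the real daemon runs analytic queries against ClickHouse):

```python
from statistics import mean, stdev

def zscore(history, current):
    """How many standard deviations `current` sits above the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (current - mu) / sigma

# Learned baseline: requests per second over a training window.
baseline_rps = [120, 130, 118, 125, 122, 128, 119, 131]

# A spike far beyond the baseline would trigger data model search mode.
if zscore(baseline_rps, 900) > 3.0:
    print("anomaly: entering data model search mode")
```

The same check generalizes to the other learned characteristics (error responses per second, bytes per second) by keeping one baseline series per metric.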
Show HN: Gitcasso – Syntax Highlighting and Draft Recovery for GitHub Comments
I built a browser extension called Gitcasso which:

- Adds markdown syntax highlighting to GitHub textareas
- Lists every open PR/issue tab and any drafts
- (Optional, unimplemented) autosaves your comment drafts so you don't lose work

I made it because I was impressed by https://overtype.dev/ (a markdown textarea syntax highlighter), which went big here on HN a few weeks ago, and it seemed like a perfect fit for a GitHub browser extension. Keeping up with changes on upstream GitHub would normally be a pain, but with Playwright and Claude Code it seemed possible for it to be nearly automatic, which has turned out to be mostly true!

This was the first time I built a tool, gave the tool to AI, and then AI used the tool to make the thing I hoped it would be able to make. I'm pretty sold on the general technique...

GitHub repo (Apache2-licensed, open source): https://github.com/diffplug/gitcasso

Video walkthrough (2 mins of the tool, 12 mins of its development tooling): https://www.youtube.com/watch?v=wm7fVg4DWqk

And a text writeup with timestamps into the video walkthrough: https://nedshed.dev/p/meet-gitcasso
Automated code reviews via mutation testing
Show HN: Open-source AI data generator (now hosted)
Hey HN! A few months ago we shared our AI dataset generator as an open source repo, and the response was incredible (https://news.ycombinator.com/item?id=44388093). We got requests from folks who wanted to use it without the hosting overhead, so we created both options: a hosted version (https://www.metabase.com/ai-data-generator) for instant use, and the source code fully open (https://github.com/metabase/dataset-generator) for anyone who wants to self-host or contribute.

Looking forward to seeing how you use it and what you build on top of it!

Bonus: The repo now supports multi-provider LLM integration with LiteLLM, thanks to a great contribution from their team.
Row-level transformations in Postgres CDC using Lua
Show HN: Ggc – A Git CLI tool written in Go with interactive UI
A while ago I shared an early version of ggc, a Git helper I built in Go. Since then the project has grown quite a bit, and I'd love to share the latest updates (v6.0).

Repo: https://github.com/bmf-san/ggc

Install:
- Homebrew (macOS/Linux): `brew install ggc`
- Go: `go install github.com/bmf-san/ggc/v6@latest`
- Or grab binaries: https://github.com/bmf-san/ggc/releases

Features:
- Dual modes: traditional CLI commands (`ggc add`, etc.) and interactive mode (launch with just `ggc`)
- Intuitive command structure: simplified interface for common Git operations
- Incremental search UI: quickly find and execute commands with real-time filtering
- Fast and lightweight: implemented in Go with minimal dependencies
- Shell completions: included for Bash, Zsh, and Fish
- Custom aliases: chain multiple commands with user-defined aliases
- Cross-platform: works on macOS, Linux, and Windows

Technical details:
- Built with the Go standard library and minimal external packages
- Supports 50+ Git operations (add, commit, branch, pull, etc.)

I'd appreciate any feedback or contributions!
PyPI Blog: Token Exfiltration Campaign via GitHub Actions Workflows
Show HN: Pyproc – Call Python from Go Without CGO or Microservices
Hi HN! I built *pyproc* to let Go services call Python like a local function: *no CGO and no separate microservice*. It runs a pool of Python worker processes and talks over *Unix Domain Sockets* on the same host/pod, so you get low overhead, process isolation, and parallelism beyond the GIL.

*Why this exists*

- Keep your Go service; reuse Python/NumPy/pandas/PyTorch/scikit-learn.
- Avoid network hops, service discovery, and the ops burden of a separate Python service.

*Quick try (~5 minutes)*

Go (app):

```
go get github.com/YuminosukeSato/pyproc@latest
```

Python (worker):

```
pip install pyproc-worker
```

Minimal worker (Python):

```
from pyproc_worker import expose, run_worker

@expose
def predict(req):
    return {"result": req["value"] * 2}

if __name__ == "__main__":
    run_worker()
```

Call from Go:

```
package main

import (
    "context"
    "fmt"

    "github.com/YuminosukeSato/pyproc/pkg/pyproc"
)

func main() {
    pool, _ := pyproc.NewPool(pyproc.PoolOptions{
        Config:       pyproc.PoolConfig{Workers: 4, MaxInFlight: 10},
        WorkerConfig: pyproc.WorkerConfig{SocketPath: "/tmp/pyproc.sock", PythonExec: "python3", WorkerScript: "worker.py"},
    }, nil)
    _ = pool.Start(context.Background())
    defer pool.Shutdown(context.Background())

    var out map[string]any
    _ = pool.Call(context.Background(), "predict", map[string]any{"value": 42}, &out)
    fmt.Println(out["result"]) // 84
}
```

*Scope / limits*

- Same-host/pod only (UDS). Linux/macOS supported; Windows named pipes not yet.
- Best for request/response payloads ≲ ~100 KB JSON; GPU orchestration and cross-host serving are out of scope.

*Benchmarks (indicative)*

- Local M1, simple JSON: ~45µs p50 and ~200k req/s with 8 workers. Your numbers will vary.

*What's included*

- Pure Go client (no CGO), Python worker lib, pool, health checks, graceful restarts, and examples.

*Docs & code*

- README, design/ops/security docs, pkg.go.dev: https://github.com/YuminosukeSato/pyproc

*License*

- Apache-2.0. Current release: v0.2.x.

*Feedback welcome*

- API ergonomics, failure modes under load, and priorities for codecs/transports (e.g., Arrow IPC, gRPC-over-UDS).

(Source for details: project README and docs.)
Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform
Hello everyone!

I've been building an open source ecommerce framework for Angular called Daffodil. I think Daffodil is really cool because it allows you to connect to any arbitrary ecommerce platform. I've been hacking away at it slowly (for 7 years now) as I've had time, and it's finally feeling "ready". I would love feedback from anyone who's spent any time in ecommerce (especially as a frontend developer).

For those who are not JavaScript ecosystem devs, here's a demo of the concept: https://demo.daff.io/

For those who are familiar with Angular, you can just run the following from a new Angular app (use Angular 19; we're working on support for Angular 20!) to get the exact same result as the demo above:

```bash
ng add @daffodil/commerce
```

I'm trying to solve two distinct challenges:

First, I absolutely hate having to learn a new ecommerce platform. We have drivers for printers, mice, keyboards, microphones, and many other physical widgets in the operating system; why not have them for ecommerce software? It's not that I hate the existing platforms, their UIs, or their APIs; it's that every platform repeats the same concepts, and I always have to learn some newfangled way of doing the same thing. I've long wished for these platforms to act more like operating systems on the Web than like custom-built software. Ideally, I would like to call them through a standard interface and forget about their existence beyond that.

Second, I'd like to keep it simple to start. I'd like to (on day 1) not have to set up any additional software beyond the core frontend stack (essentially yarn/npm + Angular). All too often, I'm forced to set up docker-compose, Kubernetes, pay for a SaaS, wait for IT at the merchant to get me access, or run a VM somewhere just to build some UI for an ecommerce platform that a company uses. More often than not, I just want to start up a little local HTTP server and start writing.

I currently have support for Magento/MageOS/Adobe Commerce, partial support for Shopify, and I recently wrote a product driver for Medusa: https://github.com/graycoreio/daffodil/pull/3939

Finally, if you're thinking "this isn't performant, can't you just do all of this with GraphQL on the server", you're exactly correct! That's where I'd like to get to eventually, but that's a "yet another tool" barrier to "getting started" that I'd like developers to be able to do without for as long as I can in the development cycle. I'm aiming to eventually ship the same "driver" code that we run in the browser in a GraphQL server, with just another driver (albeit much simpler than all the others) that uses the native GraphQL format.

Any suggestions for drivers and platforms are welcome, though I can't promise I will implement them. :)
Show HN: Haystack – Review pull requests like you wrote them yourself
Hi HN!

We're Akshay and Jake. We put together a tool called Haystack to make pull requests straightforward to read.

What Haystack does:

- Builds a clear narrative. Changes in Haystack aren't just arranged as unordered diffs. Instead, they unfold in a logical order, each paired with an explanation in plain, precise language.
- Focuses attention where it counts. Routine plumbing and refactors are put into skimmable sections so you can spend your time on design and correctness.
- Provides full cross-file context. Every new or changed function/variable is traced across the codebase, showing how it's used beyond the immediate diff.

Here's a quick demo: https://youtu.be/w5Lq5wBUS-I

If you'd like to give it a spin, head over to haystackeditor.com/review! We set up some demo PRs that you should be able to understand and review even if you've never seen the repos before!

We used to work at big companies, where reviewing non-trivial pull requests felt like reading a book with its pages out of order. We would jump and scroll between files, trying to piece together the author's intent before we could even start reviewing. And, as authors, we would spend time restructuring our own commits just to make them readable. AI has made this even trickier: today it's not uncommon for a pull request to contain code the author doesn't fully understand themselves!

So we built Haystack to help reviewers spend less time untangling code and more time giving meaningful feedback. We would love to hear whether it gets the job done for you!

How we got here:

Haystack began as (yet another) VS Code fork where we experimented with visualizing code changes on a canvas. At first, it was a neat way to show how pieces of code worked together. But customers started laying out their entire codebase just to make sense of it. That's when we realized the deeper problem: understanding a codebase is hard, and engineers need better ways to quickly understand unfamiliar code.

As we kept building, another insight emerged: with AI woven into workflows, engineers don't always need to master every corner of a codebase to ship features. But in code review, deep and continuous context still matters, especially to separate what's important to review from plumbing and follow-on changes.

So we pivoted. We took what we'd learned and worked closely with engineers to refine the idea. We started with simple code analysis (using language servers, tree-sitter, etc.) to show how changes relate. Then we added AI to explain and organize those changes and to trace how data moves through a pull request. Finally, we fused the two by empowering AI agents to use static analyses. Step by step, that became the Haystack we're showing today.

We'd love to hear your thoughts, feedback, or suggestions!
IRHash: Efficient Multi-Language Compiler Caching by IR-Level Hashing
Show HN: Stagewise – frontend coding agent for real codebases
Hey HN, we're Glenn and Julian, and we're building stagewise (https://stagewise.io), a frontend coding agent that runs inside your app's dev mode and makes changes in your local codebase.

We're compatible with any framework and any component library. Think of it like a v0 or Lovable that works locally and with any existing codebase.

You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent then lets you click on HTML elements in your app and enter prompts like "increase the height here", and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development.

The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ...) and went viral on X after we open-sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Since our last Show HN (https://news.ycombinator.com/item?id=44798553), we launched a few very important features and changes: you now have a proprietary chat history with the agent, an undo button to revert changes, and we increased the amount of free credits AND reduced the pricing by 50%. We made a video about all these changes, showing you how stagewise works: https://x.com/goetzejulian/status/1959835222712955140/video/1

So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console (https://console.stagewise.io).

If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise`. The agent should appear, ready to play with.

We're very excited to hear your feedback!
Show HN: Typed-arrow – compile‑time Arrow schemas for Rust
Hi community, we just released https://github.com/tonbo-io/typed-arrow.

When working with arrow-rs, we noticed that schemas are declared at runtime. This often leads to runtime errors and makes development less safe.

typed-arrow takes a different approach:

- Schemas are declared at compile time with Rust's type system.
- This eliminates runtime schema errors.
- And it introduces no runtime overhead: everything is checked and generated by the compiler.

If you've run into Arrow runtime schema issues, and your schema is stable (not defined or switched at runtime), this project might be useful.
Show HN: ASCII Tree Editor
I've created a web-based editor for ASCII file directory trees called asciitreeman. It's designed to make it easier to edit and reorganize the output of the `tree` command.

You can try it out here: https://reorx.github.io/asciitreeman/

And the source code is on GitHub: https://github.com/reorx/asciitreeman

Some of the key features include visual tree editing with drag-and-drop-like operations, real-time sync where changes are immediately reflected in the ASCII output, keyboard shortcuts for navigation (J/K or arrow keys), and auto-saving your work to local storage.

What's interesting is that I used Claude Code to "vibe-code" this project in a very short amount of time. It was a fun experiment in AI-assisted development. For those curious about the process, I've included the prompts and specifications I used in the source code. You can check them out in the spec.md and CLAUDE.md files in the repository.

Hope you find it useful!
Show HN: Real-time privacy protection for smart glasses
I built a live video privacy filter that helps smart glasses app developers handle privacy automatically.

How it works: You replace a raw camera feed with the filtered stream in your app. The filter processes a live video stream, applies privacy protections, and outputs a privacy-compliant stream in real time. You can use this processed stream for AI apps, social apps, or anything else.

Features: Currently, the filter blurs all faces except those who have given consent. Consent can be granted verbally by saying something like "I consent to be captured" to the camera. I'll be adding more features, such as detecting and redacting other private information, speech anonymization, and automatic video shut-off in certain locations or situations.

Why I built it: While developing an always-on AI assistant/memory for glasses, I realized privacy concerns would be a critical problem for both bystanders and the wearer. Addressing this involves complex issues like GDPR, CCPA, data deletion requests, and consent management, so I built this privacy layer first for myself and other developers.

Reference app: There's a sample app (./examples/rewind/) that uses the filter. The demo video is in the README; please check it out! The app shows the current camera stream and past recordings, both privacy-protected, and will include AI features using the recordings.

Tech: Runs offline on a laptop. Built with FFmpeg (stream decode/encode), OpenCV (face recognition/blurring), Faster Whisper (voice transcription), and Phi-3.1 Mini (LLM for transcription analysis).

I'd love feedback and ideas for tackling the privacy challenges in wearable camera apps!
China Dominates 44% of Visible Fishing Activity Worldwide