🛠️ All DevTools
Showing 1–20 of 4092 tools
Last Updated
April 09, 2026 at 04:00 PM
Launch HN: Relvy (YC F24) – On-call runbooks, automated
Hacker News (score: 20)[DevOps] Launch HN: Relvy (YC F24) – On-call runbooks, automated Hey HN! We are Bharath and Simranjit from Relvy AI (<a href="https://www.relvy.ai">https://www.relvy.ai</a>). Relvy automates on-call runbooks for software engineering teams. It is an AI agent equipped with tools that can analyze telemetry data and code at scale, helping teams debug and resolve production issues in minutes. Here’s a video: <a href="https://www.youtube.com/watch?v=BXr4_XlWXc0" rel="nofollow">https://www.youtube.com/watch?v=BXr4_XlWXc0</a><p>A lot of teams are using AI in some form to reduce their on-call burden. You may be pasting logs into Cursor, or using Claude Code with Datadog’s MCP server to help debug. What we’ve seen is that autonomous root cause analysis is a hard problem for AI. This shows up in benchmarks: Claude Opus 4.6 currently scores 36% accuracy on the OpenRCA dataset, in stark contrast to its performance on coding tasks.<p>There are three main reasons for this: (1) telemetry data volume can drown the model in noise; (2) data interpretation and reasoning depend on enterprise-specific context; (3) on-call is a time-constrained, high-stakes problem, with little room for the AI to explore during an investigation. Errors that send the user down the wrong path are not easily forgiven.<p>At Relvy, we are tackling these problems by building specialized tools for telemetry data analysis. Our tools can detect anomalies and identify problem slices in dense time-series data, search log patterns, and reason about span trees, all without overwhelming the agent’s context.<p>Anchoring the agent around runbooks leads to less agentic exploration and more deterministic steps that reflect the most useful steps an experienced engineer would take.
That results in faster analysis and less cognitive load on engineers when they review and verify what the AI did.<p>How it works: Relvy is installed on a local machine via docker-compose (or via Helm charts, or by signing up on our cloud). Connect your stack (observability and code), create your first runbook, and have Relvy investigate a recent alert.<p>Each investigation is presented as a notebook in our web UI, with data visualizations that help engineers verify the results and build trust in the AI. From there on, Relvy can be configured to respond automatically to alerts from Slack.<p>Some example runbook steps that Relvy automates: - Check so-and-so dashboard and see if the errors are isolated to a specific shard. - Check whether there’s a throughput surge on the APM page, and if so, whether it comes from a few IPs. - Check recent commits to see if anything changed for this endpoint.<p>You can also configure AWS CLI commands that Relvy can run to automate mitigation actions, with human approval.<p>A bit about us: we did YC back in fall 2024. We started our journey experimenting with continuous log monitoring with small language models, but that was too slow. We then invested deeply in solving root cause analysis effectively, and our product today is the result of about a year of work with our early customers.<p>Give us a try today. We’re happy to hear feedback, or about how you are tackling on-call burden at your company. Appreciate any comments or suggestions!
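The "throughput surge" runbook step above boils down to simple anomaly detection on a time series. Here is a minimal sketch of that kind of check, not Relvy's actual implementation; `surge_points`, the window, and the threshold are all hypothetical:

```python
from statistics import mean, stdev

def surge_points(series, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the preceding `window` samples -- the kind of
    "is there a throughput surge?" check a runbook step encodes."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: nothing to compare against
        if abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

rps = [100, 102, 98, 101, 99, 100, 450, 103]
print(surge_points(rps))  # → [6] (the 450 spike)
```

A real agent tool would run a check like this over APM data and hand the model only the flagged slices, rather than the raw series.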
Show HN: CSS Studio. Design by hand, code by agent
Hacker News (score: 79)[Other] Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs in your browser, and sends updates to your existing AI agent, which edits any codebase. You can play around with the latest version directly on the site.<p>Technically, it works like this: you view your site in dev mode and start editing it. In your agent, you run /studio, which then polls an MCP server (or uses Claude Channels). Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill includes instructions on how best to implement them.<p>It contains a lot of the tools you'd expect from a visual editing tool: text editing, style controls, and an animation timeline editor.
[Other] Show HN: I built a local data lake for AI-powered data engineering and analytics I got tired of the overhead required to run even a simple data analysis (cloud setup, ETL pipelines, orchestration, cost monitoring), so I built a fully local data stack/IDE where I can write SQL/Python, run it, see results, and iterate quickly and interactively.<p>You get a data-lake-like catalog, zero-ETL, lineage, versioning, and analytics running entirely on your machine. You can import from a database, webpage, CSV, etc. and query in natural language, or do your own work in SQL/PySpark. Connect to local models like Gemma or cloud LLMs like Claude for querying and analysis. You don’t have to set up local LLMs; support comes built in.<p>This is completely free. No cloud account required.<p>Download the software - <a href="https://getnile.ai/downloads" rel="nofollow">https://getnile.ai/downloads</a><p>Watch a demo - <a href="https://www.youtube.com/watch?v=C6qSFLylryk" rel="nofollow">https://www.youtube.com/watch?v=C6qSFLylryk</a><p>Check the code repo - <a href="https://github.com/NileData/local" rel="nofollow">https://github.com/NileData/local</a><p>This is still early, and I'd genuinely love your feedback on what's broken, what's missing, and whether you find this useful for your data and analytics work.
USB for Software Developers: An introduction to writing userspace USB drivers
Hacker News (score: 18)[Other] USB for Software Developers: An introduction to writing userspace USB drivers
Expanding Swift's IDE Support
Hacker News (score: 67)[IDE/Editor] Expanding Swift's IDE Support
Show HN: 500k+ events/sec transformations for ClickHouse ingestion
Show HN (score: 5)[Other] Show HN: 500k+ events/sec transformations for ClickHouse ingestion Hi HN! We are Ashish and Armend, founders of GlassFlow.<p>Over the last year, we worked with teams running high-throughput pipelines into self-hosted ClickHouse, mostly for observability and real-time analytics.<p>A question that came up repeatedly: what happens when throughput grows?<p>Usually, things work fine at 10k events/sec, but we started seeing backpressure and errors above 100k.<p>When throughput per pipeline stops scaling, adding more CPU/memory doesn’t help, because parts of the pipeline are often not parallelized or are bottlenecked by state handling.<p>At this point, engineers usually scale by adding more pipeline instances.<p>That works but comes with trade-offs: - You have to split the workload (e.g., multiple pipelines reading from the same source) - Transformation logic gets duplicated across pipelines - Stateful logic becomes harder to manage and keep consistent - Debugging and changes get more difficult because the data flow is fragmented<p>Another challenge arises with high-cardinality keys like user IDs, session IDs, or request IDs, and with longer time windows (24h or more). State grows quickly, and many systems rely on in-memory state, which makes it expensive and harder to recover from failures.<p>We wanted to solve this problem and rebuilt our approach at GlassFlow.<p>Instead of scaling by adding more pipelines, we scale within a single pipeline by using replicas.
Each replica consumes, processes, and writes independently, and the workload is distributed across them.<p>In the benchmarks we’re sharing, this scales to 500k+ events/sec while still running stateful transformations and writing into ClickHouse.<p>A few things we think are interesting: - Scaling is close to linear as you add replicas - Works with stateful transformations (not just stateless ingestion) - State is backed by a file-based KV store instead of relying purely on memory - The ClickHouse sink is optimized for batching to avoid small inserts - The product is built with Go<p>Full write-up + benchmarks: <a href="https://www.glassflow.dev/blog/glassflow-now-scales-to-500k-events-per-sec" rel="nofollow">https://www.glassflow.dev/blog/glassflow-now-scales-to-500k-...</a><p>Repo: <a href="https://github.com/glassflow/clickhouse-etl" rel="nofollow">https://github.com/glassflow/clickhouse-etl</a><p>Happy to answer questions about the design or trade-offs.
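The replica idea is easy to sketch: hash-partition events by key so per-key state stays local to one worker, and buffer rows so the sink issues a few large inserts instead of many small ones. This is a minimal illustration of the pattern, not GlassFlow's Go implementation; the class and function names are hypothetical, and the sink callable stands in for a batched ClickHouse insert:

```python
import hashlib

class Replica:
    """One pipeline replica: buffers rows and flushes them in large
    batches, mimicking a batched ClickHouse insert."""
    def __init__(self, batch_size, sink):
        self.batch_size = batch_size
        self.buffer = []
        self.sink = sink  # callable that receives one batch of rows

    def ingest(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(list(self.buffer))
            self.buffer.clear()

def route(key, n_replicas):
    """Stable hash partitioning: the same key always lands on the same
    replica, so stateful logic for that key never crosses workers."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % n_replicas

batches = []  # stand-in sink: collect every flushed batch
replicas = [Replica(batch_size=3, sink=batches.append) for _ in range(2)]
for i in range(10):
    row = {"user_id": f"u{i % 4}", "value": i}
    replicas[route(row["user_id"], 2)].ingest(row)
for r in replicas:
    r.flush()  # drain partial batches at the end
print(sum(len(b) for b in batches))  # → 10 (all rows arrive, in few inserts)
```

Adding a replica adds an independent consume/process/write loop, which is why the scaling in the benchmarks stays close to linear.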
Show HN: TUI-use: Let AI agents control interactive terminal programs
Hacker News (score: 25)[Other] Show HN: TUI-use: Let AI agents control interactive terminal programs
Show HN: BAREmail ʕ·ᴥ·ʔ – minimalist Gmail client for bad WiFi
Hacker News (score: 39)[Other] Show HN: BAREmail ʕ·ᴥ·ʔ – minimalist Gmail client for bad WiFi I've been frustrated one too many times by terrible airplane WiFi and not being able to load Gmail or Superhuman when all I want to do is get a few simple text-only emails out the door.<p>These clients have become pretty bloated, with the assumption that you've always got great bandwidth.<p>So I vibe coded BAREmail. It's open source, has no backend, and you can just set it up for yourself. It takes ~3 mins to set up API access via Google Cloud Platform (thanks for making this not super easy, Google!)<p>I tried to maintain nice design and some important keyboard shortcuts without getting too overBEARing.
Show HN: We fingerprinted 178 AI models' writing styles and similarity clusters
Hacker News (score: 55)[Other] Show HN: We fingerprinted 178 AI models' writing styles and similarity clusters We have a dataset of 3,095 standardized AI responses across 43 prompts. From each response, we extract a 32-dimension stylometric fingerprint (lexical richness, sentence structure, punctuation habits, formatting patterns, discourse markers).<p>Some findings:<p>- 9 clone clusters (>90% cosine similarity on z-normalized feature vectors) - Mistral Large 2 and Large 3 2512 score 84.8% on a composite metric combining 5 independent signals - Gemini 2.5 Flash Lite writes 78% like Claude 3 Opus. Costs 185x less - Meta has the strongest provider "house style" (37.5x distinctiveness ratio) - "Satirical fake news" is the prompt that causes the most writing convergence across all models - "Count letters" causes the most divergence<p>The composite clone score combines: prompt-controlled head-to-head similarity, per-feature Pearson correlation across challenges, response length correlation, cross-prompt consistency, and aggregate cosine similarity.<p>Tech: stylometric extraction in Node.js, z-score normalization, cosine similarity for aggregate, Pearson correlation for per-feature tracking. Analysis script is ~1400 lines.
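The core of the clone detection described above, z-normalizing each feature column and thresholding cosine similarity at 90%, fits in a few lines. This sketch uses hypothetical 4-dimensional fingerprints rather than the real 32-dimension extraction, and the model names are made up:

```python
import math

def z_normalize(vectors):
    """Z-score each feature column across models so no single feature's
    scale dominates the cosine similarity."""
    dims, n = len(vectors[0]), len(vectors)
    means = [sum(v[d] for v in vectors) / n for d in range(dims)]
    stds = [(sum((v[d] - means[d]) ** 2 for v in vectors) / n) ** 0.5 or 1.0
            for d in range(dims)]
    return [[(v[d] - means[d]) / stds[d] for d in range(dims)] for v in vectors]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical fingerprints: e.g. avg sentence length, type-token ratio,
# commas per sentence, list-usage rate.
fingerprints = {
    "model_a": [12.0, 0.31, 4.2, 0.9],
    "model_b": [12.1, 0.30, 4.1, 0.9],   # near-clone of model_a
    "model_c": [25.0, 0.10, 9.8, 0.2],
}
names = list(fingerprints)
normed = dict(zip(names, z_normalize(list(fingerprints.values()))))
clones = [(a, b)
          for i, a in enumerate(names)
          for b in names[i + 1:]
          if cosine(normed[a], normed[b]) > 0.90]
print(clones)  # → [('model_a', 'model_b')]
```

The dataset's composite clone score layers prompt-controlled comparisons and per-feature Pearson correlation on top of this aggregate cosine signal.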
Show HN: Skrun – Deploy any agent skill as an API
Hacker News (score: 39)[API/SDK] Show HN: Skrun – Deploy any agent skill as an API
MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU
Hacker News (score: 306)[Other] MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU
newton-physics/newton
GitHub Trending[Other] An open-source, GPU-accelerated physics simulation engine built upon NVIDIA Warp, specifically targeting roboticists and simulation researchers.
The Git Commands I Run Before Reading Any Code
Hacker News (score: 278)[Other] The Git Commands I Run Before Reading Any Code
We moved Railway's frontend off Next.js. Builds went from 10+ mins to under two
Hacker News (score: 14)[Other] We moved Railway's frontend off Next.js. Builds went from 10+ mins to under two
Xilem – An experimental Rust native UI framework
Hacker News (score: 45)[Other] Xilem – An experimental Rust native UI framework
Show HN: Mo – checks GitHub PRs against decisions approved in Slack
Show HN (score: 7)[Other] Show HN: Mo – checks GitHub PRs against decisions approved in Slack Built this after a recurring frustration at our agency: the team would agree on something in Slack ("only admins can export users"), someone would open a PR two weeks later that quietly broke it, and nobody caught it until QA or after deploy.<p>Mo watches a Slack channel for decisions. When someone tags @mo to approve something, it stores it. When a PR opens, Mo checks the diff against the approved decisions and flags conflicts before merge.<p>It doesn't review code quality. It only cares whether the code matches what the team actually agreed to.<p>Would love feedback, especially from anyone who's been burned by this exact problem.<p>Try it here: <a href="https://hey-mo.io" rel="nofollow">https://hey-mo.io</a>
S3 Files
Hacker News (score: 164)[Other] S3 Files <a href="https://aws.amazon.com/blogs/aws/launching-s3-files-making-s3-buckets-accessible-as-file-systems/" rel="nofollow">https://aws.amazon.com/blogs/aws/launching-s3-files-making-s...</a>
Show HN: Gemma 4 Multimodal Fine-Tuner for Apple Silicon
Hacker News (score: 11)[Other] Show HN: Gemma 4 Multimodal Fine-Tuner for Apple Silicon About six months ago, I started working on a project to fine-tune Whisper locally on my M2 Ultra Mac Studio with a limited compute budget. I got into it. The problem at the time was that I had 15,000 hours of audio data in Google Cloud Storage, and there was no way I could fit all that audio onto my local machine, so I built a system to stream data from GCS to my machine during training.<p>Gemma 3n came out, so I added that. Kinda went nuts, tbh.<p>Then I put it on the shelf.<p>When Gemma 4 came out a few days ago, I dusted it off, cleaned it up, broke the Gemma part out from the Whisper fine-tuning, and added support for Gemma 4.<p>I'm presenting it here today for you to play with, fork, and improve upon.<p>One thing I have learned so far: it's very easy to OOM when you fine-tune on longer sequences! My Mac Studio has 64GB RAM, so I run out of memory constantly.<p>Anywho, given how much interest there is in Gemma 4, and frankly the fact that you can't really do audio fine-tuning with MLX, that's really why this exists (in addition to my personal interest). I would have preferred to use MLX and not have had to make this, but here we are. Welcome to my little side quest.<p>I hope you have as much fun using it as I had fun making it.<p>-Matt
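The stream-from-GCS trick amounts to reading fixed-size chunks through a file-like handle instead of downloading whole files before training. A minimal sketch of the idea, with an in-memory buffer standing in for a remote blob; `open_blob` is a hypothetical callable, and the real version would open a google-cloud-storage blob instead:

```python
import io

def stream_chunks(open_blob, chunk_bytes=4096):
    """Yield fixed-size chunks from a blob without ever holding the
    whole file in memory -- the core of streaming training data from
    remote storage instead of downloading it first."""
    with open_blob() as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                return
            yield chunk

# Stand-in "remote" blob: 10 KiB of fake audio bytes.
fake_audio = b"\x00\x01" * (5 * 1024)
chunks = list(stream_chunks(lambda: io.BytesIO(fake_audio), chunk_bytes=4096))
print(len(chunks), max(len(c) for c in chunks))  # → 3 4096
```

A training loop would consume this generator batch by batch, so peak memory is bounded by the chunk size rather than the 15,000-hour corpus.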
Building a framework-agnostic Ruby gem (and making sure it doesn't break)
Hacker News (score: 17)[Other] Building a framework-agnostic Ruby gem (and making sure it doesn't break)
Tailslayer: Library for reducing tail latency in RAM reads
Hacker News (score: 42)[Other] Tailslayer: Library for reducing tail latency in RAM reads