Show HN: MCP Jetpack – The easiest way to get started with MCP in Cursor
Two problems we are trying to solve:
Friction - Normally, if you want to give Cursor access to GitHub, you have to install the right MCP server and log in before you can use GitHub with Cursor’s chat. With MCP Jetpack, you can ask Cursor to list your GitHub issues, and it will automatically execute the right tool behind the scenes to accomplish your task. For services that require authentication, you will be asked to log in the first time you interact with the service. However, it all happens within the Cursor chat, so you never have to context switch and fiddle with Cursor’s settings.
Tool Limits - Cursor warns you if you have more than 50 MCP tools installed, as having more will degrade performance. However, installing the GitHub MCP server alone adds 74 MCP tools. With MCP Jetpack, you get access to GitHub, Atlassian, and 15 other services with just two tools: “FindTool” and “ExecTool”.
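Conceptually, the two-meta-tool pattern looks something like the sketch below (illustrative only, not MCP Jetpack's actual implementation; the registry, tool names, and matching logic are invented for the example):

    # Sketch of the FindTool/ExecTool pattern: one registry fronts many
    # upstream tools, so the client only ever sees two tools.
    # (Illustrative; not MCP Jetpack's actual code.)
    from typing import Callable

    # Hypothetical upstream tools keyed by name.
    REGISTRY: dict[str, Callable[..., object]] = {
        "github.list_issues": lambda repo: f"issues for {repo}",
        "linear.create_ticket": lambda title: f"created: {title}",
    }

    def find_tool(query: str) -> list[str]:
        """FindTool: return the names of tools matching a query."""
        q = query.lower()
        return [name for name in REGISTRY if q in name]

    def exec_tool(name: str, **kwargs) -> object:
        """ExecTool: dispatch to the matched upstream tool."""
        return REGISTRY[name](**kwargs)

    print(find_tool("issues"))  # ['github.list_issues']
    print(exec_tool("github.list_issues", repo="octocat/hello"))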
Here are the 17 services we support today: GitHub, Atlassian, Canva, Linear, Notion, Intercom, Monday.com, Neon, PayPal, Hugging Face, Sentry, Square, Webflow, Wix, Cloudflare Docs, Cloudflare AI Gateway, Cloudflare Workers Bindings.
We’ll continue to add more services as companies launch remote MCP servers. If yours isn’t listed and you’d like it to be added, please email us at team@mcpjetpack.com.
MCP Jetpack is in alpha, so please let us know if you run into any problems or have any feedback - thanks!
More from Show HN
Show HN: We built an open-source alternative to expensive pair programming apps
My friend and I grew frustrated with the high cost of existing pair programming tools, and with the grainy screens we got when using Huddle or similar tools.

We believe core developer collaboration shouldn't be locked behind an expensive subscription.

So for the past year we spent our nights and weekends building Hopp, an open-source alternative.

We would love your feedback, and we are here to answer any and all questions.
Show HN: AI Agent in Jupyter – Runcell
I built runcell, an AI agent for JupyterLab. It can understand the context (data, charts, code, etc.) in your JupyterLab session and write code for you.

runcell has built-in tools that can edit or execute cells, read/write files, search the web, etc.

Compared with AI IDEs like Cursor, runcell focuses on building context for a code agent in the Jupyter environment, which means the agent can understand different types of information in a Jupyter notebook, access kernel state, and edit/execute specific cells instead of handling Jupyter as a static ipynb file.

Compared with Jupyter AI, runcell is more like an agent than a chatbot. It has access to lots of tools and can take actions on its own.

You can get started with a simple "pip install runcell".

Any comments and suggestions are welcome.
Show HN: I built AI that turns 4 hours of financial analysis into 30 seconds
I built Duebase AI to solve a problem I kept running into in fintech: analyzing UK company financial health takes forever. The process usually goes: download PDFs from Companies House → manually extract data to spreadsheets → calculate ratios → interpret trends. It takes 3-4 hours per company and requires serious financial expertise.

The technical challenge: Companies House filings are messy. Inconsistent formats, complex accounting structures, missing data, and you need to understand UK accounting standards to make sense of it all.

My approach:

- Parse 15M+ UK company records from the Companies House API
- Build ML models to extract and normalize financial data from varied filing formats
- Create scoring algorithms that weight liquidity, profitability, leverage, and growth trends
- Generate 1-5 health scores with explanations in plain English

What it does:

- Instant financial analysis of any UK company (30 seconds vs. 4 hours)
- Real-time monitoring with alerts for new filings/director changes
- Risk detection that catches declining trends early
- No financial background needed to understand results

The hardest part was handling the data inconsistencies: UK companies file in different formats, use various accounting frameworks, and often have incomplete information. I had to build a lot of data cleaning and normalization logic.

Currently I'm focused on the UK market since I know the regulatory landscape well, but the approach could work for other countries with similar public filing systems.

Link: https://duebase.com
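The post doesn't give the scoring formula, but a weighted composite mapped onto a 1-5 scale is one plausible shape. A minimal sketch (the weights, ratio definitions, and bucketing below are invented for illustration, not Duebase's actual model):

    # Illustrative weighted financial-health score (not Duebase's model;
    # weights and ratio choices are assumptions).
    RATIO_WEIGHTS = {
        "liquidity": 0.3,      # e.g. current assets / current liabilities
        "profitability": 0.3,  # e.g. net income / revenue
        "leverage": 0.2,       # e.g. 1 - (total debt / total assets)
        "growth": 0.2,         # e.g. year-over-year revenue change
    }

    def health_score(ratios: dict[str, float]) -> int:
        """Map normalized ratios (each scaled to 0..1) to a 1-5 score."""
        composite = sum(RATIO_WEIGHTS[k] * ratios[k] for k in RATIO_WEIGHTS)
        return 1 + round(composite * 4)  # 0..1 composite -> 1..5 bucket

    print(health_score({"liquidity": 0.8, "profitability": 0.6,
                        "leverage": 0.7, "growth": 0.4}))  # -> 4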
Show HN: Mixing Deterministic Codegen with LLM Codegen for Client SDKs
Hi HN, I’m Patrick. Elias, Kevin, and I are building Sideko (https://sideko.dev), a new type of code generator for building and maintaining API client SDKs from OpenAPI specs.

Our approach differs significantly from traditional SDK generators in that we use structured pattern-matching queries to create and update the code. Other SDK generators use templates, which overwrite custom changes and produce code that looks machine-generated.

We’ve mixed in LLM codegen by creating this workflow: run deterministic codegen to establish the SDK structure, then let LLMs enhance specific components where adaptability adds value, and include agent rules files that enforce consistency and correctness with type checking and integration tests against mock servers. The system retains the LLM edits, while the rest of the SDK is automatically maintained by the deterministic generator (keeping it in sync with the API). LLMs can edit most of the files (see the Python rules and TypeScript rules).

You can try it out from your terminal:

- Install: npm install -g @sideko/cli
- Login: sideko login
- Initialize: sideko sdk init
- Prompt: “Add a new function that…”

Check out the repo for more details: https://github.com/Sideko-Inc/sideko We’d love to hear your thoughts!
Show HN: Okapi – a metrics engine based on open data formats
Hi all, I wanted to share an early preview of Okapi, an in-memory metrics engine that also integrates with existing data lakes. Modern software systems produce a mammoth amount of telemetry. While we can discuss whether or not this is necessary, we can all agree that it happens.

Most metrics engines today use proprietary formats to store data and don’t use disaggregated storage and compute. Okapi changes that by leveraging open data formats and integrating with existing data lakes. This makes it possible to use standard OLAP tools like Snowflake, Databricks, DuckDB, or even Jupyter/Polars to run analysis workflows (such as anomaly detection) while avoiding vendor lock-in in two ways: you can bring your own workflows, and you have a swappable compute engine. Disaggregation also reduces the ops burden of maintaining your own storage, and the compute engine can be scaled up and down on demand.

Not all data can reside in a data lake/object store, though: this doesn’t work for recent data. To ease realtime queries, Okapi first writes all metrics data to an in-memory store, and reads on recent data are served from this store. Metrics are rolled up as they arrive, which helps ease memory pressure. Metrics are held in memory for a configurable retention period, after which they get shipped out to object storage/data lake (currently only Parquet export is supported). This allows fast reads on recent data while offloading query processing for older data. In benchmarks, queries on in-memory data finish in under a millisecond, with write throughput of ~280k samples per second. In a real deployment there’d be network delays, so YMMV.

Okapi is still early — feedback, critiques, and contributions welcome. Cheers!
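To make the hot path concrete, the ingest → in-memory rollup → Parquet export pattern described above might look roughly like this with pyarrow (a sketch under my own assumptions; the function names and 1-minute rollup granularity are not Okapi's actual code):

    # Sketch of the pattern Okapi describes: roll samples up in memory,
    # then ship aged data to Parquet. (Illustrative, not Okapi's code.)
    import time
    from collections import defaultdict
    import pyarrow as pa
    import pyarrow.parquet as pq

    # (metric, minute-bucket) -> rolled-up aggregate
    rollups = defaultdict(lambda: {"count": 0, "sum": 0.0})

    def ingest(metric: str, value: float, ts: float) -> None:
        """Roll samples up to 1-minute resolution as they arrive."""
        bucket = int(ts // 60) * 60
        agg = rollups[(metric, bucket)]
        agg["count"] += 1
        agg["sum"] += value

    def flush_to_parquet(path: str) -> None:
        """Ship rolled-up data to an open format for lake-side OLAP tools."""
        rows = [(m, b, a["count"], a["sum"]) for (m, b), a in rollups.items()]
        table = pa.table({
            "metric": [r[0] for r in rows],
            "bucket": [r[1] for r in rows],
            "count":  [r[2] for r in rows],
            "sum":    [r[3] for r in rows],
        })
        pq.write_table(table, path)
        rollups.clear()

    ingest("http_requests", 1.0, time.time())
    flush_to_parquet("metrics.parquet")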
Show HN: Code-snippets for developing eBPF Programs
When developing eBPF programs, we need to figure out the correct program section SEC() and program context.

Similarly, while creating eBPF maps, we need to add certain fields such as map type, key/value sizes, map options, etc.

If you’re like me, you probably end up digging through documentation or browsing open-source projects just to piece this together every time.

So I created a VS Code extension to help with these repetitive tasks.

Try it out and do share your feedback. I hope you like it. Thanks!
Show HN: I integrated Ollama into Excel to run local LLMs
I built an Excel add-in that connects to Ollama, so you can run local LLMs like Llama 3 directly inside Excel. I call it XLlama.

You can use it like a regular formula: =XLlamaPrompt("Is Excel a database") — or run it on an entire range.

No API calls. No cloud. No subscriptions. Everything runs locally.

It’s useful for quick tasks like extracting names, emails, or phone numbers from text, or for doing light data analysis without leaving Excel.

Would love feedback, especially from people who use Excel daily.
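For context on the plumbing: talking to a local Ollama server is a single HTTP call. A minimal Python sketch of that call (illustrative; the add-in's actual Excel-side implementation isn't shown in the post, and the prompt is just an example):

    # Sketch of calling a local Ollama server, the way such an add-in
    # might under the hood. (Illustrative, not XLlama's actual code.)
    import requests

    def xllama_prompt(prompt: str, model: str = "llama3") -> str:
        """Send one prompt to the local Ollama /api/generate endpoint."""
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(xllama_prompt("Is Excel a database?"))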
Show HN: Embeddable – build interactive experiences you can drop into any website
Hi HN, I’m a co‑founder of Embeddable AI.

After struggling to add interactive AI experiences to Wix, Shopify, Webflow, and WordPress sites, I built this tool to let marketers build chatbots, quizzes, or assistants and embed them anywhere with a snippet.

It’s built with a React/TypeScript front end and a Node.js logic engine. It loads fast and works across CMS platforms.

I’d love feedback from builders and marketers on use cases, missing features, or integration ideas.
Show HN: Using DSPy to enrich a dataset of the Nobel laureate network
I've been working a fair bit with DSPy lately, and I did some work combining the benefits of vector search and LLMs (via a DSPy pipeline) to disambiguate records with a high degree of accuracy and help enrich a dataset. The blog post shows how this approach scales well, is very cost-effective, and is super concise: all it takes is under 100 lines of DSPy code, and it all runs async.

The code to reproduce is in this repo if anyone's interested (all tools are 100% free and open source, and the methodology will work with open-weight LLMs too): https://github.com/kuzudb/dspy-kuzu-demo
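For those who haven't seen DSPy, the disambiguation step might look roughly like this (a hedged sketch; the signature fields and model choice are placeholders, not the repo's actual pipeline — see the link above for the real code):

    # Rough shape of an LLM-backed disambiguation step in DSPy
    # (illustrative; the linked repo has the real pipeline).
    import dspy

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported model

    class Disambiguate(dspy.Signature):
        """Decide whether two candidate records refer to the same person."""
        record_a: str = dspy.InputField()
        record_b: str = dspy.InputField()
        same_entity: bool = dspy.OutputField()

    match = dspy.Predict(Disambiguate)
    result = match(record_a="Marie Curie, b. 1867, physics",
                   record_b="Maria Skłodowska-Curie, Nobel 1903")
    print(result.same_entity)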
Show HN: WhiteLightning – ultra-lightweight ONNX text classifiers trained w LLMs
Hey HN,

We’re Volodymyr and Volodymyr, two developers from Ukraine building WhiteLightning. It’s a tool that turns large LLMs (Claude 4, Grok 4, GPT-4o via OpenRouter) into tiny ONNX text classifiers that run anywhere, even on drones at the edge.

Why we built this: many developers want custom models (spam filters, sentiment analysis, PII detection, moderation tools), but don’t want to deal with constant API calls or deploy heavy models in production.

How it works: WhiteLightning uses LLMs to generate training data and distills it into KB-sized ONNX models you can run on any device and in any language. Just describe your task in a sentence, grab the ONNX model, and run it locally in Python, JS, Rust, Java, Swift, C++, you name it.

Try it instantly in your browser: https://whitelightning.ai/playground.html

Code & docs: https://github.com/Inoxoft/whitelightning

Community model library: https://github.com/Inoxoft/whitelightning-model-library

We’d love your feedback: what works, what doesn’t, and what to improve.
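Running one of these classifiers locally would look something like this with onnxruntime (a sketch; the input name, feature shape, and preprocessing all depend on how a given model was exported, so check the model library for specifics):

    # Sketch of local inference on a small ONNX text classifier
    # (illustrative; actual input names/preprocessing vary per model).
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("classifier.onnx")
    input_name = sess.get_inputs()[0].name

    # Assume the model takes a float feature vector (e.g. hashed tokens);
    # the real preprocessing depends on the exported model.
    features = np.zeros((1, 512), dtype=np.float32)

    (scores,) = sess.run(None, {input_name: features})
    print("class scores:", scores)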
Show HN: A tool for complete WebSocket traffic control
I built a Chrome extension that acts as a WebSocket proxy, allowing real-time monitoring, message simulation, and traffic interception. Think "Proxyman for WebSockets", but integrated into Chrome DevTools.

Key features:

- Real-time WebSocket monitoring and message capture
- Send custom messages in both directions (client ↔ server)
- Block incoming/outgoing messages for testing
- Background monitoring (captures connections even when DevTools is closed)

Why: I was debugging a WebSocket chat app and needed better tools than the browser DevTools. Existing solutions required external proxies or were too basic.

Tech: injects a proxy script to intercept the WebSocket constructor; React + Vite UI; Chrome DevTools API integration; MIT licensed. Perfect for debugging WebSocket apps, testing error scenarios, reverse engineering APIs, and QA testing real-time features.

Links:

GitHub: https://github.com/law-chain-hot/websocket-devtools

YouTube demo: https://www.youtube.com/watch?v=L64x__1xORQ

Would love feedback from developers who work with WebSockets regularly!
Show HN: I made a web app for structured podcast summaries
Hey HN,

I follow a lot of podcasts, and the episodes are often 2-3 hours long, so I made a web app that gives me a structured podcast summary with applicable habits and recommendations.

My goal isn’t to discourage you from listening to the podcast, but rather to help you decide whether the episode is worthwhile and to provide notes.

The endgame is to give you a personal feed of the podcasts you follow, maybe even delivered to your inbox or via RSS.

My tech stack is quite simple: React Router (v7), Node.js, and PostgreSQL with Redis. I'm using the OpenAI API to generate the summary from the episode transcript.
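The transcript-to-structured-summary step can be as small as a single chat completion with a schema-shaped prompt. A hedged sketch (the model choice and summary fields here are mine, not necessarily the app's):

    # Sketch of turning a transcript into a structured summary via the
    # OpenAI API (illustrative; the app's actual prompt/fields may differ).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(transcript: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize this podcast transcript as JSON with "
                            "keys: key_points, habits, recommendations."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    print(summarize("...episode transcript text..."))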
Show HN: Mark 1.0, a notation that unifies JSON, HTML, JSX, XML, YAML, and more
Author of Mark Notation here.

Mark is a unified notation for both object and markup data, combining the best of JSON, HTML, and XML with a clean syntax and a succinct data model.

I'm glad to announce the 1.0 release of Mark. This release is just the start of a long journey to make the web a better platform for storing and exchanging data.

Your feedback is welcome!
Show HN: GitGuard - Painless GitHub PR Automations
Hey HN,

Every team I've been on has cobbled together some combination of GitHub branch protections and custom scripts to make sure that PRs conform to organization policies and best practices. Things like:

- When {X} file is changed, require review from team {Y}
- When a new DB migration is added, ensure that a special set of tests pass
- Require multiple approvals when the PR is very large
- Add a special label to PRs that include breaking changes
- Allow emergencies/hotfixes to break glass and bypass all of the above

Most teams start out with a little script running in GitHub Actions to enforce all of these policies, but it tends to get out of hand and become hard to maintain. PRs that should require scrutiny slip through the cracks, and others that should be allowed through are unnecessarily blocked.

That's why I made GitGuard (https://gitguard.dev/).

GitGuard lets you write and maintain these policies in a custom DSL so simple it looks like pseudocode. The policies are checked on every single PR nearly instantly (no need to wait for a GitHub Actions runner), and the results are reported in plain English.

Right now policies can make simple assertions about PR metadata and take some stateful actions (adding labels, requesting review), but I'd love to hear more from HN about how GitGuard could be even more useful.
Show HN: HN v0.2.1 – Native macOS app for reading HN
Hey HN folks,

Last week I released the first public version of 120 HN, a fully native Hacker News client for macOS. This week, I'm excited to share version 0.2.1 with some essential updates:

- The app now remembers window size, position, and sidebar state between sessions
- Improved AI summaries: more concise and relevant, with a link back to the comments, and easier to read in short paragraph form
- Minor tweaks to navigation and layout for a smoother experience

Sneak peek: local LLM support is coming in the next release.

As always, the app is free to use. Any suggestions or comments would be appreciated. Thanks for checking it out.
Show HN: Timep – a next-gen profiler and flamegraph-generator for bash code
timep is a TIME Profiler for bash code that will give you an accurate per-command execution time breakdown of any bash script or function.

Unlike other profilers, timep also recovers and hierarchically records metadata on subshell and function nesting, allowing it to recreate the full call-stack tree for the bash code being profiled. If you call timep with the `--flame` flag, it will automatically generate a flamegraph .svg image where each block represents the wall-clock time spent on a particular command (top level) or its parent subshells/functions (all the other levels).

Using timep is simple: just source the timep.bash file, then add timep before whatever you want to profile. You do not need to change the code being profiled; timep handles everything for you. Example usage:

    . ./timep.bash
    timep someFunc
    timep --flame someScript <inputFile

timep will generate two profiles for you: one showing each individual command (with full subshell/function nesting chains), and one that combines repeated loop commands into a count + total runtime line with minimal "extra" metadata.

See the GitHub README for more info on the available flags and output profile specifics.

timep works by cramming all the timing instrumentation logic into a DEBUG trap that roughly does the following:

1. Record the end timestamp for the previous command.
2. Compare the current state to the state saved in variables during the last DEBUG trap to determine what sort of command is happening (e.g., if BASH_SUBSHELL increased, then we know we just entered a subshell or background fork).
3. Once we know what type of command is happening, generate a log line for the previous command (now that we have its end time).
4. Save the current state in various variables (for use in the next DEBUG trap).
5. Record the start time for the next command.

Then, after the profiled code is done running, timep post-processes the logs to produce the final profile.
Show HN: c0admin – A terminal-based AI assistant for Linux sysadmins
I made a small CLI tool called `c0admin`. It runs locally using your own Gemini API key. No signup, no server, no tracking.

You just run `c0admin` in your terminal, and it gives you suggestions interactively.
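The core loop of a tool like this is small. A sketch using Google's google-generativeai Python package (illustrative only; c0admin's actual code isn't shown in the post, and the model name and prompt are my assumptions):

    # Sketch of the core loop of a Gemini-backed sysadmin helper
    # (illustrative; not c0admin's actual implementation).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # model is an assumption

    while True:
        task = input("c0admin> ")
        if task in ("exit", "quit"):
            break
        reply = model.generate_content(
            "You are a Linux sysadmin assistant. Suggest shell commands "
            f"(with brief explanations) for: {task}"
        )
        print(reply.text)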
Show HN: Ncrypt – Query encrypted files privately with FHE
Hey HN,

We're building ncrypt, an open-source encrypted file manager that allows you to store, manage, and privately query your files using fully homomorphic encryption (FHE). This project originally started as a simple SFTP-like CLI for my personal S3 buckets, which I used to send and retrieve encrypted files and have more granular control over key rotation.

As the number of files I was storing grew, file discovery started to become a problem, and I found myself frequently having to download and decrypt files to inspect their contents. Rather than leaving them unencrypted in S3 and therefore easier to search, I started looking into searching over encrypted data using fully homomorphic encryption. This led me to Zama's concrete-python library (https://github.com/zama-ai/concrete), which provides a simple Python interface for performing FHE operations.

FHE is notoriously slow, so rather than trying to search over entire files, I focused on a more tractable problem: indexing and searching over file metadata (summaries, keywords, embeddings, etc.), which is small enough to make search practical. While still not fast compared to traditional file management tools, ncrypt's search performance is decent if you keep directory sizes relatively small (under 25 files), and most of the heavy lifting happens during metadata extraction, not at search time.

The two types of encrypted queries we currently support are keyword search and cosine similarity search over vector embeddings, which are generated using user-specified Hugging Face models. ncrypt currently supports metadata extraction for text, image, and audio files. Check out our code and give it a try at https://github.com/ncryptai/ncrypt.

We love feedback!
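To give a flavor of what an encrypted similarity query looks like with concrete-python, here is a toy dot-product over quantized integer embeddings (a sketch under my own assumptions; the encryption layout, quantization, and similarity function are not ncrypt's actual scheme):

    # Toy encrypted dot-product with Zama's concrete-python (illustrative;
    # not ncrypt's actual scheme). Embeddings are quantized to small ints
    # because Concrete circuits operate on integers.
    import numpy as np
    from concrete import fhe

    @fhe.compiler({"query": "encrypted", "doc": "clear"})
    def dot(query, doc):
        # Dot product stands in for (unnormalized) cosine similarity here.
        return np.sum(query * doc)

    # Compile against representative inputs so Concrete can size the circuit.
    inputset = [
        (np.random.randint(0, 8, size=8), np.random.randint(0, 8, size=8))
        for _ in range(20)
    ]
    circuit = dot.compile(inputset)

    query = np.random.randint(0, 8, size=8)  # stays encrypted end to end
    doc = np.random.randint(0, 8, size=8)    # server-side metadata vector
    print(circuit.encrypt_run_decrypt(query, doc))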
Show HN: Clai - Vendor agnostic Claude Code/Gemini CLI written in Go
Show HN: Reactylon – Open-source framework for building 3D/XR apps with React