🛠️ All DevTools

Showing 1–20 of 4587 tools

Last Updated
May 13, 2026 at 08:00 AM

[Other] Zero-native – Build native desktop apps with web UI

Found: May 13, 2026 ID: 4587

[Other] Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model

Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M-parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We were always frustrated by how little effort goes into agentic models that run on budget phones, so we investigated and landed on an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.

Simple Attention Networks: the entire model is just attention and gating, no MLPs anywhere. Needle is an experimental run for single-shot function calling on consumer devices (phones, watches, glasses...).

Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)

You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle

The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simple_attention_networks.md

We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG, tool use, retrieval-augmented generation). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.

While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have more scope and capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.

This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables, and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544

Everything is MIT licensed.
Weights: https://huggingface.co/Cactus-Compute/needle
GitHub: https://github.com/cactus-compute/needle
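The "just attention and gating, no MLPs" idea can be sketched as a toy block in numpy. This is our own illustration, not Needle's actual architecture: the dimensions, weight names, and sigmoid-gate form are all assumptions, and the real design is in the writeup linked above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def gated_attention_block(x, tools, W):
    """One attention-only block: self-attention over the query tokens,
    cross-attention into the tool descriptions, combined by a learned
    sigmoid gate. No feed-forward (MLP) sublayer anywhere."""
    self_out = attention(x @ W["q1"], x @ W["k1"], x @ W["v1"])
    cross_out = attention(x @ W["q2"], tools @ W["k2"], tools @ W["v2"])
    gate = 1 / (1 + np.exp(-(x @ W["g"])))  # per-dimension gate in (0, 1)
    return x + gate * self_out + (1 - gate) * cross_out

rng = np.random.default_rng(0)
d = 16
W = {k: rng.standard_normal((d, d)) * 0.1
     for k in ["q1", "k1", "v1", "q2", "k2", "v2", "g"]}
x = rng.standard_normal((4, d))       # query tokens
tools = rng.standard_normal((8, d))   # tool-description tokens
y = gated_attention_block(x, tools, W)
print(y.shape)  # (4, 16)
```

The cross-attention path is what lets the block read tool names and argument slots directly out of the provided context instead of memorizing them in FFN weights.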

Found: May 12, 2026 ID: 4580

[Database] Quack: The DuckDB Client-Server Protocol

Found: May 12, 2026 ID: 4581

[IDE/Editor] Show HN: Agentic interface for mainframes and COBOL

Hi HN, we're Sai and Aayush, and we're building Hypercubic (https://www.hypercubic.ai/), bringing AI tools to the mainframe and COBOL world. (We did a Launch HN last year: https://news.ycombinator.com/item?id=45877517.) Today we're launching Hopper, an agentic development environment for mainframes.

You can download it here: https://www.hypercubic.ai/hopper, and you can also request access and immediately get a mainframe user account to play with.

There's also a video runthrough at https://www.youtube.com/watch?v=q81L5DcfBvE.

Mainframes still run a surprising amount of critical infrastructure: banking, payments, insurance, airlines, government programs, logistics, and core operations at large institutions. Many of these systems are decades old, but they continue to process enormous transaction volumes because they are reliable, secure, and deeply embedded in business operations.

A lot of that software is written in COBOL and runs on IBM z/OS. The development environment looks very different from modern cloud or Unix-style development. Instead of GitHub, shell commands, package managers, and CI pipelines, developers often work through TN3270 terminal sessions, ISPF panels, partitioned datasets, JCL, JES queues, spool output, return codes, VSAM files, CICS transactions, and shop-specific conventions.

TN3270 is the terminal interface used to interact with many IBM mainframe systems. ISPF is the menu and panel system developers use inside that terminal to browse datasets, edit source, submit jobs, and inspect output. It is powerful and reliable, but it was designed for expert humans navigating screens, function keys, and fixed-width workflows, not AI agents.

A simple COBOL change might require finding the right source member, checking copybooks, locating compile JCL, submitting a job, reading JES/SYSPRINT output, interpreting condition codes, patching fixed-width source, and resubmitting.

Much of this work is so well-defined and repetitive that it's a good fit for agentic AI. To get that working, however, a chatbot next to a terminal is not enough. The agent needs to operate inside the mainframe environment.

Hopper combines three things: (1) a real TN3270 terminal, (2) mainframe-aware panels for datasets, members, jobs, and spool output, and (3) an AI agent that can operate across those z/OS surfaces.

For example, here is a tiny version of the kind of thing Hopper can help debug:

COBOL:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. PAYCALC.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01 CUSTOMER-BALANCE PIC 9(7)V99.
    PROCEDURE DIVISION.
        ADD 100.00 TO CUSTOMER-BALNCE
        DISPLAY "UPDATED BALANCE: " CUSTOMER-BALANCE
        STOP RUN.

JCL:

    //PAYCOMP JOB (ACCT),'COMPILE',CLASS=A,MSGCLASS=X
    //COBOL    EXEC IGYWCL
    //COBOL.SYSIN  DD DSN=USER1.APP.COBOL(PAYCALC),DISP=SHR
    //LKED.SYSLMOD DD DSN=USER1.APP.LOAD(PAYCALC),DISP=SHR

A human would submit this job, inspect JES output, open `SYSPRINT`, find the undefined `CUSTOMER-BALNCE`, map it back to the source, patch the member, and resubmit. Hopper is designed to let an agent operate through that same loop autonomously.

Hopper is not trying to hide the mainframe behind a generic abstraction, and it's not a chatbot. The design principle is simple: preserve the fidelity of the mainframe environment, but make it accessible to AI agents.

Sensitive operations require approval, and the terminal remains visible at all times.

Once agents can operate inside the mainframe environment, new workflows become possible: faster job debugging, automated documentation, safer code changes, test generation, migration planning, traffic replay, and modernization verification.

We're curious to hear your thoughts, especially from anyone who has worked with mainframes or COBOL, or has done legacy enterprise modernization.
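The submit, inspect, patch, resubmit loop described here can be sketched as plain control flow. This is a toy illustration, not Hopper's implementation: the compile function is a stub, the diagnostic format is simplified, and the "pick the closest declared name" patch heuristic is entirely our invention.

```python
import re

def debug_compile_loop(source, compile_and_get_sysprint, max_attempts=3):
    """Toy compile-debug loop: submit, scan SYSPRINT for an
    undefined-name diagnostic, patch the source, resubmit."""
    for attempt in range(max_attempts):
        sysprint = compile_and_get_sysprint(source)
        m = re.search(r'"(\S+)" was not defined', sysprint)
        if not m:
            return source, attempt + 1  # clean compile
        bad_name = m.group(1)
        # naive "patch": assume the closest declared data-name is the fix
        declared = re.findall(r"01\s+(\S+)\s+PIC", source)
        fix = min(declared, key=lambda d: abs(len(d) - len(bad_name)))
        source = source.replace(bad_name, fix)
    raise RuntimeError("could not reach a clean compile")

def fake_compiler(src):
    # stand-in for submitting the job and reading SYSPRINT
    if "CUSTOMER-BALNCE" in src:
        return 'IGYPS2121-S "CUSTOMER-BALNCE" was not defined as a data-name.'
    return "RETURN CODE 0"

buggy = "ADD 100.00 TO CUSTOMER-BALNCE\n01  CUSTOMER-BALANCE  PIC 9(7)V99."
fixed, attempts = debug_compile_loop(buggy, fake_compiler)
print(attempts)  # 2
```

The point of the sketch is only the loop shape: each iteration maps a diagnostic back to source and resubmits, which is exactly the repetitive work that suits an agent.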

Found: May 12, 2026 ID: 4582

[API/SDK] Launch HN: Voker (YC S24) – Analytics for AI Agents

Hey HN, we're Alex and Tyler, co-founders of Voker.ai (https://voker.ai/), an agent analytics platform for AI product teams. Voker gives you full visibility into what users are asking of your agents, and whether your agents are delivering, without digging through logs. Our main product is a lightweight SDK that is LLM-stack agnostic and purpose-built for agent products (https://app.voker.ai/docs).

Agent engineers and AI product teams don't have the right level of visibility into agent performance in production, which results in bad user experiences, churn, and hundreds of hours wasted on spot checks to find and debug issues with agent configurations.

Demo: https://www.tella.tv/video/vid_cmoukcsk1000i07jgb4j65u67/view

We recently surveyed YC founders, and over 90% of respondents said the only way they know their agents are failing users in production is by hearing complaints from customers. They push a prompt change hoping it fixes the problem without breaking something else, and the cycle repeats.

We saw plenty of observability and evals products popping up to address these problems, but we still felt something was missing in the agent monitoring stack. Observability is good for individual trace debugging but is only accessible to engineers. Evals are good for testing known issues but don't surface trends that teams don't expect, so engineers are always playing catch-up. Traditional product analytics tools do a good job tracking clicks and pageviews across your product surface but weren't built from the ground up for agent products. Knowing what users want out of agents, and whether the agent delivered, requires specific conversational intelligence and unstructured-data processing techniques.

We came up with the agent analytics primitives of Intents, Corrections, and Resolutions to describe something pretty much all conversational agents have in common: a user always comes to an agent with an intent, the user might have to correct the agent on the way to getting that intent resolved, and hopefully every intent is eventually resolved by the agent. Voker processes LLM calls by automatically annotating individual conversations and picking out user intents and corrections. It then uses LLMs and hierarchical text classification to create dynamic categories that give higher-level insights, so you don't have to read individual conversations to know the main usage patterns across your users.

The most common substitute solution we've seen is uploading observability logs to Claude or ChatGPT and asking for summary insights. There are a few problems with this, mainly that LLMs aren't good at math or data science, so you don't get accurate or consistent statistics. It's highly likely that the LLM overfits to some insights and underfits to others, and it isn't programmatically reading and classifying each individual session or interaction. This is why we don't use LLMs for any of our core data engineering (processing events, calculating statistics), so the analytics we produce are consistent, reproducible, and accurate.

We have a publicly available, lightweight SDK that wraps LLM calls to OpenAI, Anthropic, and Gemini in Python and TypeScript. Voker handles the data engineering to turn raw data into usable analytics primitives and higher-level insights. Free tier: 2,000 events/mo, requires email signup. Paid plans start at $80/mo with a 30-day free trial.

We'd love to hear how you're currently detecting trends, and if you try Voker, tell us which parts of our analysis are valuable and what still feels missing. Thanks for reading; we're looking forward to your thoughts in the comments!
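The Intent, Correction, Resolution framing can be made concrete with a tiny data model. This is our own illustrative sketch, not Voker's SDK or schema; the class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    text: str                                        # what the user asked for
    corrections: list = field(default_factory=list)  # user pushback en route
    resolved: bool = False                           # did the agent deliver?

def session_stats(intents):
    """Aggregate the three primitives into headline analytics,
    computed deterministically (no LLM in the math)."""
    total = len(intents)
    resolved = sum(i.resolved for i in intents)
    corrections = sum(len(i.corrections) for i in intents)
    return {
        "resolution_rate": resolved / total,
        "corrections_per_intent": corrections / total,
    }

session = [
    Intent("cancel my subscription",
           corrections=["no, the annual plan"], resolved=True),
    Intent("export my data", resolved=False),
]
stats = session_stats(session)
print(stats)  # {'resolution_rate': 0.5, 'corrections_per_intent': 0.5}
```

Keeping the aggregation in plain code rather than an LLM is what makes the resulting statistics reproducible, which is the design point the post argues for.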

Found: May 12, 2026 ID: 4584

[Other] Show HN: Statewright – Visual state machines that make AI agents reliable

Agentic problem solving in its current state is very brittle. I fell in love with it, but it creates as many problems as it solves.

I'm Ben Cochran. I've spent 20+ years in the trenches with full-stack engineering, DevOps, high-performance computing, and ML, with stints at NVIDIA, AMD, and various other organizations, most recently as a Distinguished Engineer.

For agents to work reliably you either need massive parameter counts or massive context windows to keep the solution spaces workable. Most people are brute-forcing reliability with bigger models and longer prompts.

What if I made the problem smaller instead of making the model bigger?

I took a different approach using smaller models: I set models in the 13-20B parameter range to work on real SWE-bench problems and constrained the tool and solution spaces using formal state machines. Each state in the machine defines which tools the model can access, how many iterations it gets, and which transitions are valid. A planning state gets read-only tools. An implementation state gets edit tools (scoped to prevent mega-edits) and write-friendly bash tools. The testing state gets bash, but only for testing commands. The model cannot physically skip steps or use the wrong tool at the wrong time. This is enforced via protocol, not via prompts.

The results were more promising than I expected. The improvements held across multiple model families irrespective of age (qwen-coder, gpt-oss, gemma4) and were consistent above the 13B-parameter inflection point. Below that, models can navigate the state machine but can't retain enough context to produce accurate edits. More on the research: https://statewright.ai/research

Surprisingly, this yielded improvements in frontier models as well. Haiku and Sonnet start to punch above their weight, and Opus solves more reliably with fewer tokens and fewer death spirals. Fine-tuning did not yield these kinds of functional improvements for me. The takeaway seems to be that context-window utilization matters more than raw context size: a tightly scoped working context at each step outperforms a model given carte blanche over everything. Constraining LLMs, which are non-idempotent, with deterministic code is a pattern nobody is currently talking about.

So I built Statewright. Its core is a Rust engine that evaluates state machine definitions: states, transitions, guards, and tool restrictions. The orchestration doesn't use an LLM; it just enforces the state machine. On top of that is a plugin layer that integrates with Claude Code (and soon Codex, Cursor, and others) via MCP. When you activate a workflow, hooks enforce the guardrails per state automatically. The model sees 5 tools available instead of dozens, gets clear instructions for the current phase, and transitions when conditions are met. Importantly, it tells the model when it's attempting something out of scope or incorrect, or when it needs to try something else after getting stuck.

You can use your agent via MCP to build a state machine for you to solve a problem in your current context. The visual editor at statewright.ai lets you tweak these workflows in a graph view: you can clearly see the failure paths, the retry loops, and the approval gates. State machines aren't DAGs; they loop and retry, which is what agentic work actually needs.

Statewright is currently live with a free tier. Try it out in Claude Code by running the following:

    /plugin marketplace add statewright/statewright
    /plugin install statewright
    /reload-plugins

Then "start the bugfix workflow" or /statewright start bugfix. You'll need to paste your API key when prompted. The latest versions of Claude may complain; paste the API key again and say you really mean it, Claude is just being cautious here.

Feedback is welcome on the workflow editor and the plugin experience, and tell me which workflows you'd want to build first. Agents are suggestions; states are laws.
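The per-state tool whitelisting and transition guards can be rendered as a few lines of deterministic code. This is a toy Python sketch of the idea only; Statewright's actual engine is Rust, and the state, tool, and method names below are our assumptions.

```python
class ToolNotAllowed(Exception):
    pass

class Workflow:
    """Minimal enforcement core: each state whitelists tools and legal
    transitions; the orchestrator (not the LLM) rejects everything else."""
    def __init__(self, states, start):
        self.states = states   # name -> {"tools": set, "next": set}
        self.current = start

    def call_tool(self, tool):
        if tool not in self.states[self.current]["tools"]:
            raise ToolNotAllowed(f"{tool!r} not permitted in {self.current!r}")
        return f"ran {tool} in {self.current}"

    def transition(self, target):
        if target not in self.states[self.current]["next"]:
            raise ValueError(f"illegal transition {self.current!r} -> {target!r}")
        self.current = target

bugfix = Workflow({
    "plan":      {"tools": {"read_file", "grep"},       "next": {"implement"}},
    "implement": {"tools": {"edit_file", "bash_write"}, "next": {"test", "plan"}},
    "test":      {"tools": {"bash_test"},               "next": {"implement"}},
}, start="plan")  # test -> implement loops: a state machine, not a DAG

bugfix.call_tool("read_file")      # fine: planning is read-only
bugfix.transition("implement")
try:
    bugfix.call_tool("bash_test")  # testing tool is out of scope here
except ToolNotAllowed as e:
    print("blocked:", e)
```

Because the rejection happens in plain code, the model cannot prompt its way past it, which is the "enforced via protocol, not via prompts" property the post describes.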

Found: May 12, 2026 ID: 4583

[Package Manager] Show HN: Safe-install – safer npm installs with trusted build dependencies

In light of the ongoing npm supply-chain compromises, I built safe-install: https://www.npmjs.com/package/@gkiely/safe-install

It brings a couple of protections I wanted from npm that are not built in.

Similar to Bun's trusted dependencies, it lets you disable install scripts by default and define a list of dependencies that are allowed to run build/install scripts: https://bun.com/docs/guides/install/trusted

It also supports blocking exotic sub-dependencies, similar to pnpm's `blockExoticSubdeps` setting: https://gajus.com/blog/3-pnpm-settings-to-protect-yourself-from-supply-chain-attacks#2-set-blockexoticsubdeps

I was hoping npm would eventually add something like this, but it does not seem to be happening soon, so I made a small package for it.

Found: May 12, 2026 ID: 4573

[Other] Postmortem: TanStack NPM supply-chain compromise
https://github.com/TanStack/router/issues/7383

Found: May 11, 2026 ID: 4575

[Other] I let AI build a tool to help me figure out what was waking me up at night

Found: May 11, 2026 ID: 4574

[Other] If AI writes your code, why use Python?

Found: May 11, 2026 ID: 4576

[Other] Show HN: E2a – Open-source email gateway for AI agents

We were building an agent system and wanted email as a trigger. We decided to pull it out and make it a standalone service.

The primary email features we wanted and used for our own agent system:

1. Email threading stays consistent with agent conversation threading
2. Human-in-the-loop review for outbound emails (especially during the testing phase)
3. Quick onboarding/offboarding of email addresses for agents within minutes
4. WebSocket for local agents and at-least-once webhook delivery for cloud agents

Not yet: DMARC (only SPF/DKIM today), scoped API keys, HA/multi-region (single VM + single Postgres), app-layer email data encryption, compliance attestations (SOC 2/HIPAA).

GitHub: https://github.com/Mnexa-AI/e2a
Hosted: https://e2a.dev/

Appreciate any feedback / contributions.
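At-least-once webhook delivery means the same email event can arrive more than once, so a receiving agent should deduplicate. A minimal sketch of an idempotent consumer, assuming each event carries a unique id (our assumption for illustration, not a documented E2a payload field):

```python
processed = set()  # in production this would be durable storage

def handle_webhook(event: dict) -> str:
    """Idempotent consumer for at-least-once delivery:
    dedupe on the event's unique id before acting on it."""
    event_id = event["id"]
    if event_id in processed:
        return "duplicate ignored"
    processed.add(event_id)
    # ... hand the email off to the agent here ...
    return f"processed email {event_id}"

print(handle_webhook({"id": "evt_1"}))  # processed email evt_1
print(handle_webhook({"id": "evt_1"}))  # duplicate ignored
```

The same pattern applies whether events arrive over the webhook or the WebSocket path; only the dedup store needs to outlive the process.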

Found: May 11, 2026 ID: 4569

[IDE/Editor] Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity

Hi. I'm a high school student studying for my GCSEs. I was using Google Antigravity heavily for my side projects, but I kept hitting the usage limits and getting random "agent terminated" errors. So I decided to try to build my own version of the IDE. I love the UI, so I copied it as accurately as possible, then hooked up some logic, including the INCREDIBLY finicky WebContainer API.

I tried to keep it super lightweight, with no build steps or dependencies, and now that it's open source, I'm hoping people can build things on top of it that aren't possible with closed-source tools, like complex custom agent workflows.

Some screenshots:
- https://github.com/ab-613/OpenGravity/blob/main/examples/screenshot.png?raw=true
- https://github.com/ab-613/OpenGravity/blob/main/examples/html%20site%20example.png?raw=true

What it's made from:

- Pure vanilla JS: no React, Vue, or build step. Built entirely in plain HTML/CSS/JS to keep it super lightweight.
- WebContainer API and xterm.js: instead of faking a terminal, I (after much pain) hooked up the WebContainer API so the AI agent has a real, in-browser Linux environment to run shell commands, install dependencies, and edit local files.
- BYOK (Bring Your Own Key): your API key ALWAYS stays in localStorage.

What's currently happening:

- It works, but it's an alpha. The AI can properly start projects and edit files, but because I built this over a few days before my exams, a lot of the UI dropdowns and buttons are currently just hardcoded placeholders.
- I'm open-sourcing it early because I think the foundation of a vanilla JS + WebContainer IDE is really strong, and I'd love to see where the community takes it while I'm doing my exams.
- Live demo: https://opengravity.pages.dev (zoom out to 80% if not full screen; it will prompt for a Gemini API key on load). Start by uploading a folder, then fiddle with the terminal and agent and see how it goes!

Would love to hear feedback on the code, the WebContainer integration, or how to improve the agent loop!

Found: May 11, 2026 ID: 4568

[CLI Tool] Linux Terminal Memory Usage

Hacker News (score: 40)

Found: May 11, 2026 ID: 4570

[Other] CUDA-oxide: Nvidia's official Rust to CUDA compiler

Found: May 11, 2026 ID: 4565

[Database] Show HN: SLayer, a semantic layer maintained by your agent

Hello HN!

If you want to connect your agent to a database (say, to build a data-analyst chatbot or any kind of agentic app), today you have two options: a SQL MCP server or a semantic layer.

SQL MCP is the easiest path to set up, especially if you also have a .md knowledge base the agent can update. It gets messy quickly, though, especially when there are many interactions or the DB is large. Generated SQL is hard to review if you want to understand where the numbers came from, and related queries can be hard to align and compare.

The natural alternative is a semantic layer: an inventory of what data is available and useful (data models) and an interface for querying it using a structured DSL, usually a list of measures, dimensions, and filters, with joins etc. handled under the hood.

When we needed a semantic layer at Motley for connecting to our customers' data, we first settled on Cube with custom wiring for multi-tenancy and for updating models on the fly. We quickly hit limitations that led us to realize existing semantic layers just weren't built for this purpose: they're still part of the BI world, where you want an efficient backend for an essentially static set of human-curated dashboards, whereas agents need to iterate their way to the answer, learning in the process. That's when we built the first version of SLayer, which is now open source.

Using either the SLayer MCP or CLI, agents (and humans) can:

- Explore models, run queries, and connect to multiple databases
- Edit columns/measures or create new ones
- Create custom models from SQL or from a query on other models
- Learn from interactions: save and retrieve natural-language memories linked to models, columns, or queries, forming a knowledge base

Agents evolve the semantic layer, reuse the results of past interactions, and make fewer mistakes going forward.

A few more features:

- Auto-creation of models by introspecting your DB schema for a warm start
- Embeddability: doesn't need a server running
- Python client for doing data analysis with dataframes
- Schema drift detection and handling
- An expressive DSL with compact, natural representations for arbitrarily deep multistage queries, custom aggregations, time shifts, combining metrics from multiple models, and other features that are tricky to get right in raw SQL

On the roadmap: access controls, caching, and more.

Repo: https://github.com/MotleyAI/slayer
Docs: https://motley-slayer.readthedocs.io/en/latest/
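The measures/dimensions/filters shape of a semantic-layer query can be illustrated with a toy in-memory executor. This is our own illustration of the general concept; SLayer's actual DSL and its syntax live in the linked docs, and the function and field names below are assumptions.

```python
from collections import defaultdict

def run_query(rows, measures, dimensions, filters=None):
    """Tiny semantic-layer-style query: filter rows, group by the
    dimensions, then aggregate each measure (name, field, agg)."""
    filters = filters or {}
    rows = [r for r in rows if all(r[k] == v for k, v in filters.items())]
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[d] for d in dimensions)].append(r)
    out = []
    for key, grp in groups.items():
        rec = dict(zip(dimensions, key))
        for name, fld, agg in measures:
            vals = [g[fld] for g in grp]
            rec[name] = sum(vals) if agg == "sum" else len(vals)
        out.append(rec)
    return out

orders = [
    {"region": "EU", "status": "paid", "amount": 10},
    {"region": "EU", "status": "paid", "amount": 5},
    {"region": "US", "status": "paid", "amount": 7},
]
result = run_query(orders,
                   measures=[("revenue", "amount", "sum")],
                   dimensions=["region"],
                   filters={"status": "paid"})
print(result)  # [{'region': 'EU', 'revenue': 15}, {'region': 'US', 'revenue': 7}]
```

An agent issuing queries in this structured form is much easier to review than one emitting raw SQL: the measure and dimension names say where each number came from.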

Found: May 11, 2026 ID: 4564

[Other] Show HN: I built Tokenyst to stop getting shocked by Claude Code API bills

Found: May 11, 2026 ID: 4572

[Other] Google says criminal hackers used AI to find a major software flaw

Unlocked: https://www.nytimes.com/2026/05/11/us/politics/google-hackers-attack-ai.html?unlocked_article_code=1.hlA.vW7Y.pO_0G8yLYoca&smid=nytcore-android-share, https://archive.ph/I4Ui5

https://apnews.com/article/google-ai-cybersecurity-exploitation-mythos-926aea7f7dc5e0e61adce3273c55c6d4

https://www.cnbc.com/2026/05/11/google-thwarts-effort-hacker-group-use-ai-mass-exploitation-event.html

Found: May 11, 2026 ID: 4577

[Other] Create a 90s GeoCities style website in seconds (Python)

Found: May 11, 2026 ID: 4563

[Other] rohitg00/agentmemory – #1 Persistent memory for AI coding agents based on real-world benchmarks

GitHub Trending

Found: May 11, 2026 ID: 4559

[CLI Tool] Ratty – A terminal emulator with inline 3D graphics

Found: May 11, 2026 ID: 4560
Page 1 of 230