Tomo: A statically typed, imperative language that cross-compiles to C [video]
Hacker News (score: 13)
More from Hacker News
Show HN: Inverting Agent Model (App as Clients, Chat as Server and Reflection)
Hello HN. I'd like to start by saying that I am a developer who started this research project to challenge myself. I know standard protocols like MCP exist, but I wanted to explore a different path and have some fun creating a communication layer tailored specifically for desktop applications.

The project is designed to handle communication between desktop apps in an agentic manner, so the focus is strictly on this IPC layer (forget about HTTP API calls).

At the heart of RAIL (Remote Agent Invocation Layer) are two fundamental concepts. The names might sound scary, but remember this is a research project:

- Memory Logic Injection + Reflection
- Paradigm shift: the Chat is the Server, and the Apps are the Clients.

Why this approach? The idea was to avoid creating huge wrappers or API endpoints just to call internal methods. Instead, the agent application passes its own instance to the SDK (e.g., RailEngine.Ignite(this)).

Here is the flow that I find fascinating:

- The App passes its instance to the RailEngine library running inside its own process.
- The Chat (Orchestrator) receives the manifest of available methods. The Model decides what to do and sends the command back via Named Pipe.
- The Trigger: the RailEngine inside the App receives the command and uses Reflection on the held instance to directly perform the .Invoke().

Essentially, I am injecting the "Agent Logic" directly into the application's memory space via the SDK, allowing the Chat to pull the trigger on local methods remotely.

A note on the repo: the GitHub repository has become large. The core focus is RailEngine and RailOrchestrator. You will find other connectors (C++, Python) that are frankly "trash code" or incomplete experiments. I forced RTTR in C++ to achieve reflection, but I'm not convinced by it. Please skip those; they aren't relevant to the architectural discussion.

I'd love to focus the discussion on memory-managed languages (like C#/.NET) and ask you:

- Architecture: does this inverted architecture (Apps "dialing home" via IPC) make sense for local agents compared to the standard Server/API model?
- Performance: regarding the use of Reflection for every call, would it be worth implementing a mechanism to cache methods as Delegates at startup? Or is the optimization irrelevant considering the latency of the LLM itself?
- Security: since we are effectively bypassing the API layer, what would be a hypothetical security layer to prevent malicious use (e.g., a capability manifest signed by the user)?

I would love to hear architectural comparisons and critiques.
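The hold-the-instance, reflect-on-command flow above can be sketched in a few lines. RAIL itself is C#/.NET; this is a hypothetical transposition to Python (using `getattr` in place of .NET Reflection), with method handles cached at startup to illustrate the "cache methods as Delegates" idea from the performance question. The names `RailEngine`, `NotePadApp`, and the command dict format are illustrative assumptions, not the actual RAIL API.

```python
class RailEngine:
    def __init__(self, app_instance):
        # "Memory Logic Injection": hold a reference to the live app object
        # that the application passed in (the RailEngine.Ignite(this) step).
        self._app = app_instance
        # Cache bound callables once, up front (the "Delegates at startup"
        # idea), so each incoming command skips a fresh reflection lookup.
        self._methods = {
            name: getattr(app_instance, name)
            for name in dir(app_instance)
            if not name.startswith("_") and callable(getattr(app_instance, name))
        }

    def manifest(self):
        # What the Chat (Orchestrator) receives: the callable surface.
        return sorted(self._methods)

    def dispatch(self, command):
        # In RAIL the command arrives over a Named Pipe; here a plain dict.
        method = self._methods[command["method"]]
        return method(*command.get("args", []))


class NotePadApp:
    """Stand-in desktop app exposing one method to the agent."""
    def save_note(self, text):
        return f"saved: {text}"


engine = RailEngine(NotePadApp())   # the app "dials home" to the engine
print(engine.manifest())            # the manifest the orchestrator sees
print(engine.dispatch({"method": "save_note", "args": ["hello"]}))
```

Caching the bound methods in a dict at startup is the Python analogue of converting `MethodInfo` lookups into Delegates: the per-call cost drops to a dict lookup, though as the post notes, it is likely negligible next to LLM latency.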
Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents
WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS – just pip install amla-sandbox.
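The constraint model described, agents may only call tools you register, each gated by rules you define, can be illustrated with a toy allow-list. To be clear, this is not the amla-sandbox API (which runs the shell inside WASM); it is a minimal sketch of the tool-gating idea, with all names (`ToolBox`, `register`, `allow`) invented for illustration.

```python
class ToolError(Exception):
    """Raised when a call falls outside the registered tool surface."""


class ToolBox:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn, allow=lambda *args: True):
        # Each tool pairs a callable with a constraint predicate.
        self._tools[name] = (fn, allow)

    def call(self, name, *args):
        if name not in self._tools:
            raise ToolError(f"unknown tool: {name}")
        fn, allow = self._tools[name]
        if not allow(*args):
            raise ToolError(f"constraint rejected call to {name}")
        return fn(*args)


box = ToolBox()
# Expose a read-only file tool, constrained to one directory the host chose.
box.register(
    "read_file",
    lambda path: open(path).read(),
    allow=lambda path: path.startswith("/tmp/agent/"),
)
# box.call("read_file", "/etc/passwd")  would raise ToolError: the predicate
# rejects any path outside /tmp/agent/, regardless of what the agent asks for.
```

The point of the pattern is that the agent never gains a capability the host did not explicitly register, which is the same property the WASM boundary enforces at a lower level.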
Show HN: See the carbon impact of your cloud as you code
Hey folks, I'm Hassan, one of the co-founders of Infracost (https://www.infracost.io). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code. The way Infracost works is that we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a "Pricing Service", which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, it enables us to show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure.

We've been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team, etc. However, back in 2020 one of our users asked if we could also show the carbon impact alongside costs.

It has been itching my brain since then. The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time-consuming, but it is possible, since we've done it with cloud costs. But we need the raw carbon data first. The discussions of the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John.

Greenpixie said they have the data (AHA!!), and their data is verified (ISO 14064 and aligned with the Greenhouse Gas Protocol). As soon as I talked to a few of their customers, I asked my team to see if we could actually, finally, do this, and build it.

My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked: if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivation factor.

And now it is here, and I'd love your feedback. Try it out by going to https://dashboard.infracost.io/, create an account, set up the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example Terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it.

I'd especially love to hear your feedback on whether carbon is a big driver for engineers within your teams, or whether carbon is a big driver for your company (i.e. is there anything top-down about carbon).

AMA - I'll be monitoring the thread :)

Thanks
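The mechanism described, mapping each resource in an infrastructure change to a price (and now a carbon figure), then reporting the delta before merge, can be sketched as a tiny diff over a price table. The prices, carbon numbers, and resource keys below are invented for illustration; Infracost's real Pricing Service holds around 9 million live price points, and the carbon data comes from Greenpixie.

```python
# Hypothetical price table: (resource_type, size) -> (USD/month, kgCO2e/month).
# All figures are made up for the sketch.
PRICES = {
    ("aws_instance", "t3.medium"):  (30.0, 4.0),
    ("aws_instance", "m5.2xlarge"): (276.0, 35.0),
}

def plan_cost(resources):
    """Total monthly cost and carbon for a list of resources."""
    usd = sum(PRICES[r][0] for r in resources)
    co2 = sum(PRICES[r][1] for r in resources)
    return usd, co2

def diff(before, after):
    """The 'checkout screen': cost and carbon delta of a proposed change."""
    usd0, co20 = plan_cost(before)
    usd1, co21 = plan_cost(after)
    return usd1 - usd0, co21 - co20

# A pull request upsizes one instance; report the impact before merge.
before = [("aws_instance", "t3.medium")]
after  = [("aws_instance", "m5.2xlarge")]
d_usd, d_co2 = diff(before, after)
print(f"{d_usd:+.2f} USD/month, {d_co2:+.1f} kgCO2e/month")
```

The cost/carbon linkage the post mentions falls out of this structure directly: both columns are driven by the same resource mapping, so shrinking a resource reduces both deltas at once.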
Security vulnerability found in Rust Linux kernel code