Sunday — October 5, 2025
Cloudflare introduces the NET Dollar stablecoin, researchers discover a technique to turn high-performance mice into microphones using AI, and a new Large Language Model architecture called Dragon Hatchling is proposed, offering strong theoretical foundations and performance comparable to Transformer models.
News
Cloudflare Introduces NET Dollar stablecoin
Cloudflare has introduced the NET Dollar, a US dollar-backed stablecoin designed to enable instant, secure transactions for the AI-driven internet. Cloudflare frames the NET Dollar as part of a new business model that aims to reward originality, sustain creativity, and enable innovation as humans increasingly delegate tasks to artificial intelligence.
High-performance mice can be used as microphones to spy on users
Researchers have discovered a technique called Mic-E-Mouse that uses the motion sensors in high-performance mice to capture acoustic vibrations and reconstruct speech from them, effectively turning the mouse into a microphone to spy on users. The method, which relies on machine learning to recover the audio signal, works with mice that have very high-DPI sensors (20,000 or higher) and could be used to eavesdrop on nearby conversations, posing a potential cybersecurity risk.
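To make the signal path concrete, here is a minimal sketch of the general idea: treat high-rate mouse displacement reports as a crude vibration signal and filter it down to the speech band. The polling rate, filter band, and synthetic input are assumptions for illustration, not the researchers' actual pipeline.

```python
# Minimal sketch: treat high-rate mouse deltas as a vibration signal.
# Assumptions: ~8 kHz polling, synthetic input, and a simple band-pass
# filter standing in for the paper's ML reconstruction stage.
import numpy as np
from scipy.signal import butter, sosfiltfilt

POLL_HZ = 8000                      # assumed polling rate of the mouse
t = np.arange(0, 1.0, 1.0 / POLL_HZ)

# Synthetic stand-in for logged mouse y-deltas: a speech-band vibration
# (300 Hz tone) buried in sensor noise.
deltas = 0.02 * np.sin(2 * np.pi * 300 * t) + np.random.normal(0, 0.05, t.size)

# Band-pass to the range a desk-coupled sensor could plausibly pick up.
sos = butter(4, [100, 1000], btype="bandpass", fs=POLL_HZ, output="sos")
recovered = sosfiltfilt(sos, deltas)

# A real attack would feed `recovered` into denoising / speech models;
# here we just report how much in-band energy survives.
print(f"in-band RMS: {np.sqrt(np.mean(recovered**2)):.4f}")
```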
Whiteboarding with AI
The author has optimized their software development workflow by using AI coding agents to think through problems and explore solutions before writing code, rather than asking them to write code immediately. This "whiteboarding-first" approach, which utilizes tools like Markdown and Mermaid diagrams, allows for better planning, design, and documentation, resulting in higher-quality code and reduced bugs.
Cory Doctorow Says the AI Industry Is About to Collapse
Cory Doctorow, a sci-fi author and tech journalist, is warning of an impending collapse of the AI industry, which he believes is a hype-fueled financial disaster that will harm hundreds of millions of people. He argues that the industry is propped up by a myth that large language models can replace human workers, and that the bubble will eventually burst, taking the whole economy with it, unless it is punctured soon to prevent further damage.
Relatively few Americans are getting news from AI chatbots like ChatGPT
Relatively few Americans, about 9%, get news from AI chatbots like ChatGPT, with 75% saying they never get news this way. Among those who do use chatbots for news, many have mixed experiences, with about half saying they at least sometimes see news they think is inaccurate, and a third finding it difficult to determine what is true and what is not.
Research
The Missing Link Between the Transformer and Models of the Brain
The Dragon Hatchling (BDH) is a new Large Language Model architecture inspired by the brain's scale-free biological networks, offering strong theoretical foundations, interpretability, and performance comparable to Transformer models such as GPT-2. BDH's biologically plausible design, which relies on synaptic plasticity and Hebbian learning, produces sparse, positive activation vectors, making its internal state interpretable and exhibiting monosemanticity in language tasks.
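The Hebbian idea the summary leans on — connections strengthen when pre- and post-synaptic units are active together, with activations kept sparse and non-negative — can be illustrated with a minimal numpy sketch. This is not the BDH architecture itself; the layer sizes, learning rate, and ReLU thresholding are assumptions chosen only to show the update rule.

```python
# Minimal sketch of a Hebbian update with sparse, positive activations.
# Not the BDH architecture: sizes, learning rate, and thresholding are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 64, 32, 0.01

W = rng.normal(0, 0.1, size=(n_out, n_in))
x = np.maximum(rng.normal(size=n_in), 0.0)   # sparse, positive pre-synaptic activity
y = np.maximum(W @ x, 0.0)                   # sparse, positive post-synaptic activity

# Hebbian rule: the change in w[i, j] is proportional to post_i * pre_j.
W += eta * np.outer(y, x)

print("fraction of active output units:", np.mean(y > 0))
```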
How to inject knowledge efficiently? Knowledge infusion scaling law for LLMs
Large language models often underperform on specialized tasks without domain-specific optimization, and infusing too much domain knowledge can lead to "catastrophic forgetting" of previously acquired knowledge. Researchers have identified a "critical collapse point" where knowledge retention degrades sharply and proposed a "knowledge infusion scaling law" to predict the optimal amount of domain knowledge to inject, validated through experiments across different model sizes.
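The summary does not give the paper's actual functional form, but the general exercise — fit a curve to retention measured at several knowledge-injection budgets and read off where it collapses — can be sketched as follows. The power-law shape, the synthetic data points, and the collapse threshold are all assumptions for illustration, not the paper's law.

```python
# Illustrative sketch only: fit retention vs. injected domain knowledge and
# locate a rough "collapse point". The functional form, the synthetic numbers,
# and the 80%-of-baseline threshold are assumptions.
import numpy as np
from scipy.optimize import curve_fit

injected_m = np.array([1, 3, 10, 30, 100, 300], dtype=float)   # millions of domain tokens (synthetic)
retention = np.array([0.92, 0.91, 0.88, 0.83, 0.71, 0.52])     # general-task accuracy (synthetic)

def decay(n, a, b, c):
    # assumed form: retention = a - b * n^c
    return a - b * np.power(n, c)

params, _ = curve_fit(decay, injected_m, retention, p0=[0.95, 0.01, 0.6], maxfev=10000)

grid = np.logspace(0, 3, 400)                                   # 1M to 1B tokens
pred = decay(grid, *params)
collapse = grid[np.argmax(pred < 0.8 * retention[0])]
print(f"fitted (a, b, c) = {params}; retention falls below 80% of baseline near {collapse:.0f}M tokens")
```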
Space Mission Options for Reconnaissance and Mitigation of Asteroid 2024 YR4
Near-Earth asteroid 2024 YR4, discovered in December 2024, initially had about a 3% chance of impacting Earth in 2032; that impact was later ruled out, while the probability of a lunar impact rose to about 4%. Researchers have explored options for space missions to the asteroid, including reconnaissance, deflection, and disruption, with the best reconnaissance missions launching in late 2028 and disruption missions possible with launches between 2029 and 2032.
Provable scaling laws of feature emergence from learning dynamics of grokking
The proposed framework, Li₂, characterizes the grokking behavior of 2-layer nonlinear networks through three stages: lazy learning, independent feature learning, and interactive feature learning. This framework provides insights into how features emerge, their generalizability, and the roles of key hyperparameters, shedding light on the underlying mechanisms of grokking and leading to provable scaling laws of feature emergence, memorization, and generalization.
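For readers unfamiliar with grokking, the canonical setup it refers to — a small two-layer network trained with heavy weight decay on a modular-arithmetic task, where test accuracy lags far behind training accuracy before eventually jumping — can be sketched roughly as below. The modulus, model width, optimizer settings, and step count are assumptions, and observing the full grokking transition typically takes far more steps than shown here.

```python
# Minimal sketch of the standard grokking setup: a small 2-layer network on
# modular addition with strong weight decay. All hyperparameters are assumed.
import torch
import torch.nn as nn

P = 97  # modulus for the a + b (mod P) task
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Train on only half the table; grokking appears when data is limited.
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, test_idx = perm[:split], perm[split:]

model = nn.Sequential(
    nn.Embedding(P, 64),   # shared embedding for both operands
    nn.Flatten(),          # (a, b) -> 128-dim input
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, P),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step}: train acc {accuracy(train_idx):.2f}, test acc {accuracy(test_idx):.2f}")
```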
Characterizing Realistic Workloads on a Commercial Compute-in-SRAM Device
Compute-in-SRAM architectures have shown promise for improving performance and energy efficiency in data-intensive applications, and a comprehensive evaluation of a commercial compute-in-SRAM device, the GSI APU, reveals its potential to outperform traditional CPUs and GPUs. The GSI APU achieved significant performance and energy efficiency gains, including a 54.4×–117.9× reduction in energy consumption compared to an NVIDIA A6000 GPU, while matching its performance on retrieval-augmented generation tasks.
Code
ProofOfThought: LLM-based reasoning using Z3 theorem proving
The ProofOfThought system utilizes a combination of LLM-based reasoning and Z3 theorem proving to evaluate complex queries, with a simple Python interface provided through the z3dsl.reasoning module. The system can be installed using pip and supports batch evaluation of datasets, with examples and an Azure OpenAI integration available in the examples/ directory.
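ProofOfThought's own z3dsl.reasoning interface is not reproduced here, but the underlying pattern — have an LLM translate a natural-language query into formal premises, then let Z3 decide whether the conclusion follows — can be sketched with the standard z3-solver package. The premises below are placeholders for formulas an LLM would generate.

```python
# Sketch of the LLM + theorem-prover pattern, using the standard z3-solver
# package rather than ProofOfThought's z3dsl.reasoning interface.
# The premises stand in for formulas an LLM would produce from a query.
from z3 import Bool, Implies, Not, Solver, unsat

rains, wet, slippery = Bool("rains"), Bool("ground_wet"), Bool("slippery")

premises = [
    Implies(rains, wet),       # "if it rains, the ground gets wet"
    Implies(wet, slippery),    # "if the ground is wet, it is slippery"
    rains,                     # "it is raining"
]
conclusion = slippery          # query: "is it slippery?"

# The conclusion is entailed iff the premises plus its negation are unsatisfiable.
s = Solver()
s.add(*premises)
s.add(Not(conclusion))
print("conclusion entailed:", s.check() == unsat)
```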
Show HN: Run – a CLI universal code runner I built while learning Rust
run is a polyglot command runner and smart REPL that lets you script, compile, and iterate in 25+ languages without touching another CLI. It covers scripting, web, and compiled languages, and provides a consistent command-line interface, persistent REPLs, and examples for each language.
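As a rough illustration of what a universal runner does under the hood — pick a toolchain from the file extension and shell out to it — here is a toy Python sketch. The extension-to-command table is an assumption and says nothing about how run is actually implemented.

```python
# Toy illustration of extension-based dispatch, the general idea behind a
# polyglot runner; the command table is an assumption, not run's behavior.
import subprocess
import sys
from pathlib import Path

COMMANDS = {
    ".py": ["python3"],
    ".js": ["node"],
    ".rb": ["ruby"],
    ".sh": ["bash"],
}

def run_file(path: str) -> int:
    ext = Path(path).suffix
    if ext not in COMMANDS:
        print(f"no runner registered for {ext}", file=sys.stderr)
        return 1
    return subprocess.call(COMMANDS[ext] + [path])

if __name__ == "__main__":
    sys.exit(run_file(sys.argv[1]))
```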
AI-powered open-source code laundering
rEFui is a Retained Mode JavaScript framework that allows developers to build UI projects across web, native, and embedded platforms, with built-in support for Hot Module Replacement (HMR). The framework provides a set of tools and components, including renderers for DOM, HTML, and other platforms, and can be used with various bundlers and build tools, such as Vite and Webpack.
Show HN: RenderarXiv – Search ArXiv from terminal, HTML to read/paste into LLM
RenderarXiv is a tool that lets users search arXiv from the terminal and get clean HTML results that can be read directly or pasted into AI assistants like ChatGPT or Claude. It offers options such as filtering by category, ranking modes, and saving results to a file, making it easy to find and summarize relevant research papers.
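RenderarXiv's own flags are not reproduced here, but the kind of terminal search it wraps can be sketched against arXiv's public Atom API; the query string and result count below are placeholders.

```python
# Sketch of a terminal arXiv search against the public Atom API.
# This illustrates the general idea only; it is not RenderarXiv's CLI.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

query = "cat:cs.CL AND all:grokking"   # placeholder query
url = ("http://export.arxiv.org/api/query?"
       + urllib.parse.urlencode({"search_query": query, "max_results": 5}))

with urllib.request.urlopen(url) as resp:
    feed = ET.fromstring(resp.read())

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in feed.findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text.strip()
    link = entry.find("atom:id", ns).text.strip()
    print(f"- {title}\n  {link}")
```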
Ovi: Open-source video and audio generator model
Ovi is a video and audio generation model that can create synchronized video and audio content from text or text+image inputs, generating 5-second videos at 24 FPS with various aspect ratios. The model is available on platforms like Hugging Face and wavespeed.ai, and its behavior and output can be customized by modifying configuration files, with example prompts and installation instructions provided for easy use.