Tuesday — November 4, 2025
An analysis of 180M job postings shows creative roles declining roughly 30%, a developer builds a Raspberry Pi dog cam using Claude, and a new attention architecture outperforms full attention.
News
AI's Dial-Up Era
The current AI landscape is analogous to the internet of 1995, with similar debates over job displacement and market bubbles. The essay argues that AI's impact on employment is not binary but a race between automation-driven productivity gains and growth in unmet market demand, a balance that varies by industry until demand saturates. While the boom shows signs of a bubble, the massive infrastructure investments by hyperscalers will likely outlast any correction and enable future innovation, much as the dot-com bust laid the groundwork for Web 2.0. The most significant outcome will be the transformation of job categories and the creation of new markets as AI makes previously cost-prohibitive applications economically viable.
I analyzed 180M jobs to see what jobs AI is replacing today
An analysis of 180M global job postings, classified using a fine-tuned sentence transformer and a Random Forest model, reveals AI's selective impact on the job market. Creative execution roles like graphic artists and writers saw significant declines of around 30%, while ML Engineer positions surged by 40%, leading growth across the entire AI infrastructure stack. Notably, software engineering and customer service roles have proven resilient, declining less than the market benchmark, suggesting AI is currently augmenting rather than replacing these jobs.
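A minimal sketch of that kind of classification pipeline, assuming the sentence-transformers and scikit-learn libraries; the model name, labels, and sample postings are illustrative stand-ins, not the post's actual fine-tuned encoder or taxonomy:

```python
# Sketch: classify job postings into role categories with embeddings + a Random Forest.
# Model name and label taxonomy are hypothetical placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import RandomForestClassifier

titles = ["Graphic Designer", "ML Engineer", "Customer Support Agent", "Technical Writer"]
labels = ["creative", "ai_infra", "support", "creative"]  # illustrative taxonomy

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned encoder
X = encoder.encode(titles)                         # dense embeddings, one row per posting

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)

print(clf.predict(encoder.encode(["Prompt Engineer"])))
```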
The Case That A.I. Is Thinking
The article challenges the "stochastic parrot" view of LLMs, arguing that their ability to achieve massive data compression is a form of genuine understanding. It draws parallels between the high-dimensional vector spaces in Transformer architectures and cognitive science theories like Hofstadter's "cognition as recognition" and Kanerva's "Sparse Distributed Memory." While acknowledging current limitations like the lack of embodied and continuous learning, the piece highlights how LLMs are becoming model organisms for neuroscience, demystifying cognition and raising fundamental questions about intelligence.
Syllabi – Open-source agentic AI with tools, RAG, and multi-channel deploy
Syllabi is an open-source, self-hostable platform for building agentic chatbots with a RAG framework over diverse knowledge sources. It supports multi-channel deployment (web, Slack, API) and extends beyond simple Q&A with custom tools and webhooks. Key features include rich outputs such as client-side Python/R code execution and diagram generation, with full control over data, infrastructure, and LLM configuration.
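As a rough illustration of the retrieval step such a platform performs, here is a generic RAG sketch (not Syllabi's actual API): embed a tiny knowledge base, pick the closest chunk by cosine similarity, and prepend it to the prompt an LLM backend would receive.

```python
# Minimal retrieval-augmented prompting sketch (generic, not Syllabi's API).
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["Refunds are processed within 5 business days.",
        "Support is available Monday through Friday."]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question: str) -> str:
    q = encoder.encode([question], normalize_embeddings=True)[0]
    return docs[int(np.argmax(doc_vecs @ q))]  # cosine similarity via dot product

question = "How long do refunds take?"
prompt = f"Context: {retrieve(question)}\n\nQuestion: {question}"
print(prompt)  # the prompt an LLM backend would receive
```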
Agent-o-rama: build, trace, evaluate, and monitor LLM agents in Java or Clojure
Agent-o-rama is a new open-source library for building scalable, stateful LLM agents on the JVM, with first-class APIs for Java and Clojure. It aims to provide an integrated, end-to-end solution for the JVM ecosystem, bringing features similar to LangGraph and LangSmith, such as structured agent graphs, tracing, evaluation, and observability. Built on the Rama platform, it enables agents defined as graphs of parallel-executing functions, with built-in high-performance storage, deployment, and a UI for monitoring and experimentation.
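Agent-o-rama's APIs are JVM-native, so as a language-neutral illustration of the "agent as a graph of parallel-executing functions" idea, here is a hypothetical asyncio sketch in which two independent branches fan out concurrently before a join node; none of these names come from the library itself.

```python
# Conceptual sketch of an agent graph: independent branches run in parallel,
# then a final node joins their results. Illustrative only; Agent-o-rama's
# real API is Java/Clojure on the Rama platform.
import asyncio

async def retrieve_docs(query: str) -> str:
    await asyncio.sleep(0.1)                 # stands in for a vector-store lookup
    return f"docs for {query!r}"

async def call_tool(query: str) -> str:
    await asyncio.sleep(0.1)                 # stands in for an external tool call
    return f"tool result for {query!r}"

async def synthesize(query: str) -> str:
    # Fan out to the independent branches, then join (the parallel-graph step).
    docs, tool = await asyncio.gather(retrieve_docs(query), call_tool(query))
    return f"answer for {query!r} using [{docs}] and [{tool}]"

print(asyncio.run(synthesize("quarterly report")))
```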
Research
Google Quantum AI revived a decades-old concept known as quantum money
A new construction for single-use quantum money addresses the impracticality and privacy flaws of existing schemes. It introduces classical bill validation, which eliminates the need for long-term quantum memory or quantum communication. The protocol also includes a provably secure auditing procedure, enabling users to detect if the issuing authority is tracking them.
Calculus: A Limitless Perspective
This paper proposes a novel foundation for calculus that replaces the concept of limits with approximations. Differentiability is defined as linear approximation, and a formal calculus of error functions is used to rigorously derive standard results, including the Fundamental Theorem of Calculus. The entire framework reframes calculus through the lens of function approximation.
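A sketch of the central definition as limit-free treatments usually state it (illustrative notation; the paper's own symbols and exact formulation may differ):

```latex
% Differentiability as linear approximation, stated without limits:
% f is differentiable at a with derivative L if there is an error function E,
% continuous at 0 with E(0) = 0, such that
\[
  f(a+h) \;=\; f(a) \;+\; L\,h \;+\; E(h)\,h ,
  \qquad E(0)=0,\ E \text{ continuous at } 0 .
\]
% Standard results, including the Fundamental Theorem of Calculus, then follow
% from bookkeeping of such error terms rather than explicit limit arguments.
```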
Kimi Linear: An Expressive, Efficient Attention Architecture
Kimi Linear is a new hybrid linear attention architecture that, for the first time, outperforms full attention in fair comparisons across short-context, long-context, and RL scaling regimes. Its core module, Kimi Delta Attention (KDA), is an expressive linear attention mechanism built on a specialized diagonal-plus-low-rank (DPLR) transition matrix for high hardware efficiency. A 3B-parameter model using Kimi Linear outperformed a full attention baseline while reducing KV cache usage by up to 75% and achieving up to 6x decoding throughput at 1M context, positioning it as an efficient drop-in replacement.
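For intuition, here is a generic delta-rule linear attention recurrence of the kind KDA builds on (a minimal NumPy sketch, not Kimi's actual KDA kernel or its DPLR parameterization): a fixed-size state matrix is corrected toward each new value, so memory stays constant in sequence length instead of growing like a KV cache.

```python
# Generic delta-rule linear attention recurrence (illustrative, not the KDA kernel).
# State S is a (d_k, d_v) matrix carried across tokens.
import numpy as np

def delta_rule_attention(Q, K, V, beta):
    """Q, K: (T, d_k); V: (T, d_v); beta: (T,) per-token update rates in [0, 1]."""
    T, d_k = K.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.zeros((T, d_v))
    for t in range(T):
        k, v, q, b = K[t], V[t], Q[t], beta[t]
        pred = S.T @ k                       # what the memory currently returns for key k
        S = S + b * np.outer(k, v - pred)    # delta update: correct memory toward v
        out[t] = S.T @ q                     # read with the query
    return out

rng = np.random.default_rng(0)
o = delta_rule_attention(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)),
                         rng.normal(size=(8, 4)), np.full(8, 0.5))
print(o.shape)  # (8, 4)
```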
Rateless Bloom Filters
Existing set reconciliation protocols are inefficient for variable-sized elements when the difference cardinality, d, is large. This work introduces a two-stage hybrid protocol featuring a novel Rateless Bloom Filter (RBF) that dynamically adapts to an unknown d. The RBF matches the communication complexity of an optimally-sized static filter without requiring prior parametrization. The resulting RBF-IBLT hybrid protocol reduces communication costs by over 20% for sets with Jaccard indices below 85%.
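For background, a classic fixed-size Bloom filter looks like the sketch below (illustrative only); the rateless variant in the paper differs in that it can keep streaming filter cells until the receiver has enough to cover the unknown difference cardinality d, rather than committing to a size up front.

```python
# Minimal classic Bloom filter for set membership (background sketch, not the RBF).
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item: bytes):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add(b"block-123")
print(b"block-123" in bf, b"block-999" in bf)  # True, (almost certainly) False
```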
Deep sequence models tend to memorize geometrically
This paper contrasts the traditional associative view of parametric memory with a geometric one, arguing that Transformers synthesize a global geometry of facts. This emergent structure allows the model to reason about non-co-occurring entities by simplifying complex reasoning into a 1-step geometric task. The authors attribute the formation of this geometry to a natural spectral bias, suggesting this perspective offers new ways to improve knowledge representation in LLMs.
Code
Show HN: React-like Declarative DSL for building synthetic LLM datasets
Torque is a declarative, typesafe DSL for generating complex synthetic datasets for LLMs. It allows developers to compose conversation schemas like components, using AI to create realistic variations for fine-tuning. The library is provider-agnostic and features Zod-based schemas for typesafe tool calling, integrated Faker.js for reproducible data, and optimizations for cost and concurrent generation.
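Torque itself is a TypeScript DSL built around Zod; as a language-neutral illustration of the underlying idea of schema-driven, reproducible synthetic conversations, here is a small Python sketch using the faker package (the template and field names are hypothetical, not Torque's API):

```python
# Generic illustration of schema-driven synthetic conversation data (not Torque's API):
# fill a fixed turn template with seeded fake data so the dataset is reproducible.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible variations

def make_example() -> list[dict]:
    name, city = fake.name(), fake.city()
    return [
        {"role": "user", "content": f"Hi, I'm {name}. Can you ship to {city}?"},
        {"role": "assistant", "content": f"Yes, we ship to {city}. Anything else, {name.split()[0]}?"},
    ]

dataset = [make_example() for _ in range(3)]
print(dataset[0])
```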
Octocode MCP – AI Researcher for Smart, Deep Multi-Repo Code Context
Octocode MCP is a Model Context Protocol server that provides AI assistants with structured access to GitHub for advanced code research and analysis. It features an agentic /research command that orchestrates specialized tools for searching code, repositories, and PRs, enabling deep analysis of codebases while optimizing for token efficiency. This allows LLMs to learn directly from production code, moving beyond their training data to generate more context-aware plans and implementations.
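As a rough illustration of the kind of structured query such a code-research tool issues under the hood, here is a plain GitHub REST search call (generic, not Octocode's MCP tool schema or its /research orchestration):

```python
# Illustrative GitHub repository search via the public REST API (unauthenticated,
# rate-limited); an MCP server would wrap queries like this as structured tools.
import requests

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "model context protocol language:Go", "sort": "stars", "per_page": 3},
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
for repo in resp.json().get("items", []):
    print(repo["full_name"], repo["stargazers_count"])
```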
Show HN: I built a Raspberry Pi webcam to train my dog (using Claude)
A product manager built a DIY dog camera using a Raspberry Pi, Python, and Flask to monitor their pet's separation anxiety. They used Claude to generate the initial code for the livestreaming web server, demonstrating how LLMs can accelerate prototyping. The project was later extended with a custom stopwatch UI and ngrok for remote access, showcasing an iterative development process enabled by AI assistance.
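A minimal sketch of the core livestreaming idea, assuming a USB webcam read with OpenCV rather than the Pi camera module; the author's actual code differs and adds the stopwatch UI and ngrok tunnel on top:

```python
# Minimal MJPEG livestream with Flask + OpenCV (illustrative sketch, not the post's code).
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # first attached webcam

def frames():
    # Yield JPEG-encoded frames in multipart format for browsers to display as video.
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/stream")
def stream():
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```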
Show HN: AgentML – SCXML for Deterministic AI Agents (MIT)
AgentML is a universal language for defining AI agent behavior, analogous to HTML for web browsers, designed to combat framework fragmentation. It leverages the W3C SCXML standard to create deterministic state machines, ensuring predictable agent execution and using schema-guided events to constrain LLM outputs to structured JSON. Agents defined in .aml files are executed by a native Go/WASM runtime, with planned transformers for interoperability with frameworks like LangGraph and a vision for extensibility via WASM components.
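A conceptual sketch of the deterministic-state-machine idea in Python (not AgentML's .aml/SCXML syntax): each state declares which events it accepts, so an event parsed from an LLM's structured JSON output either matches an allowed transition or is rejected.

```python
# Toy deterministic state machine (conceptual only; AgentML uses SCXML-based .aml files).
TRANSITIONS = {
    "collecting": {"details_provided": "confirming", "user_cancelled": "done"},
    "confirming": {"user_confirmed": "done", "user_cancelled": "collecting"},
}

def step(state: str, event: str) -> str:
    allowed = TRANSITIONS.get(state, {})
    if event not in allowed:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return allowed[event]

state = "collecting"
for event in ["details_provided", "user_confirmed"]:  # e.g. parsed from LLM JSON output
    state = step(state, event)
print(state)  # done
```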
Show HN: Extrai – An open-source tool to fight LLM randomness in data extraction
Extrai is a Python library that uses LLMs to extract structured data from text and map it onto SQLModel classes for database persistence. Its core feature is a consensus mechanism that improves accuracy by running multiple extraction requests across one or more LLM providers and consolidating the results. The library also supports dynamic SQLModel generation from prompts, hierarchical extraction for complex nested data, and automatic few-shot example generation.
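A small sketch of the target-schema-plus-consensus idea: a SQLModel class defines the shape of the extracted record, several extraction runs are consolidated by majority vote per field. The extract_once stub and the voting logic are placeholders, not Extrai's real API; only SQLModel is used as documented.

```python
# Sketch of schema-targeted extraction with a consensus step (placeholder logic,
# not Extrai's API; a real run would call one or more LLM providers).
from collections import Counter
from sqlmodel import SQLModel, Field

class Invoice(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    vendor: str
    total: float

def extract_once(text: str) -> dict:
    # Placeholder for a single LLM extraction call.
    return {"vendor": "Acme Corp", "total": 1299.00}

def extract_with_consensus(text: str, runs: int = 3) -> Invoice:
    # Run several extractions and keep the most common value for each field.
    results = [extract_once(text) for _ in range(runs)]
    vendor = Counter(r["vendor"] for r in results).most_common(1)[0][0]
    total = Counter(r["total"] for r in results).most_common(1)[0][0]
    return Invoice(vendor=vendor, total=total)

print(extract_with_consensus("Invoice from Acme Corp, total $1,299.00"))
```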