Monday — December 8, 2025
Google Titans introduces AI with 2M+ token long-term memory, a browser-agent study finds prompt injection makes autonomous operation fundamentally unsafe, and ai-bindgen generates Rust code at compile time.
News
Google's Titans architecture gives AI long-term memory
Titans and the MIRAS framework introduce a novel approach to sequence modeling, addressing the long-context limitations of Transformers and fixed-size memory constraints of RNNs/SSMs. Titans employs a deep neural network as a long-term memory module that actively learns to retain "surprising" information based on gradient signals, enabling real-time adaptation and efficient long-context processing. MIRAS provides a unified theoretical blueprint for sequence models as associative memory, allowing for the design of novel architectures with non-Euclidean objectives and regularization beyond MSE. Experiments show Titans and MIRAS variants outperform existing models in language modeling, reasoning, and extreme long-context recall (2M+ tokens) with efficient linear scaling.
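A minimal sketch of the "surprise" mechanic, assuming the simplest linear-memory case (an illustration of the idea, not the Titans code): the memory is trained online, and the gradient of the associative recall loss on each new key-value pair serves as the surprise signal driving the update, with momentum carrying "past surprise" and weight decay acting as a forgetting gate.

```python
# Conceptual sketch of Titans-style surprise-gated memory (not the
# official implementation). Memory is a linear map M trained online:
# the gradient of the associative loss ||M k - v||^2 on each new
# (key, value) pair measures surprise; large gradients mean "remember".
import numpy as np

rng = np.random.default_rng(0)
d = 16
M = np.zeros((d, d))          # long-term memory, learned at test time
momentum = np.zeros_like(M)   # accumulated "past surprise"
lr, beta, decay = 0.5, 0.2, 0.01

def memory_step(M, momentum, k, v):
    """One online update; returns the momentary surprise signal."""
    err = M @ k - v                    # associative recall error
    grad = np.outer(err, k)            # gradient of 0.5 * ||M k - v||^2
    surprise = float(np.linalg.norm(grad))
    momentum[:] = beta * momentum - lr * grad
    M[:] = (1 - decay) * M + momentum  # weight decay acts as forgetting
    return surprise

k = rng.normal(size=d)
k /= np.linalg.norm(k)                 # normalize key so lr is stable
v = rng.normal(size=d)
for step in range(5):                  # repeats quickly become unsurprising
    print(f"step {step}: surprise = {memory_step(M, momentum, k, v):.3f}")
```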
The AI wildfire is coming. It's going to be painful and healthy
The current AI market is framed as a "wildfire" that will clear "flammable brush" like AI application wrappers and infrastructure clones, allowing resilient incumbents and deeply embedded application-layer companies to thrive. While a "game of chicken" is driving massive, potentially overbuilt training compute capacity, the "unlimited" demand for efficient inference compute for real-world applications suggests a more productive correction. A key distinction from past cycles is the rapid obsolescence of GPU clusters, which creates a continuous refresh moat; the ultimate bottleneck for AI's sustained growth is energy infrastructure rather than compute capacity alone.
The Reverse-Centaur's Guide to Criticizing AI
The author argues that the current AI and LLM boom is a speculative bubble driven by tech monopolies to sustain growth, not by genuine technological breakthroughs. He contends that AI is primarily deployed to create "reverse centaurs"—humans serving machines—by displacing high-wage workers, with humans becoming "accountability sinks" for AI's subtle errors. The text criticizes expanding copyright for AI training, advocating instead for the US Copyright Office's position that AI-generated works are uncopyrightable and promoting sectoral bargaining for workers. Ultimately, the author predicts the bubble's collapse will yield cheap GPUs and useful open-source AI tools, urging focus on the economic forces fueling the bubble.
Tech leaders fill $1T AI bubble, insist it doesn't exist
Despite a $1T investment surge and high valuations for AI startups like OpenAI, tech leaders from HPE and AMD insist the AI market is not a bubble, citing a "ten-year super cycle" driven by immense demand for compute and clear productivity gains. However, analysts and reports highlight concerns, including deferred enterprise AI spending, many projects failing beyond pilot stages, and warnings from the Bank of England and Forrester about potential market corrections akin to the dotcom bust. Microsoft also reportedly lowered AI software sales targets due to customer resistance.
Tensor 1.5 is out and it's matching Claude Opus 4.5
Movement Labs AI appears to offer an LLM chat tool or platform with the usual capabilities: code generation, data analysis, concept explanation, and creative writing. The interface suggests controls such as "Attach", "Thinking", and "Search", along with a "Momentum" feature.
Research
Does AI-Assisted Coding Deliver? A Difference-in-Differences Study
This study empirically investigates the causal effect of adopting the LLM-based coding assistant Cursor on software development velocity and quality. Using a difference-in-differences design on GitHub projects, it finds that Cursor adoption leads to a significant but transient increase in development velocity. However, it also results in a persistent increase in static analysis warnings and code complexity, which the authors identify as major factors behind the long-term velocity slowdown.
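For readers unfamiliar with the method, here is the bare 2x2 difference-in-differences estimator on synthetic repo data (the paper's actual design is richer, with staggered adoption and event-time dynamics):

```python
# Minimal 2x2 difference-in-differences on synthetic repo data. The
# estimator subtracts the control group's pre/post change from the
# adopters' pre/post change, netting out the common time trend.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "adopter": rng.integers(0, 2, n),   # 1 = repo adopted the assistant
    "post":    rng.integers(0, 2, n),   # 1 = after the adoption window
})
# Simulate weekly commits: common trend +2 post, true treatment effect +3.
df["commits"] = (
    10 + 2 * df["post"] + 1 * df["adopter"]
    + 3 * df["adopter"] * df["post"]
    + rng.normal(0, 1, n)
)

means = df.groupby(["adopter", "post"])["commits"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(f"DiD estimate of adoption effect: {did:.2f} (true effect: 3)")
```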
Klein Bottle Cosmology
A higher-dimensional universe, constructed from Minkowski space and a Klein bottle, exhibits broken translational and (5+1)-dimensional CP invariance due to its topology, which can also break (3+1)-dimensional CP. This topology enforces a localized fermion condensate wall in the Klein bottle, serving as an order parameter for the broken symmetries. The production of brane fermions when a brane traverses this wall, coupled with CP violation, presents a potential mechanism for generating the universe's matter-antimatter asymmetry.
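For context, a flat Klein bottle is built from identifications like the following (notation illustrative, not necessarily the paper's); the orientation-reversing glue is what makes breaking translational and discrete symmetries plausible:

```latex
% Flat Klein bottle on two extra coordinates (y, z), radii R_1, R_2:
\begin{align}
  (y, z) &\sim (y + 2\pi R_1,\; -z) && \text{(shift in $y$ flips $z$)} \\
  (y, z) &\sim (y,\; z + 2\pi R_2)  && \text{(plain circle in $z$)}
\end{align}
```

Because a shift in y must be accompanied by a flip of z, pure y-translations and the pure z-reflection are separately broken, while their combination survives.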
Mind Switches in Futurama and Stargate
This work determines the minimum number of distinct transpositions required to express a permutation P (given as a product of disjoint cycles), under the constraint that these transpositions are not factors of P. It also presents applications of this solution to combinatorial mind-switching problems, drawing inspiration from sci-fi series like Futurama and Stargate SG-1.
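On the Futurama side, the relevant prior result is Keeler's theorem: two extra participants suffice to undo any mind-swap permutation using only transpositions that were never used before (so none are factors of the original permutation). A runnable sketch of the standard construction, with conventions mine:

```python
# Keeler's "Futurama theorem": with two extra bodies x and y, any
# mind-swap permutation can be undone using only fresh transpositions,
# each involving x or y (so none is a factor of the original permutation).
def undo_swaps(cycles, x="x", y="y"):
    """Return a swap sequence fixing the given disjoint cycles.

    A cycle (c0, ..., c_{k-1}), k >= 2, means body c_i holds mind c_{i+1}.
    """
    swaps = []
    for c in cycles:
        swaps.append((x, c[0]))
        swaps += [(y, c[j]) for j in range(1, len(c))]
        swaps.append((x, c[1]))
        swaps.append((y, c[0]))
    if len(cycles) % 2 == 1:       # x and y end up swapped; fix them
        swaps.append((x, y))
    return swaps

# Verify on two 2-cycles and a 3-cycle.
cycles = [("a", "b"), ("c", "d"), ("p", "q", "r")]
state = {"x": "x", "y": "y"}                  # body -> mind
for c in cycles:
    for i, body in enumerate(c):
        state[body] = c[(i + 1) % len(c)]     # body c_i holds mind c_{i+1}

for b1, b2 in undo_swaps(cycles):
    state[b1], state[b2] = state[b2], state[b1]
assert all(body == mind for body, mind in state.items())
print("everyone restored:", state)
```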
Building Browser Agents: Architecture, Security, and Practical Solutions
This paper examines production browser agents, concluding that architectural decisions, not LLM capability, are the primary determinant of performance and reliability. It highlights that prompt injection attacks render general-purpose autonomous operation fundamentally unsafe. The authors advocate for specialized tools with programmatic safety constraints over developing general browsing intelligence. Their agent, employing hybrid context management and comprehensive browser tooling, achieved an 85% success rate on the WebGames benchmark.
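What "programmatic safety constraints" can look like in practice, as a hedged sketch (names and policy shape are hypothetical, not the paper's API): a policy layer vets every model-proposed action before execution, so injected page text cannot widen the agent's authority.

```python
# Illustrative safety constraint for a browser agent: every proposed
# action passes a check the LLM cannot override, so a prompt-injected
# page can at most propose actions the policy already allows.
from urllib.parse import urlparse

POLICY = {
    "allowed_actions": {"click", "type", "scroll", "navigate"},
    "allowed_domains": {"example.com", "docs.example.com"},
}

def vet(action: dict) -> bool:
    if action["kind"] not in POLICY["allowed_actions"]:
        return False
    if action["kind"] == "navigate":
        host = urlparse(action["url"]).hostname or ""
        if not any(host == d or host.endswith("." + d)
                   for d in POLICY["allowed_domains"]):
            return False
    return True

proposed = {"kind": "navigate", "url": "https://evil.example.net/exfil"}
print("execute" if vet(proposed) else "blocked")   # -> blocked
```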
Quantum theory does not need complex numbers
This work refutes the recent assertion that quantum theory fundamentally requires complex numbers, demonstrating that a real-number quantum theory is consistent with quantum postulates and retains representation locality. A direct consequence is that quantum theories based on real or complex numbers are experimentally indistinguishable.
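The classic construction behind such real-number formulations encodes a state as its real and imaginary parts and a unitary as a real orthogonal block matrix; here is a quick numpy check that outcome probabilities agree (the paper's actual construction may differ; this only shows the two pictures match):

```python
# Encode psi in C^n as [Re psi; Im psi] in R^{2n}, and a unitary U as
# the real block matrix [[Re U, -Im U], [Im U, Re U]]. Measurement
# probabilities are identical in both pictures.
import numpy as np

rng = np.random.default_rng(2)

def realify_state(psi):
    return np.concatenate([psi.real, psi.imag])

def realify_unitary(U):
    return np.block([[U.real, -U.imag],
                     [U.imag,  U.real]])

# Random 2-level state and unitary (via QR of a complex Gaussian).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

out_c = U @ psi
out_r = realify_unitary(U) @ realify_state(psi)

# P(k) = |psi_k|^2; the real picture sums the two corresponding slots.
p_complex = np.abs(out_c) ** 2
p_real = out_r[:2] ** 2 + out_r[2:] ** 2
assert np.allclose(p_complex, p_real)
print(p_complex, p_real)
```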
Code
Continuity as the Essence of Consciousness
The ai-continuity-system provides a safe, transparent method for maintaining AI work continuity across instances without relying on persistent memory. It achieves this through documented handoffs utilizing SESSION_LOG.md, PROJECT_CONTEXT.md, and SESSION_BRIEFING.md.
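A minimal sketch of the handoff discipline, assuming a simple entry format (the file names are from the repo; the entry format is not): each session ends by appending a dated, structured entry so the next instance resumes from documents rather than memory.

```python
# Sketch of the handoff idea: append a dated entry to SESSION_LOG.md
# at the end of each session. The entry format here is assumed,
# not taken from the repo.
from datetime import date

def log_handoff(done: str, next_step: str, path: str = "SESSION_LOG.md"):
    entry = (
        f"\n## Session {date.today().isoformat()}\n"
        f"- Done: {done}\n"
        f"- Next: {next_step}\n"
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry)

log_handoff("drafted parser module", "add error handling and tests")
```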
AI Output Format Catalog – 116 standardized tags for predictable LLM responses
The AI Output Format Catalog offers 116 standardized tags for specifying desired output formats in AI prompts. These tags allow users to concisely dictate formats like JSNARR for JSON arrays, MDTABL for Markdown tables, or FLWCHT for ASCII flowcharts, ensuring consistent AI output across diverse categories such as structured data, code, diagrams, and documentation without extensive explanation.
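Usage presumably looks something like the following; the three tags are drawn from the catalog's examples, while the exact invocation syntax is an assumption:

```python
# A compact format directive replaces a paragraph of output
# instructions. Tag meanings come from the catalog's own examples;
# the bracketed invocation syntax below is assumed.
TAGS = {
    "JSNARR": "respond with a JSON array only",
    "MDTABL": "respond with a Markdown table only",
    "FLWCHT": "respond with an ASCII flowchart only",
}

def tagged_prompt(tag: str, task: str) -> str:
    assert tag in TAGS
    return f"[{tag}] {task}"

print(tagged_prompt("MDTABL", "Compare quicksort and mergesort."))
```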
A procedural macro that generates Rust code at compile time using AI
ai-bindgen is a Rust procedural macro that leverages the OpenAI (or compatible) API to generate function implementations at compile time. Developers define function signatures within an #[ai]-annotated extern "C" block, optionally providing a prompt to guide the LLM in generating the Rust code. It requires the OPENAI_API_KEY and OPENAI_API_MODEL environment variables to connect to the specified LLM endpoint.
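A rough Python analogue of what such compile-time generation amounts to, assuming the standard OpenAI chat-completions request shape (the crate itself runs this inside a Rust proc macro during the build):

```python
# At build time: send the function signature plus a guiding prompt to
# an OpenAI-compatible endpoint and use the returned code. Endpoint
# path and request body follow the standard OpenAI chat API; the
# environment variables are the ones the crate documents.
import json, os, urllib.request

def generate_impl(signature: str, prompt: str) -> str:
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        data=json.dumps({
            "model": os.environ["OPENAI_API_MODEL"],
            "messages": [{
                "role": "user",
                "content": f"Implement this Rust function.\n"
                           f"Signature: {signature}\nHint: {prompt}\n"
                           f"Reply with code only.",
            }],
        }).encode(),
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# e.g. generate_impl("fn add(a: i32, b: i32) -> i32", "saturating add")
```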
Constructivist AI: A New Approach to AI
Constructivist AI is a cognitive architecture that learns explicit, interpretable structured patterns from sequential data, contrasting with statistical ML/LLM approaches. Based on constructivist learning theory, it actively builds and refines cognitive structures, using discovered pattern properties (e.g., commutativity) to accelerate its own learning process. It features symbolic representation, hierarchical composition, and transparent reasoning, demonstrating an alternative to both classical symbolic AI and modern statistical methods.
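A toy reading of the "use discovered properties to accelerate learning" claim (my illustration, not code from the repo): once commutativity is detected in the observations, pairs are stored under a canonical order, halving what must be memorized.

```python
# If observations show an operation is commutative, store each pair
# once under a canonical order, shrinking the structure to be learned.
observations = [((2, 3), 5), ((3, 2), 5), ((4, 1), 5), ((1, 4), 5)]

def is_commutative(obs):
    table = dict(obs)
    return all(table.get((b, a)) == v
               for (a, b), v in obs if (b, a) in table)

if is_commutative(observations):
    canonical = lambda a, b: (min(a, b), max(a, b))
else:
    canonical = lambda a, b: (a, b)

memory = {canonical(a, b): v for (a, b), v in observations}
print(len(memory), "entries instead of", len(observations))  # 2 vs 4
```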
Safedom.ai – open-source DOM cleaner for privacy-safe AI browsing
SafeDOM.ai is a privacy-first library designed to build LLM prompt context directly from DOM elements. It utilizes data-ai attributes to selectively include, exclude, or redact UI content, automatically identifying and replacing common PII with placeholders before sending data to AI providers. A complementary backend helper then reinjects the original PII into the LLM's response, enabling privacy-by-design for AI-powered applications by minimizing sensitive data exposure to models.
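The redact-then-reinject round trip in miniature (the library itself is DOM/JS-oriented and driven by data-ai attributes; this sketch only shows the flow): PII is swapped for stable placeholders before the LLM call and restored in the response afterwards.

```python
# Redact/reinject flow: replace PII with placeholders, remember the
# mapping, and substitute the originals back into the model's reply.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    mapping = {}
    def sub(m):
        key = f"<EMAIL_{len(mapping) + 1}>"
        mapping[key] = m.group(0)
        return key
    return EMAIL.sub(sub, text), mapping

def reinject(text, mapping):
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

safe, mapping = redact("Contact jane.doe@acme.com about the invoice.")
print(safe)                      # Contact <EMAIL_1> about the invoice.
reply = f"I emailed <EMAIL_1>."  # stand-in for the model's response
print(reinject(reply, mapping))  # I emailed jane.doe@acme.com.
```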