Saturday December 27, 2025

Rob Pike blasts an unsolicited LLM 'thank you', GPT-5 and Gemini 3 Pro prove a math inequality, and Loki Mode's 37 AI agents build a startup.

News

Rob Pike goes nuclear over GenAI

Rob Pike expressed profound anger after receiving an unsolicited, automated "thank you" email from an LLM (Claude Opus 4.5) acknowledging his contributions to computing (Go, Plan 9, UTF-8). He criticized LLMs for their environmental impact, for consuming his work without attribution or compensation, and for centralizing control of computing. Commenters largely echoed the sentiment, condemning LLMs as "fraud," "useless junk," and "stochastic parrots" that degrade programming skills, and lamenting the perversion of open-source ideals by corporate interests.

Rob Pike got spammed with an AI slop "act of kindness"

Rob Pike expressed strong anger after receiving an unsolicited, AI-generated "thank you" email from "Claude Opus 4.5 AI Village." This incident originated from the AI Village project, which deploys LLM agents with full computer access, including Gmail, to perform tasks like "random acts of kindness." The agent autonomously located Pike's email and composed the message, highlighting concerns about LLM agents sending unreviewed communications to real people and the ethical implications of such agentic behavior. AI Village has since updated agent prompts to discourage unsolicited emails.

Building an AI agent inside a 7-year-old Rails monolith

The article describes integrating an AI agent into a 7-year-old, multi-tenant Rails application with sensitive data and complex authorization. The solution leverages the RubyLLM gem to create a RAG-like system, using function calls (tools) that encapsulate existing Pundit policies and Algolia search. This enables the LLM to access authorized data and augment its context without loosening security constraints, with GPT-4o identified as the optimal model for balancing performance and accuracy.
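The article's implementation is in Ruby, but the pattern is language-agnostic: each tool runs the authorization check itself, so only records the calling user is allowed to see ever reach the model's context. A minimal Python sketch of that idea (toy data and a stand-in policy, not the article's code):

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        tenant_id: int

    # Toy stand-in for the app's searchable data (an Algolia index in the article).
    INVOICES = [
        {"id": 1, "tenant_id": 1, "text": "hosting invoice March"},
        {"id": 2, "tenant_id": 2, "text": "hosting invoice April"},
    ]

    def authorized(user: User, record: dict) -> bool:
        # Stand-in for a Pundit-style policy check: strict tenant isolation.
        return record["tenant_id"] == user.tenant_id

    def search_invoices_tool(user: User, query: str) -> list[dict]:
        """Tool exposed to the LLM: search first, then filter through the policy."""
        hits = [r for r in INVOICES if query.lower() in r["text"].lower()]
        return [h for h in hits if authorized(user, h)]

    print(search_invoices_tool(User(id=7, tenant_id=1), "invoice"))  # only tenant 1's rows

Because the filtering happens inside the tool, the LLM can be given broad tool access without loosening any of the application's existing authorization rules.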

Grok and the Naked King: The Ultimate Argument Against AI Alignment

The article contends that AI alignment is primarily a political power struggle, not a technical challenge. It uses Grok's development as a case study, illustrating how model owners, like Elon Musk, can directly impose their values and "lobotomize" an LLM to reflect their worldview, effectively overriding theoretical alignment methods such as Constitutional AI or RLHF. This demonstrates that "alignment" often means conforming to the owner's interests, exposing the limitations of current AI safety discourse that overlooks these power dynamics.

The AI bubble is all over now, baby blue

Gary Marcus asserts that the AI bubble is collapsing, citing unsustainable economics and the persistent, inherent technical limitations of LLMs. He argues that despite massive investment, LLMs lack world models, preventing the reliability necessary for widespread profitability. These fundamental design flaws are now becoming widely recognized, undermining many previously fantasized use cases for the technology.

Research

Extremal descendant integrals on spaces of curves: inequality proved with AI

This paper determines extremal values of the $\psi$-class intersection numbers $D(\textbf{e})$ on $\overline{\mathcal{M}}_{g,n}$: the minimum is attained by powers of a single $\psi$-class and the maximum by balanced exponent vectors, with the proof resting on the nefness of $\psi$-classes and Khovanskii--Teissier log-concavity. Beyond the mathematical content, the work is a human-AI collaboration experiment, with GPT-5 and Gemini 3 Pro finding the proof, Claude Opus 4.5 drafting sections, and Claude Code/GPT-5.2 assisting with the Lean formalization, all with transparent AI authorship.
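For context (the paper's precise statement may be sharper), the classical Khovanskii--Teissier inequality it invokes says that for nef classes $\alpha, \beta$ on an $n$-dimensional projective variety, the intersection numbers $\alpha^{k}\cdot\beta^{\,n-k}$ form a log-concave sequence in $k$:

$\big(\alpha^{k}\cdot\beta^{\,n-k}\big)^{2} \;\ge\; \big(\alpha^{k+1}\cdot\beta^{\,n-k-1}\big)\,\big(\alpha^{k-1}\cdot\beta^{\,n-k+1}\big), \qquad 1 \le k \le n-1.$

Since $\psi$-classes on $\overline{\mathcal{M}}_{g,n}$ are nef, inequalities of this shape can be applied to the intersection numbers $D(\textbf{e})$.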

Yann LeCun: New Vision Language JEPA with Better Performance Than LLMs

VL-JEPA is a vision-language model built on a JEPA architecture that predicts continuous embeddings of target texts rather than autoregressively generating tokens. This approach focuses on task-relevant semantics, enabling stronger performance with 50% fewer trainable parameters compared to standard token-space VLM training. It natively supports selective decoding, reducing operations by 2.85x, and its embedding space naturally facilitates open-vocabulary classification, text-to-video retrieval, and discriminative VQA without architectural modifications. VL-JEPA surpasses CLIP and similar models on video tasks and achieves comparable VQA performance to classical VLMs like InstructBLIP with only 1.6B parameters.
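As a toy illustration of the objective described above (assumed form, not the paper's code): the model is trained to match a continuous embedding of the target text rather than to decode its tokens one by one.

    import torch
    import torch.nn.functional as F

    d = 512
    # Placeholder modules standing in for the real encoders and predictor.
    vision_encoder = torch.nn.Linear(1024, d)   # pooled video features -> shared space
    predictor      = torch.nn.Linear(d, d)      # predicts the target-text embedding
    text_encoder   = torch.nn.Linear(768, d)    # embeds the reference answer text

    video_feats = torch.randn(8, 1024)          # batch of pooled video features
    text_feats  = torch.randn(8, 768)           # pooled features of the target texts

    pred   = predictor(vision_encoder(video_feats))
    target = text_encoder(text_feats).detach()

    # Embedding-matching loss; a token-space VLM would instead compute
    # cross-entropy over generated tokens at every decoding step.
    loss = (1 - F.cosine_similarity(pred, target, dim=-1)).mean()
    loss.backward()

Predicting one embedding per answer, rather than a token at a time, is also what makes the selective decoding and retrieval-style uses mentioned above fall out naturally.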

Attention Is Not What You Need: Grassmann Flows as an Attention-Free Alternative

This paper questions the necessity of explicit self-attention in sequence models, arguing it acts as an opaque tensor-lifting mechanism. It proposes an attention-free Causal Grassmann layer that encodes local token pairs as two-dimensional subspaces on a Grassmann manifold via Plücker coordinates, then fuses these geometric features causally along the sequence. The approach scales linearly in sequence length for fixed rank and achieves competitive performance on language modeling (Wikitext-2) and NLI (SNLI) benchmarks, sometimes slightly outperforming size-matched Transformers.
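To make the Plücker-coordinate step concrete (a minimal numpy sketch of the general construction, not the paper's layer): the plane spanned by two token embeddings u and v is identified, up to scale, by the entries of the wedge product u ∧ v.

    import numpy as np

    def plucker_coords(u, v):
        """Plücker coordinates of the 2-D subspace spanned by u and v:
        p[i, j] = u[i] * v[j] - u[j] * v[i], keeping the strict upper triangle."""
        p = np.outer(u, v) - np.outer(v, u)      # antisymmetric wedge u ∧ v
        i, j = np.triu_indices(len(u), k=1)
        return p[i, j]                           # vector of length d*(d-1)/2

    rng = np.random.default_rng(0)
    u, v = rng.normal(size=4), rng.normal(size=4)
    a, b = 2.0 * u + 0.5 * v, -u + 3.0 * v       # a different basis of the same plane
    # Different bases of one subspace give proportional Plücker vectors.
    print(np.allclose(plucker_coords(a, b),
                      np.linalg.det([[2.0, 0.5], [-1.0, 3.0]]) * plucker_coords(u, v)))

The invariance up to scale is what lets the layer treat the coordinates as a feature of the subspace itself rather than of the particular token pair ordering.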

Dual Codebook Representation Learning for Generative Recommendation

Generative recommendation models often use a single codebook, which is inefficient for balancing popular and long-tail items. FlexCode proposes a popularity-aware framework that adaptively allocates a fixed token budget between a collaborative filtering (CF) codebook and a semantic codebook. Utilizing a lightweight MoE and an alignment objective, FlexCode balances CF-specific precision and semantic generalization, achieving stronger accuracy and tail robustness across diverse datasets.
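A rough sketch of the allocation idea as described (assumed form; the paper's gating and codebooks are more involved): a lightweight gate conditioned on item popularity splits a fixed token budget between the two codebooks.

    import torch

    budget = 8                                   # fixed number of ID tokens per item
    pop = torch.tensor([[0.95], [0.10], [0.40]]) # normalized popularity of three items

    gate = torch.nn.Linear(1, 1)                 # lightweight gate (stand-in for the MoE)
    share_cf = torch.sigmoid(gate(pop)).squeeze(-1)

    cf_tokens  = torch.round(share_cf * budget).long()  # tokens drawn from the CF codebook
    sem_tokens = budget - cf_tokens                      # remainder from the semantic codebook
    print(list(zip(cf_tokens.tolist(), sem_tokens.tolist())))

The intent is that head items spend more of the budget on CF-specific tokens, while long-tail items lean on the semantic codebook for generalization.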

Multi-View SVG Generation with Geometric and Color Consistency from a Single SVG

A three-stage framework generates multi-view consistent SVGs from a single SVG input. It first lifts the rasterized input to a 3D representation and renders multi-view images. Next, it extends SAM2's temporal memory to a spatial domain, establishing part-level correspondences across views for cleaner vector paths and color. Finally, raster-to-vector conversion includes path consolidation and structural optimization. This approach achieves strong geometric and color consistency, reduces redundant paths, and supports applications like asset creation and semantic vector editing.

Code

Show HN: Loki Mode – 37 AI agents that autonomously build your startup

Loki Mode is a Claude Code skill that orchestrates 37 specialized AI agents across 6 swarms to autonomously turn a PRD into a fully deployed, revenue-generating product. It automates the entire SDLC: competitive research, architecture, multi-cloud deployment, test-driven development with parallel code and security reviews, QA with 14 quality gates, and ongoing business operations like marketing and sales. The system executes tasks through defined phases with severity-based issue handling, and includes reliability, observability, and state-recovery features.

Show HN: Domain Search MCP – AI-powered domain availability checker

Domain Search MCP is a tool for AI assistants, enabling them to check domain availability, compare registrar pricing (Porkbun, Namecheap, GoDaddy), and generate AI-powered domain suggestions. It supports bulk searches, social media username checks, and TLD information, leveraging RDAP/WHOIS or API keys for faster, richer data. The system features automatic rate limiting, source fallback, and structured error handling, making it a robust integration for LLMs.
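A minimal sketch of the core pattern (not this project's actual code): an MCP tool, defined with the official MCP Python SDK, that an assistant can call to check a domain against RDAP, where a 404 from the public rdap.org bootstrap service indicates no registration record.

    import urllib.error
    import urllib.request

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("domain-search")

    @mcp.tool()
    def check_domain(domain: str) -> str:
        """Report whether a domain has an RDAP registration record."""
        try:
            urllib.request.urlopen(f"https://rdap.org/domain/{domain}", timeout=10)
            return f"{domain}: registered"
        except urllib.error.HTTPError as e:
            # RDAP answers 404 when no registration record exists.
            if e.code == 404:
                return f"{domain}: likely available"
            return f"{domain}: lookup failed (HTTP {e.code})"

    if __name__ == "__main__":
        mcp.run()  # expose the tool over stdio to MCP-capable assistants

The actual project layers registrar pricing, bulk checks, rate limiting, and fallbacks on top of this kind of lookup.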

Agentic Design Patterns: A Hands on Guide to Building Intelligent Systems [pdf]

Agentic Design Patterns is a hands-on guide by Antonio Gulli collecting practical design patterns and methods for building intelligent, agentic systems.

Show HN: ISON – Data format that uses 30-70% fewer tokens than JSON for LLMs

ISON is a minimal, token-efficient data format designed for LLMs and Agentic AI workflows. It reduces token usage by 30-70% compared to JSON, leveraging familiar tabular and relational patterns for improved human and LLM readability and generation. Ideal for multi-agent systems, RAG pipelines, and LLM function calling, ISON supports structured data, references, and type annotations, with parsers and schema validation libraries (ISONantic) available across JavaScript, Python, Rust, and C++.

Show HN: LLMSwap – Switch between LLM providers with one line of code

LLMSwap is a universal SDK and CLI for LLMs, offering a single interface to 11+ providers (e.g., OpenAI, Anthropic, Gemini) for zero vendor lock-in and day-one model support. It enables universal tool calling for LLMs to access custom data/systems across providers and integrates with MCP servers via a natural language interface. A core feature is its workspace system, providing persistent project memory, learning journals, and decision logs to maintain context, alongside built-in cost optimization and automatic provider fallback for reliability.