Thursday — November 27, 2025
US Census data reveals a decline in enterprise AI adoption, a new protocol tackles complex coding by destroying the AI instance after each task, and image diffusion models are repurposed for zero-shot video object tracking.
News
I don't care how well your "AI" works
The author argues that focusing on LLM output quality misses the technology's more fundamental, intentional problems. They posit that LLMs erode individual cognitive processes by subtly shaping a user's thoughts, leading to a devaluation of skilled crafts like programming. The piece frames AI as a tool for power centralization, where its resource-intensive nature is a feature designed to reinforce existing capitalist structures by dismantling individual craft and agency.
Show HN: A WordPress plugin that rewrites image URLs for near-zero-cost delivery
This WordPress plugin functions as an image-only CDN by leveraging Cloudflare Workers and R2. It rewrites frontend image URLs without requiring DNS changes; on the first request, a Worker pulls the image from the origin and caches it in R2 for subsequent delivery from the edge. The solution is offered as a managed service or as a free, self-hosted option using an open-source Worker on the user's own Cloudflare account.
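The fetch-on-miss pattern the plugin describes can be sketched as follows (in Python, standing in for the Worker's JavaScript; the dict stands in for the R2 bucket and fetch_from_origin is a hypothetical placeholder for the HTTP fetch back to the WordPress origin):

```python
r2_bucket = {}  # stands in for the R2 object store at the edge

def fetch_from_origin(path):
    # placeholder for the Worker's fetch() back to the WordPress origin
    return b"<image bytes for %s>" % path.encode()

def serve_image(path):
    obj = r2_bucket.get(path)
    if obj is None:            # first request: pull from the origin...
        obj = fetch_from_origin(path)
        r2_bucket[path] = obj  # ...and cache in R2 for edge delivery
    return obj                 # later requests are served from the cache
```

Because the plugin only rewrites image URLs on the frontend, the origin site and its DNS stay untouched; the Worker is the only component that ever contacts the origin.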
MIT study finds AI can replace 11.7% of U.S. workforce
An MIT study using a labor simulation tool called the Iceberg Index concludes that current AI systems can replace 11.7% of the U.S. workforce, equivalent to $1.2 trillion in wages. The tool, a "digital twin" of the labor market, maps over 32,000 skills to occupations to assess AI's current capabilities, revealing the largest exposure is in non-tech roles like HR, finance, and administration. The index is designed as a sandbox for policymakers to model workforce disruption and test interventions rather than as a predictive engine.
Investors expect AI use to soar. That's not happening
Despite high investor expectations, recent US Census Bureau data reveals a decline in enterprise AI adoption. The employment-weighted share of workers using AI has fallen to 11%, with the most significant drop occurring in businesses with over 250 employees. This trend suggests unexpectedly weak demand for generative AI technologies in the workplace three years after the initial boom.
Has the bailout of generative AI begun?
The recently announced "Genesis" program, initiated by a White House Executive Order, involves large-scale government procurement of chips from AI companies. The article posits that this program is being widely interpreted as a potential bailout for overextended and unprofitable firms in the AI sector. This speculation is fueled by the timing of the announcement and a sudden reversal of anti-bailout stances from influential figures.
Research
Bayesian Neural Networks (2018) [pdf]
Bayesian Neural Networks (BNNs) integrate probabilistic models with neural networks, combining the function approximation capabilities of NNs with the uncertainty modeling of stochastic methods. This allows BNNs to generate a full posterior distribution over both predictions and learned parameters. The key benefits are robust uncertainty quantification and the ability to inspect the distribution of the model's weights; their adoption is aided by support in modern probabilistic programming libraries.
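A minimal sketch of the core idea: instead of a point-estimate weight, keep a posterior over the weight and form predictions by sampling. The Gaussian posterior and its parameters here are made up for illustration; a real BNN would infer them from data.

```python
import random
import statistics

random.seed(0)
w_mean, w_std = 2.0, 0.5   # assumed Gaussian posterior over a single weight

def predictive(x, n_samples=1000):
    # sample weights from the posterior and propagate each through the "network"
    samples = [random.gauss(w_mean, w_std) * x for _ in range(n_samples)]
    # predictive mean and spread, rather than a single point estimate
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = predictive(3.0)
```

The predictive standard deviation grows with |x|, giving the calibrated uncertainty estimate that a point-estimate network lacks.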
The Iceberg Index: Measuring Workforce Exposure Across the AI Economy
Project Iceberg uses Large Population Models to simulate the human-AI labor market, addressing the failure of traditional metrics to capture pre-adoption disruption. It introduces the Iceberg Index, a skills-based metric measuring the wage value of occupational tasks that AI can technically perform. The model reveals that AI's technical capability for cognitive automation in administrative and professional services (~$1.2T) is fivefold larger and more geographically distributed than the visible adoption in the tech sector, enabling proactive identification of exposure hotspots.
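As described, the index is a wage-weighted exposure measure: the share of total wage value tied to tasks AI can technically perform. This toy aggregation (with made-up occupations, wage bills, and exposure fractions) illustrates the arithmetic only:

```python
occupations = [
    # (name, annual wage bill in $, fraction of tasks AI can technically perform)
    ("hr_admin", 100e9, 0.40),
    ("finance",  150e9, 0.30),
    ("software",  80e9, 0.25),
]

# wage value of technically automatable tasks
exposed = sum(wages * frac for _, wages, frac in occupations)
# total wage value across all occupations
total = sum(wages for _, wages, _ in occupations)
index = exposed / total  # wage-weighted exposure share
```

Note that the index measures technical capability, not adoption, which is why the "submerged" exposure it reports can be several times larger than observed AI use.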
LLM Inference Beyond a Single Node: From Bottlenecks to Mitigations
This work presents a performance study of multi-node distributed inference for LLMs, identifying all-reduce operations as a key bottleneck in model-parallel schemes. The authors introduce NVRAR, a hierarchical all-reduce algorithm using NVSHMEM that achieves up to 3.6x lower latency than NCCL for certain message sizes. Integrating NVRAR into a prototype inference engine reduced end-to-end batch latency by up to 1.72x for a Llama 3.1 405B model using tensor parallelism in decode-heavy workloads.
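The structure of a hierarchical all-reduce like the one NVRAR is described as implementing can be sketched as: reduce within each node first, exchange only one partial per node across the network, then broadcast back. The topology and values below are illustrative stand-ins for the real NVSHMEM implementation:

```python
nodes = [  # each inner list: per-GPU partial sums on one node
    [1.0, 2.0, 3.0, 4.0],
    [5.0, 6.0, 7.0, 8.0],
]

# stage 1: intra-node reduce (cheap, over fast local interconnect)
per_node = [sum(gpus) for gpus in nodes]
# stage 2: inter-node reduce (only one value per node crosses the network)
global_sum = sum(per_node)
# stage 3: broadcast the global result back to every GPU
result = [[global_sum] * len(gpus) for gpus in nodes]
```

The payoff is that the slow inter-node hop carries one value per node instead of one per GPU, which is where the latency advantage over a flat all-reduce comes from.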
Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos
This work repurposes the self-attention maps of pretrained image diffusion models as semantic label propagation kernels for pixel-level correspondence. By extending this mechanism temporally, the authors enable zero-shot object tracking and segmentation in videos. They introduce DRIFT, a framework that uses test-time optimization and SAM-guided refinement to achieve state-of-the-art zero-shot performance on video object segmentation benchmarks.
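The propagation mechanism can be sketched as attention-weighted label transfer: labels on reference pixels are carried to a target pixel in proportion to feature similarity. The features and labels below are made up; the real method reads self-attention maps out of a pretrained diffusion model rather than computing similarities from scratch.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

ref_feats = [[1.0, 0.0], [0.0, 1.0]]  # features of two reference-frame pixels
ref_labels = [1.0, 0.0]               # object mask values on the reference frame
query_feat = [0.9, 0.1]               # one pixel in the next frame

# attention of the query pixel over the reference pixels
sims = [sum(q * r for q, r in zip(query_feat, rf)) for rf in ref_feats]
attn = softmax(sims)
# propagated label: attention-weighted average of reference labels
label = sum(a * l for a, l in zip(attn, ref_labels))
```

Extending this across frames yields the zero-shot temporal propagation the paper describes, with no tracking-specific training.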
State of Brain Emulation Report (2025)
The State of Brain Emulation Report 2025 reassesses progress since the 2008 Sandberg/Bostrom roadmap. It is structured around three core capabilities: Neural Dynamics for recording function, Connectomics for mapping structure, and Computational Neuroscience for emulation and embodiment. The report also identifies current challenges and outlines strategic priorities for the field.
Code
Show HN: Era – Open-source local sandbox for AI agents
ERA Agent is a sandbox for safely executing untrusted or AI-generated code within lightweight microVMs. It provides a container-like developer experience with fast 200ms launch times and strong security isolation. The tool can be run locally via a CLI, which uses krunvm and buildah, or accessed through a managed Cloudflare Worker API.
F*ck Wispr Flow – Open-sourcing Jarvis: private, local, free
Jarvis is an open-source, privacy-first voice dictation and AI assistant for macOS. It provides hotkey-activated transcription by leveraging models like OpenAI Whisper or Deepgram Nova-3, with optional AI formatting via Gemini. The tool operates locally using the user's own API keys, ensuring no data is sent to a central cloud service.
Show HN: White-Box-Coder – AI that self-reviews and fixes its own code
White-Box AI Coder is a transparent code generation tool that visualizes an LLM's self-correction process. Using the Gemini API and a detailed system prompt to enforce architectural rules, it performs a Generate-Review-Fix cycle in a single API call. The tool explicitly shows the AI's chain-of-thought as it critiques and refactors its initial draft into a more robust, compliant final version.
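The single-call cycle can be sketched as a system prompt that instructs the model to emit a draft, a self-review, and a fixed final version under labeled headings, which the tool then splits apart. call_llm is a hypothetical stand-in for the Gemini API call, returning a canned response in the expected shape:

```python
SYSTEM_PROMPT = (
    "For the user's task, respond with three labeled sections:\n"
    "### DRAFT\n### REVIEW\n### FINAL\n"
    "In REVIEW, critique DRAFT against the architectural rules; "
    "in FINAL, emit the corrected code."
)

def call_llm(system, task):
    # placeholder for the real API call; canned response for illustration
    return "### DRAFT\nv1\n### REVIEW\nbug found\n### FINAL\nv2\n"

def generate_review_fix(task):
    text = call_llm(SYSTEM_PROMPT, task)
    sections = {}
    for chunk in text.split("### ")[1:]:
        name, _, body = chunk.partition("\n")
        sections[name] = body.strip()
    return sections  # keys: DRAFT, REVIEW, FINAL

out = generate_review_fix("write a parser")
```

Keeping all three stages in one completion is what makes the chain-of-thought visible: the critique and the refactor arrive alongside the draft rather than replacing it.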
Show HN: Database-replicator – Replicate any DB to PostgreSQL
database-replicator is a Rust-based CLI tool designed to centralize data for AI agent workloads by replicating various databases like PostgreSQL, MongoDB, and MySQL into a PostgreSQL target. For PostgreSQL sources, it offers zero-downtime continuous sync via logical replication, while other databases are converted into an AI-friendly JSONB format. The tool can offload replication jobs to SerenAI's managed cloud infrastructure, especially when targeting SerenDB, a PostgreSQL variant optimized for AI agents.
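For non-Postgres sources, the "AI-friendly JSONB" conversion presumably reduces each source document to a row destined for a JSONB column. A minimal Python sketch of that shape (the table and column names are assumptions, not the tool's actual schema):

```python
import json

def to_jsonb_row(doc):
    # the source document's _id becomes the key; the whole document
    # is serialized for storage in a JSONB column
    return (str(doc["_id"]), json.dumps(doc, default=str))

row = to_jsonb_row({"_id": 42, "name": "alice", "tags": ["a", "b"]})
# e.g. INSERT INTO replicated_docs (id, doc) VALUES (%s, %s::jsonb)
```

Storing documents whole in JSONB keeps heterogeneous source schemas queryable from a single PostgreSQL target without per-collection table design.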
I forced 4 Big AI to admit structural failure in complex coding. Here is the fix
The Misuraca Protocol argues that long-context windows in SOTA LLMs fail for complex engineering tasks due to "Catastrophic Context Saturation," where models hallucinate logic as conversational entropy increases. The proposed solution abandons the "Continuous Chat" model for "Deterministic Segmentation." This involves using short, isolated sessions for each logical module, destroying the AI instance after each task, and re-initializing a new one with only a clean, verified context block to enforce external state and treat constraints as inviolable.
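The segmentation loop described above can be sketched as: each module gets a fresh, isolated session seeded only with a verified context block, and the session is discarded afterwards. new_session, run_task, and verify are hypothetical stand-ins, not the protocol's actual interface:

```python
def new_session(context):
    # fresh instance: no conversational history carries over
    return {"context": context, "history": []}

def run_task(session, module):
    return f"code for {module} given {len(session['context'])} context items"

def verify(output):
    # placeholder: tests or review would gate what re-enters the context
    return output

verified_context = []
for module in ["parser", "planner", "executor"]:
    session = new_session(list(verified_context))  # re-init with clean, verified context only
    output = run_task(session, module)
    verified_context.append(verify(output))
    del session  # destroy the instance after each task
```

State lives entirely in the external verified-context block, so later modules never inherit the accumulated conversational entropy the protocol blames for hallucinated logic.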