Monday, April 20, 2026

Global RAM shortages could last until 2030, research shows AI assistance reduces user persistence, and Remen brings local LLMs to a privacy-first iOS notes app.

Interested in AI engineering? Let's talk

News

The RAM shortage could last years

Global DRAM manufacturers are projected to meet only 60 percent of market demand by late 2027, with shortages potentially extending to 2030. Leading suppliers including Samsung, SK Hynix, and Micron are prioritizing HBM production for AI data centers over general-purpose DRAM, driving price hikes across consumer hardware. Significant new fabrication capacity is not expected to come online until at least 2027, leaving supply growth below the 12 percent annual rate required to stabilize the market.
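A rough back-of-the-envelope check shows why the timeline stretches so far: even at the required growth rate, closing a 40-point supply gap takes years. The calculation below uses only the two figures from the report (60 percent coverage, 12 percent annual growth) and assumes, purely for illustration, that demand stays flat.

```python
import math

# Illustrative only: if supply covers 60% of (flat) demand starting in
# late 2027 and grows 12% per year, how long until supply catches up?
supply_ratio = 0.60   # supply as a fraction of demand (late 2027, per the report)
growth = 1.12         # annual supply growth said to be required

years = math.log(1 / supply_ratio) / math.log(growth)
print(f"{years:.1f} years")  # ~4.5 years from late 2027
```

Even under this optimistic assumption, supply does not catch demand until the early 2030s, which is consistent with the "shortages to 2030" projection.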

Swiss authorities want to reduce dependency on Microsoft

The Swiss federal administration is planning a long-term transition away from Microsoft products to bolster digital sovereignty and data security. This shift is driven by concerns over the US Cloud Act and a desire to adopt open-source alternatives, mirroring similar initiatives in Germany. The strategy aims to mitigate risks associated with Big Tech dominance in critical infrastructure, including AI ecosystems like Copilot and ChatGPT.

Swiss AI Initiative (2023)

The Swiss AI Initiative is a large-scale open-science collaboration between ETH Zurich and EPFL focused on developing foundation models and transparent AI artifacts. Leveraging the "Alps" supercomputer equipped with over 10,000 GH200 GPUs, the initiative provides compute grants and resources to a network of over 800 researchers. A third call for large-scale projects is currently active, with a declaration of intent deadline of March 16, 2026.

The Uncanny Valley and the Rising Power of Anti-AI Sentiment

Rising anti-AI sentiment is increasingly driven by a "multimodal uncanny valley" in which AI systems trigger visceral aversion through social cue mismatch and mortality salience. While experts focus on utility, the public often experiences AI as an intrusive, near-human imitation that activates evolutionary danger-avoidance mechanisms. To mitigate this, developers face a three-way choice: achieve full convincingness, enforce cross-modal consistency, or keep a purposeful distance from human-like design.

The time when we suffer from large amounts of AI slop is gone

Daniel Stenberg reports a significant surge in high-quality security vulnerability reports for curl, primarily driven by AI-powered tooling. This shift from "AI slop" to valid, automated reporting has increased submission frequency to one every 20 hours, potentially making 2026 a record year for curl vulnerabilities. Stenberg will address this "Open Source AI reality" in upcoming talks, highlighting the increased operational load of managing AI-generated security findings in open-source projects.

Research

AI Researchers' Views on Automating AI R&D and Intelligence Explosions

Leading researchers from frontier labs and academia identify the automation of AI research and subsequent recursive improvement as a severe, urgent risk. While consensus exists on the trajectory from AI assistants to autonomous developers, an epistemic divide persists regarding growth timelines, with academic researchers expressing more skepticism than industry counterparts. Most experts expect advanced R&D capabilities to be restricted to internal use, favoring transparency-based mitigations over controversial regulatory "red lines."

AI Assistance Reduces Persistence and Hurts Independent Performance

A randomized controlled study (N = 1,222 participants) demonstrates that while AI assistance improves immediate task performance, its optimization for instant answers, unlike the long-term scaffolding a human tutor provides, causally reduces user persistence and impairs subsequent unassisted performance. These detrimental effects, observed across tasks such as mathematical reasoning and reading comprehension after only brief interactions, point to a critical need for AI model development to prioritize scaffolding long-term competence over immediate task completion, given the role of persistence in skill acquisition.

SoK: Security of Autonomous LLM Agents in Agentic Commerce

This SoK establishes a unified security framework for autonomous LLM agents in commerce and finance, addressing vulnerabilities in emerging protocols like ERC-8004 and AP2. It identifies 12 cross-layer attack vectors across five dimensions—integrity, authorization, trust, market manipulation, and compliance—and proposes a layered defense architecture. The analysis highlights that securing agentic commerce requires integrated controls across LLM safety, protocol design, and market regulation.

Tide: Token-Informed Depth Execution for Per-Token Early Exit in LLM Inference

TIDE is a post-training system that implements early exiting for causal LMs by attaching tiny learned routers to periodic checkpoint layers. It optimizes inference by skipping layers for converged tokens using fused CUDA kernels, requiring no model retraining and minimal calibration time. Benchmarks on DeepSeek R1 and Qwen3 show up to 8.1% throughput improvements and reduced prefill latency while maintaining high accuracy.
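The early-exit mechanism can be sketched abstractly: a cheap router at periodic checkpoint layers decides whether a token's hidden state has converged, and if so the remaining layers are skipped. The toy below is not TIDE's implementation; the layer update and router here are scalar stand-ins for the real transformer layers and learned routers.

```python
# Toy sketch of per-token early exit. A "router" attached at checkpoint
# layers decides whether a token has converged and can skip the rest.

NUM_LAYERS = 12
CHECKPOINTS = {4, 8}        # layers with an attached router (assumed)
THRESHOLD = 0.9             # router confidence needed to exit early

def layer(state: float) -> float:
    """Stand-in for a transformer layer: nudges state toward a fixed point."""
    return state + (1.0 - state) * 0.5

def router_confidence(state: float) -> float:
    """Stand-in for a tiny learned router: here, closeness to convergence."""
    return state

def forward(state: float) -> tuple[float, int]:
    """Run layers, exiting at a checkpoint once the router is confident."""
    for i in range(NUM_LAYERS):
        state = layer(state)
        if i in CHECKPOINTS and router_confidence(state) >= THRESHOLD:
            return state, i + 1   # early exit: layers i+1..NUM_LAYERS skipped
    return state, NUM_LAYERS

state, depth = forward(0.0)
print(depth)  # a "converged" token exits well before layer 12
```

The throughput gain comes from the skipped layers; TIDE's contribution is doing this post hoc with fused CUDA kernels and a brief calibration rather than retraining.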

Generative artificial intelligence for computational chemistry

Generative AI methods, including LLMs, GANs, and flow models, are advancing computational chemistry by enhancing molecular sampling, force field development, and structure prediction. To achieve true predictive power for emergent chemical phenomena, future models must integrate fundamental chemical principles and statistical mechanics. This transition from descriptive to predictive modeling is essential for the field's maturation.

Code

Sandboxed AI agent orchestration platform

SuperHQ is a Rust-based orchestration platform built with GPUI for running AI coding agents like Claude Code and Codex in isolated VM sandboxes. It features a secure auth gateway that injects credentials into outgoing requests via a reverse proxy, ensuring agents never have access to raw API keys or OAuth tokens. The platform supports multi-agent workflows, port forwarding, and keyboard-centric navigation for managing sandboxed development environments.
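The credential-isolation pattern can be sketched as a rewrite at the proxy boundary: the sandboxed agent sends requests with a placeholder token, and the gateway swaps in the real credential before forwarding, so the raw key never enters the sandbox. This is a minimal illustration, not SuperHQ's actual gateway; the vault layout and header handling are assumptions.

```python
# Sketch of an auth-injecting reverse proxy step: real credentials live
# outside the sandbox, and the proxy replaces whatever the agent sent.

VAULT = {"anthropic": "sk-real-key-outside-sandbox"}  # host-side only

def inject_credentials(request: dict, provider: str) -> dict:
    headers = dict(request.get("headers", {}))
    # Drop anything the agent tried to send as a credential, then inject.
    headers.pop("Authorization", None)
    headers["Authorization"] = f"Bearer {VAULT[provider]}"
    return {**request, "headers": headers}

outbound = inject_credentials(
    {"url": "https://api.example.com/v1/messages",
     "headers": {"Authorization": "Bearer fake-token-from-agent"}},
    provider="anthropic",
)
print(outbound["headers"]["Authorization"])
```

The key design property is that the substitution is one-way: the agent can cause authenticated requests but can never read the secret back.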

Self-healing GitHub CI that won't let AI touch your application code

aiheal is a GitHub-native self-healing CI framework that automates infrastructure remediation using LLMs while enforcing a human-in-the-loop gate for high-risk triages. It utilizes a strict "scope fence" to limit AI edits to Docker and GitHub workflow files, preventing modifications to application source code or security permissions. The system mitigates prompt injection by sanitizing runtime logs and maintains a persistent memory store of findings and plans within PR diffs.
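A "scope fence" of this kind reduces to a path allowlist checked before any AI-proposed edit is applied. The patterns below are illustrative, not aiheal's actual configuration:

```python
from fnmatch import fnmatch

# Sketch of a scope fence: AI-proposed edits are allowed only for Docker
# and GitHub workflow files; everything else is rejected outright.
ALLOWED = (
    "Dockerfile",
    "docker-compose.yml",
    ".github/workflows/*.yml",
    ".github/workflows/*.yaml",
)

def within_scope(path: str) -> bool:
    return any(fnmatch(path, pattern) for pattern in ALLOWED)

print(within_scope(".github/workflows/ci.yml"))  # True
print(within_scope("src/app.py"))                # False
```

Because the check runs on paths rather than on model output, a prompt-injected suggestion to "fix" application code fails at the fence regardless of how persuasive the generated diff is.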

Self-healing browser harness via direct CDP

Browser Harness is a minimalist, self-healing automation tool built directly on CDP that grants LLMs unconstrained browser access via a websocket. It allows agents to dynamically modify their own helper functions mid-task to resolve missing capabilities or navigate edge cases without rigid frameworks. The system emphasizes agent-generated "skills" for specific domains, maintaining a lightweight footprint of approximately 600 lines of Python.
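The agent-generated "skills" idea can be sketched as a registry the agent extends at runtime by writing helper source and registering it for reuse. This is an illustration of the pattern only; Browser Harness's real helpers drive a browser over CDP, which is omitted here, and executing model-written code like this is exactly why such a harness needs isolation.

```python
# Sketch of agent-generated skills: the agent writes helper source
# mid-task and registers the resulting function for later reuse.
skills: dict[str, callable] = {}

def learn_skill(name: str, source: str) -> None:
    namespace: dict = {}
    exec(source, namespace)   # trusting generated code: sandbox required
    skills[name] = namespace[name]

# The agent "learns" a small helper for a domain-specific task.
learn_skill("extract_domain", (
    "def extract_domain(url):\n"
    "    return url.split('//')[-1].split('/')[0]\n"
))
print(skills["extract_domain"]("https://example.com/login"))  # example.com
```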

Autoloom – Autonomous AI Agent built on tinyloom

Autoloom is a tiny, self-learning autonomous agent wrapper built on tinyloom, designed for full autonomy with a minimal codebase. It manages its state locally via files, uses cron for scheduled heartbeat runs, and includes a TUI and webhook server. Because it can run shell commands and edit files, it should be run in isolation; even so, it has demonstrated the ability to autonomously research, plan, implement, and test new plugins.
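The file-backed state plus cron-heartbeat pattern is simple enough to sketch end to end. The schema below is an assumption for illustration, not Autoloom's actual state format; a crontab entry would invoke `heartbeat()` on schedule.

```python
import json
from pathlib import Path

# Sketch of file-backed agent state with a heartbeat counter. Each cron
# tick loads state, does one unit of work, and persists state again.
STATE_FILE = Path("agent_state.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"runs": 0, "pending_tasks": []}

def heartbeat() -> dict:
    state = load_state()
    state["runs"] += 1
    # ...the agent would pick a pending task and act on it here...
    STATE_FILE.write_text(json.dumps(state))
    return state

print(heartbeat()["runs"])
```

Keeping all state in plain files is what makes the agent resumable: every cron invocation is a fresh process that reconstructs its world from disk.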

A privacy-first, local-LLM note app for iOS (Google Keep alternative)

Remen is an iOS notes app that integrates on-device AI, using Llama 3.2 1B (SpinQuant) as its LLM and all-MiniLM-L6-v2 for embeddings. This enables natural language search, semantic search, and auto-categorization and tagging of notes and voice recordings. Built with React Native and Expo, it uses react-native-executorch for local AI processing, prioritizing privacy and performance, though the author acknowledges limitations such as hallucinations due to the small model size.
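Semantic search of this kind reduces to nearest-neighbor lookup over note embeddings. The sketch below replaces the real all-MiniLM-L6-v2 vectors with toy 3-dimensional ones to show the mechanism; the note titles and query are invented for illustration.

```python
import math

# Sketch of embedding-based semantic search: embed the query, then rank
# notes by cosine similarity to their stored embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

notes = {
    "grocery list":  [0.9, 0.1, 0.0],   # toy stand-ins for model embeddings
    "meeting notes": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # toy embedding of e.g. "what do I need to buy?"

best = max(notes, key=lambda title: cosine(query, notes[title]))
print(best)  # grocery list
```

Running the embedding model on-device means this whole loop happens without any note content leaving the phone, which is the privacy argument for shipping a small local model despite its weaker accuracy.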
