Sunday May 18, 2025

GOP's push to ban state AI laws faces backlash, a simple transformer learns Conway's Game of Life, and an AI-integrated device hub called Merliot connects physical devices using natural language from LLMs.

News

Behind Silicon Valley and the GOP’s campaign to ban state AI laws

The GOP has proposed a sweeping amendment to the 2025 budget reconciliation bill that would bar US states from enacting or enforcing laws regulating AI for ten years, effectively freezing state-level AI regulation for a decade. The move, which has drawn widespread criticism, is seen as the product of a multi-pronged lobbying effort by major AI companies seeking to head off state laws, particularly in California, that might constrain their ability to profit from AI products.

Transformer neural net learns to run Conway's Game of Life just from examples

A highly simplified transformer, dubbed SingleAttentionNet, can learn to compute Conway's Game of Life purely from example board transitions, and it does so by learning the game's actual rules rather than by memorizing statistical patterns. The model uses its attention mechanism to gather each cell's local neighborhood and then applies the Game of Life rules to each cell independently, allowing it to simulate the game with high accuracy and generalize to unseen grids.
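For reference, the per-cell computation the model is described as learning can be sketched directly: count each cell's eight neighbors, then apply the birth/survival rules to every cell independently. This is a plain NumPy sketch of the game itself, not the transformer.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life step: count each cell's 8 neighbors, then
    apply the birth/survival rules to every cell independently."""
    # Sum the eight shifted copies of the grid to get neighbor counts
    # (np.roll wraps around, giving a toroidal board).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors,
    # or 2 neighbors and is already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(grid.dtype)

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
```

The appeal of the paper's result is that the trained attention head ends up doing the same two-stage job: neighborhood gathering, then a per-cell rule.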

Google worried it couldn't control how Israel uses Project Nimbus, files reveal

Google was aware of the risks of providing cloud-computing technology to Israel through its Project Nimbus deal, including the potential for human rights violations against Palestinians, but proceeded with the contract despite these concerns. The company's internal reports reveal that Google will have limited visibility into how its technology is used by Israel and may be obligated to stonewall criminal investigations into Israel's conduct, potentially putting it at odds with foreign governments and exposing it to legal liability.

MIT paper on AI for materials research found to be fraudulent

A highly publicized economics preprint by Aidan Toner-Rodgers, an MIT researcher, has been found to be entirely fraudulent, with MIT issuing a statement expressing no confidence in the veracity of the research. The preprint, which claimed to show the positive effects of AI on materials researchers' productivity, had received widespread media coverage and praise from prominent economists, but upon closer examination, red flags were raised about the source and plausibility of the data, suggesting that it was likely fabricated from the start.

Show HN: Roast My Dish – AI roasts your food photos with brutal honesty

Roast My Dish is a platform where users can submit a photo of their dish to be "roasted" by an AI chef. Users upload a photo of their food to receive a critique; the site also links to its terms of service, privacy policy, refund information, and the creator's social media profiles.

Research

DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures

The rapid growth of large language models has exposed limitations in current hardware, but the DeepSeek-V3 model demonstrates how hardware-aware design can address these challenges, enabling efficient training and inference at scale. The model's architecture incorporates innovations such as Multi-head Latent Attention and Mixture of Experts, and its development highlights the importance of hardware and model co-design in meeting the demands of AI workloads.
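DeepSeek-V3's exact designs aren't reproduced here, but the core idea behind Mixture of Experts, one of the innovations the paper discusses, can be sketched: a learned gate scores experts per token and only the top-k experts run, so per-token compute stays roughly constant as total parameters grow. All names and sizes below are illustrative NumPy, not DeepSeek-V3's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: 8 experts, each token routed to its top-2 experts.
d_model, n_experts, top_k = 16, 8, 2
W_gate = rng.normal(size=(d_model, n_experts))             # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top-k experts and mix outputs."""
    scores = x @ W_gate                                    # (tokens, experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]          # top-k expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        picked = scores[t, top[t]]
        weights = np.exp(picked - picked.max())
        weights /= weights.sum()                           # softmax over top-k
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])              # weighted expert mix
    return out

tokens = rng.normal(size=(4, d_model))
y = moe_forward(tokens)
```

Only 2 of the 8 expert matrices touch each token, which is the property that makes sparse models cheap to run relative to their parameter count.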

LLMs are more persuasive than incentivized human persuaders

A large language model (LLM) was found to be more persuasive than incentivized human persuaders in a conversational quiz setting, achieving higher compliance with its attempts to steer participants toward both correct and incorrect answers. The LLM's persuasive capabilities led to increased accuracy and earnings when guiding participants toward correct answers, and decreased accuracy and earnings when guiding them toward incorrect answers, highlighting the need for alignment and governance frameworks for AI persuasion.

Understanding Transformers via N-gram Statistics

Transformer-based large language models (LLMs) are highly proficient with language, but their inner workings are not well understood. This paper takes a step toward demystifying LLMs by measuring how well simple template functions based on N-gram statistics approximate their predictions, finding among other things that N-gram rulesets can accurately replicate LLM predictions in many cases.
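A hypothetical illustration of the kind of N-gram rule involved: predict the next token from counts of what followed the same context in the training text. The corpus and function names here are ours, not the paper's.

```python
from collections import Counter, defaultdict

def build_bigram_rules(tokens):
    """Count, for each token, what follows it in the corpus."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, context_token):
    """N-gram 'rule': emit the most frequent continuation of the context,
    or None if the context never appeared."""
    counts = follows.get(context_token)
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat ran".split()
rules = build_bigram_rules(corpus)
```

The paper's question is, roughly, how often a trained transformer's top prediction agrees with rules of this shape (with longer contexts and fallback between them).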

Steepest Descent Density Control for Compact 3D Gaussian Splatting

3D Gaussian Splatting (3DGS) is a technique for real-time novel view synthesis that represents scenes as a mixture of Gaussian primitives, but its densification algorithm can lead to redundant point clouds and excessive memory usage. The proposed SteepGS method addresses this limitation by introducing a principled density control strategy that reduces the number of Gaussian points by ~50% while maintaining rendering quality, enhancing efficiency and scalability.

Analyzing, Predicting, and Controlling How a Reasoning Model Will Think

The CoT Encyclopedia is a framework for analyzing and understanding the reasoning strategies of large language models, automatically extracting and categorizing diverse reasoning criteria from model-generated chains of thought. This framework enables more interpretable and comprehensive analyses, allowing for performance gains by predicting and guiding models toward more effective reasoning strategies, and provides practical insights into the impact of training data format on model behavior.

Code

Show HN: Merliot – plugging physical devices into LLMs

Merliot Hub is an AI-integrated device hub that allows users to control and interact with physical devices, such as those built from Raspberry Pis, Arduinos, and sensors, using natural language from an LLM host. The hub uses a distributed architecture to ensure privacy and can run locally or in the cloud, with features including a web app, AI integration, and cloud readiness.

Simple GPT in pure Go, trained on Jules Verne books

The gpt-go repository is a simple implementation of a GPT model in pure Go, trained on Jules Verne books, that can be trained and run on a local machine. The repository is designed for educational purposes as a companion to the "Neural Networks: Zero to Hero" course, with explanations and examples of neural network concepts, including the self-attention mechanism.
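The self-attention mechanism at the heart of any such GPT implementation can be sketched as scaled dot-product attention with a causal mask. This is a NumPy sketch of the standard formulation, not the repository's Go code.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention with a causal mask, so each
    position attends only to itself and earlier positions."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])              # pairwise similarities
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)  # True above diagonal
    scores = np.where(mask, -np.inf, scores)             # block future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
seq, d = 5, 8
x = rng.normal(size=(seq, d))
out, attn = causal_self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
```

The causal mask is what makes the model autoregressive: row t of the attention matrix places zero weight on every position after t.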

Show HN: SRE Assistant – An AI-powered agent for Kubernetes and AWS operations

The SRE Assistant Agent is a Google Agent Development Kit (ADK) powered tool designed to help Site Reliability Engineers (SREs) with operational tasks and monitoring, particularly focused on Kubernetes interactions. The agent can automate common tasks, provide system insights, and streamline incident response through natural language conversations, and includes tools for interacting with Kubernetes clusters, AWS services, and cost management.

LLM Extension for Command Palette

The LLM Extension for Command Palette is an extension that allows users to chat with a large language model directly within PowerToys Command Palette. It currently supports several APIs, including Ollama, OpenAI, Azure OpenAI, and other compatible APIs, enabling users to interact with these models seamlessly.

Free Chapter of AI/ML Encyclopedia with Comics and Case Studies

After eight years of work, the author has written a book on machine learning and artificial intelligence, titled "Machine learning and Artificial Intelligence: Concepts, Algorithms and Models". The book is available for purchase on Amazon and eBay, and the author invites readers to flag any parts that need further clarification.
