Tuesday — July 29, 2025
Researchers study AI companions' persuasive power, developers build AI phone call agents like Piper, and a new method called PLEX provides perturbation-free local explanations for LLM-based text classification.
News
GLM-4.5: Reasoning, Coding, and Agentic Abilities
GLM-4.5 is Zhipu AI's open-weight Mixture-of-Experts model designed to unify reasoning, coding, and agentic capabilities in a single system, offering a "thinking" mode for complex reasoning and tool use alongside a faster non-thinking mode for direct responses. A lighter variant, GLM-4.5-Air, targets deployment with lower compute requirements.
Principles for production AI agents
The author shares six key learnings from their experience with agentic development, including the importance of providing clear and direct context to large language models (LLMs) without relying on tricks or manipulation. Effective agentic solutions combine the strengths of LLMs and traditional software, such as using a two-phase algorithm with an actor and critic, where the actor generates actions and the critic evaluates them to ensure they meet certain criteria.
AI Companion Piece
Researchers are studying the persuasive power of AI companions, with findings suggesting that larger models and post-training techniques can significantly increase persuasion, while personalization has a smaller effect, at least for now. The study's results also show that techniques that increase persuasion can decrease factual accuracy, and that conversations with AI are more persuasive than static messages, raising concerns about the potential risks and consequences of advanced AI systems.
Show HN: Companies use AI to take your calls. I built AI to make them for you
Piper is an AI-powered phone call agent that can make calls on your behalf, handling tasks such as booking restaurant reservations, scheduling appointments, and disputing bills, allowing you to avoid the hassle of making phone calls. With Piper, you can simply click on a phone number, type what you need, and let the AI handle the call, providing you with live status updates and full transcripts of the conversation.
Generative AI ("Slop Generators") are unsuitable for use
The Asahi Linux project prohibits the use of Large Language Models (LLMs), referred to as "Slop Generators", due to concerns over intellectual property infringement, waste of resources, and their inability to provide accurate or reliable information. The project views LLMs as nothing more than statistical calculation tools, prone to confidently providing incorrect information, and therefore unsuitable for use in software engineering, particularly in the Free and Open Source Software movement.
Research
A Unified Frontier in Neuroscience, AI and Neuromorphic Systems
The convergence of neuroscience, artificial general intelligence, and neuromorphic computing is leading to a unified research paradigm, with design principles from brain physiology informing the development of next-generation AGI systems. This emerging field faces four key challenges, including integrating spiking dynamics with foundation models and enforcing ethical safeguards, and offers a promising agenda for advancing neuroscience, computation, and hardware.
Cascade: LLM-Powered JavaScript Deobfuscator
CASCADE is a hybrid approach that combines the advanced coding capabilities of LLMs with a compiler-style Intermediate Representation to effectively deobfuscate JavaScript code, recovering semantic elements and revealing original program behaviors. This method has been successfully deployed in Google's production environment, overcoming limitations of existing techniques and achieving substantial improvements in JavaScript deobfuscation efficiency.
PLEX: Perturbation-Free Local Explanations for LLM-Based Text Classification
Large Language Models (LLMs) are difficult to interpret due to their complexity, but Explainable AI (XAI) methods like LIME and SHAP can provide local explanations, although they are computationally expensive. A new method called PLEX (Perturbation-free Local Explanation) is proposed, which leverages contextual embeddings and a neural network to provide efficient explanations without the need for perturbations, achieving over 92% agreement with LIME and SHAP while reducing computational overhead by several orders of magnitude.
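The core idea, scoring tokens from their contextual embeddings in a single pass instead of re-running the model on perturbed inputs, can be illustrated with a toy sketch. This is a generic illustration in the spirit of PLEX, not the paper's architecture: the embeddings and the linear head below are random toy values.

```python
import numpy as np

# Perturbation-free local explanation, toy sketch: score each token by
# passing its contextual embedding through a trained head, so no perturbed
# copies of the input are needed. All values here are illustrative.

rng = np.random.default_rng(0)
tokens = ["the", "movie", "was", "wonderful"]
embeddings = rng.normal(size=(4, 8))   # contextual token embeddings (toy)
head_w = rng.normal(size=8)            # trained classification head (toy)

# Per-token attribution: each embedding's contribution to the logit.
attributions = embeddings @ head_w
logit = attributions.sum()

for tok, a in zip(tokens, attributions):
    print(f"{tok:10s} {a:+.3f}")
```

One forward pass yields all token scores; LIME and SHAP instead require many perturbed forward passes through the full model, which is where the orders-of-magnitude overhead reduction comes from.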
Working with AI: Measuring the Occupational Implications of Generative AI
Researchers analyzed 200,000 conversations between users and a generative AI system to understand how AI is being used in various work activities, finding that people most often seek AI assistance for tasks like gathering information and writing. The study computed an "AI applicability score" for each occupation, revealing that knowledge-based jobs, such as computer and mathematical work, office support, and sales, are most likely to be impacted by AI.
Self-attention transforms a prompt into a low-rank weight-update
Large Language Models (LLMs) can learn new patterns at inference time without additional weight updates when presented with examples in the prompt, even if those patterns were not seen during training. Researchers have discovered that the combination of a self-attention layer with a multilayer perceptron (MLP) in a transformer block allows the model to implicitly modify the weights of the MLP layer based on the context, which may be the key to LLMs' ability to learn in context.
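The rank-1 identity behind this claim is easy to check numerically: adding the attention contribution to a token vector before a linear layer is exactly equivalent to applying a rank-one-updated weight matrix to the token alone. Symbol names below are my own; see the paper for its precise formulation.

```python
import numpy as np

# Check: W @ (x + a) == (W + dW) @ x, where dW = (W @ a) xᵀ / (xᵀ x).
# `a` plays the role of the self-attention contribution; dW has rank one,
# i.e. the prompt acts like an implicit low-rank weight update to the MLP.

rng = np.random.default_rng(42)
d = 6
W = rng.normal(size=(d, d))   # first MLP weight matrix
x = rng.normal(size=d)        # token representation without attention
a = rng.normal(size=d)        # additive contribution from self-attention

dW = np.outer(W @ a, x) / (x @ x)   # rank-1 "implicit weight update"

lhs = W @ (x + a)          # MLP input with attention context
rhs = (W + dW) @ x         # updated weights, context-free input

assert np.allclose(lhs, rhs)
assert np.linalg.matrix_rank(dW) == 1
print("rank-1 update reproduces the attention contribution")
```

The algebra is one line: (W + dW) x = W x + (W a)(xᵀx)/(xᵀx) = W (x + a), which is why the context behaves like a weight update even though no weights change.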
Code
Show HN: Kiln – AI Boilerplate with Evals, Fine-Tuning, Synthetic Data, and Git
This repository contains demos of Kiln AI projects, including a boilerplate project that showcases integrated tools for evaluations, synthetic data generation, fine-tuning, and collaboration. The demo project features an AI tool that generates ffmpeg commands, and users can download and import the project into the Kiln app to explore its features.
Show HN: Red Candle – Run LLMs Natively in Ruby with Rust
Red-candle is a Ruby library that allows users to run state-of-the-art language models directly from Ruby, without relying on Python, APIs, or external services, and is accelerated by Metal on Mac and CUDA on NVIDIA. The library supports various models, including Gemma, Llama, Mistral, and others, and provides features such as tokenizers, embedding models, rerankers, and named entity recognition, all while keeping data private and local to the user's machine.
Show HN: A semantic code search tool for cross-repo context retrieval
H-codex is a semantic code search tool that uses Abstract Syntax Trees and OpenAI's text-embedding model to provide intelligent, cross-repo context retrieval, supporting multiple languages and projects. It can be integrated with AI assistants through the Model Context Protocol and has a range of configuration options, with plans to support additional embedding providers and language support in the future.
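The indexing step such a tool performs, splitting source into semantically meaningful units via the AST before embedding them, can be sketched with Python's standard `ast` module. This is a generic illustration of AST-based chunking, not H-codex's implementation.

```python
import ast

# AST-based chunking: extract each function as a self-contained chunk,
# the unit a semantic code search tool would embed and index.

source = '''
def add(a, b):
    """Return the sum."""
    return a + b

def greet(name):
    return f"hello {name}"
'''

tree = ast.parse(source)
chunks = []
for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        chunks.append({
            "name": node.name,
            "text": ast.get_source_segment(source, node),  # exact source span
        })

for c in chunks:
    print(c["name"])  # each chunk would then be embedded and indexed
# → add
# → greet
```

Chunking at AST boundaries (rather than fixed line windows) keeps each embedded unit syntactically complete, which tends to improve retrieval quality.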
Python OpenAI API create Pinecone embeddings from PDF documents and RAG examples
This project demonstrates a Hybrid Search and Augmented Generation solution using Python, OpenAI API Embeddings, and a Pinecone vector database, showcasing features like system prompting, templates, and retrieval augmented generation. The solution allows for creating, loading, and querying a Pinecone vector database, and includes examples of how to use the system prompt to modify LLM text completion behavior and create templates to keep prompts DRY.
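The retrieval-augmented-generation flow the project demonstrates can be reduced to a minimal sketch: embed the query, retrieve the nearest documents, and fill a reusable prompt template. Here an in-memory dict stands in for Pinecone and toy vectors stand in for OpenAI embeddings; none of this is the project's actual code.

```python
import numpy as np

# Minimal RAG flow with an in-memory index standing in for Pinecone and
# toy 3-d vectors standing in for OpenAI embeddings (sketch only).

docs = {
    "doc1": "Pinecone stores vectors for similarity search.",
    "doc2": "FFmpeg converts between audio and video formats.",
}
index = {"doc1": np.array([0.9, 0.1, 0.0]),
         "doc2": np.array([0.0, 0.2, 0.9])}

def retrieve(query_vec, k=1):
    """Rank documents by cosine similarity and return the top k ids."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    ranked = sorted(index, key=lambda d: cos(query_vec, index[d]), reverse=True)
    return ranked[:k]

query_vec = np.array([1.0, 0.0, 0.1])        # pretend-embedded user question
context = "\n".join(docs[d] for d in retrieve(query_vec))

# A template keeps prompts DRY: swap in context and question per request.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what is Pinecone for?"
print(prompt)
```

A real pipeline replaces the dict with `index.query(...)` against Pinecone and the toy vectors with embedding-API calls, but the control flow is the same.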
Show HN: MCP server that lets Claude Code consult other LLMs
The Consult LLM MCP server allows Claude Code to consult stronger AI models, such as Gemini 2.5 Pro and DeepSeek Reasoner, to help with tasks like optimizing SQL queries and debugging code. The server can be installed and configured to provide direct queries with optional file context, and it includes features like comprehensive logging and cost estimation.