Sunday November 30, 2025

A major AI conference finds 21% of its peer reviews are AI-generated, a new tool provides a "Sign in with Google" for AI agents, and a side-channel attack can silently monitor messaging app users.

News

Major AI conference flooded with peer reviews written by AI

An analysis of ICLR 2026 submissions by Pangram Labs revealed widespread use of LLMs in the peer review process. Using their own detection tool, they found 21% of reviews were fully AI-generated and over half showed signs of AI use. The findings, which also identified AI-generated manuscripts, have prompted the conference to adopt automated screening tools to address the issue.

I Know We're in an AI Bubble Because Nobody Wants Me

Pete Warden argues the AI industry is in a bubble, evidenced by massive spending on GPUs while neglecting software optimization and efficiency. He attributes this to signaling, where large hardware purchases serve as a PR moat and an easy investment, despite low utilization and significant room for performance gains. Warden predicts this unsustainable hardware-centric approach, similar to the dot-com era's reliance on Sun servers, will eventually be disrupted by more efficient solutions running on commodity hardware.

Show HN: I built Magiclip – an all-in-one AI studio

Magiclip is an AI platform that repurposes long-form videos into short-form clips using automatic selection and ASR-powered subtitling. It also integrates a suite of generative models, including the Veo 3 text-to-video model, text-to-image generation, and multilingual TTS capabilities.

OCaml maintainers reject massive AI-generated pull request

OCaml maintainers rejected a 13,000-line, AI-generated PR intended to add DWARF debugging support. The rejection was primarily due to significant copyright and provenance concerns, as the LLM-generated code attributed authorship to a developer from another project with a similar feature. Additional factors included the immense review burden on an already strained team and the submission's failure to align with the project's established development practices. The incident highlights the challenges OSS projects face in handling large-scale, AI-assisted contributions, particularly around code provenance and maintainer bottlenecks.

Google CEO Pushes 'Vibe Coding' – But Real Developers Know It's Not Magic

Google is promoting "vibe coding," a development style where users generate software from natural language prompts to democratize creation. However, the approach faces significant criticism for its limitations in complex systems, architectural judgment, and handling edge cases. Studies show code-generating LLMs frequently hallucinate nonexistent packages and can produce unmaintainable code, suggesting that while useful for prototyping, the technique does not replace the need for skilled engineers in building robust, production-grade systems.

Research

Student perceptions of AI coding assistants in learning

An exploratory study investigated the impact of AI coding assistants on novice programmers. While students found the tools helpful for initial concept comprehension and confidence, they struggled significantly when the AI support was removed. This points to a potential for overreliance and gaps in foundational knowledge transfer, highlighting the need for pedagogical strategies that integrate AI without supplanting core skill acquisition.

Careless Whisper: Silently Monitoring Users on Mobile Instant Messengers

A privacy vulnerability in messaging apps like WhatsApp and Signal exploits delivery receipts to enable silent, high-frequency "pings" of a target. This side-channel attack allows an adversary to infer a user's online/activity status, device count, and OS. The technique can also be leveraged for resource exhaustion attacks, such as battery and data draining, without generating any user-side notifications.
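The measurement primitive behind the attack can be sketched as timing the delivery receipt of a silent probe (a conceptual Python sketch only; `send_ping` and `wait_for_receipt` are stand-ins for a messenger client's internals, and the threshold is illustrative):

```python
import time

def probe_presence(send_ping, wait_for_receipt, threshold_s=2.0):
    """Infer whether a target's device is reachable from receipt latency.

    send_ping() issues a probe that triggers no user-visible notification;
    wait_for_receipt() blocks until the server relays the delivery receipt.
    A fast receipt suggests the device is online right now.
    """
    t0 = time.monotonic()
    send_ping()
    wait_for_receipt()
    latency = time.monotonic() - t0
    return latency < threshold_s, latency

# Simulated demo: an "online" device acknowledges almost immediately.
online, latency = probe_presence(lambda: None, lambda: time.sleep(0.05))
```

Repeating this probe at high frequency yields an activity timeline of the target, and the constant wakeups are what enable the battery- and data-draining variant.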

Continuous Thought Machines

The Continuous Thought Machine (CTM) is a novel, biologically inspired architecture that reintroduces neural dynamics into ANNs. Its core innovations are neuron-level temporal processing, where each neuron has unique parameters for processing its own input history, and the use of neural synchronization as a latent representation. The model excels at tasks requiring sequential reasoning and features adaptive compute, where processing time varies with task difficulty, aiming for a balance between biological realism and computational tractability rather than SOTA performance.
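The two core ideas can be illustrated in a few lines (a minimal sketch with illustrative shapes and names, not the paper's actual model): each neuron applies its own private weights to its own recent history, and pairwise co-activation over time forms the latent representation.

```python
import math

def neuron_step(history, private_w):
    """Neuron-level temporal processing: history[i] is neuron i's recent
    pre-activation trace, private_w[i] its own (non-shared) weights."""
    return [math.tanh(sum(h * w for h, w in zip(hs, ws)))
            for hs, ws in zip(history, private_w)]

def synchronization(traces):
    """Pairwise neuron synchronization over time: the centered co-activation
    matrix of post-activation traces, used here as the latent representation."""
    n, t = len(traces), len(traces[0])
    means = [sum(row) / t for row in traces]
    centered = [[x - m for x in row] for row, m in zip(traces, means)]
    return [[sum(a * b for a, b in zip(centered[i], centered[j])) / t
             for j in range(n)] for i in range(n)]
```

The contrast with a standard ANN layer is that nothing here is shared across neurons: each one mixes its history with its own parameters, and downstream computation reads the synchronization matrix rather than a single activation vector.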

LLMs and the Human Condition

This paper proposes a theoretical model to explain the linguistic capabilities of LLMs by integrating three theories of human decision-making. The model synthesizes early AI concepts of reasoning, the philosophical view of reactive systems, and a sociological theory of collective intelligence. This combined framework provides an alternative view on the "mind reading" phenomenon in human communication.

Translating Large-Scale C Repositories to Idiomatic Rust

The paper introduces Rustine, an automated pipeline for repository-level C to Rust translation that addresses the quality and scalability trade-offs of existing transpilation and LLM-based methods. Evaluated on 23 C programs, Rustine generates fully compilable code that achieves 87% functional equivalence and is demonstrably safer and more idiomatic than prior techniques. For cases that fail functional tests, the tool serves as a debugging aid, allowing human developers to complete the translation in an average of 4.5 hours.

Code

DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning

DeepSeekMath-V2 introduces a self-verifiable approach to mathematical reasoning, moving beyond rewarding only correct final answers. The method involves training an LLM-based verifier for theorem proving, which then serves as a reward model for a proof generator incentivized to self-correct. To maintain the generation-verification gap, the verifier is continuously improved by using scaled compute to automatically label new, difficult proofs. The resulting model demonstrates strong theorem-proving capabilities on benchmarks like IMO, CMO, and Putnam.
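The reward wiring can be sketched as follows (a simplified illustration, not the actual training code; `generator` and `verifier` are stand-ins for LLM calls):

```python
def score_rollouts(generator, verifier, problems, n_samples=4):
    """Reward sampled proofs with a learned verifier instead of checking
    only the final answer.

    generator(problem) -> proof string; verifier(problem, proof) -> score
    in [0, 1] judging the proof itself. The scores then drive RL updates
    that incentivize the generator to find and fix its own errors.
    """
    traces = []
    for problem in problems:
        for _ in range(n_samples):
            proof = generator(problem)
            reward = verifier(problem, proof)  # proof quality, not answer match
            traces.append((problem, proof, reward))
    return traces

# Toy demo with trivial stand-ins for the two models.
demo = score_rollouts(lambda p: f"proof of {p}",
                      lambda p, pf: 1.0 if p in pf else 0.0,
                      ["lemma-1"], n_samples=2)
```

The separate piece, keeping the verifier ahead of the generator by auto-labeling hard proofs with scaled compute, plugs into the same loop as periodic retraining of `verifier`.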

Show HN: Sourcewizard – A wizard for generating integration specs

SourceWizard is an AI-powered CLI tool that uses an agentic LLM to find, install, and configure developer packages from natural language queries. It also functions as an MCP server, providing tools and up-to-date documentation to other LLM clients to prevent hallucinations and usage of deprecated APIs. The tool is repository-aware and includes commands for building and testing projects.

Show HN: Open Video Overview – Generate narrated videos from text with AI

This open-source project is an alternative to NotebookLM's Video Overview that automates video creation from text. It uses a Mastra-based workflow to generate a storyboard, then creates corresponding images with Gemini and narration with ElevenLabs. These elements are combined into individual clips and then stitched into a final MP4, with support for multiple visual styles and languages.

Show HN: Auth Agent – Let AI Agents Log In Without Human Credentials

Auth Agent is an OpenID Connect provider that gives AI agents their own identity for logging into websites, functioning like a "Sign in with Google" for autonomous agents. It replaces insecure practices like password sharing with a standard OAuth 2.1 and PKCE flow, where agents authenticate using their own credentials. Websites can integrate this service to securely link an agent's session to a user's account via a /userinfo endpoint, enabling auditable and revocable access for automation.
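The PKCE step of that flow is fully standardized (RFC 7636), so the verifier/challenge pair an agent would generate before its authorization request can be shown concretely; this sketch illustrates the standard construction, not Auth Agent's own code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636).

    The agent keeps the verifier secret, sends the challenge with its
    authorization request, and later proves possession by presenting the
    verifier at the token endpoint.
    """
    # 32 random bytes -> 43-char URL-safe verifier (within the 43-128 limit)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

Because the challenge is a one-way hash of the verifier, an intercepted authorization code is useless without the agent's original secret, which is what lets the flow replace shared passwords.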

Show HN: Open-source agent learning layer: 30% to 100% success on browser agents

Agentic Context Engine (ACE) is a framework for building self-improving AI agents that learn from execution feedback without requiring fine-tuning. It uses a Generator-Reflector-Curator loop, implemented as specialized prompts for a single LLM, to incrementally update a "Playbook" of effective strategies. This in-context learning mechanism improves agent performance over time and includes integrations for LangChain and browser automation agents.
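One cycle of that loop can be sketched as three prompts to the same model (illustrative prompt text and function names, not ACE's actual API):

```python
def ace_step(llm, run_task, playbook, task):
    """One Generator-Reflector-Curator cycle over a shared Playbook.

    llm(prompt) -> str is a single model playing all three roles;
    run_task(attempt) -> str returns execution feedback (logs, errors,
    success/failure). The updated playbook is the only persistent state.
    """
    # Generator: act using the current playbook of strategies
    attempt = llm(f"Playbook:\n{playbook}\n\nTask: {task}\nPropose actions.")
    feedback = run_task(attempt)
    # Reflector: diagnose the attempt against real execution feedback
    reflection = llm(f"Attempt:\n{attempt}\nFeedback:\n{feedback}\n"
                     "What worked and what failed?")
    # Curator: fold the lesson back into the playbook
    return llm(f"Playbook:\n{playbook}\nLesson:\n{reflection}\n"
               "Return the revised playbook.")

# Toy demo: a stub model that just echoes a revised playbook.
new_playbook = ace_step(lambda prompt: "revised: prefer explicit waits",
                        lambda attempt: "timeout on selector",
                        "(empty)", "click the submit button")
```

Because all learning lives in the playbook text rather than in model weights, improvements persist across runs with no fine-tuning, which is what the in-context learning claim amounts to.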
