Wednesday December 3, 2025

Malicious browser extensions infect 4.3 million users, high schoolers pass graduate-level quantum exams using pictorial math, and a new library lets you teach AI agents fixes for failures without redeploying.

News

Mistral 3 family of models released

Mistral has released the Mistral 3 family of open-source models under the Apache 2.0 license. The release features Mistral Large 3, a 675B sparse MoE model with 41B active parameters that competes with other frontier open-weight models. It also includes the Ministral 3 series of small, dense models (3B, 8B, 14B) optimized for edge applications with strong performance-to-cost ratios. All models have multimodal capabilities, and collaborations with NVIDIA and vLLM provide optimized checkpoints and efficient inference support via frameworks like TensorRT-LLM.

Anthropic acquires Bun

Anthropic has acquired Bun, positioning the high-performance JavaScript runtime as the core infrastructure for its AI coding products like Claude Code and the Claude Agent SDK. The acquisition leverages Bun's speed and single-file executables for distributing AI tools and agents, providing the project with long-term stability. Bun will remain open-source and continue its general-purpose roadmap, but its development will now be more closely aligned with the future of AI-driven software engineering.

4.3M Browsers Infected: Inside ShadyPanda's 7-Year Malware Campaign

A threat actor dubbed ShadyPanda has infected 4.3 million users in a seven-year campaign by weaponizing Chrome and Edge browser extensions. The actor's strategy involves publishing legitimate extensions to build a large user base and trust over several years, then pushing malicious updates that deploy spyware and an RCE backdoor. This campaign exploits the marketplaces' reliance on static analysis at submission, using trusted auto-updates as the attack vector, with a 4-million-user component of the operation reportedly still active.

Ecosia: The greenest AI is here

Ecosia has launched two AI-powered search features, "Overviews" and "AI Search," built with a focus on sustainability and privacy. The features prioritize smaller, more efficient models selected with tools like the AI Energy Score, and Ecosia claims to generate more renewable energy than the features consume. The system leverages Ecosia's own independent European search index to enhance user privacy and control, in compliance with GDPR.

AI generated font using Nano Banana

The author contrasts methods for AI font generation, finding that LLMs struggle to directly manipulate glyph vector data due to a lack of visual feedback. A more successful pipeline uses a diffusion model to generate character images, which are then vectorized to SVG using potrace and compiled into a TTF. Key challenges included enforcing typographic consistency across characters, which was partially mitigated by providing the model with visual guidelines and reference images, though final glyph normalization remains an unsolved issue.
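
The post itself doesn't ship code, but the raster-to-vector-to-TTF step can be sketched roughly as below, assuming one PNG per character, the potrace CLI on PATH, Pillow, and FontForge's Python bindings; the file layout, font name, and fixed glyph width are purely illustrative.

    # Hypothetical sketch of the pipeline described above, not the author's script.
    import pathlib
    import subprocess

    import fontforge               # FontForge's Python bindings
    from PIL import Image          # only used to convert PNGs to the bitmaps potrace expects

    font = fontforge.font()
    font.fontname = "NanoBananaSans"                   # illustrative name

    for png in sorted(pathlib.Path("glyphs").glob("*.png")):
        char = png.stem                                # file name encodes the character
        pbm = png.with_suffix(".pbm")
        Image.open(png).convert("1").save(pbm)         # potrace wants a 1-bit bitmap
        svg = png.with_suffix(".svg")
        subprocess.run(["potrace", str(pbm), "--svg", "-o", str(svg)], check=True)

        glyph = font.createChar(ord(char))             # map the glyph to its codepoint
        glyph.importOutlines(str(svg))                 # pull in the traced vector outline
        glyph.width = 600                              # crude fixed advance; real metrics are harder

    font.generate("nano-banana.ttf")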

Research

Launch-Day Diffusion: Tracking Hacker News Impact on GitHub Stars for AI Tools

A reproducible analysis of 138 AI/LLM projects quantifies the impact of a Hacker News (HN) launch, showing an average gain of 289 GitHub stars within a week. Using ML models, the study identifies post timing as a critical predictor of viral growth, while finding the "Show HN" tag offers no statistical advantage. The entire pipeline, built on public APIs, is open-source and runs in under five minutes, providing actionable insights for developers.
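
As a hedged illustration of the kind of public-API pull such a pipeline relies on, the sketch below fetches current star counts for a couple of example repositories via the GitHub REST API; the repo list and token handling are placeholders, not the study's actual code.

    # Illustrative only: fetch current star counts for example repos via the GitHub REST API.
    import os
    import requests

    REPOS = ["ggerganov/llama.cpp", "langchain-ai/langchain"]     # placeholder repo list

    headers = {"Accept": "application/vnd.github+json"}
    if os.environ.get("GITHUB_TOKEN"):                            # optional: raises rate limits
        headers["Authorization"] = f"Bearer {os.environ['GITHUB_TOKEN']}"

    for repo in REPOS:
        resp = requests.get(f"https://api.github.com/repos/{repo}", headers=headers, timeout=10)
        resp.raise_for_status()
        print(repo, resp.json()["stargazers_count"])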

Intelligence per Watt: Measuring Intelligence Efficiency of Local AI

This work investigates the viability of local LLM inference for offloading demand from centralized cloud infrastructure by proposing a new metric: intelligence per watt (IPW), which measures task accuracy per unit of power. A large-scale study across various models and accelerators found that local LMs can accurately answer 88.7% of single-turn chat and reasoning queries, with IPW improving 5.3x between 2023 and 2025. While local accelerators currently have lower IPW than cloud counterparts for identical models, the results confirm that local inference can meaningfully redistribute demand, and the authors release their IPW profiling harness for benchmarking.
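
Taking the summary's definition at face value (accuracy divided by average power draw), IPW reduces to a back-of-the-envelope calculation like the one below; the numbers are made up, and the paper's released harness measures both quantities properly.

    # Made-up numbers; the paper's profiling harness measures accuracy and power properly.
    def intelligence_per_watt(correct: int, total: int, energy_joules: float, seconds: float) -> float:
        accuracy = correct / total                     # fraction of queries answered correctly
        avg_power_watts = energy_joules / seconds      # mean power draw over the run
        return accuracy / avg_power_watts

    # e.g. 887 of 1,000 queries correct, 36 kJ drawn over a 1,200 s run (30 W average)
    print(intelligence_per_watt(887, 1000, 36_000, 1200))   # ~0.0296 accuracy per watt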

LST-1 follow-up of the exceptionally bright gamma-ray burst GRB 221009A

The brightest gamma-ray burst ever recorded, GRB 221009A, triggered a deep follow-up observation campaign. The LST-1 telescope monitored the event under challenging conditions, yielding a hint of a signal with a statistical significance of about 4σ. This paper presents the results from the deepest observation campaign ever performed on a GRB with this instrument.

High schoolers excel at Oxford quantum course using pictorial mathematics

A study demonstrates the effectiveness of Quantum Picturalism, a visual mathematical language, as an educational tool for quantum physics. In a pilot study, high school students with no advanced math prerequisites used this diagrammatic framework to learn quantum principles. The students achieved an 82% pass rate on graduate-level exam questions, suggesting this approach can significantly lower the barrier to entry for understanding complex quantum concepts.

Z-Image: An Efficient Image Generation Foundation Model [pdf]

Z-Image is a 6B-parameter foundation model for image generation that challenges the "scale-at-all-costs" paradigm of larger open-source alternatives. Built on an S3-DiT architecture with a highly optimized training pipeline, it achieves SOTA-comparable performance with significantly less compute. A distilled version, Z-Image-Turbo, offers sub-second inference and runs on consumer hardware with <16GB VRAM, while Z-Image-Edit provides instruction-following capabilities. The model excels at photorealistic generation and bilingual text rendering, rivaling top commercial systems.

Code

Show HN: SafePool – Type-safe object pooling for Go

safepool is a Go library that provides a type-safe, generic wrapper around sync.Pool. It includes a PoolManager to manage the lifecycle of multiple objects retrieved from a pool. This simplifies resource cleanup by returning all dispensed objects to the pool with a single deferred call, which is useful when passing objects across function boundaries.

Show HN: Golang Client Library for Gradium.ai TTS/STT API

go-gradium is a Go client library for Gradium AI's real-time TTS and STT services. It provides a simple interface for streaming text-to-speech and speech-to-text over WebSocket APIs. The library handles base64-encoded PCM audio data and includes examples for creating a full TTS-to-STT pipeline.

Show HN: Coding Agent Session Search (Cass)

coding-agent-search (cass) is a local-first TUI and CLI tool that unifies fragmented conversation histories from multiple coding agents like Claude, Aider, and Cursor into a single searchable knowledge base. It uses a dual-storage architecture with SQLite for data integrity and a Tantivy full-text index with edge n-grams for high-performance queries. A key feature is its "robot mode," which provides a structured JSON API designed for consumption by other LLMs, enabling agents to perform RAG over the collective history of all past local sessions.
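
As a generic illustration of the edge n-gram idea (not cass's or Tantivy's actual tokenizer configuration), each indexed token is expanded into its prefixes so partial, as-you-type queries can match directly:

    # Generic sketch of edge n-gram expansion for prefix search; not cass's actual tokenizer.
    def edge_ngrams(token: str, min_len: int = 2, max_len: int = 10) -> list[str]:
        return [token[:i] for i in range(min_len, min(len(token), max_len) + 1)]

    print(edge_ngrams("refactor"))
    # ['re', 'ref', 'refa', 'refac', 'refact', 'refacto', 'refactor']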

Show HN: Veru – open-source AI citation auditor using OpenAlex

Veru is an open-source AI citation auditor designed to detect LLM hallucinations. It uses Gemini 2.0 Flash to extract citations and performs a multi-tier verification against academic databases like OpenAlex and Semantic Scholar. A key feature is its content consistency check, which compares the user's claim against the actual paper abstract to identify "stitched" hallucinations. The tool is built with a FastAPI backend and a Next.js frontend.
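
A rough sketch of the kind of lookup such a verification tier might perform against OpenAlex's public works search is shown below; the matching logic is a placeholder rather than Veru's actual scoring.

    # Illustrative lookup against OpenAlex's public works search; not Veru's actual code.
    import requests

    def lookup_citation(title: str) -> dict | None:
        resp = requests.get(
            "https://api.openalex.org/works",
            params={"search": title, "per-page": 1},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json().get("results", [])
        return results[0] if results else None         # best match, or None if nothing found

    work = lookup_citation("Attention Is All You Need")
    if work:
        print(work["display_name"], work.get("publication_year"), work.get("doi"))
    else:
        print("no match found -- possibly a hallucinated citation")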

Show HN: Steer – Stop debugging agents, start teaching them (Open Source)

Steer is an open-source Python library that provides an active reliability layer for AI agents. It uses a decorator to intercept and block agent failures, such as bad JSON or PII leaks, in real-time. Developers can then use a local dashboard to "teach" the agent a fix, which Steer dynamically injects as a rule into the agent's context for future runs, eliminating the need for code changes and redeployment.
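
The intercept-and-teach pattern can be sketched conceptually as below; this is not Steer's actual API, just a minimal decorator that blocks malformed JSON and prepends previously "taught" rules to the prompt on later runs.

    # Conceptual sketch of the intercept-and-teach pattern; NOT Steer's actual API.
    import json
    from functools import wraps

    taught_rules: list[str] = []        # Steer manages this via a dashboard; here it's a plain list

    def guarded(agent_fn):
        @wraps(agent_fn)
        def wrapper(prompt: str) -> dict:
            if taught_rules:                            # inject learned fixes as extra context
                prompt = "\n".join(taught_rules) + "\n" + prompt
            raw = agent_fn(prompt)
            try:
                return json.loads(raw)                  # block malformed JSON before it propagates
            except json.JSONDecodeError as exc:
                raise RuntimeError(f"agent output rejected: {exc}") from exc
        return wrapper

    @guarded
    def my_agent(prompt: str) -> str:
        return '{"answer": 42}'                         # stand-in for a real LLM call

    taught_rules.append("Always respond with a single JSON object and no prose.")
    print(my_agent("What is 6 * 7?"))                   # {'answer': 42}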
