Tuesday October 28, 2025

ICE will use AI to surveil social media, a new tool runs Claude Skills locally using any LLM, and researchers warn of a "survey paper DDoS attack" from AI.

News

It's insulting to read AI-generated blog posts

The author argues against using LLMs for writing, framing it as a lazy practice that inserts a "sterile robo-liaison" between the writer and reader. They contend this use of AI prevents genuine human connection and circumvents the essential process of learning from one's own mistakes. The author advocates for seeking human collaboration instead and suggests that AI should be relegated to quantitative tasks, leaving communication to authentic human thought.

ICE Will Use AI to Surveil Social Media

ICE has procured a $5.7 million contract for Zignal Labs, an AI-powered platform used to surveil social media in real-time. The system leverages AI and ML to analyze over eight billion daily posts, providing "curated detection feeds" for intelligence and criminal investigations. This tool joins ICE's expanding arsenal of AI-driven surveillance technologies, raising civil liberties concerns over the use of "black box" systems to monitor online political speech.

AI can code, but it can't build software

The text posits that while LLMs can code, they cannot yet perform software engineering. This gap is evidenced by non-technical founders who can use AI to create demos but still need human engineers to build production-ready products. The author argues that the core of software engineering is managing systemic complexity—integrating hundreds of simple parts into a maintainable and expandable whole—a task at which current AI models fail.

The new calculus of AI-based coding

An engineering team is achieving 10x coding throughput by using a human-in-the-loop "agentic coding" model, where engineers direct and review AI-generated code. This high velocity, however, creates new bottlenecks and increases the absolute number of bugs, threatening to overwhelm traditional testing and CICD pipelines. The author argues that to sustain this speed, the entire development lifecycle must be re-engineered, proposing that AI agents can be used to build previously cost-prohibitive infrastructure, such as high-fidelity mock dependencies for robust local end-to-end testing.

Show HN: Erdos – open-source, AI data science IDE

Erdos is a data science IDE featuring an integrated AI assistant designed to edit Jupyter notebooks, iterate on plots, and interpret documentation for contextual help. The platform supports local model usage, remote development via SSH/containers, and the option to use your own OpenAI/Anthropic API keys. It is an open-source tool with built-in consoles for Python, R, and Julia.

Research

Stop DDoS Attacking the Research Community with AI-Generated Survey Papers

The research community is facing a "survey paper DDoS attack" due to the proliferation of low-quality, redundant, and often hallucinated surveys generated by LLMs. This influx overwhelms researchers and erodes trust in the scientific record. The authors advocate for stronger norms in AI-assisted writing and propose developing new infrastructures like "Dynamic Live Surveys," which are community-maintained repositories blending automated updates with expert human curation to safeguard scientific integrity.

Merge and Conquer: Evolutionarily Optimizing AI for 2048

This paper explores evolutionary training methods for an AI playing the stochastic game 2048. A single-agent system that refined a value function for a limited Monte Carlo Tree Search demonstrated substantial and consistent performance improvements, with the LLM developing more advanced strategies over time. In contrast, a two-agent LLM system using metaprompting failed to show significant gains, highlighting the limitations of that approach for this task.

EntropyLong: Effective Long-Context Training via Predictive Uncertainty

EntropyLong is a novel data construction method for creating training samples with verified long-range dependencies. The technique uses a model-in-the-loop approach to find high-entropy positions, retrieve relevant context, and verify its utility by confirming it reduces predictive uncertainty. Models trained on the resulting dataset show significant improvements on long-context benchmarks like RULER and LongBenchv2, validating the effectiveness of entropy-based verification.

Extreme-temperature single-particle heat engine

A single-particle engine was created using an electrically levitated microparticle, achieving record temperature ratios up to 110 by synthesizing reservoir temperatures over 10^7 K with noisy electric fields. This extreme system exhibits giant thermodynamic fluctuations and dynamics that deviate from standard Brownian motion. The deviation is caused by an effective position-dependent temperature, which was successfully described by a theoretical model incorporating multiplicative noise. This platform enables the emulation of complex stochastic dynamics found in biological and nanoscale systems.

The Shape of Math to Come by Alex Kontorovich

This text provides an overview of how computational tools currently intersect with mathematical practice. It also reflects on the short-to-medium term implications for research mathematics, specifically in the context of emerging AI and formal verification systems.

Code

MCP-Scanner – Scan MCP Servers for vulnerabilities

MCP Scanner is a Python tool for identifying security vulnerabilities in MCP servers and tools. It employs a multi-engine approach, combining the Cisco AI Defense API, customizable YARA rules, and an LLM-as-a-judge for analysis of tools, prompts, and resources. The scanner is available as a CLI, a REST API server, or a Python SDK.

Show HN: Git Auto Commit (GAC) – LLM-powered Git commit command line tool

Git Auto Commit (gac) is a CLI tool that leverages LLMs to automatically generate contextual git commit messages by analyzing code changes. It understands the intent behind the diff, supports numerous LLM providers, and offers various output formats from one-liners to verbose explanations. The tool also features an interactive feedback loop for regenerating messages and includes built-in secret scanning.

Show HN: OpenSkills - Run Claude Skills Locally Using Any LLM

OpenSkills is a tool for running Anthropic's Claude Skills locally on a Mac in a secure, sandboxed environment. It functions as a local MCP server, allowing LLMs like Claude or Gemini to execute code and process local files for specialized tasks without uploading data, ensuring privacy. The system leverages Apple's native containers for VM-level isolation and supports importing official Anthropic skills or creating custom ones.

Show HN: Whatdidido – CLI to summarize your work from Jira/Linear

whatdidido is a local-first CLI tool that syncs work items from ticketing systems like Jira and Linear. It then leverages an LLM via the OpenAI API to generate summaries of your activities, creating a markdown report. The tool operates on a BYOK model, ensuring all API credentials remain under user control.

Internationalization layer that adds AI translation to your existing setup

Intlayer is an open-source i18n toolkit that leverages AI for automated translations, allowing developers to integrate their own provider API keys within a CI/CD workflow. It is designed for modern, component-based development with features like type-safety, tree-shakable dictionaries, and a co-located content structure. The toolkit also includes a free CMS and visual editor to streamline the localization process for technical and non-technical users.

    ICE will use AI to surveil social media, a new tool runs Claude Skills locally using any LLM, and researchers warn of a "survey paper DDoS attack" from AI.