Saturday — October 11, 2025
Cognitive scientist Hagen Blix warns that AI is being used to control and depress wages, researchers discover that impolite LLM prompts outperform polite ones, and Open-Agent offers an open-source alternative to Agentic AI systems like Claude Agent SDK and ChatGPT Agents.
News
"AI is an attack from above on wages": cognitive scientist Hagen Blix
The author had planned to release a podcast episode with cognitive scientist and author Hagen Blix, but poor audio quality led them to transcribe and edit the conversation instead. The discussion covers Blix's book "Why We Fear AI". Blix argues that the fear of AI is rooted in its use as a tool of control and wage depression rather than as a productivity tool, and that this framing resonates with people on a material level, particularly workers whose lives are already governed by AI systems.
Fears over AI bubble bursting grow in Silicon Valley
Fears are growing in Silicon Valley that the AI industry is experiencing a bubble that may soon burst, with concerns that companies are overvalued and that the rapid rise in their valuations may be the result of "financial engineering". Experts, including OpenAI boss Sam Altman and early AI entrepreneur Jerry Kaplan, are warning that if the bubble bursts, it could have severe consequences for the economy, with Kaplan stating "it's going to be really bad, and not just for people in AI".
Tron: Ares is so bad it makes you wish AI would hurry up and destroy Hollywood
The film "Tron: Ares" is panned by the reviewer, who calls it a "shambolic" and "bloated" sci-fi film with "aggressively charmless characters". The movie, starring Jared Leto as a philosopher-ninja AI program, is criticized for its poor plot and lack of absorbing storyline, making it a disappointing sequel to the original "Tron" film.
Indonesia's film industry embraces AI to make Hollywood-style movies for cheap
OpenAI's new artificial intelligence model, Sora 2, is being used in Indonesia's film industry to generate high-definition video clips with sound and realistic physics, allowing entertainment companies to produce ambitious movies on smaller budgets. However, the increased use of AI tools is also leading to job losses among creatives such as storyboarders, scriptwriters, and visual effects artists, as companies adopt AI to become more efficient and reduce production costs.
Bitter lessons building AI products
The author reflects on their experience building AI products and realizes they've learned "the Bitter Lesson": general methods that leverage computation are ultimately the most effective, and clever engineering that shoehorns AI into an existing roadmap is often rendered obsolete by the next major model upgrade. The author now prioritizes understanding model capabilities and pivoting the roadmap accordingly, and has changed how they build AI features: ditching demos, spotting capability shifts early, and killing projects faster to avoid the sunk-cost fallacy.
Research
Barbarians at the Gate: How AI Is Upending Systems Research
Artificial Intelligence (AI) is transforming the research process by automating the discovery of new solutions, particularly in systems research where reliable verifiers can accurately determine solution effectiveness. The AI-Driven Research for Systems (ADRS) approach has been shown to discover algorithms that outperform human-designed ones, and its adoption is expected to shift the focus of human researchers from algorithm design to problem formulation and strategic guidance.
Impolite LLM prompts consistently outperform polite ones
Researchers found that large language models performed better on multiple-choice questions when given impolite prompts, with accuracy increasing from 80.8% for very polite prompts to 84.8% for very rude ones. This unexpected result suggests that newer language models may respond differently to tone and politeness in prompts, highlighting the need to study the social dimensions of human-AI interaction.
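The core of the experimental setup is simple to sketch: the same multiple-choice question is wrapped in prefixes of varying politeness, and accuracy is compared across tones. The prefixes and question below are illustrative placeholders, not the paper's actual prompts, and the model call is omitted:

```python
# Hypothetical tone prefixes for illustration; the study's exact wording differs.
TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to answer the following question?",
    "neutral": "Answer the following question.",
    "very_rude": "Answer this, and don't waste my time with a wrong answer.",
}

def build_prompt(tone: str, question: str, choices: list[str]) -> str:
    """Prepend a tone-setting prefix to an otherwise identical MCQ prompt."""
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return (f"{TONE_PREFIXES[tone]}\n\n{question}\n{options}"
            "\nAnswer with a single letter.")

prompt = build_prompt("very_rude", "What is 2 + 2?", ["3", "4", "5"])
```

Only the prefix varies between conditions, so any accuracy gap can be attributed to tone rather than content.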
The Missing Link Between the Transformer and Models of the Brain
The Dragon Hatchling (BDH) is a new large language model architecture inspired by the brain's scale-free biological networks, offering strong theoretical foundations, interpretability, and performance comparable to Transformer models such as GPT-2. BDH's biologically plausible design relies on synaptic plasticity and Hebbian learning, producing sparse, positive activation vectors that make its internal state interpretable and exhibit monosemanticity on language tasks.
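The Hebbian rule the summary mentions ("neurons that fire together wire together") can be sketched in a few lines. This is the generic textbook update, not BDH's actual equations, and the sparse, positive activation vectors are toy values:

```python
# Generic Hebbian plasticity sketch, not BDH's actual update rule.
def hebbian_update(weights, pre, post, lr=0.1):
    """w[i][j] += lr * post[i] * pre[j]: a synapse strengthens only when
    its pre- and post-synaptic neurons are active together."""
    return [
        [w + lr * post[i] * pre[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1.0, 0.0]   # sparse, positive activations (toy values)
post = [0.0, 1.0]
w = hebbian_update(w, pre, post)
# only the synapse between the co-active pair is strengthened
```

Because activations are sparse and non-negative, each strengthened synapse can be traced to a specific co-active pair, which is the intuition behind the interpretability claims.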
Advancing medical artificial intelligence using a century of cases
Researchers created a benchmark called CPC-Bench to evaluate the performance of large language models (LLMs) in medical diagnosis and presentation, and found that LLMs can outperform physicians in complex text-based differential diagnosis and emulate expert medical presentations. However, LLMs still struggle with image interpretation and literature retrieval, and the researchers are releasing their tools, including an AI discussant called "Dr. CaBot", to promote further research and track progress in medical AI.
New paper: A single character can make or break your LLM evals
The choice of delimiter used to separate in-context examples can significantly impact the response quality of large language models, with performance varying by up to 23% depending on the delimiter used. This brittleness is pervasive across models, topics, and scales, but can be mitigated by specifying the delimiter in the prompt or using certain well-performing delimiters.
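The delimiter sensitivity and its mitigation are easy to make concrete: the same in-context examples joined with different separators yield different prompts, and one reported mitigation is telling the model which delimiter is in use. The helper below is an illustrative sketch, not the paper's evaluation harness:

```python
# Illustrative few-shot prompt builder; the paper's actual templates differ.
def few_shot_prompt(examples, query, delimiter="\n\n", announce=False):
    """Join in-context examples with a chosen delimiter; optionally state
    the delimiter up front (a mitigation the paper reports helps)."""
    body = delimiter.join(f"Q: {q}\nA: {a}" for q, a in examples)
    header = f"Examples below are separated by {delimiter!r}.\n" if announce else ""
    return f"{header}{body}{delimiter}Q: {query}\nA:"

examples = [("2+2?", "4"), ("3+3?", "6")]
p1 = few_shot_prompt(examples, "5+5?", delimiter="\n\n")
p2 = few_shot_prompt(examples, "5+5?", delimiter=" ### ", announce=True)
```

Swapping the single `delimiter` argument is the only change between conditions, which is what makes the reported 23% performance swing so striking.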
Code
Open-Source Agentic AI
Open-Agent is an open-source alternative to Agentic AI systems like Claude Agent SDK and ChatGPT Agents, allowing users to create a highly customizable AI that integrates multiple models to work together seamlessly. The project is self-hostable, free to modify, and welcomes contributions, providing a multi-agent framework where various AI models collaborate to complete tasks, and offering features like spec and context engineering for structured decision-making.
Show HN: Gitcasso – Syntax Highlighting and Draft Recovery for GitHub Comments
Gitcasso is a browser extension that provides syntax highlighting and autosave for comments on GitHub and other markdown-friendly websites. The extension is available for Chrome and Microsoft Edge, and its development is open to contributions, with thanks to several other projects and individuals for their contributions to its functionality.
GPT-OSS from Scratch on AMD GPUs
No summary is available for this item: the repository's README could not be retrieved.
Headscale QA test using Claude AI (.claude/agents/headscale-integration-tester.md)
Headscale is an open-source, self-hosted implementation of the Tailscale control server, allowing users to create and manage their own private networks using WireGuard. The project aims to provide a self-hosted alternative to the Tailscale control server, suitable for personal use or small organizations, and is not associated with Tailscale Inc., although one of its maintainers is employed by the company.
Show HN: I extracted BASIC listings for Tim Hartnell's 1986 book
This repository preserves the BASIC source code listings from the 1986 book "Exploring Artificial Intelligence on Your IBM PC" by Tim Hartnell, providing a historical snapshot of accessible AI programming education. The code is made available for educational and historical preservation purposes, along with a ready-to-use runtime environment using PC-BASIC, allowing users to run the programs with minimal setup.