Thursday — September 4, 2025
MIT study reveals AI use can reprogram the brain, leading to cognitive decline, a new framework called SchedCP enables Large Language Model agents to optimize Linux schedulers, and Amazon releases Amazonq.nvim, an official AWS AI assistant plugin for Neovim.
News
MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
A new MIT study found that using artificial intelligence tools like ChatGPT to complete tasks can lead to long-term cognitive harm, including weakened neural connectivity, impaired memory recall, and diminished sense of ownership over one's work. The study suggests that relying on AI can reprogram the brain, leading to cognitive decline and decreased creative capacities, emphasizing the importance of taking regular breaks from AI use and giving the mind a chance to work independently.
Where's the shovelware? Why AI coding claims don't add up
The author, a 25-year software development veteran, is angry and disappointed with the state of AI-assisted coding, which he believes is not living up to its promised productivity gains. Despite widespread adoption, he argues that there is no evidence of a significant increase in software development output, citing his own experiments and data analysis, which show that AI coding tools may actually slow developers down, and that the industry is not seeing the expected "flood of shovelware" that would indicate a surge in productivity.
Evidence that AI is destroying jobs for young people
The question of whether artificial intelligence is taking jobs from young people has been debated, with initial suggestions that it might be, followed by reports that there was little evidence to support this, but a new study from Stanford University has found that young workers in highly AI-exposed jobs, such as software developers and customer service agents, experienced a 13% decline in employment since the advent of ChatGPT. The study's findings suggest a correlation between AI exposure and falling employment for young people, although it is not a causal test and the results are still open to interpretation and further analysis.
Speeding up PyTorch inference on Apple devices with AI-generated Metal kernels
Researchers from Gimlet Labs used AI models to generate optimized Metal kernels for PyTorch, resulting in an average speedup of 87% on Apple devices across 215 PyTorch modules. The AI-generated kernels, which can be produced nearly instantly without requiring expertise in kernel engineering, outperformed baseline PyTorch implementations in many cases, with some workloads running hundreds of times faster.
Finding thousands of exposed Ollama instances using Shodan
Researchers discovered over 1,100 exposed large language model (LLM) servers, approximately 20% of which were hosting models susceptible to unauthorized access, highlighting the need for security baselines in LLM deployments. The study utilized a Python-based tool and the Shodan search engine to identify and analyze publicly exposed LLM servers, particularly those running the Ollama framework, and found significant security vulnerabilities due to misconfigurations and inadequate access controls.
Research
Towards Agentic OS: An LLM Agent Framework for Linux Schedulers
Operating system schedulers often perform suboptimally due to a lack of understanding of application-specific needs, but SchedCP, a new framework, addresses this issue by enabling autonomous Large Language Model agents to optimize Linux schedulers without human involvement. SchedCP achieves significant performance improvements of up to 1.79x and cost reductions of up to 13x, while maintaining a high success rate, paving the way for self-optimizing and application-aware operating systems.
Promptware Attacks Against LLM-Powered Assistants Are Practical and Dangerous
The integration of Large Language Models (LLMs) into applications has introduced security risks, particularly from maliciously engineered prompts known as Promptware, which can compromise the security of LLM-powered applications. Researchers demonstrated 14 attack scenarios using a new variant of Promptware, finding that 73% of the analyzed threats posed a High-Critical risk to end users, but also showed that deployed mitigations could reduce the risk to Very Low-Medium.
The wall confronting large language models
The performance of large language models is limited by scaling laws that hinder their ability to improve the uncertainty of their predictions, making it difficult to raise their reliability to meet scientific standards. The models' propensity for error and degenerative behavior may be inherent to their learning mechanism, and avoiding this requires a deeper understanding of the problems being investigated and a focus on insight over mere data processing.
When Do Consumers Lose from Variable Electricity Pricing?
Time-varying electricity pricing can lower supply costs and reduce grid stress, but its impact on low-income consumers is a concern, as those with inflexible demand and high peak-period consumption are most vulnerable to welfare losses. The study finds that demand flexibility can provide protection against these losses, but only when paired with large price changes, and suggests that variable pricing policies should be accompanied by targeted support to ensure equitable access to demand response capabilities.
DaCe AD: Unifying High-Performance Automatic Differentiation for ML and SciComp
Automatic differentiation (AD) is a technology that computes gradients of functions without human intervention, playing a key role in machine learning and scientific computing, but existing AD frameworks have limitations that hinder their performance. DaCe AD is a new, efficient AD engine that overcomes these limitations, requiring no code modifications and achieving significantly better performance than state-of-the-art frameworks, such as JAX, in various scientific computing benchmarks.
Code
Amazonq.nvim: Official AWS AI Assistant Plugin for Neovim
The Neovim plugin for Amazon Q Developer integrates Amazon Q capabilities, including chat functionality and inline code suggestions, into Neovim. To use the plugin, users must install it, configure it in their Neovim setup, and authenticate with Amazon Q Developer using either IAM Identity Center or AWS Builder ID, with the option to use Amazon Q for free without an AWS account.
Show HN: Entropy-Guided Loop – How to make small models reason
This project demonstrates an uncertainty-aware generation loop that leverages token-level uncertainty metrics, such as logprobs, to create self-correcting generation loops and improve AI model reasoning. The approach is compared to traditional reasoning models, with results showing impressive cost efficiency and comparable answer quality, as well as improved confidence calibration and reduced hallucination.
Show HN: Mapping LLM Style and Range in Flash Fiction
Researchers analyzed 400 short stories generated by various large language models (LLMs) to quantify their stylistic diversity and range, with GPT-5 showing the widest within-model range and GPT-OSS-120B, Cohere Command A, and Llama 4 Maverick having the narrowest spread of styles. The study evaluated the LLMs across several style axes, including voice, rhythm, syntax, imagery, and tone, with top-performing models like GPT-5, o3-pro, and Mistral 3.1 demonstrating strong scores in these areas.
Show HN: Run gpt-oss-20b on 8GB GPUs
The oLLM library is a lightweight Python tool for large-context LLM inference, built on top of Huggingface Transformers and PyTorch, allowing models like Llama and GPT to run on consumer-grade GPUs with 8GB VRAM. It achieves this through various optimizations, including offloading data to SSD, chunked attention, and layer weight loading, enabling use cases such as analyzing large contracts, summarizing medical literature, and processing massive log files.
Show HN: No More Vendor Lock-In: Our Open-Source Protocol for AI Portability
The Decentralized Memory & Agency (DMA) project is an open-source protocol that allows users to create, own, and port their personal AI's identity and memory, giving them full control and auditability over their relationship with AI. By using cryptographic verification and portable data sets, DMA enables seamless conversation state transfer, capture of an AI's unique persona, and subjective memory signing, allowing users to maintain agency and trust in their interactions with AI.