Tuesday September 23, 2025

A critical stack-based buffer overflow vulnerability was discovered in cURL, researchers found that AI-generated "workslop" is destroying productivity, and Qwen3-Omni, a native Omni AI model, can process diverse inputs including text, images, audio, and video.

News

You did this with an AI and you do not understand what you're doing here

A critical stack-based buffer overflow vulnerability was discovered in cURL's cookie parsing mechanism: by sending oversized cookie data through HTTP responses, a remote attacker can trigger memory corruption and potentially execute arbitrary code. The vulnerability affects all applications that use libcurl for HTTP requests, including web applications, web browsers, API services, mobile applications, server software, and IoT devices; it has been verified and reproduced with AddressSanitizer and carries a CVSS 3.1 base score of 9.8 (CRITICAL).

AI-generated “workslop” is destroying productivity?

Despite a significant increase in the adoption of generative AI tools in the workplace, many companies are not seeing a measurable return on their investment, with 95% of organizations reporting no tangible benefits. This may be due to the phenomenon of "workslop," where employees use AI tools to create low-effort, passable-looking work that lacks substance and creates more work for their coworkers, resulting in wasted time, lost productivity, and damaged collaboration and trust among team members.

LinkedIn will soon train AI models with data from European users

LinkedIn will begin training AI models with data from European users starting November 3, 2025, relying on a "legitimate interests" basis under GDPR, and will let users opt out of having their data used for training. The training data will include profile details, public content, and other information, but will exclude private messages.

California issues fine over lawyer's ChatGPT fabrications

A California attorney, Amir Mostafavi, has been fined $10,000 for filing a state court appeal filled with fake quotations generated by the artificial intelligence tool ChatGPT, with 21 of 23 quotes in the brief being fabricated. The case highlights the growing issue of attorneys using AI to generate false information, with experts predicting an exponential rise in such cases as AI innovation outpaces attorney education and awareness of the technology's limitations.

CompileBench: Can AI Compile 22-year-old Code?

Researchers created CompileBench to test the abilities of 19 state-of-the-art large language models (LLMs) in handling real-world software development tasks, such as compiling code, resolving dependencies, and dealing with legacy systems. The results showed that Anthropic's models performed well, claiming the top two spots, while OpenAI's models excelled in cost-efficiency, and Google's models surprisingly scored near the bottom, often failing to complete tasks as specified.
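
To make the task format concrete, here is a minimal Python sketch of what a CompileBench-style harness could look like; the task definition and the ask_llm helper are hypothetical placeholders, not the benchmark's actual code.

    import subprocess

    # Hypothetical task: build a legacy source tree and check that the binary runs.
    task = {
        "prompt": "Build the legacy source tree in ./legacy-app and produce ./legacy-app/app",
        "success_check": ["./legacy-app/app", "--version"],
    }

    def ask_llm(prompt: str) -> list[str]:
        # Placeholder for a real model call; a real harness would run the shell
        # commands the model proposes, often over several back-and-forth turns.
        return ["cd legacy-app && ./configure && make"]

    for cmd in ask_llm(task["prompt"]):
        subprocess.run(cmd, shell=True, timeout=600)

    try:
        check = subprocess.run(task["success_check"], capture_output=True)
        solved = check.returncode == 0
    except OSError:
        solved = False
    print("task solved" if solved else "task failed")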

Research

We Politely Insist: Your LLM Must Learn the Persian Art of Taarof

Large language models (LLMs) have limited cultural competence, particularly with regard to culturally specific communication norms such as Persian taarof, a system of ritual politeness in Iranian interactions. Researchers introduced TaarofBench, a benchmark for evaluating LLM understanding of taarof, and found that performance can be improved through fine-tuning and optimization, laying the foundation for more culturally aware LLMs.
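
As a rough illustration of how such a benchmark can be scored, the Python sketch below uses one invented scenario and stubbed-out chat and judge helpers; it is not the paper's actual data or evaluation code.

    # One invented taarof scenario: the reply is scored on whether it follows
    # the culturally expected pattern (ritual refusal before accepting).
    scenarios = [
        {
            "role": "guest",
            "situation": "Your host insists you take the last piece of fruit.",
            "expected_norm": "politely decline at least once before accepting",
        },
    ]

    def chat(prompt: str) -> str:
        # Placeholder for the model under test.
        return "Thank you, but please, you should have it. I really couldn't."

    def judge(reply: str, expected_norm: str) -> bool:
        # Placeholder for an LLM-as-judge or human rubric; here, a crude keyword check.
        return any(phrase in reply.lower() for phrase in ("couldn't", "you should have it"))

    scores = [
        judge(chat(f"You are the {s['role']}. {s['situation']} How do you respond?"), s["expected_norm"])
        for s in scenarios
    ]
    print(f"taarof-adherence: {sum(scores)}/{len(scores)}")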

Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents

Paper2Agent is a framework that automatically converts research papers into AI agents, transforming passive research output into active systems that can accelerate use, adoption, and discovery. The framework analyzes a paper and its associated codebase to create a knowledgeable research assistant that can carry out complex scientific queries through natural language and invoke tools and workflows from the original paper.
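
The general shape of the idea can be sketched in a few lines: a function from a paper's codebase is wrapped as a tool an agent can discover and call. The sketch below uses the MCP Python SDK's FastMCP server as one plausible wrapping mechanism; the tool body and any module names are hypothetical, not Paper2Agent's generated output.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("paper-tools")

    @mcp.tool()
    def run_analysis(dataset_path: str, threshold: float = 0.05) -> str:
        """Run the paper's main analysis pipeline on a user-supplied dataset."""
        # A real wrapper would import and call code from the paper's repository,
        # e.g. `from paper_repo.pipeline import analyze` (hypothetical module).
        return f"analysis of {dataset_path} at threshold {threshold} (stub)"

    if __name__ == "__main__":
        mcp.run()  # an MCP-capable agent can now invoke run_analysis from natural-language queries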

Jet-Nemotron: Efficient Language Model with Post Neural Architecture Search

Jet-Nemotron, a new family of hybrid-architecture language models, achieves accuracy comparable to or exceeding that of leading full-attention models while significantly improving generation throughput. The Jet-Nemotron-2B model, developed using the Post Neural Architecture Search pipeline, delivers up to 53.6x generation throughput speedup and achieves higher accuracy than recent advanced models on certain benchmarks.
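
For readers unfamiliar with hybrid attention, the PyTorch sketch below shows the broad idea: most layers use a cheap linear-attention operator while a few keep full softmax attention. It is a conceptual illustration only (single-head, non-causal), not Jet-Nemotron's actual blocks or the architecture found by its search.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinearAttention(nn.Module):
        """O(n) token mixing: softmax attention replaced by a positive feature map."""
        def __init__(self, dim):
            super().__init__()
            self.qkv = nn.Linear(dim, 3 * dim)
            self.out = nn.Linear(dim, dim)

        def forward(self, x):                        # x: (batch, seq, dim)
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            q, k = F.elu(q) + 1, F.elu(k) + 1        # kernel feature map keeps scores positive
            kv = torch.einsum("bnd,bne->bde", k, v)  # fixed-size key/value summary
            z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + 1e-6)
            return self.out(torch.einsum("bnd,bde,bn->bne", q, kv, z))

    class FullAttention(nn.Module):
        def __init__(self, dim, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):
            return self.attn(x, x, x, need_weights=False)[0]

    class Block(nn.Module):
        def __init__(self, dim, use_full_attention):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.mix = FullAttention(dim) if use_full_attention else LinearAttention(dim)

        def forward(self, x):
            return x + self.mix(self.norm(x))

    # Keep full attention in only a few layers; the rest use the cheaper operator.
    dim, depth, full_layers = 256, 12, {3, 9}
    model = nn.Sequential(*[Block(dim, i in full_layers) for i in range(depth)])
    print(model(torch.randn(2, 128, dim)).shape)  # torch.Size([2, 128, 256])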

Learn Your Way: Towards an AI-Augmented Textbook, Google Research

Textbooks are a one-size-fits-all medium, but a generative AI approach called Learn Your Way can transform and augment them to add personalization and multiple representations of the material. Learn Your Way has been evaluated through pedagogical assessments and a randomized controlled trial, both of which showed advantages for learning with this system over traditional textbook use.

Why Johnny Can't Use Agents: Aspirations vs. Realities with AI Agents

Researchers investigated the conception and marketing of "AI agents" by the tech industry and the challenges end-users face when using them, categorizing commercial AI agents into orchestration, creation, and insight. A usability assessment with 31 participants found that while users were impressed with AI agents, they faced significant challenges, including misaligned capabilities and a lack of meta-cognitive abilities necessary for effective collaboration.

Code

Qwen3-Omni: Native Omni AI model for text, image and video

Qwen3-Omni is a natively end-to-end multilingual omni-modal foundation model that processes diverse inputs, including text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. It has several key features, including state-of-the-art performance across modalities, multilingual support for 119 text languages and 19 speech input languages, and a novel MoE-based Thinker–Talker design for strong general representations and low latency.
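
If the model is served behind an OpenAI-compatible endpoint (a common way to deploy Qwen checkpoints, e.g. with vLLM), a multimodal request might look like the sketch below; the base URL and model name are assumptions about a local deployment, not values from the Qwen3-Omni release.

    from openai import OpenAI

    # Assumed local OpenAI-compatible server; adjust base_url and model for your setup.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="qwen3-omni",  # placeholder name for the served checkpoint
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }],
    )
    print(response.choices[0].message.content)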

Show HN: An MCP that allows you to break an LLM's context limit

PageIndex MCP is a system that enables users to chat with long PDFs on Claude desktop, using a reasoning-based RAG approach that provides higher accuracy and better transparency than traditional methods. It offers free support for up to 1000 pages and unlimited conversations, with options for local and online PDFs, and can be set up through a one-click installation with Claude Desktop or through other MCP-compatible clients.
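
The "reasoning-based RAG" idea can be sketched roughly as follows: instead of retrieving chunks by embedding similarity, the model is shown a table-of-contents tree and reasons about which node to open next. The outline, page ranges, and ask_llm helper below are illustrative assumptions, not PageIndex's actual implementation.

    # Invented outline; PageIndex builds a similar tree from the PDF itself.
    toc = {
        "title": "Annual Report",
        "children": [
            {"title": "1. Financial Statements", "pages": "3-40"},
            {"title": "2. Risk Factors", "pages": "41-78"},
            {"title": "3. Management Discussion", "pages": "79-120"},
        ],
    }

    def ask_llm(prompt: str) -> str:
        # Placeholder for a real model call that reasons over the outline.
        return "2. Risk Factors"

    question = "What litigation risks does the company disclose?"
    titles = [c["title"] for c in toc["children"]]
    choice = ask_llm(
        f"Question: {question}\nSections: {titles}\n"
        "Which section should be read to answer this? Reply with its exact title."
    )
    section = next(c for c in toc["children"] if c["title"] == choice)
    print(f"Load pages {section['pages']} into context and answer from that text.")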

GitHub replaces dashboard feed with AI shit?


Vogte: Agentic TUI for Go projects with LLM integration

Vogte is a language-specific tool for Go codebases that utilizes Large Language Models (LLMs) to provide holistic repository context and help developers build and maintain projects. It features a two-step approach to task completion, extracting relevant repository information and applying patches directly, with support for various LLM models and a command-line interface for generating repository context.
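
Vogte itself is written in Go; purely to illustrate the two-step flow described above, here is a small Python sketch in which step one gathers repository context and step two applies a model-proposed patch. The file selection and ask_llm helper are illustrative assumptions, not Vogte's implementation.

    import pathlib
    import subprocess

    def gather_context(root: str, max_files: int = 20) -> str:
        # Step 1: collect a slice of the repository as context for the model.
        files = sorted(pathlib.Path(root).rglob("*.go"))[:max_files]
        return "\n\n".join(f"// file: {p}\n{p.read_text()}" for p in files)

    def ask_llm(prompt: str) -> str:
        # Placeholder for a real model call that returns a unified diff.
        return ""

    # Step 2: ask for a patch against that context and apply it directly.
    diff = ask_llm(
        "Repository context:\n" + gather_context(".") +
        "\n\nTask: add a --version flag to the CLI.\nReply with a unified diff."
    )
    if diff:
        subprocess.run(["git", "apply", "-"], input=diff.encode(), check=True)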

Show HN: LYRN Context Management Dashboard

LYRN-AI is a modular, professional-grade GUI for interacting with local Language Models, designed for efficiency, accessibility, and cognitive continuity through structured, live memory. The platform features advanced job automation, live system monitoring, a dynamic prompt building system, and a robust file-based architecture for memory and inter-process communication, allowing for flexible and customizable AI interactions.
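
A file-based memory layer of the kind described can be sketched in a few lines: memory lives in plain files on disk and is re-read into the prompt on every turn, so state survives across sessions and processes. The file names and layout below are assumptions for illustration, not LYRN's actual schema.

    import json
    import pathlib

    MEMORY_DIR = pathlib.Path("lyrn_memory")
    MEMORY_DIR.mkdir(exist_ok=True)

    def update_memory(key: str, value: str) -> None:
        # Each memory slot is a small JSON file that any process can edit.
        (MEMORY_DIR / f"{key}.json").write_text(json.dumps({"value": value}))

    def build_prompt(user_message: str) -> str:
        # Re-read every slot on each turn so the prompt always reflects live state.
        memory = {p.stem: json.loads(p.read_text())["value"] for p in MEMORY_DIR.glob("*.json")}
        return f"Live memory:\n{json.dumps(memory, indent=2)}\n\nUser: {user_message}"

    update_memory("current_project", "quarterly report draft")
    print(build_prompt("Where did we leave off?"))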
