Wednesday August 20, 2025

DeepSeek-V3.1-Base model boasts 685B parameters, researchers identify six challenges to AI-assisted codebase generation, and OpenAI's Reflect project introduces a physical AI assistant that illuminates users' lives through sound, light, and color.

News

DeepSeek-V3.1-Base

The DeepSeek-V3.1-Base model has 685B parameters and supports various tensor types, but it is not currently deployed by any inference provider. The model has been used as a base for finetunes and quantizations, and is included in several collections and spaces, including the DeepSeek-V3.1 Collection, which was updated about 16 hours ago.

Show HN: Kuse 2.0 – AI Visual Folder: Chaos In, Genius Out

Kuse is a platform that can be used in various ways, including through a dashboard, web page, image, or mixed content, to facilitate collaboration and other tasks. The platform also features tools like the Magic Pen Guide and a library, and has been recognized as a top post on Product Hunt, a website that showcases new and innovative products.

95 per cent of organisations are getting zero return from AI according to MIT

US tech stocks have been hit by concerns over the future of the AI boom, after an MIT report found that 95 per cent of organisations are getting zero return from their AI investments. The full article is paywalled, but it examines the current state of the tech industry and the risks facing companies betting heavily on AI development.

Tidewave Web: in-browser coding agent for Rails and Phoenix

Tidewave Web is a coding agent that runs directly in the browser alongside web applications, providing shared page context and deep framework integration to eliminate the need for constant back-and-forth between tools. It is currently available for Rails and Phoenix, with plans to expand to other frameworks, and offers features such as collaborative browser testing and automatic code execution within the running app.

DeepSeek-V3.1

DeepSeek-V3.1 is a collection under the deepseek-ai organization on Hugging Face, updated about 16 hours ago. The collection has 135 upvotes and includes models such as DeepSeek-V3.1-Base, with contributions from users including cedric, clem, lhoestq, and victor.

Research

Exploring the Challenges and Opportunities of AI-Assisted Codebase Generation

Recent codebase AI assistants (CBAs) can generate entire codebases from textual descriptions, but despite their potential, they are not widely adopted, with users expressing low satisfaction due to issues such as poor functionality, code quality, and communication. A study of 16 developers found six underlying challenges and five barriers to using CBAs, highlighting opportunities for improvement and informing the design of more efficient and useful CBAs.

A Utility-Driven Mathematical Framework for Agent-Centric AI Adoption

The authors formalize three design axioms for the sustained adoption of agent-centric AI systems and model adoption as a combination of novelty and utility terms. They also introduce and evaluate various analytical tools and methods, including identifiability analysis, model comparison, and residual analysis, to provide a comprehensive understanding of the adoption process.
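The paper's actual equations are not reproduced in this summary; as an illustrative sketch only (all symbols below are assumptions, not the authors' notation), "adoption as a combination of novelty and utility terms" can be written as a decaying novelty component plus a utility component that builds with experience:

```latex
% Illustrative sketch, not the paper's actual model.
% A(t): adoption intensity over time t.
% N_0, \lambda: initial novelty appeal and its decay rate.
% U, \kappa: long-run utility and the rate at which users discover it.
A(t) = \underbrace{N_0\, e^{-\lambda t}}_{\text{novelty (decays)}}
     + \underbrace{U\left(1 - e^{-\kappa t}\right)}_{\text{utility (builds)}}
```

Under a form like this, sustained adoption means the utility term dominates once novelty fades (A(t) approaches U as t grows), which is the kind of condition the identifiability and model-comparison analyses would be probing.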

Is radicalization reinforced by social media censorship? (2021)

Radicalized beliefs, such as those tied to conspiracy theories, can lead to violent behavior and understanding how these beliefs spread is crucial to mitigating radicalization. A model of a social media network found that both decentralized and centralized censorship can increase certainty in radicalized views, but centralized censorship, such as banning individuals, has the strongest effect on radicalization.

Rough Numbers Between Consecutive Primes

Almost all gaps between consecutive primes contain a natural number whose least prime factor is at least the length of the gap, confirming a prediction of Erdős. The number of exceptional gaps is relatively small, with an upper bound of $O(X/\log^2 X)$, and assuming a form of the Hardy-Littlewood prime tuples conjecture, a more precise asymptotic of $N(X) \sim c X / \log^2 X$ can be established.
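The claim is easy to probe numerically. The following is a hypothetical illustration, not code from the paper: for each pair of consecutive primes p < q below a small bound, it checks whether the gap contains an integer whose least prime factor is at least the gap length q − p.

```python
# Hypothetical illustration (not from the paper): test, for consecutive
# primes p < q, whether some integer strictly between them has least
# prime factor >= q - p, the property shown to hold for almost all gaps.

def least_prime_factor(n):
    """Smallest prime dividing n (for n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def primes_below(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def gap_is_rough(p, q):
    """True if some n strictly between p and q has least prime factor >= q - p."""
    return any(least_prime_factor(n) >= q - p for n in range(p + 1, q))

primes = primes_below(10_000)
# Skip the (2, 3) gap, which contains no interior integer at all.
gaps = [(p, q) for p, q in zip(primes, primes[1:]) if p > 2]
exceptional = [(p, q) for p, q in gaps if not gap_is_rough(p, q)]
print(f"{len(exceptional)} of {len(gaps)} gaps below 10,000 are exceptional")
```

At this small scale exceptional gaps are still fairly common — for example (7, 11), where 8, 9, and 10 all have least prime factor below 4 — since the $O(X/\log^2 X)$ bound only makes them vanishingly rare relative to the roughly $X/\log X$ gaps up to $X$ as $X$ grows.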

Dissecting CPU-GPU Unified Physical Memory on AMD MI300A APUs

The introduction of AMD's MI300A Accelerated Processing Units (APUs) enables HPC systems to feature integrated CPU and GPU with Unified Physical Memory (UPM), simplifying memory management. This work characterizes the UPM architecture and proposes porting strategies, showing that applications using the unified memory model can match or outperform those with explicit memory management while reducing memory costs by up to 44%.

Code

Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life

Reflect is a hardware AI assistant built during an OpenAI hackathon, currently targeting Espressif devices, that communicates through sound, light, and color and offers features such as calendar event tracking and music playback. The device creates a WiFi access point; after joining the network, users interact with it through a web interface at http://192.168.4.1. The code is provided as-is, without warranty or guarantees.

Show HN: Lemonade: Run LLMs Locally with GPU and NPU Acceleration

Lemonade is a local LLM server that utilizes GPU and NPU acceleration to provide high-performance inference for large language models, supporting both GGUF and ONNX models. It offers a user-friendly interface, CLI, and API for easy integration with various applications, and is actively maintained by AMD with a permissive Apache license.
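Since Lemonade serves models behind an OpenAI-compatible API, a client request would look roughly like the sketch below. The base URL, port, and model name here are assumptions for illustration, not values taken from Lemonade's documentation.

```python
# Hedged sketch of calling a local OpenAI-compatible server such as
# Lemonade. The base URL and model name are assumptions, not documented
# defaults.
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"  # assumed local endpoint

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("some-local-model", "Hello from a local LLM")
# With the server running, urllib.request.urlopen(req) would return the
# completion; the request is only constructed here.
```

Because the API mirrors OpenAI's chat-completions schema, existing OpenAI client libraries can typically be pointed at the local base URL instead of a custom integration.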

Show HN: Tambo – Add a Cursor-style assistant to React apps (OSS, self-hosted)

Tambo AI is a React package that enables developers to build AI-powered applications with generative UI, allowing users to interact through natural language. The package provides a range of tools and features, including a client-side registry of React components, a TamboProvider for wrapping apps, and hooks for submitting user messages and rendering AI-generated components.

Show HN: Built a memory layer that stops AI agents from forgetting everything

In Memoria is an MCP server that provides AI coding assistants with persistent memory and pattern learning, allowing them to access and build upon a developer's codebase history and preferences. By running as a server that AI tools can connect to, In Memoria enables features like context-aware suggestions, architectural decision recall, and coding pattern learning, enhancing the capabilities of AI coding assistants like Claude, Copilot, and Cursor.

Show HN: Rucat – Cat for Prompt Engineers

rucat is a versatile cat clone written in Rust, designed for the era of Large Language Models (LLMs), offering features such as multiple output formats, line numbering, syntax-aware formatting, and clipboard support. It excels at consolidating context from multiple files into a single, well-structured block of text, making it an ideal tool for developers, system administrators, and anyone working with code or text files in the terminal.
