Thursday — June 5, 2025
Google Cloud's Cloud Run now supports NVIDIA GPUs for AI workloads, Mistral AI introduces Mistral Code to streamline coding workflows, and researchers build a language model that predicts which AI research ideas will pan out.
News
Cloud Run GPUs, now GA, makes running AI workloads easier for everyone
Google Cloud's serverless platform, Cloud Run, now supports NVIDIA GPUs, offering a powerful and cost-efficient runtime for use cases such as AI inference, batch processing, and media processing. With pay-per-second billing, scale to zero, and rapid startup, GPU support on Cloud Run is now generally available, letting developers build and deploy applications with ease, scalability, and reliability.
"AI Will Replace All the Jobs " Is Just Tech Execs Doing Marketing
Despite widespread claims that AI will replace a large percentage of jobs, there is no evidence to support this, and hundreds of years of data and analyses suggest that AI will create more jobs than it displaces, just like other technologies before it. Historical examples, such as the automation of farm work and the personal computer revolution, have shown that technological advancements can lead to the creation of new industries and jobs, resulting in a net gain of employment opportunities.
Mistral Code
Mistral AI offers an AI-powered coding assistant called Mistral Code, which provides intelligent code completion, generation, and autonomous task execution to transform development workflows. The platform pairs context-aware understanding of a codebase with autonomous coding and enterprise-grade controls, letting developers work more efficiently and effectively.
Show HN: GPT image editing, but for 3D models
AdamCAD is an AI-powered CAD platform that generates 3D designs in seconds, allowing users to create models by speaking or typing prompts, refining them, and exporting the final product. The platform also features an image-to-3D mode and integrates with existing CAD software, making it a useful tool for professionals in the field.
The Sky's the limit: AI automation on Mac
The author is frustrated with Apple's lack of progress in automation and AI on the Mac, particularly in light of the new app Sky, which was created by former Apple employees and offers automation capabilities that surpass anything Apple has shipped. The author questions why Apple failed to harness the expertise of these employees and why it has not prioritized automation and AI on the Mac, suggesting the cause may be its corporate culture, its siloed structure, or concerns about control and privacy.
Research
What do software developers need to know to succeed in an age of AI?
A study of 21 developers at the cutting edge of generative AI adoption found that using it well requires a combination of technical and soft skills across four domains, applied throughout a six-step task workflow. To prepare developers for an AI-driven future, education and training programs should focus on reskilling and upskilling in these areas to prevent deskilling and ensure long-term success.
Predicting Empirical AI Research Outcomes with Language Models
Researchers have developed a system that uses a fine-tuned language model to predict the success of AI research ideas, outperforming human experts by a significant margin with 77% accuracy on a test set. The system's effectiveness was verified through extensive testing, including on unpublished novel ideas, demonstrating its potential to accelerate empirical AI research by identifying promising ideas and improving idea generation models.
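To make the recipe concrete, here is a minimal sketch of how such a predictor could be used in practice: score written idea descriptions with a fine-tuned text classifier and rank them. The checkpoint name and the example ideas are placeholders, not artifacts from the paper.

```python
# Illustrative only: "my-org/idea-success-predictor" is a hypothetical checkpoint,
# standing in for a classifier fine-tuned on (idea description, outcome) pairs.
from transformers import pipeline

scorer = pipeline("text-classification", model="my-org/idea-success-predictor")

ideas = [
    "Distill chain-of-thought traces into a smaller model via rejection sampling.",
    "Replace attention with random projections and hope perplexity holds.",
]

# Rank candidate ideas by the classifier's predicted label and confidence.
results = scorer(ideas)  # one {"label", "score"} dict per idea
for idea, r in sorted(zip(ideas, results), key=lambda x: x[1]["score"], reverse=True):
    print(f"{r['label']} ({r['score']:.2f})  {idea}")
```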
LeanCode: Understanding Models Better for Code Simplification of Pre-Trained LLM
LeanCode is a code simplification method that reduces computational complexity by selectively removing tokens based on their importance, as determined by context-aware attention scores. The approach outperforms state-of-the-art methods, achieving improvements of up to 60% for code search and 29% for code summarization tasks.
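As a rough illustration of attention-guided pruning in this spirit (not LeanCode's exact algorithm), the sketch below scores tokens by the attention the CLS position pays to them and keeps only the highest-scoring ones; the encoder choice and keep ratio are arbitrary.

```python
# Sketch of attention-based token pruning; model choice and keep_ratio are
# illustrative, and CLS-attention scoring is a simplification of LeanCode.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

def simplify_code(code: str, keep_ratio: float = 0.6) -> str:
    """Drop the least important tokens, scored by attention from the CLS token."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Average attention over layers and heads -> (batch, seq, seq).
    attn = torch.stack(outputs.attentions).mean(dim=(0, 2))
    scores = attn[0, 0]  # attention paid by CLS to every token
    ids = inputs["input_ids"][0]
    k = max(1, int(keep_ratio * len(ids)))
    keep = torch.topk(scores, k).indices.sort().values  # preserve original order
    return tokenizer.decode(ids[keep], skip_special_tokens=True)

print(simplify_code("def add(a, b):\n    return a + b"))
```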
Quantum computing and artificial intelligence: status and perspectives
This white paper explores the intersection of quantum computing and artificial intelligence, discussing how they can support and benefit each other, and proposes a research agenda to address foundational questions about their interaction. It concludes with recommendations and challenges, including optimizing energy consumption, advancing hybrid software engineering, and enhancing industrial competitiveness while considering societal implications.
Not all tokens are meant to be forgotten
Large Language Models (LLMs) can memorize unwanted information, such as private or copyrighted content, raising privacy and legal concerns, and existing unlearning methods often result in over-forgetting, leading to a loss of model utility. The Targeted Information Forgetting (TIF) framework addresses this issue by selectively identifying and unlearning unwanted information while preserving general knowledge, resulting in improved unlearning effectiveness and state-of-the-art results in experiments.
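Below is a minimal sketch of what token-targeted unlearning can look like, assuming a mask marking the unwanted tokens is already available; the specific loss (gradient ascent on flagged tokens, ordinary language-modeling loss on retained data) is an illustrative stand-in, not TIF's actual objective.

```python
# Illustrative unlearning step, not the TIF objective: push up the loss only on
# tokens flagged as unwanted, while a standard LM loss on retain data preserves
# general knowledge. `unwanted_mask` (aligned with forget_batch token positions)
# is assumed to come from an upstream identification step.
import torch
import torch.nn.functional as F

def unlearning_step(model, forget_batch, retain_batch, unwanted_mask, alpha=1.0):
    f_out = model(**forget_batch)
    f_logits = f_out.logits[:, :-1]                        # next-token predictions
    f_labels = forget_batch["input_ids"][:, 1:]
    f_nll = F.cross_entropy(
        f_logits.reshape(-1, f_logits.size(-1)),
        f_labels.reshape(-1),
        reduction="none",
    ).view_as(f_labels)
    forget_loss = -(f_nll * unwanted_mask[:, 1:]).mean()   # ascend only on flagged tokens

    r_out = model(**retain_batch, labels=retain_batch["input_ids"])
    retain_loss = r_out.loss                               # keep general capabilities

    return forget_loss + alpha * retain_loss
```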
Code
Show HN: Awesome-A2A – curated resources for Google's Agent2Agent protocol
The Agent2Agent (A2A) protocol is an open protocol developed by Google and partners that enables different AI agents to communicate securely and collaborate on tasks, breaking down silos between isolated agent systems. The protocol is designed to be simple, enterprise-ready, and modality-agnostic, letting agents interact without sharing internal logic or tools, and the repository gathers official samples, libraries, and documentation to help developers get started with building A2A-compatible agents and clients.
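For flavor, here is a rough client-side sketch based on the JSON-RPC shape of early versions of the spec (an agent card at /.well-known/agent.json and a tasks/send method); the server URL is made up, and field names may differ in the current revision, so treat the official samples as authoritative.

```python
# Rough sketch of talking to an A2A agent; the URL is a placeholder and the
# request shape follows early spec versions, not a guaranteed contract.
import uuid
import requests

AGENT_URL = "http://localhost:10000"  # hypothetical A2A-compatible agent

# 1. Discover the agent's capabilities via its public agent card.
card = requests.get(f"{AGENT_URL}/.well-known/agent.json").json()
print("Talking to:", card.get("name"))

# 2. Send a task as a JSON-RPC 2.0 request; the agent replies with task state
#    and messages without exposing its internal tools or logic.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize today's AI news."}],
        },
    },
}
print(requests.post(AGENT_URL, json=payload).json())
```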
Show HN: Go AI SDK – A Unified Go API for LLMs
The Jetify AI SDK for Go is a unified interface for interacting with multiple AI providers, including OpenAI and Anthropic, allowing developers to build AI applications against a single API. The SDK offers multi-modal inputs, tool calling, language-model abstractions, and an extensible architecture, and is currently in public alpha, so the API may still change.
The biggest list of Shadcn/UI Related stuff on GitHub
The repository catalogs components and libraries related to shadcn/ui, a popular component library, including SERP UI Blocks, Aceternity UI, and App Tailwind V4, among others. These offer pre-built blocks, customizable components, and integrations with React and Tailwind CSS for building modern, accessible web applications.
Show HN: Claude-Bridge – Use GPT, Gemini, and Other LLMs with Claude Code
Claude-Bridge is part of Lemmy, a TypeScript ecosystem for building AI applications that provides a unified interface for multiple LLM providers, including Anthropic, OpenAI, and Google Gemini, along with tools for conversation management and terminal UIs. The ecosystem consists of core packages, such as @mariozechner/lemmy and @mariozechner/lemmy-tui, as well as applications, including chat and red-teaming examples, that demonstrate what it can do.
Show HN: ArkFlow (stream processing engine) will soon support Python processors
ArkFlow is a high-performance stream processing engine written in Rust that supports multiple input/output sources and processors. Its performance and extensible design make it a versatile tool for a wide range of data processing tasks.