---
title: "Product updates"
sidebarTitle: "Product updates"
description: "New features, fixes, and improvements for the Runpod platform."
---
Flash is now in public beta. It's a Python SDK that lets you run functions on Runpod Serverless GPUs with a single decorator:

```python
import asyncio

from runpod_flash import Endpoint, GpuType

@Endpoint(
    name="hello-gpu",
    gpu=GpuType.NVIDIA_GEFORCE_RTX_4090,
    dependencies=["torch"],
)
async def hello():  # This function runs on Runpod
    import torch

    gpu_name = torch.cuda.get_device_name(0)
    print(f"Hello from your GPU! ({gpu_name})")
    return {"gpu": gpu_name}

asyncio.run(hello())
print("Done!")  # This runs locally
```

Key features:
- Remote execution: Mark functions with `@Endpoint` to run on GPUs/CPUs automatically.
- Auto-scaling: Workers scale from 0 to N based on demand.
- Dependency management: Packages install automatically on remote workers.
- Two patterns: Queue-based endpoints for batch work, load-balanced endpoints for REST APIs.
- Flash apps: Build production-ready APIs with `flash init`, `flash run`, and `flash deploy`.
Get started:

- Learn more about Flash.
- Run your first GPU workload in 5 minutes.
- Learn queue-based and load-balanced patterns.
- Development and deployment commands.

Flash now supports deploying endpoints to multiple datacenters simultaneously. Pass a list of datacenters to distribute your workload across regions for improved availability and reduced latency. You can also attach network volumes per datacenter for region-specific data access.
## New Public Endpoints and expanded examples

New Public Endpoints: Expansion of available models across all categories.
- Video: SORA 2 and SORA 2 Pro, Kling v2.1 and v2.6 Motion Control, WAN 2.6.
- Image: Seedream 4.0.
- Text: Qwen3 32B, IBM Granite 4.0.
- Audio: Chatterbox Turbo for text-to-speech.
New integrations and guides:
- Vercel AI SDK integration: New `@runpod/ai-sdk-provider` package for TypeScript projects with streaming, text generation, and image generation support.
- AI coding tools guide: Configure OpenCode, Cursor, and Cline to use Runpod Public Endpoints as your model provider.
- Build a text-to-video pipeline: Chain multiple Public Endpoints to generate videos from text prompts.
- Deploy cached models: Reduce cold start times with model caching.
- Integrate Serverless with web applications: Build a complete image generation app.
- Build a chatbot with Gemma 3: Deploy vLLM with OpenAI API compatibility.
- Run Ollama on Pods: Set up Ollama for LLM inference.
- Build Docker images with Bazel: Containerize your applications.
- GitHub release rollback: Roll back your Serverless endpoint to any previous build from the console. Restore an earlier version when you encounter issues without waiting for a new GitHub release.
- Load balancing Serverless repos (beta): Load balancing endpoints are now available in the Hub. Publish or convert any listing to load balancer type by setting `"endpointType": "LB"` in your hub.json file, then deploy as a Serverless endpoint or Pod from the Hub page. Maintain a single listing for your model and let users choose their deployment method: autoscaling Serverless or dedicated Pod resources.
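As a sketch, the load-balancer setting above would sit in your hub.json like this; `endpointType` is the field named in the release note, while the other keys shown are illustrative placeholders rather than the full Hub schema:

```json
{
  "title": "my-model",
  "description": "Example listing served as a load-balanced endpoint.",
  "endpointType": "LB"
}
```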
- Pod migration (beta): Migrate your Pod to a new machine when your stopped Pod's GPU is occupied. Provisions a new Pod with the same specifications and automatically transfers your data to an available machine.
- New Serverless development guides: We've added a comprehensive new set of guides for developing, testing, and debugging Serverless endpoints.
- Slurm Clusters are now generally available: Deploy production-ready HPC clusters in seconds. These clusters support multi-node performance for distributed training and large-scale simulations with pay-as-you-go billing and no idle costs.
- Cached models are now in beta: Eliminate model download times when starting workers. The system places cached models on host machines before workers start, prioritizing hosts with your model already available for instant startup.
- New Public Endpoints available: WAN 2.5 combines image and audio to create lifelike videos, while Nano Banana merges multiple images for composite creations.
- Hub revenue share model: Publish to the Runpod Hub and earn credits when others deploy your repo. Earn up to 7% of compute revenue through monthly tiers with credits auto-deposited into your account.
- Pods UI updated: Refreshed modern interface for interacting with Runpod Pods.
- Public Endpoints: Access state-of-the-art AI models through simple API calls with an integrated playground. Available endpoints include Whisper V3, Seedance 1.0 Pro, Seedream 3.0, Qwen Image Edit, Flux Kontext, Cogito 671B, and Minimax Speech.
- Slurm Clusters (beta): Create on-demand multi-node clusters instantly with full Slurm scheduling support.
- S3-compatible API for network volumes: Upload and retrieve files from your network volumes without compute using AWS S3 CLI or Boto3. Integrate Runpod storage into any AI pipeline with zero-config ease and object-level control.
- Referral program revamp: Updated rewards and tiers with clearer dashboards to track performance.
- Port labeling: Name exposed ports in the UI and API to help team members identify services like Jupyter or TensorBoard.
- Price drops: Additional price reductions on popular GPU SKUs to lower training and inference costs.
- Runpod Hub: A curated catalog of one-click endpoints and templates for deploying community projects without starting from scratch.
- Tetra beta test: A Python library for running code on GPU with Runpod. Add a `@remote()` decorator to functions that need GPU power while the rest of your code runs locally.
- Login with GitHub: OAuth sign-in and linking for faster onboarding and repo-driven workflows.
- RTX 5090s on Runpod: High-performance RTX 5090 availability for cost-efficient training and inference.
- Global networking expansion: Rollout to additional data centers approaching full global coverage.
- CPU Pods get network storage access: GA support for network volumes on CPU Pods for persistent, shareable storage.
- SOC 2 Type I certification: Independent attestation of security controls for enterprise readiness.
- REST API release: REST API GA with broad resource coverage for full infrastructure-as-code workflows.
- Instant Clusters: Spin up multi-node GPU clusters in minutes with private interconnect and per-second billing.
- Bare metal: Reserve dedicated GPU servers for maximum control, performance, and long-term savings.
- AP-JP-1: New Fukushima region for low-latency APAC access and in-country data residency.
- REST API beta test: RESTful endpoints for Pods, endpoints, and volumes for simpler automation than GraphQL.
- Full-time community manager hire: Dedicated programs, content, and faster community response.
- Serverless GitHub integration release: GA for GitHub-based Serverless deploys with production-ready stability.
- CPU Pods v2: Docker runtime parity with GPU Pods for faster starts with network volume support.
- H200s on Runpod: NVIDIA H200 GPUs available for larger models and higher memory bandwidth.
- Serverless upgrades: Higher GPU counts per worker, new quick-deploy runtimes, and simpler model selection.
- Global networking expansion: Added to CA-MTL-3, US-GA-1, US-GA-2, and US-KS-2 for expanded private mesh coverage.
- Serverless GitHub integration beta test: Deploy endpoints directly from GitHub repos with automatic builds.
- Scoped API keys: Least-privilege tokens with fine-grained scopes and expirations for safer automation.
- Passkey auth: Passwordless WebAuthn sign-in for phishing-resistant account access.
- US-GA-2 added to network storage: Enable network volumes in US-GA-2.
- Global networking: Private cross-data-center networking with internal DNS for secure service-to-service traffic.
- US-TX-3 and EUR-IS-1 added to network storage: Network volumes available in more regions for local persistence.
- Runpod slashes GPU prices: Broad GPU price reductions to lower training and inference total cost of ownership.
- Referral program revamp: Updated commissions and bonuses with an affiliate tier and improved tracking.
- $20M seed by Intel Capital and Dell Technologies Capital: Funds infrastructure expansion and product acceleration.
- First in-person hackathon: Community projects, workshops, and real-world feedback.
- Serverless CPU Pods: Scale-to-zero CPU endpoints for services that don't need a GPU.
- AMD GPUs: AMD ROCm-compatible GPU SKUs as cost and performance alternatives to NVIDIA.
- CPU Pods: CPU-only instances with the same networking and storage primitives for cheaper non-GPU stages.
- runpodctl: Official CLI for Pods, endpoints, and volumes to enable scripting and CI/CD workflows.
- New navigational changes to Runpod UI: Consolidated menus, consistent action placement, and fewer clicks for common tasks.
- Docs revamp: New information architecture, improved search, and more runnable examples and quickstarts.
- Zhen AMA: Roadmap Q&A and community feedback session.
- US-OR-1: Additional US region for lower latency and more capacity in the Pacific Northwest.
- CA-MTL-1: New Canadian region to improve latency and meet in-country data needs.
- First community manager hire: Dedicated community programs and faster feedback loops.
- Building out the support team: Expanded coverage and expertise for complex issues.
- Serverless quick deploy: One-click deploy of curated model templates with sensible defaults.
- EU domain for Serverless: EU-specific domain briefly offered for data residency, superseded by other region controls.
- Data-center filter for Serverless: Filter and manage endpoints by region for multi-region fleets.
- Self-service worker upgrade: Rebuild and roll workers from the dashboard without support tickets.
- Edit template from endpoint page: Inline edit and redeploy the underlying template directly from the endpoint view.
- Improved Serverless metrics page: Refinements to charts and filters for quicker root-cause analysis.
- Flex and active workers: Discounted always-on "active" capacity for baseline load with on-demand "flex" workers for bursts.
- Billing explorer: Inspect costs by resource, region, and time to identify optimization opportunities.
- Teams: Organization workspaces with role-based access control for Pods, endpoints, and billing.
- Savings plans: Plans surfaced prominently in console with easier purchase and management for steady usage.
- Network storage to US-KS-1: Enable network volumes in US-KS-1 for local, persistent data workflows.
- Serverless log view: Stream worker stdout and stderr in the UI and API for real-time debugging.
- Serverless health endpoint: Lightweight /health probe returning endpoint and worker status without creating a billable job.
- SOC 2 Type II compliant: Security and compliance certification for enterprise customers.
- Serverless metrics page: Time-series charts for pXX latencies, queue delay, throughput, and worker states for faster debugging and tuning.
- H100s on Runpod: NVIDIA H100 instances for higher throughput and larger model footprints.
- Savings plans: Commitment-based discounts for predictable workloads to lower effective hourly rates.
- The new and improved Runpod login experience: Streamlined sign-in and team access for faster, more consistent auth flows.
- Network volumes added to Serverless: Attach persistent storage to Serverless workers to retain models and artifacts across restarts and speed cold starts through caching.
- Serverless region support: Pin or allow specific regions for endpoints to reduce latency and meet data-residency needs.
- Serverless scaling strategies: Scale by queue delay and/or concurrency with min/max worker bounds to balance latency and cost.
- Queue delay: Expose time-in-queue as a first-class metric to drive autoscaling and SLO monitoring.
- Request count: Track success and failure totals over windows for quick health checks and alerting.
- runsync: Synchronous invocation path that returns results in the same HTTP call for short-running jobs.
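A synchronous call as described above targets the endpoint's `/runsync` path and returns the job output in the same HTTP response. A minimal stdlib sketch, with the endpoint ID, API key, and input payload as placeholders (the request is built but not sent, so the snippet stays offline):

```python
import json
import urllib.request

# Placeholders: substitute your own endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-api-key"

# /runsync returns the result in the same HTTP call, unlike /run,
# which queues the job and returns an ID to poll.
url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
payload = json.dumps({"input": {"prompt": "Hello, world!"}}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually invoke the endpoint:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["output"])
```

Because `/runsync` holds the connection open until the job finishes, it fits short-running jobs; longer jobs are better served by the asynchronous `/run` path.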
- Network storage beta: Region-scoped, attachable volumes shareable across Pods and endpoints for model caches and datasets.
- Job cancel API: Programmatically terminate queued or running jobs to free capacity and enforce client timeouts.
- Serverless API v2: Revised request and response schema with improved error semantics and new endpoints for better control over job lifecycle and observability.
- Notification preferences: Configure which platform events trigger alerts to reduce noise for teams and CI systems.
- GPU priorities: Influence scheduling by marking workloads as higher priority to reduce queue time for critical jobs.
- Runpod now offers encrypted volumes: Enable at-rest encryption for persistent volumes with no application changes required using platform-managed keys.