Releases: NVIDIA/aistore
4.1
AIStore 4.1 delivers upgrades across retrieval, security, and cluster operation. The GetBatch API is significantly expanded for ML training workloads, with client-side streaming, improved resilience and error handling under resource shortages. Authentication is redesigned with OIDC support, structured JWT validation, and cluster-key HMAC signing for HTTP redirects. This release also adds the rechunk job for converting datasets between monolithic and chunked layouts, unifies multipart-upload behavior across all cloud backends, and enhances the blob downloader with load-aware throttling. The Python SDK advances to v1.18 with a redesigned Batch API and revised timeout configuration. Configuration validation is strengthened throughout, including automatic migration from v4.0 auth settings.
This release arrives with over 200 commits since v4.0 and maintains backward compatibility, supporting rolling upgrades.
Table of Contents
- GetBatch: Distributed Multi-Object Retrieval
- Authentication and Security
- Chunked Objects
- Blob Downloader
- Rechunk Job
- Unified Load and Throttling
- Transport Layer
- Multipart Upload
- Python SDK
- S3 Compatibility
- Build System and Tooling
- Xaction Lifecycle
- ETL and Transform Pipeline
- Observability
- Configuration Changes
- Tools: aisloader
GetBatch: Distributed Multi-Object Retrieval
The GetBatch workflow now has a robust implementation across the cluster. Retrieval is streaming-oriented, supports multi-bucket batches, and includes tunable soft-error handling. The request path incorporates load-based throttling and may return HTTP 429 ("too many requests") when the system is under severe pressure. Memory and disk pressure are taken into account, and connection resets are handled transparently.
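Clients are expected to treat 429 as a transient, retryable condition. A minimal client-side sketch of that pattern (generic HTTP via the `requests` package, not the AIS SDK; whether the cluster sets a Retry-After header is an assumption here):

```python
import time
import requests

def get_with_backoff(url: str, params: dict, max_retries: int = 5) -> requests.Response:
    """Retry a request that the cluster throttles with HTTP 429 under severe pressure."""
    delay = 0.5
    for _ in range(max_retries):
        resp = requests.get(url, params=params, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honor Retry-After if the server provides one (assumption); otherwise back off exponentially.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay = min(delay * 2, 8.0)
    raise RuntimeError(f"still throttled after {max_retries} attempts")
```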
Configuration is available via a new "get_batch" section:

```json
{
  "max_wait": "30s",        // wait time for remote targets (range: 1s - 1m)
  "warmup_workers": 2,      // pagecache read-ahead workers (-1 = disabled; 0-10)
  "max_soft_errs": 6        // recoverable-error limit per request
}
```

Observability has improved through consolidated counters, Prometheus metrics, and clearer status reporting.
Client and tooling updates include a new Batch API in the Python SDK, extended aisloader support, and ansible composer playbooks for distributed benchmarks.
Reference: https://github.com/NVIDIA/aistore/blob/main/docs/get_batch.md
Commit Highlights
- 5e3382dfa: Refine throttling under memory/disk pressure
- d093b5c0f: Add Prometheus metrics for GetBatch
- e71ad3e3c: Consolidate statistics and add counters
- 2ab4c289a: Handle stream-breakages (ErrSBR); per-request polling and cleanup
- 1c8dc4b29: Recover from connection drops/resets via SharedDM
- 9316804d5: Add client-side streaming GetBatch API (breaking change)
- 174931299: Avoid aborting x-moss; rename internal intra-cluster headers
Authentication and Security
AIS v4.1 introduces a standardized JWT validation model, reorganized configuration for external authentication, and an expanded cluster-key mechanism for securing intra-cluster redirects. Together, these changes provide clearer semantics, better interoperability with third-party identity providers, and more uniform behavior across proxies and targets.
JWT Validation Model and Token Requirements
AIS uses JWTs to both authenticate and authorize requests. Version 4.1 formalizes this process and documents the complete validation flow in auth_validation.md.
Tokens may be issued by the first-party AuthN service or by compatible third-party identity providers (Keycloak, Auth0, custom OAuth services). AIS makes authorization decisions directly from JWT claims; no external role lookups are performed during request execution.
When auth.enabled=true is set in the cluster configuration, proxies validate tokens before routing requests to targets; targets verify redirect signatures when cluster-key signing is enabled.
Token Requirements
AIS accepts tokens signed with supported HMAC or RSA algorithms (HS256/384/512, RS256/384/512).
All tokens must include the standard sub and exp claims; aud and iss are validated when required by configuration.
AIS also recognizes several AIS-specific claims:
- `admin`: Full administrative access; overrides all other claims
- `clusters`: Cluster-scoped permissions (specific cluster UUID or wildcard)
- `buckets`: Bucket-scoped permissions tied to individual buckets within a cluster
Cluster and bucket permissions use the access-flag bitmask defined in api/apc/access.go.
Signature Verification: Static or OIDC
AIS supports two mutually exclusive approaches for signature verification:
- Static verification (`auth.signature`)
  - HMAC (shared secret) or RSA public-key based
  - Suitable for the AIS AuthN service or controlled token issuers
  - Verifies tokens using the configured secret or public key
- OIDC verification (`auth.oidc`)
  - Automatic discovery using `/.well-known/openid-configuration`
  - JWKS retrieval and caching with periodic refresh
  - Validates issuer (`iss`) against `allowed_iss`
  - Supports custom CA bundles for TLS verification
Both modes accept standard Authorization: Bearer <token> headers and AWS-compatible X-Amz-Security-Token.
Authentication Flow
- Extract the token from `Authorization` or `X-Amz-Security-Token`.
- Validate the signature via static credentials or OIDC discovery.
- Check standard claims (`sub`, `exp`, and optionally `aud`, `iss`).
- Evaluate AIS-specific claims (`admin`, `clusters`, `buckets`) to authorize the operation.
- If cluster-key signing is enabled, sign redirect URLs before forwarding to a target; targets verify signatures prior to execution.
This flow applies to all AIS APIs, including S3-compatible requests.
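For illustration, the same checks can be reproduced with PyJWT; the snippet below is a schematic of the documented flow rather than AIS's internal (Go) implementation, and the secret, audience, and claim values are placeholders:

```python
import jwt  # PyJWT

HMAC_SECRET = "your-hmac-secret"        # placeholder; corresponds to auth.signature.key
REQUIRED_AUDIENCE = "your-audience"     # placeholder; corresponds to auth.required_claims.aud

def validate_token(token: str) -> dict:
    # Steps 1-3: verify signature, expiration (exp), and audience in one call;
    # 'require' enforces the presence of the mandatory standard claims.
    claims = jwt.decode(
        token,
        HMAC_SECRET,
        algorithms=["HS256", "HS384", "HS512"],
        audience=REQUIRED_AUDIENCE,
        options={"require": ["sub", "exp"]},
    )
    # Step 4: AIS-specific claims drive authorization.
    if claims.get("admin"):
        return claims                   # full administrative access, overrides the rest
    if not (claims.get("clusters") or claims.get("buckets")):
        raise PermissionError("token carries no cluster- or bucket-scoped permissions")
    return claims
```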
Configuration Changes and Compatibility (v4.0 => v4.1)
The authentication configuration has been reorganized for clarity in v4.1, but the previous format remains fully supported:
```json
{
  "auth": {
    "enabled": true,
    "secret": "your-hmac-secret"
  }
}
```

Version 4.1 introduces explicit sections for signature verification, required claims, and OIDC issuer configuration:
```json
{
  "auth": {
    "enabled": true,
    "signature": {
      "key": "your-key",
      "method": "HS256"      // or RS256, RS384, RS512
    },
    "required_claims": {
      "aud": ["your-audience"]
    },
    "oidc": {
      "allowed_iss": ["https://your-issuer.com"],
      "issuer_ca_bundle": "/path/to/ca.pem"
    }
  }
}
```

OIDC handling includes JWKS caching, issuer validation, and optional CA bundles.
Token-cache sharding reduces lock contention under heavy concurrency.
See the JWT Validation Model and Token Requirements subsection above for validation flow and claim semantics.
Cluster-Key Authentication
Cluster-key authentication provides HMAC-based signing for internal redirect URLs.
It is independent from user JWT authentication and ensures that targets accept only authenticated, correctly routed internal redirects.
```json
{
  "auth": {
    "cluster_key": {
      "enabled": true,
      "ttl": "24h",            // 0 = never expire; min: 1h
      "nonce_window": "1m",    // clock-skew tolerance (max: 10m)
      "rotation_grace": "1m"   // accept old+new key during rotation (max: 1h)
    }
  }
}
```

When enabled, the primary proxy generates a versioned secret and distributes it via metasync.
Proxies sign redirect URLs after validating the caller's token; targets verify the signature before performing redirected operations.
The mechanism enforces correct routing, provides defense-in-depth against forged redirect traffic, and integrates with timestamp and nonce validation.
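Conceptually, the sign-then-verify exchange looks like the following sketch (illustrative Python, not AIS's Go code; the query-parameter names and message layout are made up for this example):

```python
import hashlib
import hmac
import secrets
import time
from urllib.parse import urlencode

CLUSTER_KEY = b"versioned-secret-distributed-via-metasync"   # placeholder

def sign_redirect(url: str) -> str:
    """Proxy side: attach timestamp, nonce, and an HMAC over the redirect URL."""
    ts, nonce = str(int(time.time())), secrets.token_hex(8)
    sig = hmac.new(CLUSTER_KEY, f"{url}|{ts}|{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{url}?{urlencode({'ts': ts, 'nonce': nonce, 'sig': sig})}"

def verify_redirect(url: str, ts: str, nonce: str, sig: str, nonce_window: int = 60) -> bool:
    """Target side: reject stale timestamps (clock skew) and recompute the HMAC."""
    if abs(time.time() - int(ts)) > nonce_window:
        return False
    expected = hmac.new(CLUSTER_KEY, f"{url}|{ts}|{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```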
Commit Highlights
- 8da7e0ef3: Sign and verify redirect URLs (part six); DPQ
- 5b61f2571: Shared-secret handling; config sanitization and public clone
- a2f1b5fbd: Follow-up tests for auth config validation
- d304febdf: Refactor JWKS cache to support dynamic issuer registration
- 3d11b1ee1: Enable S3 JWT auth as automatic fallback
- c0b5aee40: Optimize locking for token validation and revocation
Chunked Objects
The chunked-object subsystem adds a new hard limit on maximum monolithic object size. This prevents ingestion of extremely large single-object payloads that exceed the cluster's cap...
4.0
AIStore 4.0 is a major release that introduces a v2 object-metadata format and a chunked object representation (objects stored and managed as multiple chunks). This release also adds native multipart upload and a new GetBatch API for high-throughput retrieval of batches (objects and/or archived files).
In-cluster ETL has been extended with ETL pipelines, allowing users to chain transformations without intermediate buckets. Observability in 4.0 consolidates on Prometheus (as the sole monitoring backend) and adds disk-level capacity alerts.
All subsystems, extensions, and modules were updated to support the new functionality. The CLI adds a cluster dashboard, improved feature-flag management, and numerous usability improvements. Configuration updates include a new chunks section and additional knobs to support clusters with hundreds of millions of objects and to improve runtime throttling.
This release arrives with nearly 300 commits since the previous 3.31 and maintains compatibility with the previous version, supporting rolling upgrades.
Table of Contents
- Object Metadata Version 2
- Chunked Objects
- Native API: Multipart Upload (unified implementation)
- GetBatch API (ML Endpoint)
- ETL
- Observability and Metrics
- Python SDK 1.16
- CLI
- Configuration Changes
- Performance Optimizations
- Build, CI/CD, and Testing
- Documentation
Object Metadata Version 2
For the first time since AIStore's inception in 2018, the on-disk object metadata format has been upgraded.
Metadata v2 is now the default for all new writes, while v1 remains fully supported for backward compatibility. The system reads both versions seamlessly.
What's New
The upgrade introduces persistent bucket identity and durable flags. Each object now stores its bucket ID (BID) alongside metadata. On load, AIStore checks the stored BID against the current bucket's BID from BMD (bucket metadata). A mismatch marks the object as defunct and evicts it from the cache, enforcing strong referential integrity between every object and the exact bucket generation it was written to.
The previous format had limited room for new features. Version 2 adds a dedicated 8-byte field for future capabilities (e.g., storage-class, compression, encryption, write-back). The system now returns explicit, typed errors (for example, ErrLmetaCorrupted) when metadata is damaged or unparseable, improving troubleshooting and debuggability.
Legacy flags (filename-too-long, lom-is-chunked) keep their original on-disk placement for backward compatibility, but we no longer carve bits out of the BID for new features.
The integrity model is now stronger: BID and flags are verified on load (before any object access). If the bucket is missing or has a different BID, loads fail with explicit errors (ErrBckNotFound, ErrObjDefunct). From the user's perspective, a defunct object effectively disappears, though it remains on disk until the next space-cleanup.
The serialized layout now packs BID and flags alongside checksum and other fields. The (future-proof) format remains compact.
No manual migration is required. Clusters continue to read v1 metadata while all new writes automatically persist v2.
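Schematically, the load-time check boils down to the following (illustrative Python; AIS implements this in Go and the error names map to the typed errors mentioned above):

```python
class ErrObjDefunct(Exception):
    """Stored BID does not match the current bucket generation."""

def check_bucket_identity(stored_bid: int, bmd: dict, bucket: str) -> None:
    """Run before any object access: verify the object against the current BMD."""
    current = bmd.get(bucket)
    if current is None:
        raise KeyError(f"bucket not found: {bucket}")            # ErrBckNotFound in AIS
    if stored_bid != current["bid"]:
        # Object was written against a previous bucket generation:
        # mark it defunct and evict it from the cache.
        raise ErrObjDefunct(f"{bucket}: stored BID {stored_bid} != current {current['bid']}")
```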
Commit Highlights
- ae9ed1d2: Introduce LOM (Local Object Metadata) v2 with BID persistence and durable flags
- cdd3beef: Enforce defunct object detection when BID mismatches current bucket
- b543dab7: Extend serialized layout with new packed fields (LID, flags)
- 12ac88fe: Update error handling and `unpack()` logic for v2 metadata
Another closely related and notable on-disk metadata change is chunked objects and chunk manifests - see Chunked Objects.
Chunked Objects
Version 4.0 introduces chunked objects as a new persistent storage format. The previous limitation that required storing all objects (small or large) as single monoliths has been removed. For chunked objects, AIStore maintains a chunk manifest that describes their content: chunk sizes, checksums, and ordering. The manifest itself is compressed and protected by a checksum.
Each in-progress multipart upload creates a uniquely identified partial manifest, tied to the bucket, object name, and upload ID.
Multiple uploads can proceed in parallel, with each upload identified by its own manifest ID. Partial manifests are persistent, and the cluster automatically checkpoints them after every N chunks (configured via the "checkpoint_every" setting below):
```json
{
  "chunks": {
    "objsize_limit": 0,
    "chunk_size": "1GiB",
    "checkpoint_every": 4
  }
}
```

The full set of chunk-related options is described in the Configuration Changes section.
At any given time, however, only one completed manifest exists for an object, serving as the current object version.
There is no limit, hard or soft, on the number of chunks. Chunks are sequentially numbered and distributed across mountpaths, while the manifest includes fields such as MD5 and ETag for S3 compatibility and reserved flags for future extensions like compression or encryption.
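As a rough mental model of what a partial manifest tracks and when it gets checkpointed (illustrative Python; the real manifests are compact, compressed Go structures and the field names here are not AIS's):

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    num: int        # sequential chunk number (1-based)
    size: int       # bytes
    checksum: str   # per-chunk checksum (also feeds MD5/ETag derivation)
    path: str       # location on one of the target's mountpaths

@dataclass
class PartialManifest:
    bucket: str
    object_name: str
    upload_id: str                          # ties the manifest to one in-progress upload
    chunks: list[Chunk] = field(default_factory=list)

    def add_chunk(self, c: Chunk, checkpoint_every: int = 4) -> None:
        self.chunks.append(c)
        if len(self.chunks) % checkpoint_every == 0:
            self.persist()                  # periodic checkpoint: survive restarts mid-upload

    def persist(self) -> None:
        ...                                 # compress, checksum, and write to disk (elided)
```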
Interrupted or abandoned uploads are also cleaned up automatically. Any orphaned chunks or manifests are discovered and removed by the space-cleanup job, which runs on its own when a mountpath goes out of space, but can just as easily be invoked by an admin at any time:
```console
$ ais space-cleanup --help
```

For users and applications, chunked objects are fully transparent. Chunked and monolithic formats coexist side by side, and the same is true across both the S3-compatible and native APIs. Standard GETs, range reads, and archive reads all continue to work out of the box.
Note: Throughout this document, we use the terms "archive" and "shard" interchangeably to refer to packaged collections of files (TAR, TGZ, ZIP, etc.).
Commit Highlights
- c4133361: Introduce object chunks and chunk manifest (part one); add initial unit test
- 61a0ba6b: LZ4-compress manifest with XXHash64 trailer; include per-chunk path; set max manifest size
- 05174b66: On-disk structure: manifest flags, per-chunk flags; move to `cos.Cksum`; unit test pass
- 2c2afc1a: Deterministic serialization order; enforce uint16 limits; reset `completed` on error
- 54a03394: Refactor "none" checksum handling; cap per-entry metadata size
- 9eaaba8d: Add `StoreCompleted`/`StorePartial`; atomic finalize via `CompleteUfest`; reader/reader-at; checksums/ETag helpers
- a2866fc6: Manifest `LoadPartial`; HRW/ordering fixes; expand unit tests and scripted tests
- 445f6e35: Revamp content resolution; register `ut` (manifest) CT; rename chunk CT to `ch`; remove hardcoded limits
- 9bad95b4: Transition manifest from xattr to a content type
- efbacca6: Refine manifest loading paths (completed vs partial)
- 984b7f25: S3 multipart integration with chunk manifest
Native API: Multipart Upload (unified implementation)
AIStore 4.0 extends its native API to support multipart uploads (MPU), a capability previously available only through the S3-compatible interface. The native flow now mirrors the (de facto) S3 standard: initiate => upload-part => complete (or abort).
The implementation unifies MPU handling across both native and S3 APIs, making it a core storage feature rather than a protocol-specific one.
Each upload gets a unique ID; multiple uploads can run in parallel against the same object. As parts arrive, AIStore records them as chunks and keeps a partial manifest on disk (checkpointed), so long uploads survive restarts without losing state.
Completion no longer stitches bytes into one monolith; it finalizes the manifest and promotes the object to the new chunked format (see Chunked Objects).
Completion rules follow S3 semantics with one clarification: AIStore requires all parts from 1 to N (no gaps) to finalize.
One practical note from testing: some providers enforce a minimum part size (5 MiB for S3, excluding the last part).
Chunks (a.k.a., "parts") may arrive unordered, duplicates are tolerated, and the most recent copy of a given partNumber (chunk number) wins. Partial completion is rejected.
For S3 and compatible Cloud backends, the whole-object ETag is derived from the per-chunk MD5s, and full-object checksums are computed at finalize.
Range and archival reads behave as expected after completion (the reader streams across chunks).
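Since the S3-compatible endpoint shares the same unified MPU implementation, any standard S3 client exercises the full flow. A hedged example with boto3 (the endpoint URL, port, and bucket name are placeholders for a local deployment):

```python
import boto3

def read_chunks(path: str, chunk_size: int):
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Placeholder endpoint: AIS exposes its S3-compatible API under /s3 on the proxy.
s3 = boto3.client("s3", endpoint_url="http://localhost:51080/s3")

mpu = s3.create_multipart_upload(Bucket="my-bucket", Key="big/object.bin")
upload_id = mpu["UploadId"]

parts = []
for num, chunk in enumerate(read_chunks("object.bin", 64 * 1024 * 1024), start=1):
    resp = s3.upload_part(
        Bucket="my-bucket", Key="big/object.bin",
        PartNumber=num, UploadId=upload_id, Body=chunk,
    )
    parts.append({"PartNumber": num, "ETag": resp["ETag"]})

# All parts 1..N must be present (no gaps) to finalize.
s3.complete_multipart_upload(
    Bucket="my-bucket", Key="big/object.bin",
    UploadId=upload_id, MultipartUpload={"Parts": parts},
)
```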
Backends are handled unifor...
3.31
Changelog
Core
- 63367e4: Do not reverse-proxy to self
- aeac54b: Reverse traffic now uses intra-cluster control net (excludes S3)
- 803fc4d: Remove legacy CONNECT tunnel
Global Rebalance
- 1b310f1: Add operation-scope `--latest` and `--sync`
- 62793b1: Limited-scope fix for empty buckets
- cf31b87: Introduce three-way version tie-breakers: local/sender/cloud
S3 multipart upload
CLI
- 45d625c, 59c9bd8: New `ais show dashboard` command: at-a-glance cluster dashboard - node counts, capacity, performance, health, and version info
- 19a83de: Fix `ais show remote-cluster` when the remote is a different HTTP(s)

See also: `ais cluster` command
ETL
- b98e109: Add support for `ETLArgs` in single-object transform flow
See also: Single-Object Copy/Transform Capability
Deployment & Monitoring
- 85ad0a4: Reduce recommended file descriptor limits; update docs
- 59ec6b9: Check and periodically log FD table usage
See also: Maximum number of open files
Refactoring & Lint
- bea853c: Upgrade golangci-lint version
- 1b67381: Fix `noctx` linter errors
- 3373ae3: Refactor `ErrBucketNotFound`
Documentation
- 49cac6a: CLI: add inline help; update supported subcommands
Full changelog: `git log --oneline v1.3.30...v1.3.31` (approximately 20 commits).
3.30
This AIStore release, version 3.30, arrives two months after the previous release with a cycle spanning over 300 commits. As always, 3.30 maintains compatibility with the previous version and supports rolling upgrades.
This release adds the capability to handle batch workloads. The idea is to serve hundreds or thousands of objects (or archived files) in a single serialized streaming (or multipart) response.
AIStore 3.30 delivers performance improvements across multiple subsystems, with particular focus on I/O efficiency, connection management, and ETL operations. The updated and restructured ETL subsystem now features direct filesystem access (by ETL containers), eliminates the WebSocket communicator's io.Pipe bottlenecks, and enables the container to perform direct PUT operations. It also simplifies configuration using minimal runtime specs in place of full Kubernetes Pod YAML.
Python SDK 1.15 introduces high-performance batch processing with streaming decode for large archives and powerful new ETL capabilities. This breaking release removes the deprecated init_code ETL API while adding improved resilience with better retry logic.
For observability, Prometheus now exports disk write latency and pending I/O depth metrics, with automatic capacity refresh triggered by disk alerts. StatsD exporters, while still available, are now disabled by default as we transition to Prometheus and OpenTelemetry as first-class monitoring solutions.
For tooling, the CLI gains a new ml namespace with Lhotse CutSet helpers for ML pipelines. This CLI upgrade also delivers Hugging Face repository integration (including batched downloads) and multiple usability improvements.
Cloud backend enhancements include Oracle Cloud Infrastructure multipart upload support enabling S3 clients (boto3, s3cmd, AWS CLI, etc.) to perform multipart uploads against OCI backends without code changes, plus AWS configuration management improvements and related bug fixes.
New ais object cp and ais object etl commands (and the respective APIs) provide synchronous copy and transform operations without engaging asynchronous multi-object xactions (batch jobs).
Documentation updates include a complete ETL CLI (docs) rewrite, new operational guides for connection management and ML workflows, enhanced Python SDK documentation, and improved AWS backend configuration guidance.
Infrastructure improvements include automatic macOS/arm64 CLI binary builds for GitHub releases and upgrades to all open-source dependencies (except Kubernetes client libraries), bringing security patches and performance improvements across the codebase.
Table of Contents
- Batch Workflows
- ETL
- Performance & Scalability
- Observability
- CLI
- Python SDK 1.15
- Single Object Copy/Transform
- Cloud Backend Enhancements
- Bug and Security Fixes
- Build & CI
- Documentation
- Deprecations & Compatibility
Detailed changelog is available at this link.
1. Batch Workflows
AIStore 3.30 introduces the new GetBatch API. Instead of reading objects and files one at a time, you bundle any number of items (plain objects and/or archived files) into a single request. The cluster then streams back the entire batch as an ordered archive, eliminating multiple network round-trips. Location-wise, the specified data items can reside in-cluster or in remote (cloud) buckets.
The response itself may be streaming or multipart, with supported formats including .tar, .tar.gz, .tar.lz4, and .zip.
The response always preserves the specified order, and in streaming mode it begins flowing immediately, so you don't have to wait for the entire archive to assemble. If you enable "continue on error," missing files won't halt the request; instead, those items appear as zero-length files with a special prefix, and the transfer proceeds with the remaining data.
Lhotse Integration
AIStore 3.30's first vertical GetBatch integration supports the Lhotse speech-processing toolkit.
You can now provide a Lhotse CutSet (cuts.jsonl or .gz or .lz4) to the CLI, and AIStore will assemble each cut's audio frames into training-ready serialized (.tar | .tar.gz | .tar.lz4 | .zip) files.
In your batch manifest, each entry can reference one of the following:
- A complete object (`bucket/audio.wav`)
- A file within an archive (`shard.tar/images/003.jpg`)
- A time range in seconds (`start_time`, `duration`) from Lhotse cuts¹
This integration is intended for speech ML pipelines where audio files are often stored as compressed archives, training requires precise range extraction, and batch sizes can reach thousands of cuts.
AIStore's batch processing groups cuts by source file, minimizing redundant reads when multiple cuts reference the same audio file. Rather than reading byte ranges (which would require multiple I/O operations per file), the system downloads complete files once and performs cut extraction in-memory, delivering superior performance for typical speech training workloads.
Further, large manifests can be automatically split using --batch-size and --output-template parameters, producing multiple equal-sized archives instead of one massive output.
¹ Note: Current implementation processes complete audio files and extracts cuts in-memory for optimal I/O efficiency. Byte-range reading support can be added upon request, though this would impact performance for workloads with multiple cuts per file.
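The grouping step itself is straightforward; a plain-Python sketch of the idea (not the actual implementation):

```python
from collections import defaultdict

# Each cut references a source recording plus a time range in seconds.
cuts = [
    {"source": "speech/rec-001.flac", "start": 0.0, "duration": 4.2},
    {"source": "speech/rec-001.flac", "start": 4.2, "duration": 3.8},
    {"source": "speech/rec-002.flac", "start": 1.0, "duration": 5.0},
]

# Group by source so each audio file is fetched exactly once;
# all of that file's cuts are then extracted from the in-memory copy.
by_source: dict[str, list[dict]] = defaultdict(list)
for cut in cuts:
    by_source[cut["source"]].append(cut)

for source, group in by_source.items():
    print(f"{source}: 1 read, {len(group)} cut(s) extracted in-memory")
```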
CLI Examples
```console
# Stream a cross-bucket batch directly to disk
ais ml get-batch output.zip --spec manifest.yaml --streaming

# Process Lhotse cuts into 1000-sample shards
ais ml lhotse-get-batch --cuts training.cuts.jsonl.lz4 \
    --batch-size 1000 \
    --output-template "shard-{001..999}.tar"
```

Python SDK Integration
```python
from aistore.sdk.batch import BatchRequest, BatchLoader

# Build streaming batch request
req = BatchRequest(streaming=True, continue_on_err=True)
req.add_object_request(obj, archpath="img/0001.jpg", opaque=b"metadata")

# Execute with streaming decode
stream = BatchLoader(cluster_url).get_batch(
    req, return_raw=True, decode_as_stream=True
)
```

Commit Highlights
- 2f18344e: Complete CLI refactor for batch operations
- 404d0011: Implement streaming path with memory optimization
- 726da0d: Multi-batch generator with automatic chunking
- 0affbd75: Ordered multi-node assembly protocol
- f8ee6c2d: Shared stream pool implementation
2. ETL
AIStore 3.30 represents a major restructure of the ETL component that consumed the majority of this development cycle. The overhaul focused primarily on performance improvements, introducing direct PUT operations and eliminating the io.Pipe bottleneck (in the previous WebSocket-based implementation) that was limiting throughput at scale. This restructure required breaking changes to the ETL metadata format and removal of the deprecated init-code API, while also adding automatic filesystem access and a two-phase commit protocol for deployment reliability.
Performance-Focused Restructure
The core motivation for this restructure was addressing performance bottlenecks that became apparent under heavy production workloads. The previous ETL architecture suffered from sub-optimal data flows that created significant overhead for large-scale transformations.
Direct PUT Operations: ETL containers can now write transformed objects directly back to AIStore targets without intermediate hops or staging. This eliminates a full network round-trip and the associated serialization overhead, dramatically improving throughput for write-heavy transformations. Previously, transformed data had to flow back through the proxy layer, creating both latency and bandwidth bottlenecks.
WebSocket io.Pipe Elimination: The WebSocket communicator has been completely rewritten to remove the io.Pipe bottleneck that was causing blocking I/O operations. The new implementation writes directly to file handles instead of using goroutine-coordinated pipes, eliminating unnecessary buffering, memory allocation pressure, and synchronization overhead. This change alone reduces goroutine count by thousands for large ETL jobs.
Streamlined Transport Layer: The per-job transport mechanism now uses a single read-write loop rather than complex goroutine orchestration, reducing resource consumption and improving predictability under load. A one-million-object ETL run on a 20-node cluster now operates with significantly lower memory footprint and CPU overhead.
Breaking Changes
This restructure required two significant breaking changes that affect existing ETL workflows.
ETL Metadata Format: The metadata format has been updated to support the new performance architecture and deployment protocols. Clusters must achieve uniform version status before starting new ETL operations to ensure consistent behavior across all nodes during the transition period.
init_code API Removal: The deprecated init_code method for ETL initialization...
3.29
Changelog
Core
- d41ca5d: Disable alternative cold-GET logic; deprecate but keep it for now
- aa7c05e: Reduce cold-GET conflict timeout; fix jitter (typo)
- 2ec8b22: Calibrate 'num-workers' when 'high-number-of-goroutines' (fix)
- 0292c1c: Multi-object copy/transform: when targets run different UUIDs (fix)
CLI
- 3bfe3d6: Add `ais advanced check-lock` command (in re: `CheckObjectLock` API)
- 2e4e6c1: Fix progress monitor prompt for ETL TCO (multi-object copy/transform) job
- 4ef854b: Add `object-timeout` argument to `etl init` CLI command
API and Configuration Changes
- b3bc65d: Add `CheckObjectLock` API (advanced usage)
- 2507229: Add configurable cold-GET conflict timeout - new knob `timeout.cold_get_conflict` (default 5s)
- 312af7b: Introduce new ETL init spec YAML format
- 25513e7: Add timeout option for individual object transform request
- 82fcb58: Include object path from control message in WebSocket handler args
Python SDK
- e25c8b9: Add serialization utilities for `ETLServer` subclasses
- 1016bda: Add workaround for `Streaming-Cold-GET` limitation in `ObjectFileReader`
- 4e0f356: Improve retry behavior and logging; bump Python version to 1.13.8
- 8503952: `ObjectFile` error handling fixes
- 197f866: `ObjectFileReader` minor follow-up fixes
Documentation
- d3bf4f0: New observability guide (part four)
- 0d03b71: Unified S3 compatibility documentation and consolidation
- 4d6c7c9: Update cross-references
- e45a4f3: Getting-started guide improvements (part three)
- 8b2a1da: Overview/CLI docs; remove old demos; fix transport package readme
- 804abda: Fix external links in ETL main docs
- 8a4eea5: Minor typo fixes
Website and Blog
- 011ff05: Remove formatting preamble (part three)
- 3b0aa31: Remove formatting preamble; preprocess
- 84b948a: Update ETL blog
Tests
- 5e03c66: Fix `CopyBucketSimple` by retrying bucket listing after abort
CI
3.28
The latest AIStore release, version 3.28, arrives nearly three months after the previous release. As always, v3.28 maintains compatibility with the previous version. We fully expect it to upgrade cleanly from earlier versions.
This release delivers significantly improved ETL offload with a newly added WebSocket communicator and optimized data flows between ETL pods and AIS targets in a Kubernetes cluster.
For Python users, we added resilient retry logic that maintains seamless connectivity during lifecycle events - a capability that can be critical when running multi-hour training workloads. We've also improved the JobStats and JobSnapshot models, added MultiJobSnapshot, extended and fixed URL encoding, and added a props accessor method to the Object class.
Python SDK's ETL has been extended with a new ETL server framework that provides three Python-based web server implementations: FastAPI, Flask, and HTTPMultiThreadedServer.
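To illustrate the shape of such a server (plain FastAPI rather than the SDK's server classes, with a made-up route and trivial transform): an ETL web server is essentially an HTTP endpoint that receives object bytes from the co-located AIS target and returns the transformed bytes.

```python
from fastapi import FastAPI, Request, Response

app = FastAPI()

@app.put("/")
async def transform(request: Request) -> Response:
    # Receive the object's bytes and return the transformed payload.
    data = await request.body()
    return Response(content=data.upper(), media_type="application/octet-stream")
```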
Separately, 3.28 adds a dual-layer rate-limiting capability with configurable support for both frontend (client-facing) and backend (cloud-facing, adaptive) operation.
On the CLI side, there are multiple usability improvements listed below. The list-objects operation (ais ls) has been further improved, and inline help and CLI documentation have been amended. The ais show job command now displays cluster-wide object and byte totals for distributed jobs.
Enhancements to observability are also detailed below and include new metrics to track rate-limited operations and extended job statistics. Most of the supported jobs will now report a j-w-f metric: number of mountpath joggers, number of (user-specified) workers, and a work-channel-full count.
Other improvements include new (and faster) content checksum, fast URL parsing (for Go API), optimized buffer allocation for multi-object operations and ETL, support for Unicode and special characters in object names. We've refactored and micro-optimized numerous components, and amended numerous docs, including the main readme and overview.
Last but not least, for better networking parallelism, we now support multiple long-lived peer-to-peer connections. The number of connections is configurable, and the supported batch jobs include distributed sort, erasure coding, multi-object and bucket-to-bucket copy, ETL, global rebalance, and more.
Table of Contents
- Configuration Changes
- New Default Checksum
- Rate Limiting
- ETL
- API Enhancements; Batch Jobs
- AWS S3
- CLI
- Observability
- Python SDK
- Benchmarking Tools
- Build and CI/CD
- Miscellaneous Improvements
Assorted commits for each section are also included below with detailed changelog available at this link.
Configuration Changes
We made several additions to global (cluster-wide) and bucket configuration settings.
Multiple xactions (jobs) now universally include a standard configuration triplet that provides for:
- In-flight compression
- Minimum size of work channel(s)
- Number of peer-to-peer TCP connections (referred to as the stream bundle multiplier)
The following jobs are now separately configurable at the cluster level:
- EC (Erasure Coding)
- Dsort (Distributed Shuffle)
- Rebalance (Global Rebalance)
- TCB (Bucket-to-Bucket Copy/Transform)
- TCO (Multi-Object Copy/Transform)
- Archive (Multi-Object Archiving or Sharding)
In addition, EC is also configurable on a per-bucket basis, allowing for further fine-tuning.
Commit Highlights
- 15cf1ca: Add backward compatible config and BMD changes.
- fc3d8f3: Add cluster config (tco, arch) sections and tcb burst.
- 7b46f0c: Update configuration part two.
- 8c49b6b: Add rate-limiting sections to global configuration and BMD (bucket metadata).
- 15d4ed5: [config change] and [BMD change]: the following jobs now universally support the `XactConf` triplet
New Default Checksum
AIStore 3.28 adds a new default content checksum. While still using xxhash, it uses a different implementation that delivers better performance in large-size streaming scenarios.
The system now makes a clear internal delineation between classic xxhash for system metadata (node IDs, bucket names, object metadata, etc.) and cespare/xxhash (designated as "xxhash2" in configuration) for user data. All newly created buckets now use "xxhash2" by default.
Benchmark tests show improved performance with the new implementation, especially for large objects and streaming operations.
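For reference, the same hash family is available in Python via the `xxhash` package; a quick streaming-checksum sketch (the 32 KiB read size is an arbitrary choice):

```python
import xxhash

def file_xxhash64(path: str, bufsize: int = 32 * 1024) -> str:
    """Stream a file through xxhash64 without loading it into memory."""
    h = xxhash.xxh64()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()
```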
Commit Highlights
- a045b21: Implement new content checksum.
- 7b69dc5: Update modules for new content checksum.
- 9fa9265: Refine cespare vs one-of-one implementation.
- d630c1f: Add cespare to hash micro-benchmark.
Rate Limiting
Version 3.28 introduces rate-limiting capability that operates at both frontend (client-facing) and backend (cloud-facing) layers.
On the frontend, each AIS proxy enforces configurable limits with burst capacity allowance. You can set different limits for each bucket based on its usage patterns, with separate configurations for GET, PUT, and DELETE operations.
For backend operations, the system implements an adaptive rate-shaping mechanism that dynamically adjusts request rates based on cloud provider responses. This approach proactively avoids hitting cloud-provider limits and applies exponential backoff when 429/503 responses are received. The implementation ensures zero overhead when rate limiting is disabled.
Configuration follows a hierarchical model with cluster-wide defaults that can be overridden per bucket. You can adjust intervals, token counts, burst sizes, and retry policies without service disruption.
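Conceptually, the frontend limiter behaves like a token bucket with burst capacity; a minimal sketch (illustrative Python, not AIS's Go implementation or its exact configuration knobs):

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second with short bursts up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False            # caller would respond with 429 or retry later

# Example: a per-bucket GET limit of 100/s with a burst allowance of 200
get_limiter = TokenBucket(rate=100, burst=200)
```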
Commit Highlights
- e71c2b1: Implemented frontend/backend dual-layer rate limiting system.
- 9f4d321: Added per-bucket overrides and exponential backoff for cloud 429 errors.
- 12e5787: Not rate-limiting remote bucket with no props.
- be309fd: docs: add rate limiting readme and blog.
- b011945: Rate-limited backend: complete transition.
- fcba62b: Rate-limit: add stats; prometheus metrics.
- 8ee8b44: Rate-limited backend; context to propagate vlabs; prefetch.
- c4b796a: Enable/disable rate-limited backends.
- 666796f: Core: rate-limited backends (major update).
ETL
ETL (Extract, Transform, Load) is a cornerstone feature designed to execute transformations close to the data with an extremely high level of node-level parallelism across all nodes in the AIS cluster.
WebSocket
Version 3.28 adds WebSocket (ws://) as yet another fundamental communication mechanism between AIS nodes and ETL containers, complementing the existing HTTP and IO (STDIN/STDOUT) communications.
The WebSocket implementation supports multiple concurrent connections per transform session, preserves message order and boundaries for reliable communication, and provides stateful session management for long-lived, per-xaction sessions.
Direct PUT
The release implements a new direct PUT capability for ETL transformations that optimizes the data flow between components. Traditionally, data would flow from a source AIS target to an ETL container, back to the source AIS target, and finally to the destination target. With direct PUT, data flows directly from the ETL container to the destination AIS target.
Stress tests show 3x to 5x performance improvement with direct PUT enabled. This capability is available acro...
3.27
Changelog
List objects
- "skip-lookup followed by list-remote (fix)" 7762faf2c
- refactor and fix the relevant snippet
- main loop: check 'aborted' also after getting next page
CLI
- "show all supported feature-flags and descriptions (usability)" 9f222a215
- cluster scope and bucket scope
- set and show operations
- color current (set) features
- update readme
- "colored help (all variations)" e95f3ac7f628
- commands, subcommands, and the app ('ais --help') itself
- a bunch of colored templates and wiring
- separately, `more` pagination: replace memsys with a simple buffer
- with refactoring and cleanup
Python & ETL
- "fix ETL tests" b639c0d68
- "feat: add 'max_pool_size' parameter to Client and SessionManager" 8630b853b
- "[Go API change] extend
api.ETLObject- add transform args" 4b434184a- add
etl_argsargument - add
TestETLInlineObjWithMetadataintegration test
- add
- "add ETL transformation args QParam support" 8d9f2d11ae7d
- introduce
ETLConfigdataclass to encapsulate ETL-related parameters. - update
get_readerandgetmethods to supportETLConfig, ensuring consistent handling of ETL metadata. - add ETL-related query parameter (
QPARAM_ETL_ARGS) in Python SDK. - refactor
get_readerandgetto use the new ETL configuration approach.
- introduce
Build & Lint
3.26
Version 3.26 arrives 4 months after the previous release and contains more than 400 commits.
The core changes in v3.26 address the last remaining limitations. A new scrub capability has been added, supporting bidirectional diffing to detect remote out-of-band deletions and version changes. The cluster can now also reload updated user credentials at runtime without requiring downtime.
Enhancements to observability are detailed below, and performance improvements include memory pooling for HTTP requests, global rebalance optimizations, and micro-optimizations across the codebase. Key fixes include better error-handling logic (with a new category for IO errors and improvements to the filesystem health checker) and enhanced object metadata caching.
The release also introduces the ability to resolve split-brain scenarios by merging splintered clusters. When and if a network partition occurs and two islands of nodes independently elect primaries, the "set primary with force" feature enables the administrative action of joining one cluster to another, effectively restoring the original node count. This functionality provides greater control for handling extreme and unlikely events that involve network partitioning.
On the CLI side, users can now view not only the fact that a specific UUID-ed instance of operations like prefetch, copy, etl, or rebalance is running, but also the exact command line that was used to launch the batch operation. This makes it easier to track and understand batch job context.
For the detailed changelog, please see link.
Table of Contents
CLI
The CLI in v3.26 features revamped inline help, reorganized command-line options with clearer descriptions, and added usage examples. Fixes include support for multi-object PUT with client-side checksumming and universal prefix support for all multi-object commands.
A notable new feature is the ais scrub command for validating in-cluster content. Additionally, the ais performance command has received several updates, including improved calculation of cluster-wide throughput. Top-level commands and their options have been reorganized for better clarity.
The ais scrub command in v3.26 focuses on detection rather than correction. It detects:
- Misplaced objects (cluster-wide or within a specific multi-disk target)
- Objects missing from the remote backend, and vice versa
- In-cluster objects that no longer exist remotely
- Objects with insufficient replicas
- Objects larger or smaller than a specified size
The command generates both summary statistics and detailed reports for each identified issue. However, it does not attempt to fix misplaced or corrupted objects (those with invalid checksums). The ability to correct such issues is planned for v3.27.
For more details, see the full changelog here.
Observability
Version 3.26 includes several important updates. Prometheus metrics are now updated in real-time, eliminating the previous periodic updates via the prometheus.Collect interface.
Latencies and throughputs are no longer published as internally computed metrics; instead, .ns.total (nanoseconds) and .size (bytes) metrics are used to compute latency and throughput based on time intervals controlled by the monitoring client.
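For example, given two scrapes of the corresponding counters, a monitoring client can derive the averages itself (metric names below are illustrative shorthand, not the exact Prometheus names):

```python
def averages_between_scrapes(prev: dict, curr: dict, interval_s: float) -> tuple[float, float]:
    """Compute average GET latency (ms) and throughput (MiB/s) from counter deltas."""
    d_ns    = curr["get.ns.total"] - prev["get.ns.total"]   # cumulative nanoseconds
    d_count = curr["get.n"]        - prev["get.n"]          # request count
    d_bytes = curr["get.size"]     - prev["get.size"]       # cumulative bytes
    latency_ms = (d_ns / d_count) / 1e6 if d_count else 0.0
    throughput_mib_s = d_bytes / interval_s / (1 << 20)
    return latency_ms, throughput_mib_s
```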
Default Prometheus go_* counters and gauges, including metrics for tracking goroutines and garbage collection, have been removed.
In addition to the total aggregated metrics, separate latency and throughput metrics are now included for each backend.
Metrics resulting from actions on a specific bucket now include the bucket name as a Prometheus variable label.
In-cluster writing generated by xactions (jobs) also now includes xaction labels, including the respective kind and ID, which results in more PUT metrics, including those not generated from user PUT requests.
Finally, all GET, PUT, and DELETE errors include the bucket label, and FSHC-related IO errors now include the mount path (faulty disk) label.
Commit Highlights
- Commit e6814a2: Added Prometheus variable labels; removed collector.
- Commit 3b323ff: Polymorphic `statsValue`, removed `switch kind`.
- Commit 9290dc5: Amended re-initializing backends.
- Commit d2ceca3: Removed default metrics (`go_gc_*`, `go_memstats_*`), started counting PUTs generated by xactions.
- Commit 118a821: Major update (with partial rewrite) - added variable labels.
- Commit 2d181ab: Tracked and showed jobs run options (prefix, sync, range, etc.)
- Commit 8690876: API change for xactions to provide initiating control message, added `ctlmsg` to all supported x-kinds.
- Commit afef76b: Added CPU utilization tracking and alerts.
Separately and in addition, AIStore now supports distributed tracing via OpenTelemetry (OTEL). To enable, use the `oteltracing` build tag.
- Commit 1f19cde13: Added support for distributed tracing.
For more details, see the full changelog here.
Python SDK
Added the ObjectFileWriter class (extending io.BufferedWriter) for file-like writing operations. This enhancement builds upon the ObjectFile feature introduced in the previous release, providing zero-copy and resilient streaming capabilities. More information can be found in the tech blogs on enhancing ObjectFile performance and resilient streaming.
Additionally, this update includes various fixes and minor improvements, such as memory optimizations for ObjectFile, improved error handling, and enhancements to the API's usability and performance.
Support has also been added for:
- multi-object transforms
- OCI backend, and more.
Complete changelog is available here.
Erasure Coding
The v3.26 release introduces significant improvements to Erasure Coding in AIStore, focusing on enhanced performance, better data recovery, improved configuration options, and seamless integration with other features. Key updates include the ability to recover EC data in scenarios where multiple parts are lost, a reduced memory footprint, fixed descriptor leakage when rebuilding object from slices, and improved CPU utilization during EC operations. Additionally, intra-cluster networking has been optimized, with reduced overhead when erasure coding is not in use.
Oracle (OCI) Object Storage
Until recently, AIStore natively supported three cloud storage providers: AWS S3, GCS, and Microsoft Azure Blob Storage. With the v3.26 release, OCI (Oracle Cloud Infrastructure) Object Storage has been added as the fourth supported backend. This enhancement allows AIStore to utilize OCI Object Storage directly, providing improved performance for large object uploads and downloads.
Native support for OCI Object Storage includes tunable optimizations for efficient data transfer between AIStore and OCI's infrastructure. This new addition ensures that AIStore offers the same level of support and value-added functionality for OCI as it does for AWS S3, GCS, and Microsoft Azure Blob Storage.
For more details, see:
- Blog: OCI backend
- [OCI changelog](https://...
3.25
Changelog
- "S3 compatibility API: add missing access control" c046cb8
- "core: async shutdown/decommission; primary to reject node-level requests" 2e17aaf
  | * primary will now fail node-level decommission and similar lifecycle and cluster-membership (changing) requests
  | * keeping the shutdown-cluster exception when `forced` (in re: local playground)
  | * when shutting down or decommissioning an entire cluster, primary will now perform the final step asynchronously
  | * (so that the API caller receives ok)
- "python/sdk: improve error handling and logging for `ObjectFile`" b61b3db
- "core: cold-GET vs upgrading rlock to wlock" 9857e78
  | * remove all `sync.Cond` related state and logic
  | * reduce low-level `lock-info` to just rc and wlock
  | * poll for up to `host-busy` timeout
  | * return `err-busy` if unsuccessful
- "CLI: `show cluster` to sort rows by POD names with primary on top" e469684
- "health check to be forwarded to primary when invoked with "primary-ready-to-rebalance" query param" a59f921
  | * (previously, non-primary would fail the request)
- "python: avoid module-level import of webds; remove 'webds' dependency" 228f23f
  | * refactor dataset_config.py: avoid module-level import of ShardWriter
  | * update pyproject.toml: add webdataset==0.2.86 as an optional dependency
- "aisloader: '--subdir' vs prefix (clarify)" 7e7e8e4
- "CLI: directory walk: do not call `lstat` on every entry (optimize)" 4a22b88
  | * skip errors iff "continue-on-error"
  | * add verbose mode to see all warnings - especially when invoked with the "continue-on-error" option
  | * otherwise, stop walking and return the error in question
  | * with partial rewrite
- "docs: add tips for copying files from Lustre; `ais put` vs `ais promote`" 3cb20f6
- "CLI: `--num-workers` option ('ais put', 'ais archive', and more)" d5e6fbc
  | * add; amend
  | * an option to execute serially (consistent with aistore)
  | * limit not to exceed (2 * num-CPUs)
  | * remove `--conc` flag (obsolete)
  | * fix inline help
- "CLI: PUT and archive files from multiple matching directories" 16edff7
  | * GLOBalize
  | * PUT: add back `--include-src-dir` option
- "trim prefix: list-objects; bucket-summary; multi-obj operations" 7cf1546
  | * `rtrim(prefix, '*')` to satisfy one common expectation
  | * proxy only (leaving CLI intact)
- "unify 'validate-prefix' & 'validate-objname'; count list-objects errors" 5789273
  | * add `ErrInvalidPrefix` (type-code)
  | * refactor and micro-optimize `validate-*` helpers; unify
  | * move object name validation to proxies; proxies to (also) count `err.list.n`
  | * refactor `ver-changed` and `obj-move`
3.24
Version 3.24 arrives nearly 4 months after the previous one and contains more than 400 commits that fall into several main categories, topics, and sub-topics:
1. Core
1.1 Observability
We improved and optimized stats-reporting logic and introduced multiple new metrics and new management alerts.
There's now an easy way to observe per-backend performance and errors, if any. Instead of (or rather, in addition to) a single combined counter or latency, the system separately tracks requests that utilize AWS, GCP, and/or Azure backends.
For latencies, we additionally added cumulative "total-time" metrics:
- "GET: total cumulative time (nanoseconds)"
- "PUT: total cumulative time (nanoseconds)"
- and more
Together with respective counters, those total-times can be used to compute precise latencies and throughputs over arbitrary time intervals - either on a per-backend basis or averaged across all remote backends, if any.
New management alerts include keep-alive, tls-certificate-will-soon-expire (see next section), low-memory, low-capacity, and more.
Build-wise, aisnode with StatsD will now require the corresponding build tag.
Prometheus is effectively the default; for details, see related:
1.2 HTTPS; TLS
HTTPS deployment implies (and requires) that each AIS node (aisnode) has a valid TLS (X.509) certificate.
TLS certificates tend to expire from time to time, or eventually. Each TLS certificate expires, with a standard-defined maximum of 13 months - roughly, 397 days.
AIS v3.24 automatically reloads updated certificates, tracks expiration times, and reports any inconsistencies between certificates in a cluster:
Associated Grafana and CLI-visible management alerts:
| alert | comment |
|---|---|
| `tls-cert-will-soon-expire` | Warning: less than 3 days remain until the current X.509 cert expires |
| `tls-cert-expired` | Critical (red) alert (as the name implies) |
| `tls-cert-invalid` | ditto |
Finally, there's a brand-new management API and ais tls CLI.
1.3 Filesystem Health Checker (FSHC)
The FSHC component detects disk faults, raises associated alerts, and disables degraded mountpaths.
AIS v3.24 comes with a major FSHC update (version 2), with new capabilities that include:
- detect mountpath changed at runtime;
- differentiate in-cluster IO errors from network and remote backend (errors);
- support associated configuration (section "API changes; Config changes" below);
- resolve (mountpath, filesystem) to disk(s), and handle:
- no-disks exception;
- disk loss, disk fault;
- new disk attachments.
1.4 Keep-Alive; Primary Election
In-cluster keep-alive mechanism (a.k.a. heartbeat) was generally micro-optimized and improved. In particular, when and if failing to ping primary via intra-cluster control, an AIS node will now utilize its public network, if available.
And vice versa.
As an aside, AIS does not require provisioning 3 different networks at deployment time. This has always been and remains a recommended option. But our experience running Kubernetes clusters in production environments proves that it is, well, highly recommended.
1.5 Rebalance; Erasure Coding: Intra-Cluster streams
Needless to say, erasure coding produces a lot of in-cluster traffic. For all those erasure-coded slice-sending-receiving transactions, AIS targets establish long-living peer-to-peer connections dubbed streams.
Long story short, any operation on an erasure bucket requires streams. But, there's also the motivation not to keep those streams open when there's no erasure coding. The associated overhead (expectedly) grows proportionally with the size of the cluster.
In AIS v3.24, we solve this problem, or part of this problem, by piggybacking on keep-alive messages that provide timely updates. Closing EC streams is a lazy process that may take several extra minutes, which is still preferable given that AIS clusters may run for days and weeks at a time with no EC traffic at all.
1.6 List Virtual Directories
Unlike hierarchical POSIX, object storage is flat, treating forward slash ('/') in object names as simply another symbol.
But that's not the entire truth. The other part of it is that users may want to operate on (i.e., list, load, shuffle, copy, transform, etc.) a subset of objects in a dataset that, for lack of a better word, looks exactly like a directory.
For details, please refer to:
1.7 API changes; Config changes
Including:
- "[API change] show TLS certificate details; add top-level 'ais tls' command" 091f7b0
- "[API change]: extend HEAD(object) to check remote metadata" c1004dd
- "[config change]: FSHC v2: track and handle total number of soft errors" a2d04da
- and more
1.8 Performance Optimization; Bug fixes; Improvements
Including:
- "new RMD not to trigger rebalance when disabled in the config" 550cade20
- "prefetch/copy/transform: number of concurrent workers" a5a30247d, 8aa832619
- "intra-cluster notifications: reduce locking, mem allocations" b7965b7be
- and much more
2. Initial Sharding (ishard); Distributed Shuffle (dsort)
Initial Sharding utility (ishard) is intended to create well-formed WebDataset-formatted shards from the original dataset.
Goes without saying: original ML datasets will have an arbitrary structure, a massive number of small files and/or very large files, and deeply nested directories. Notwithstanding, there's almost always the need to batch associated files (that constitute computable samples) together and maybe pre-shuffle them for immediate consumption by a model.
Hence, ishard:
3. Authentication; Access Control
Other than code improvements and micro-optimizations (as in continuous refactoring) of the AuthN codebase, the most notable updates also include:
| topic | what changed |
|---|---|
| CLI | improved token handling; user-friendly (and improved) error management; easy-to-use configuration that entails admin credentials, secret keys, and tokens |
| Configuration | notable (and related) environment variables: AIS_AUTHN_SECRET_KEY, AIS_AUTHN_SU_NAME, AIS_AUTHN_SU_PASS, and AIS_AUTHN_TOKEN |
| AuthN container image (new) | tailored specifically for Kubernetes deployments - for seamless integration and easy setup in K8s environments |
4. CLI
Usability improvements across the board, including:
- "add 'ais tls validate-certificates' command" 0a2f25c
- "'ais put --retries ' with increasing timeout, if need be" 99b7a96
- "copy/transform: add '--num-workers' (number of concurrent workers) option" 2414c68
- "extend 'show cluster' - add 'alert' column" 40d6580df
- "show configured backend providers" ba492a1
- "per-backend...