Accelerated Storage for AI Training
Every second your GPUs wait for data is wasted money. TAG is a local caching proxy that delivers near-local-disk throughput for AI training from cloud object storage, with zero code changes.
Architecture
Inside the Training Instance.
TAG sits between your training code and cloud storage, caching hot data on local NVMe SSDs. The full stack, from the GPUs down to persistent object storage:
H100 Tensor Core GPUs processing training workloads with massive parallel compute power.
Standard S3 API interface. Drop-in replacement requiring zero code changes to existing training scripts.
Tigris Acceleration Gateway. Intelligent hot-data routing with zero-copy architecture for maximum throughput.
High-speed NVMe SSD pool for frequently accessed data. Local NVMe performance with cloud-scale capacity.
Tigris Object Storage. S3-compatible global persistence tier with unlimited scalability and durability.
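To make the drop-in S3 interface concrete, here is a minimal sketch of reading a training shard through TAG with boto3. The endpoint address, bucket, and key are illustrative placeholders rather than documented TAG defaults; the point is that the only change from reading Tigris (or S3) directly is the endpoint override.

```python
import os
import boto3

# Hypothetical local TAG endpoint; the actual address depends on how the
# sidecar is deployed on your training instance.
TAG_ENDPOINT = os.environ.get("TAG_ENDPOINT_URL", "http://127.0.0.1:8080")

# Plain boto3: the S3 client is unchanged except for the endpoint_url override.
s3 = boto3.client("s3", endpoint_url=TAG_ENDPOINT)

# A cache hit is served from local NVMe; a miss is fetched from Tigris
# object storage and cached for subsequent reads.
obj = s3.get_object(Bucket="training-data", Key="shards/shard-00000.tar")
data = obj["Body"].read()
```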
How It Works
A Local Cache That Speaks S3.
TAG runs as a sidecar on your training instance. Epoch 1 fetches from Tigris; every epoch after that reads from local NVMe at disk speed. The API is drop-in S3: zero code changes required.
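A minimal sketch of that epoch behavior, assuming a placeholder local endpoint and an illustrative bucket of shard files: the first pass is bounded by fetches from Tigris, later passes by local NVMe reads.

```python
import time
import boto3

# Placeholder endpoint, bucket, and shard names for illustration only.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080")
shard_keys = [f"shards/shard-{i:05d}.tar" for i in range(8)]

for epoch in range(1, 4):
    start = time.monotonic()
    for key in shard_keys:
        # Epoch 1: TAG misses, fetches the shard from Tigris, and caches it on
        # local NVMe. Epochs 2+: the same GET is served from the local cache.
        s3.get_object(Bucket="training-data", Key=key)["Body"].read()
    print(f"epoch {epoch}: {time.monotonic() - start:.2f}s")
```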
Near-Local Throughput
NVMe-speed reads after the first epoch. Training data served from local disk, not the network.
Zero Code Changes
Drop-in S3 API compatibility. Point your training script at TAG and it handles the rest.
Intelligent Prefetching
Anticipates data access patterns to keep your GPU pipeline full and idle time near zero.
Performance
Keep GPUs Saturated During Training.
TAG sustains 99.4% GPU utilization with just 4 workers, matching what 16 workers achieve without it. Stop paying for idle GPUs.
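As a rough sketch of the setup behind that comparison, assuming "workers" means PyTorch DataLoader worker processes and that each worker streams samples through TAG's S3 endpoint (bucket, keys, and endpoint below are placeholders):

```python
import boto3
import torch
from torch.utils.data import DataLoader, Dataset


class S3SampleDataset(Dataset):
    """Map-style dataset that reads raw samples through TAG's S3 endpoint."""

    def __init__(self, bucket, keys, endpoint_url="http://127.0.0.1:8080"):
        self.bucket = bucket
        self.keys = keys
        self.endpoint_url = endpoint_url
        self._s3 = None  # created lazily, once per DataLoader worker process

    def _client(self):
        if self._s3 is None:
            self._s3 = boto3.client("s3", endpoint_url=self.endpoint_url)
        return self._s3

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        obj = self._client().get_object(Bucket=self.bucket, Key=self.keys[idx])
        raw = obj["Body"].read()
        # Decoding and augmentation would go here; return raw bytes as uint8
        # to keep the example self-contained.
        return torch.frombuffer(bytearray(raw), dtype=torch.uint8)


keys = [f"samples/{i:06d}.bin" for i in range(1024)]
dataset = S3SampleDataset("training-data", keys)

# With warm reads served from local NVMe, a small worker pool is enough to
# keep the input pipeline ahead of the GPUs.
loader = DataLoader(dataset, batch_size=None, num_workers=4)
```

The dataset body is identical whether it reads Tigris directly or through TAG; only endpoint_url changes, which is what the zero-code-changes claim amounts to in practice.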
Accelerate Your Training Pipeline.
TAG is available in early access. Get near-local storage performance for your AI training workloads, on any cloud.