
Acceleration Gateway

Accelerated Storage for
AI Training

Every second your GPUs wait for data is wasted money. TAG is a local caching proxy that delivers near-local throughput for AI training — with zero code changes.

View Documentation
99.4% GPU Utilization · 5.7× Faster Warm Epochs · 0 Code Changes

Architecture

Inside the Training Instance.

TAG sits between your training code and cloud storage, caching hot data on local NVMe SSDs. The full stack, layer by layer:

LAYER.01 // COMPUTE: H100 GPUs (H100_01 through H100_04)
LAYER.02 // SOFTWARE LOGIC: training code calling the standard, accelerated API (import tigris as tg; model = tg.Train("s3://model-data")) through the API router
LAYER.03 // ACCELERATION GATEWAY: TAG core engine
LAYER.04 // LOCAL CACHE: local NVMe SSDs
LAYER.05 // PERSISTENCE: Tigris S3-compatible storage

How It Works

A Local Cache That Speaks S3.

TAG runs as a sidecar on your training instance. Epoch 1 fetches from Tigris, epoch 2+ reads from local NVMe at disk speed. Drop-in S3 API — zero code changes required.

[Diagram: Acceleration Gateways deployed in us-west-1 (Zone A), us-central-1 (Zone B), and us-east-1 (Zone A), all backed by Tigris Object Storage. Store once and access anywhere across clouds.]
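The warm-epoch behavior described above is a classic read-through cache: check local NVMe first, fall back to the origin on a miss, and populate the cache on the way back. A minimal sketch of the idea (not TAG's actual implementation; the class and paths are illustrative):

```python
import hashlib
import os

class ReadThroughCache:
    """Serve reads from a local cache directory; fall back to origin on miss."""

    def __init__(self, cache_dir, fetch_from_origin):
        self.cache_dir = cache_dir
        self.fetch_from_origin = fetch_from_origin  # e.g. an S3 GET
        os.makedirs(cache_dir, exist_ok=True)

    def _cache_path(self, key):
        # Hash the object key so any S3 key maps to a flat local filename.
        return os.path.join(self.cache_dir, hashlib.sha256(key.encode()).hexdigest())

    def get(self, key):
        path = self._cache_path(key)
        if os.path.exists(path):            # warm epoch: local NVMe read
            with open(path, "rb") as f:
                return f.read()
        data = self.fetch_from_origin(key)  # cold epoch: network fetch
        with open(path, "wb") as f:         # populate cache for the next epoch
            f.write(data)
        return data
```

Epoch 1 pays the network cost once per object; every later epoch is a local disk read.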

Near-Local Throughput

NVMe-speed reads after the first epoch. Training data served from local disk, not the network.

Zero Code Changes

Drop-in S3 API compatibility. Point your training script at TAG and it handles the rest.
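With S3-compatible proxies, "zero code changes" typically means repointing the SDK at the local endpoint via environment variables rather than editing the training script. A hypothetical example, assuming TAG listens on localhost port 8080 (the address is illustrative, not documented here):

```shell
# Point any AWS SDK (boto3, aws-sdk-go, etc.) at the local TAG sidecar.
export AWS_ENDPOINT_URL=http://localhost:8080

# The training script runs unmodified; its S3 reads now flow through TAG.
python train.py
```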

Intelligent Prefetching

Anticipates data access patterns to keep your GPU pipeline full and idle time near zero.
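Prefetching of this kind is commonly a bounded read-ahead: while the GPU consumes object N, background workers fetch object N+1 and beyond. A minimal sketch, assuming a simple sequential access pattern (function names are illustrative, not TAG's API):

```python
from concurrent.futures import ThreadPoolExecutor

def prefetched_reads(keys, fetch, depth=4):
    """Yield (key, data) in order while fetching up to `depth` keys ahead."""
    with ThreadPoolExecutor(max_workers=depth) as pool:
        # Prime the pipeline with the first `depth` fetches.
        futures = [pool.submit(fetch, k) for k in keys[:depth]]
        for i, key in enumerate(keys):
            nxt = i + depth
            if nxt < len(keys):                 # keep the pipeline full
                futures.append(pool.submit(fetch, keys[nxt]))
            yield key, futures[i].result()      # blocks only if not yet fetched
```

Tuning `depth` trades memory for pipeline fullness: deeper read-ahead hides more network latency but holds more objects in flight.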

Performance

Keep GPUs Saturated During Training.

TAG achieves 99.4% GPU utilization with just 4 data-loading workers, exceeding the 98% that 16 workers reach without TAG. Stop paying for idle GPUs.

GPU Utilization During Training (Tigris Object Storage):
1 worker: 8.7%
8 workers: 68%
16 workers: 98%
4 workers + TAG: 99.4%
Source: tigrisdata.com/blog/training-object-storage
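To put the utilization numbers in cost terms, idle GPU time converts directly into wasted spend. A back-of-envelope illustration; the $/GPU-hour rate below is a placeholder, not a quoted price:

```python
HOURLY_RATE = 3.00   # assumed $/GPU-hour; substitute your provider's price
GPUS = 8
HOURS = 24

def wasted_dollars(utilization):
    """Dollars spent on GPU time that did no work at a given utilization."""
    return GPUS * HOURS * HOURLY_RATE * (1 - utilization)

# Utilization figures from the chart above.
for label, util in [("1 worker", 0.087), ("16 workers", 0.98), ("4 workers + TAG", 0.994)]:
    print(f"{label:>16}: ${wasted_dollars(util):,.2f} wasted per day")
```

At these placeholder rates, an 8-GPU node wastes roughly $526/day at 8.7% utilization, $11.52/day at 98%, and $3.46/day at 99.4%.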

Accelerate Your Training Pipeline.

TAG is available in early access. Get near-local storage performance for your AI training workloads — across clouds.

Explore AI Workload Docs