The storage layer purpose-built for AI

Tigris storage scales infinitely, can be shared across any cloud, and provides instant access to all your data, all over the globe.

Globally distributed S3-compatible object storage
Trusted by
Fly.io · Playground · Fal · Amplified · Beam · Quickwit · Arcjet · Krea
LogSeam · reconfigured · Mappa · Parasail · Hydra · Myhren ai · Theb ai · Humaner Ai
Benefits of Tigris
Single Global Endpoint

Bottomless, global-by-default storage

Low-latency access from anywhere — scale infinitely by default

Store Data Near Users

Predictable data access cost

Zero egress fees — scale AI without surprises

S3 Compatible API

Integration with existing tools

S3-compatible — use your existing code just by updating the endpoint

Fast Small Object Retrieval

Vector-ready object storage

Fast access to small files like embeddings & model slices

Store anything. Access instantly. Scale globally.

As data demand grows, efficient and scalable AI infrastructure is more critical than ever. Tigris is S3-compatible object storage, reimagined for high-performance data + AI workloads. Perfect for powering tables, vector embeddings, ML artifacts, and multimodal AI pipelines.
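To make the small-object pattern concrete, here is a minimal sketch using the AWS SDK for JavaScript v3; the bucket name and key layout are illustrative assumptions, not a prescribed scheme. It writes one vector embedding as a small JSON object and reads it straight back:

import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({
  region: "auto",
  endpoint: "https://fly.storage.tigris.dev",
});

// Store one embedding as a small JSON object (bucket and key are illustrative).
const embedding = { id: "doc-42", vector: [0.12, -0.53, 0.87] };
await client.send(
  new PutObjectCommand({
    Bucket: "embeddings-bucket",
    Key: "embeddings/doc-42.json",
    Body: JSON.stringify(embedding),
    ContentType: "application/json",
  })
);

// Fetch it back; small reads like this are the hot path for vector workloads.
const res = await client.send(
  new GetObjectCommand({ Bucket: "embeddings-bucket", Key: "embeddings/doc-42.json" })
);
console.log(JSON.parse(await res.Body.transformToString()));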

Predictable data costs for training and inference

Eliminate data transfer costs across clouds, enabling AI companies to optimize GPU usage and lower their overall operational expenses

Security and reliability

The same availability and durability guarantees as the big clouds, with live data to prove it: no filters, no spin. SOC 2 certified, GDPR compliant, and HIPAA compatible.

Out-of-the-box compatibility with

It's Easy to Get Going

JavaScript

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import fs from "fs";

// Credentials are read from the standard AWS environment variables
// (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
const client = new S3Client({
  region: "auto",
  endpoint: "https://fly.storage.tigris.dev",
});

const fileStream = fs.createReadStream("Docker.dmg");
(async () => {
  const upload = new Upload({
    params: {
      Bucket: "foo-bucket",
      Key: "Docker-100.dmg",
      Body: fileStream,
    },
    client: client,
  });

  await upload.done();
})();
require "aws-sdk"

bucket_name = "foo-bucket"

s3 = Aws::S3::Client.new(
    region: "auto",
    endpoint: "https://fly.storage.tigris.dev",
)

# List the first ten objects in the bucket
resp = s3.list_objects(bucket: 'foo-bucket', max_keys: 10)
resp.contents.each do |object|
    puts "#{object.key} => #{object.etag}"
end

# Put an object into the bucket
file_name = "bar-file-#{Time.now.to_i}"
begin
    s3.put_object(
        bucket: bucket_name,
        key: file_name,
        body: File.read("bar.txt")
    )
    puts "Uploaded #{file_name} to #{bucket_name}."
rescue Exception => e
    puts "Failed to upload #{file_name} with error: #{e.message}"
    exit "Please fix error with file upload before continuing."
end

Python

import boto3

# Create S3 service client
svc = boto3.client('s3', endpoint_url='https://fly.storage.tigris.dev')

# List buckets
response = svc.list_buckets()

for bucket in response['Buckets']:
    print(f'  {bucket["Name"]}')

# List objects
response = svc.list_objects_v2(Bucket='foo-bucket')

for obj in response['Contents']:
    print(f'  {obj["Key"]}')

# Upload a local file to the bucket
svc.upload_file('bar.txt', 'foo-bucket', 'bar.txt')

# Download the object back to a local file
svc.download_file('foo-bucket', 'bar.txt', 'bar-downloaded.txt')

Go

package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	sdkConfig, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Printf("Couldn't load default configuration. Here's why: %v\n", err)
		return
	}

	// Create S3 service client
	svc := s3.NewFromConfig(sdkConfig, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://fly.storage.tigris.dev")
	})

	file, err := os.Open("bar.txt")
	if err != nil {
		log.Printf("Couldn't open file to upload. Here's why: %v\n", err)
		return
	} else {
		defer file.Close()
		_, err = svc.PutObject(context.TODO(), &s3.PutObjectInput{
			Bucket: aws.String("foo-bucket"),
			Key:    aws.String("bar.txt"),
			Body:   file,
		})
		if err != nil {
			log.Printf("Couldn't upload file. Here's why: %v\n", err)
		}
	}
}

Use your existing code

Works out of the box with existing S3 SDKs, as the sketch below the steps shows:

  • Set your endpoint to Tigris

  • Update your access keys

  • Your existing code Just Works™️
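A minimal sketch of that switch with the JavaScript SDK: the only Tigris-specific pieces are the endpoint and the access keys (the environment variable names below are illustrative; any credential source your SDK already supports works):

import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

// Point the existing S3 client at Tigris; nothing else in the code changes.
const client = new S3Client({
  region: "auto",
  endpoint: "https://fly.storage.tigris.dev",
  credentials: {
    // Illustrative env var names; use whatever credential source you already have.
    accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID,
    secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY,
  },
});

// Any existing S3 call works unchanged, e.g. listing buckets.
const { Buckets } = await client.send(new ListBucketsCommand({}));
console.log((Buckets ?? []).map((b) => b.Name));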

View SDKs

Zero downtime migration

Tigris Shadow Buckets eliminate migration risk by automatically syncing your new and old buckets for however long you need—no hard cutovers, no downtime.

View the Migration Guide

Usage in numbers.

1B+ requests per day
20K+ buckets
10PB+ of storage
10B+ objects

FAQs

What is Tigris?

Tigris is a modern object storage platform built for global performance, cost efficiency, and AI. It makes data instantly accessible worldwide, removes egress fees, and integrates seamlessly with any compute or GPU provider.

When should I choose to use Tigris storage?

AI teams should choose Tigris storage for fast, reliable access to massive datasets used in training, inference, and vector search. It also delivers global low-latency performance without complex setup, zero egress fees, and freedom from vendor lock-in.

How does Tigris compare to other storage solutions?

Unlike traditional storage, Tigris is purpose-built for AI patterns like handling massive numbers of small files and large-scale dataset scans. It delivers global low-latency access without setup complexity, eliminates egress fees, and avoids vendor lock-in.
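As one concrete illustration of the dataset-scan pattern, here is a sketch using the JavaScript SDK's standard paginator; the bucket and prefix names are assumptions for the example:

import { S3Client, paginateListObjectsV2 } from "@aws-sdk/client-s3";

const client = new S3Client({
  region: "auto",
  endpoint: "https://fly.storage.tigris.dev",
});

// Walk every object under a hypothetical training-data prefix, page by page,
// without holding the full listing in memory.
let objectCount = 0;
let totalBytes = 0;
for await (const page of paginateListObjectsV2(
  { client },
  { Bucket: "training-data", Prefix: "images/" }
)) {
  for (const obj of page.Contents ?? []) {
    objectCount += 1;
    totalBytes += obj.Size ?? 0;
  }
}
console.log(`${objectCount} objects, ${totalBytes} bytes scanned`);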

Is Tigris secure?

Yes. Tigris is SOC 2 Type II compliant and follows enterprise-grade security best practices. All data is encrypted at rest and in transit, with fine-grained access controls to ensure your workloads stay protected.

How much data can I put in a Tigris bucket?

There’s no fixed limit. Tigris scales seamlessly with your workloads, from gigabytes to petabytes.

Eliminate vendor lock-in.
Scale infinitely.