
Python

There are two ways to use Tigris with Python:

  • boto3 — the standard AWS SDK for Python; just point it at Tigris
  • tigris-boto3-ext — a lightweight extension that adds Tigris-specific features like snapshots and bucket forking on top of boto3

Both approaches are fully S3-compatible, so existing boto3 code migrates to Tigris by changing only the endpoint and credentials. Pick whichever fits your needs.

Prerequisites

Install

pip install boto3 tigris-boto3-ext

Configure credentials

Set your Tigris credentials as environment variables:

export AWS_ACCESS_KEY_ID="tid_your_access_key"
export AWS_SECRET_ACCESS_KEY="tsec_your_secret_key"
export AWS_ENDPOINT_URL="https://t3.storage.dev"
export AWS_REGION="auto"

Create a client

With AWS_ENDPOINT_URL set in your environment:

import boto3

s3 = boto3.client("s3")

Or pass the endpoint explicitly:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://t3.storage.dev",
    aws_access_key_id="tid_your_access_key",
    aws_secret_access_key="tsec_your_secret_key",
    region_name="auto",
)

Basic operations

These work the same with both boto3 and the extension.

Create a bucket

s3.create_bucket(Bucket="my-bucket")

Upload a file

# From a file on disk
s3.upload_file("data.csv", "my-bucket", "data.csv")

# From a string
s3.put_object(Bucket="my-bucket", Key="hello.txt", Body="Hello, World!")

Download a file

s3.download_file("my-bucket", "data.csv", "local-copy.csv")

List objects

response = s3.list_objects_v2(Bucket="my-bucket")

for obj in response.get("Contents", []):
    print(f" {obj['Key']} ({obj['Size']} bytes)")

Generate a presigned URL

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "data.csv"},
    ExpiresIn=3600,
)
print(url)

Snapshots and forks

You can use snapshots and forks with plain boto3 by passing Tigris-specific headers on each request, but the tigris-boto3-ext package handles this for you automatically.

Create a snapshot-enabled bucket

from tigris_boto3_ext import create_snapshot_bucket

create_snapshot_bucket(s3, "my-snapshots")

Take a snapshot

from tigris_boto3_ext import create_snapshot, list_snapshots

# Upload some data
s3.put_object(Bucket="my-snapshots", Key="model.bin", Body=b"v1 weights")

# Snapshot the current state
snapshot = create_snapshot(s3, "my-snapshots")
print(f"Snapshot version: {snapshot}")

# List all snapshots
for snap in list_snapshots(s3, "my-snapshots"):
    print(snap)

Read from a snapshot

from tigris_boto3_ext import get_object_from_snapshot

# Read the object as it was at snapshot time — even if it's been
# overwritten or deleted since
obj = get_object_from_snapshot(s3, "my-snapshots", "model.bin", snapshot)
data = obj["Body"].read()

Fork a bucket

Forking creates a copy-on-write clone — instant, no data copying:

from tigris_boto3_ext import create_fork

create_fork(s3, source_bucket="my-snapshots", fork_bucket="experiment-lr-1e-4")

# The fork has all the same objects but writes are independent
s3.put_object(Bucket="experiment-lr-1e-4", Key="model.bin", Body=b"new weights")

# Original bucket is unchanged

Context managers

For scoped operations, use context managers:

from tigris_boto3_ext import TigrisSnapshot, TigrisFork

# Read from a specific snapshot
with TigrisSnapshot(s3, "my-snapshots", snapshot_version=snapshot):
    obj = s3.get_object(Bucket="my-snapshots", Key="model.bin")
    print(obj["Body"].read())

# Work inside a fork
with TigrisFork(s3, source_bucket="my-snapshots", fork_bucket="test-fork"):
    s3.put_object(Bucket="test-fork", Key="results.json", Body=b"{}")

Decorators

You can also use decorators to scope snapshot/fork behavior to a function:

from tigris_boto3_ext import snapshot_enabled, with_snapshot, forked_from

@snapshot_enabled
def backup_workflow(s3_client):
    s3_client.put_object(Bucket="backups", Key="data.bak", Body=b"backup data")

@with_snapshot(snapshot_version="v1")
def read_historical(s3_client):
    return s3_client.get_object(Bucket="backups", Key="data.bak")

@forked_from(source_bucket="production")
def run_test(s3_client):
    # Writes go to the fork, production is untouched
    s3_client.put_object(Bucket="test-env", Key="test.txt", Body=b"test")

Next steps