
Tigris Blog

A multi-cloud, S3-compatible object storage service for low latency data access anywhere.

How do large language models get so large?

AI models, composed mostly of floating-point numbers, process inputs through components like tokenizers and embedding models. They range in size from gigabytes to terabytes, with larger parameter counts improving performance and the ability to represent nuance. How do they get so large, though?
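The link between parameter count and on-disk size is simple arithmetic: parameters times bytes per parameter. A minimal sketch (the dtype table and 70B example are illustrative, not from the article):

```python
# Bytes per parameter for common storage precisions.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def model_size_gb(num_params: int, dtype: str = "fp16") -> float:
    """Approximate size of a model's weights in gigabytes."""
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

# A 70-billion-parameter model stored in fp16 is roughly 140 GB.
print(model_size_gb(70_000_000_000, "fp16"))  # 140.0
```

This is also why quantizing the same weights to int8 halves the download, without changing the parameter count.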
8 min read

Using Tigris as a Filesystem

Object storage can be used as a filesystem in Kubernetes with the right setup. Here's how I set it up on my homelab cluster and the tradeoffs I made in the process.
11 min read

How Beam runs GPUs anywhere

Learn how Beam offers serverless GPUs optimized for developer productivity, across many clouds. By moving object storage to a separate managed service, Beam no longer needed to worry about it as another variable when designing for consistency across clouds.
6 min read

Training with Big Data on Any Cloud

Training AI models on Big Data can be challenging because it demands flexible storage and compute. Tigris, a cloud-agnostic storage layer, decouples storage from compute, making it easier to manage data and model weights across clouds. Tools like SkyPilot, which abstract away cloud providers and operating system configuration, simplify the compute layer, allowing seamless data transfer between clouds using Tigris.
22 min read

Nomadic Infrastructure Design for AI Workloads

This AI stuff is cool, but GPU inference isn't needed all of the time. Most of the time your instances sit idle, which means you're burning investor money without any real benefit. Today we'll learn how to make your compute nomadic, spreading between hosts to hunt for deals, with Tigris making it all possible.
19 min read