Tigris Blog

A multi-cloud, S3-compatible object storage service for low latency data access anywhere.

How do large language models get so large?

AI models, composed mainly of floating-point numbers, process inputs through components like tokenizers and embedding models. They range in size from gigabytes to terabytes, with larger parameter counts improving performance and the ability to represent nuance. How do they get so large, though?
8 min read
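As a back-of-the-envelope illustration (not from the post itself), a model's on-disk size is roughly its parameter count times the bytes per parameter:

```python
# Rough model-size arithmetic: size ≈ parameter_count * bytes_per_parameter.
# fp16/bf16 weights take 2 bytes per parameter; fp32 takes 4.
def model_size_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate checkpoint size in gigabytes."""
    return params * bytes_per_param / 1e9

# A 7B-parameter model in fp16 is ~14 GB; a 70B model is ~140 GB.
print(model_size_gb(7e9))   # 14.0
print(model_size_gb(70e9))  # 140.0
```

This is why parameter count alone pushes models from gigabytes into the terabyte range once you add optimizer state and multiple checkpoints.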

Nomadic Infrastructure Design for AI Workloads

This AI stuff is cool, but GPU inference is not needed all of the time. Most of the time your instances sit idle, which means you're just burning investor money without any real benefit. Today we'll learn how to spread your compute between hosts nomadically, hunting for deals, with Tigris making it all possible.
19 min read

We're making our availability metrics public

We're Tigris, a globally distributed object storage platform built for high-performance workloads. We're making our reliability metrics public! Check out our live dashboard and see how we're providing 99.99% global availability. We're pretty sure we're the first to share this type of live production data publicly.
4 min read
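For context on what a 99.99% target means, here is the downtime budget it implies (illustrative arithmetic, not figures from the dashboard):

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

# 99.99% availability allows roughly 52.6 minutes of downtime per year.
print(round(downtime_minutes_per_year(0.9999), 1))  # 52.6
```

In other words, "four nines" leaves well under an hour of total downtime per year, which is what makes publishing live production metrics a meaningful commitment.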