Tigris Blog

A multi-cloud, S3-compatible object storage service for low latency data access anywhere.

Depot saves us from another Google deprecation

Google Container Registry's deprecation has left developers seeking alternatives. Depot's container registry built on Tigris offers a fast and reliable solution for storing and distributing Docker images globally, with faster image pulls and no rate limits.
7 min read

Setting up a Docker Hub pull-through cache with Tigris

A pull-through cache is a local cache of Docker images that speeds up deployments and shields you from Docker Hub's upcoming rate limit decreases. Learn how to make one on top of Tigris!
5 min read

AI’s Impending Left-pad Scenario

Your AI workflows rely on models that other people post on the Internet. How can you be sure that they'll stay up? Today Xe covers the history of the infamous left-pad incident of 2016 and how it could happen again with AI.
10 min read

DeepSeek R1 is good enough

DeepSeek R1 is a frontier-grade reasoning model that you can run on your own hardware. In this article, Xe digs through the papers and slices past the hype to explain what DeepSeek R1 really gives users and why the model is revolutionary for what it is.
20 min read

How do large language models get so large?

AI models, composed mainly of floating-point numbers, process inputs through components like tokenizers and embedding models. They range in size from gigabytes to terabytes, with larger parameter counts yielding better performance and more nuanced representations. How do they get so large, though?
8 min read