
· 9 min read
Xe Iaso
Katie Schilling

At Tigris, globally replicated object storage is our thing. But why should you want your objects “globally replicated”? Today I’m gonna peel back the curtain and show you how Tigris keeps your objects exactly where you need them, when you need them, by default.

Global replication matters because computers are ephemeral and there’s a tradeoff between performance and reliability. But does there have to be?

A cartoon tiger lying across the globe, protecting and distributing buckets to users. Image input made with Flux 1.1 [ultra] and video made with Veo 2.

· 2 min read
Katie Schilling

Tigris has achieved SOC 2 Type II certification, signifying our high standard for security and operations. We have partnered with an independent third party to thoroughly review our policies and procedures and verify our compliance. We’re excited to provide secure object storage for everyone, whether you’re storing megabytes or petabytes.

A majestic blue tiger wearing socks.

· 7 min read
Xe Iaso
Katie Schilling

As the saying goes, the only constants in life are death and taxes. When you work with the Internet, you get to add another fun thing to that list: service deprecations. This is a frustrating thing that we all have to just live with as a compromise for not owning every single piece of software and service that we use. In their effort to keep life entertaining, Google has announced that they’re deprecating Google Container Registry. This is one of the bigger container registry services outside of the Docker Hub and it’s suddenly everyone’s emergency.

Google Container Registry will be deprecated on March 18, 2025. This date is coming up fast. Are you ready for the change?

· 5 min read
Xe Iaso

The Docker Hub is going to drastically decrease the rate limits for free accounts on April 1st. Are you ready for those changes? Here’s a high-level overview of the upcoming rate limit changes:

| Account type | Old rate limit | New rate limit (April 1st) |
| --- | --- | --- |
| Free, authenticated | 200 image pulls per hour | 100 image pulls per hour |
| Free, unauthenticated | 100 image pulls per hour | 10 image pulls per hour (per IPv4 address or IPv6 /64 subnet) |

What if you could easily cache images so these rate limits don’t impact your workloads? Today I’m going to show you how you can set up your own pull-through cache of the docker hub so that your workloads only have to download images once.
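The full walkthrough covers the cache setup. As a quick way to see where you stand today, here’s a minimal sketch in Python that checks your remaining pulls using Docker Hub’s documented rate-limit preview endpoint; the endpoint and header names are taken from Docker’s docs, so verify them against the current documentation before relying on this.

import requests

# Fetch an anonymous token scoped to Docker Hub's rate-limit preview repo
token = requests.get(
    "https://auth.docker.io/token",
    params={
        "service": "registry.docker.io",
        "scope": "repository:ratelimitpreview/test:pull",
    },
).json()["token"]

# A HEAD request reports your quota without counting against it
resp = requests.head(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    headers={"Authorization": f"Bearer {token}"},
)
print("limit:    ", resp.headers.get("ratelimit-limit"))      # e.g. "100;w=21600"
print("remaining:", resp.headers.get("ratelimit-remaining"))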

· 3 min read
Katie Schilling

We’re transitioning to virtual hosted style URLs for all new buckets created after February 19, 2025. For new buckets, we will stop supporting path style URLs. Buckets created before February 19, 2025 will continue to work with either path style or virtual host style URLs.

The path style URL looks like this: https://fly.storage.tigris.dev/tigris-example/bar.txt

The virtual host style URL looks like this: https://tigris-example.fly.storage.tigris.dev/bar.txt

With the path style URL, the subdomain is always fly.storage.tigris.dev. By moving to virtual host style URLs, the subdomain is specific to the bucket. This additional specificity allows us to make some key improvements for security and scalability.

Why make this change now?

Recently, some ISPs blocked the Tigris subdomain after malicious content was briefly shared using our platform. Though we removed the malicious content, the subdomain was the common denominator across several reports and was added to blocklists maintained by security vendors. The block resulted in failed downloads on several ISPs with unclear error messages: either the DNS resolved to an IP not owned by Tigris, or there were connection errors that implied a network issue. We’re sure this was frustrating for folks to debug.

We have been working with the security vendors to remove our domain from their blocklists. However, the long-term solution is to move to virtual hosted style URLs so that the subdomains are no longer the common denominator when identifying content.

How does this impact your code?

You’ll need to update your code anywhere you rely on path style access, such as for presigned URLs. You’ll also need to configure your S3 client libraries to use the virtual hosted style. Some examples are below. If we’ve missed your framework, please reach out, and we’ll help.

import boto3
from botocore.config import Config

# Point the client at Tigris and opt in to virtual hosted style addressing
svc = boto3.client(
    's3',
    endpoint_url='https://fly.storage.tigris.dev',
    config=Config(s3={'addressing_style': 'virtual'}),
)
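Once the client is configured this way, the presigned URLs it generates use the bucket-specific subdomain automatically. A minimal sketch, reusing the svc client above and the tigris-example bucket from the earlier URLs:

# Generate a presigned GET URL on the bucket's own subdomain
url = svc.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'tigris-example', 'Key': 'bar.txt'},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)  # https://tigris-example.fly.storage.tigris.dev/bar.txt?X-Amz-...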

With this move to virtual hosted style URLs, we’re undoubtedly going to break some existing workflows as new buckets are created. If this creates a hardship for you, please contact us at help@tigrisdata.com and we'll find a solution.

Want to try Tigris?

Make a bucket and store your models, training data, and artifacts across the globe! No egress fees.

· 9 min read
Xe Iaso

A cartoon tiger desperately runs away from a datacentre fire. Image generated using Flux [pro].

The software ecosystem is built on a bedrock of implicit trust. We trust that software won’t contain deliberately placed security vulnerabilities and won’t be yanked offline without warning. AI models aren’t exactly software, but they’re distributed using a lot of the same platforms and technology as software. Thus, people assume they’re distributed under the same social contract as software.

The AI ecosystem has a lot of the same distribution and trust challenges as software ecosystems do, but with much larger blobs of data that are harder to introspect. There are fears that something bad will happen to some widely depended-on model, creating a splash even greater than the infamous left-pad incident of 2016. These kinds of attacks seem unthinkable, but they are inevitable.

How can you defend against AI supply-chain attacks? What are the risks? Today I’m going to cover what we can learn from the left-pad incident and how making a copy of the models you depend on can make your products more resilient.
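As a taste of the “make a copy” idea, here’s a minimal sketch that streams a model file from its upstream host into your own bucket, so a deploy doesn’t depend on the upstream staying online. The upstream URL and bucket name are hypothetical placeholders.

import boto3
import requests

UPSTREAM = "https://example.com/models/model.safetensors"  # hypothetical upstream
BUCKET = "my-model-mirror"  # hypothetical bucket name

s3 = boto3.client("s3", endpoint_url="https://fly.storage.tigris.dev")

with requests.get(UPSTREAM, stream=True) as resp:
    resp.raise_for_status()
    # upload_fileobj streams the response body without buffering it all in memory
    s3.upload_fileobj(resp.raw, BUCKET, "models/model.safetensors")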

· 19 min read
Xe Iaso

A majestic blue tiger surfing on the back of a killer whale. The image evokes Ukiyo-E style framing. Image generated using Flux [pro].

DeepSeek R1 is a frontier reasoning model built on a Mixture of Experts architecture; it was released by DeepSeek on January 20th, 2025. Along with making the model available through DeepSeek's API, they released the model weights on HuggingFace and a paper about how they got it working.

DeepSeek R1 is a Mixture of Experts model. This means that instead of all of the model weights being trained and used at the same time, the model is broken up into 256 "experts" that each handle different aspects of the response. This doesn't mean that one "expert" is best at philosophy, music, or other subjects; in practice, one expert will end up specializing in the special tokens (begin message, end message, role of interlocutor, etc.), another will specialize in punctuation, some will focus on visual description words or verbs, and some can even focus on proper names or numbers.

The main advantage of a Mixture of Experts model is that it gets you much better results with much less compute spent in training and at inference time. There are some minor difficulties involved in making sure that tokens get spread out evenly between the experts during training, but it works out in the end.
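To make the routing concrete, here’s a toy sketch of top-k expert routing, not DeepSeek’s actual implementation: a shrunken example with 8 experts, where a learned gate scores every expert for each token, only the top two run, and their outputs are mixed by the gate weights.

import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

# One tiny linear "expert" per slot; real experts are full MLP blocks
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
gate = rng.standard_normal((d_model, n_experts)) * 0.02  # learned router

def moe_layer(x):
    scores = x @ gate                        # score every expert for this token
    top = np.argsort(scores)[-top_k:]        # keep only the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only the chosen experts do any work; the other weights sit idle
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)                # (64,)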

· 1 min read
Xe Iaso

A bunch of wrenches on a tool rack.

Recently, Amazon made changes to the S3 client libraries that broke Tigris support. We have made fixes on our end, and you can upgrade to the latest releases of the AWS CLI, the AWS SDK for Python (boto3), the AWS SDK for JavaScript, the AWS SDK for Java, and the AWS SDK for PHP.

If you are running into any issues with these updated SDK releases, please reach out via Bluesky, LinkedIn, or X (formerly Twitter).

· 8 min read
Xe Iaso

A majestic blue tiger riding on a sailing ship. The tiger is very large. Image generated using PonyXL.

AI models can get pretty darn large. Larger models seem to perform better than smaller models, but we don’t quite know why. My work MacBook has 64 gigabytes of RAM and I’m able to use nearly all of it when I do AI inference. Somehow these 40+ gigabyte blobs of floating point numbers are able to take a question about the color of the sky and spit out an answer. At some level this is a miracle of technology, but how does it work?

Today I’m going to cover what an AI model really is and the parts that make it up. I’m not going to cover the linear algebra at play, nor the neural network internals. Most people want to start with an off-the-shelf model, anyway.