
7 posts tagged with "object storage"


· 6 min read
Lars Wikman

Introduction

I admit it. My first Tigris blog post, about Eager and Lazy caching, was kind of basic. It was important to cover the groundwork: the CDN aspect matters and I do like the summon-your-data pre-fetch header a lot. Now we get to the significantly more disruptive stuff: the features Tigris provides that, while it remains an S3-compatible API, enable entirely new use cases and push the boundaries of what you can do with object storage. Let's see if we can't set your internal constraint-solver aflame with possibilities.

Tigris globally distributed object storage

First a bit of setup. Again, this post is also a Livebook which means that you can run all of it either locally with Livebook Desktop or on Fly.

Mix.install(
  [
    {:ex_aws, "~> 2.5"},
    {:ex_aws_s3, "~> 2.5"},
    {:hackney, "~> 1.20"},
    {:poison, "~> 3.0"},
    {:sweet_xml, "~> 0.6"},
    :jason,
    :req
  ],
  config: [
    ex_aws: [
      access_key_id: [{:system, "LB_AWS_ACCESS_KEY_ID"}],
      secret_access_key: [{:system, "LB_AWS_SECRET_ACCESS_KEY"}],
      endpoint_url_s3: [{:system, "LB_AWS_ENDPOINT_URL_S3"}],
      region: [{:system, "LB_AWS_REGION"}],
      s3: [scheme: "https://", host: "fly.storage.tigris.dev", port: 443]
    ]
  ]
)
alias ExAws.S3

bucket = System.fetch_env!("LB_BUCKET_NAME")

# Get some files, upload some files
# If you run this many times github might get cranky

%{
  "manifesto.txt" =>
    "https://ia800408.us.archive.org/26/items/HackersManifesto/Hackers-manafesto.txt",
  "sample.jpg" => "https://underjord.io/assets/images/lawik-square.jpg",
  "lawik.json" => "https://api.github.com/users/lawik",
  "underjord.svg" => "https://underjord.io/img/logo2.svg",
  "globe.webp" =>
    "https://cdn.prod.website-files.com/657988158c7fb30f4d9ef37b/6582a4f8d777a7f9c79bee68_Globally%20Distributed%20S3-compatible.webp"
}
|> Enum.map(fn {name, url} ->
  %{body: body, headers: headers} = Req.get!(url, decode_body: false)
  type = headers["content-type"] |> hd() |> String.split(";") |> hd()
  # Spacing in time for demo purposes
  :timer.sleep(1000)
  S3.put_object(bucket, name, body, content_type: type) |> ExAws.request!()
end)

What is Metadata Querying?

If you read back on the blog, the engineering team at Tigris is very excited about FoundationDB, which underpins the entire metadata system and this feature in particular. A fast and scalable metadata system lets Tigris find and fetch data with much lower latency than is typical of object storage. A highly capable metadata system lets Tigris do more with that metadata.

Let's talk metadata querying. It allows us to perform SQL-style queries on our object metadata and, importantly, sort results based on that metadata. Currently three fields are supported:

  • Content-Type meaning the mimetype, so "image/jpeg", "text/html" or "application/json".
  • Content-Length which holds the number of bytes the object takes up on disk.
  • Last-Modified a timestamp for when the object last changed.

It is all done via a custom header to fit within the bounds of the S3 API. You can do a lot with this. Some of it is straight up practical.

Fetching a range of mime types

Fetching a set of content types. Not just specific ones, but even based on a prefix. The comparisons specified here are a little unusual in that they are range queries: anything between "image/" and "image0" covers essentially everything that starts with "image/", since "0" is the character that follows "/" in ASCII.

This is incredibly awkward to do in many object storage providers.

bucket
|> S3.list_objects_v2(
  headers: %{
    "X-Tigris-Query" => ~s(`Content-Type` > "image/" and `Content-Type` < "image0")
  }
)
|> ExAws.request!()
|> Map.fetch!(:body)
|> Map.fetch!(:contents)

Others are very flexible and very much in "we can't wait to see what you do with it" territory, such as:

Ordering/sorting

You can query three values at this point: Content-Type, Content-Length (file size) and Last-Modified. All of these are practical, but combined with the ORDER BY feature, which lets you sort the result of a query on one of these fields, you suddenly get something wildly powerful. You get ordering.

bucket
|> S3.list_objects_v2(
  headers: %{
    "X-Tigris-Query" => ~s(`Content-Length` >= 0 ORDER BY `Last-Modified` DESC)
  }
)
|> ExAws.request!()
|> Map.fetch!(:body)
|> Map.fetch!(:contents)

Ordering gives us the fundamentals for a lot of interesting stuff. It is a building block for many types of data stores, and most object storage services don't make this an easy question to answer without the risk of paging through thousands of items to find the most recent one.

You can build a Write-Ahead Log. You can do Event Sourcing. These require consistent ordering and a source that will serialize incoming data for you and determine the order. Tigris will.
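
To make that concrete, here is a minimal append-then-replay sketch. The "events/" key prefix, the payload shape, and combining a prefix filter with the query header are my own assumptions for illustration; the replay simply asks Tigris for objects ordered by Last-Modified, oldest first.

# Append an event as an object (hypothetical "events/" prefix and payload shape)
event = Jason.encode!(%{type: "post_created", id: "abc123"})

S3.put_object(bucket, "events/#{System.unique_integer([:positive])}.json", event,
  content_type: "application/json"
)
|> ExAws.request!()

# Replay: list the events oldest-first using the metadata query header
bucket
|> S3.list_objects_v2(
  prefix: "events/",
  headers: %{"X-Tigris-Query" => ~s(`Content-Length` >= 0 ORDER BY `Last-Modified` ASC)}
)
|> ExAws.request!()
|> Map.fetch!(:body)
|> Map.fetch!(:contents)
|> Enum.map(fn obj ->
  bucket |> S3.get_object(obj.key) |> ExAws.request!() |> Map.fetch!(:body)
end)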

Write at local speed, anywhere, globally. Read out locally as if talking to a CDN. But use it for complex and interesting functionality. Use it as the source of truth, the coordinating layer, a super simple cloud sync for your local-first application. Along with presigned URLs, or even signed prefix uploads, you can build some very interesting things where you very rarely talk to your application server. Very... dare I say serverless? Oh, right, no, that was taken.

Check the docs to see what operators you have available to you. I'm sure you can come up with more neat tricks to pull with this.

I can see the content types, file sizes and timestamps all mattering in various ways depending on what type of product you are building. What could it enable for yours?

Fundamentally this functionality can remove the need for a separate metadata store to track typical things, like the order of files or MIME types for later filtering. And if you've tried to lean on object storage for file storage without also writing a metadata record in some traditional database, you've regretted it the moment you wanted to provide some sortable headers for a file list. Or wanted to show the 3 most recent uploads among thousands, as sketched below.
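
For example, a "3 most recent uploads" query could look like this sketch, combining the ORDER BY header with the standard max-keys limit (whether max-keys composes with the query exactly like this is worth verifying against the docs):

# Sketch: the 3 most recently modified objects in the bucket
bucket
|> S3.list_objects_v2(
  max_keys: 3,
  headers: %{"X-Tigris-Query" => ~s(`Content-Length` >= 0 ORDER BY `Last-Modified` DESC)}
)
|> ExAws.request!()
|> Map.fetch!(:body)
|> Map.fetch!(:contents)
|> Enum.map(& &1.key)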

Because of how I'm wired, it makes me want to try absolutely filthy cheap ideas for how to implement a microblog with a chronological timeline. Your server's only job would be providing presigned URLs, or even signing public uploads for a whole prefix, for creating new entries, and presigned cookies for fetching entries if the bucket isn't public. With a very particular design you can make something quite remarkable in terms of pricing and efficiency.

It reminds me of the first time I heard of the potential latency advantages of satellite Internet: shoot up, shoot straight across, shoot down. In this case: upload to the closest bit of cloud, spread all across, download where needed. You don't need the roundtrips. You can answer simple queries by only speaking the S3 API with some Tigris special sauce. CDN-speed reads, CDN-speed writes and no in-between application server.

· 4 min read
Lars Wikman

At Tigris we offer a number of novel and practical improvements beyond what your typical Object Storage does. We fit these within the existing common APIs or as graceful extensions when necessary. In this post we look at how you can take control of the Tigris caching mechanism if you feel the need.

To try the examples and follow along you can install the flyctl CLI tool and then run flyctl storage create to get credentials. If you use Livebook, the collaborative coding notebook for Elixir, this entire post can be used from Livebook by going here. Or you can copy and paste the examples into a .exs file to run Elixir as a script.

You need to install: ex_aws, ex_aws_s3, hackney, poison and sweet_xml

And you need the following config; a similar setup will be needed in your preferred language if you don't do Elixir:

Mix.install([
  {:ex_aws, "~> 2.5"},
  {:ex_aws_s3, "~> 2.5"},
  {:hackney, "~> 1.20"},
  {:poison, "~> 3.0"},
  {:sweet_xml, "~> 0.6"}
])

Application.put_env(:ex_aws, :s3,
  scheme: "https://",
  host: "fly.storage.tigris.dev",
  port: 443
)

# Livebook prefixes env vars with LB_ and
# we strip that out for ex_aws_s3
[
  "AWS_ACCESS_KEY_ID",
  "AWS_ENDPOINT_URL_S3",
  "AWS_REGION",
  "AWS_SECRET_ACCESS_KEY",
  "BUCKET_NAME"
]
|> Enum.each(fn key ->
  val = System.fetch_env!("LB_#{key}")
  System.put_env(key, val)
end)

alias ExAws.S3

bucket = System.fetch_env!("BUCKET_NAME")

Regular PUT and GET

With Tigris we can do the usual things for object storage. PUT things in the bucket and then GET them out of the bucket.

# The PUT
bucket
|> S3.put_object("myfile.txt", "mycontents")
|> ExAws.request!()
|> Map.fetch!(:status_code)

# The GET
bucket
|> S3.get_object("myfile.txt")
|> ExAws.request!()
|> Map.fetch!(:body)

Not quite so usual is that we offer a whole CDN experience along with your bucket. And as with any reasonable CDN we will do adaptive caching based on where requests originate from and try to offer your users the best low-latency experience in that way. The automatic way is very often the best way. As outlined in the caching docs there are also some sound defaults in place for file types that are typically static assets.

Of course you deserve more control for when that is desirable. So you can upload with a particular cache header set which will then be honored by Tigris.

# The PUT with cache headers
bucket
|> S3.put_object("myfile-2.txt", "mycontents")
# a minor kludge to set the custom header, will see if we can PR the library :)
|> then(fn op ->
  headers = Map.put(op.headers, "Cache-Control", "public, max-age=30")
  %{op | headers: headers}
end)
|> ExAws.request!()

# The GET, now with cache headers
bucket
|> S3.get_object("myfile-2.txt")
|> ExAws.request!()
|> Map.fetch!(:headers)

But in many cases you may know your access patterns or have particular plans. You may want to ensure eager caching of uploaded files, where every file uploaded gets spread across a decent chunk of the world. This is possible by setting the bucket accelerate configuration. I would try this for a bucket intended for HTTP Live Streaming (HLS), where latency can really matter, or for podcast recordings where you might expect a lot of geographically distributed clients to request the thing at once.

With the AWS CLI it looks like this; unfortunately ExAws doesn't expose a way to do this so far (though see the sketch after the CLI example):

aws s3api put-bucket-accelerate-configuration \
--bucket foo-bucket \
--accelerate-configuration Status=Enabled
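
If you would rather stay in Elixir, one possible workaround is to build the raw S3 operation yourself. This is only a sketch: it assumes the ExAws.Operation.S3 struct accepts a bucket, subresource and body like this, and it uses the standard S3 PutBucketAccelerateConfiguration XML payload.

# Sketch: set the accelerate configuration without a dedicated ExAws helper
accelerate_xml = """
<AccelerateConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</AccelerateConfiguration>
"""

%ExAws.Operation.S3{
  http_method: :put,
  bucket: bucket,
  path: "/",
  resource: "accelerate",
  body: accelerate_xml,
  headers: %{"content-type" => "application/xml"},
  service: :s3
}
|> ExAws.request!()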

Pre-fetching object listings

Another cool way to control the flow of caching and replication is via Eager caching when listing objects. This allows you to tell Tigris that you'd like all the files you are listing to move to your region and be ready and nearby for subsequent fetching. With one header.

bucket
|> S3.list_objects_v2(headers: %{"X-Tigris-Prefetch" => "true"})
|> ExAws.request!()
|> Map.fetch!(:body)
|> Map.fetch!(:contents)
|> Enum.map(& &1.key)

With that, all the object data behind the keys you have listed immediately starts moving to you across the network. This is incredibly convenient when traversing a large number of stored files and wanting to make fetching efficient for many small files, where latency can otherwise ruin your throughput.
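
As a follow-up sketch, once the prefetch listing has been issued you could fan out the GETs concurrently; Task.async_stream and the concurrency limit of 10 here are just illustrative choices.

# List with the prefetch hint, then fetch the now-nearby objects concurrently
keys =
  bucket
  |> S3.list_objects_v2(headers: %{"X-Tigris-Prefetch" => "true"})
  |> ExAws.request!()
  |> Map.fetch!(:body)
  |> Map.fetch!(:contents)
  |> Enum.map(& &1.key)

bodies =
  keys
  |> Task.async_stream(
    fn key -> bucket |> S3.get_object(key) |> ExAws.request!() |> Map.fetch!(:body) end,
    max_concurrency: 10
  )
  |> Enum.map(fn {:ok, body} -> body end)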

Now we've covered how to wrangle eager and lazy caching on Tigris with Elixir. This is just the beginning, we have a lot more to cover.

Check back soon :)

· 4 min read
Ovais Tariq

Since we launched our public beta three months ago, our usage has skyrocketed, and hundreds of early adopters have picked Tigris as their storage solution. We've implemented tons of requested features and invested heavily in Tigris' performance, security, and reliability. We're grateful for your feedback and confident that we are on track to make the most developer-friendly object storage service.

Tigris globally distributed object storage [Credits: Xe Iaso - https://xeiaso.net/]

With that, we will start billing for Tigris usage in July because we're confident that Tigris is reliable enough for us to justify doing that. Check out our pricing page for details on the pricing structure. The beta tag will stay, but we'll offer the same support expected from a highly reliable production-ready platform. Check out our SLA page for details about our uptime commitment.

TL;DR:

  • Data storage is $0.02 per GB
    • If you elect to control the data distribution and store multiple copies of your data in different regions, you are charged for each copy: primary copies in two regions mean the object is billed twice (see the worked example below).
    • Note that this doesn't apply to Tigris's default behavior of managing the data distribution for you. As always, that counts as a single copy. Neither does this apply to pull-through caching, which is free, as always.
  • PUT, COPY, POST, and LIST requests are $0.005/1000 requests
  • GET, SELECT, and all other requests are $0.0005/1000 requests
  • Data egress: $0.00 per GB
  • Unauthorized requests to your buckets: $0.00 per request
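
As a quick worked example using the figures above: storing 100 GB with primary copies pinned in two regions is billed as 200 GB of storage, i.e. $4.00, while the same 100 GB under the default Tigris-managed distribution is billed once, i.e. $2.00; a million GET requests on top of that adds $0.50, and egress adds nothing.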

Despite all of this, if you suffer an attack and get an unexpectedly large bill, please contact us at help@tigrisdata.com. We are more than happy to discuss a refund.

Starting June 1st, 2024, you will see detailed usage and costs in your Fly.io organization billing dashboard. The actual charge will be made on your July 1st Fly.io invoice.

Beta program in numbers

We thought you might also be interested in some stats to see how well we have done in the last three months:

  • ~1 PB of storage
  • ~1 B objects
  • ~250 M requests per day
  • ~1 K buckets

New features and enhancements

Supporting Tigris adoption and usage with feature work has been heartwarming. Some of the new features added since our launch include:

And there's more to come!

We hope you have fun building with Tigris and would love to see what you build. Reach out to us on X (formerly Twitter) or the Fly.io community forum, and give us your feedback. We can't wait to see what you all dream up.

· 9 min read
Brian Morrison II

Generative AI is a fantastic tool to use to quickly create images based on prompts.

One of the issues with some of these platforms is that they don't actually store the images in a way that makes them easy to retrieve after they've been created. Oftentimes you have to save the image immediately after the process completes; otherwise, it's gone. Luckily, Stability offers an API that can be used to programmatically generate images, and Tigris is the perfect solution for storing those images for retrieval.

In this article, you’ll learn how to deploy an app to Fly.io that will allow you to generate images using the Stability API and automatically store them in a Tigris bucket.

The Stability AI Tigris Database

Let’s take a look at what you’ll be deploying. There are several key components of the project:

  • A Next.js app that the user interacts with
  • An API endpoint (which is part of the Next app) that processes jobs.
  • A background job that periodically polls for new jobs.
  • A Postgres database to store jobs
  • A Tigris bucket to store the generated images.

The Next.js app

The first part of the project is a Next.js project that contains a single page that users will interact with. There is a simple form that accepts a prompt and image dimensions. This form uses server actions to store the request in the jobs table of a Fly Postgres database. Each grid item will periodically poll the table to check on the execution status of each job.

The API processing endpoint

The Next project also contains a single API endpoint that is used to execute jobs against the Stability API before storing the results in a Tigris bucket. This allows for a queue-like structure where jobs can be processed asynchronously.

This endpoint does much of the heavy lifting to make this app possible. Let’s step through what happens when it’s called.

It will start by checking to see if there are any jobs with a status of pending (0):

let res = await db
  .select()
  .from(jobs)
  .where(eq(jobs.status, 0))
  .limit(1)
  .execute();

If a job is found, the status is set to in progress (1). This prevents other executions from processing a job twice.

await db.update(jobs).set({ status: 1 }).where(eq(jobs.id, job.id)).execute();

Next, the prompt and image dimensions are sent to the Stability API for generating an image. The base64 encoded image is returned in the response from Stability.

const engineId = "stable-diffusion-v1-6";
const apiHost = process.env.API_HOST ?? "https://api.stability.ai";

// Request an image from Stability
const stabilityRes = await fetch(
  `${apiHost}/v1/generation/${engineId}/text-to-image`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json",
      Authorization: `Bearer ${process.env.STABILITY_API_KEY}`,
    },
    body: JSON.stringify({
      text_prompts: [
        {
          text: job.prompt,
        },
      ],
      cfg_scale: 7,
      height: job.height,
      width: job.width,
      steps: 30,
      samples: 1,
    }),
  }
);

let rb = await stabilityRes.json();
if (!rb.artifacts) {
  throw new Error(`${rb.name} -- ${rb.message} -- ${rb.details}`);
}

Then we can take that image and upload it to Tigris using the AWS SDK before setting the job to done (2).

let artifact = rb.artifacts[0];
if (artifact.finishReason == "SUCCESS") {
  let imgdata = artifact.base64;
  var buf = Buffer.from(imgdata, "base64");
  const upload = new Upload({
    params: {
      Bucket: process.env.BUCKET_NAME,
      Key: `${job.id}.png`,
      Body: buf,
    },
    client: S3,
    queueSize: 3,
  });

  // Upload the file to Tigris
  await upload.done();

  await db.update(jobs).set({ status: 2 }).where(eq(jobs.id, job.id)).execute();
}

The background job

Using node-cron, a simple background job is used to poll the API endpoint from the Next.js app. When polled, that endpoint will handle the next job in the list. This is run as a secondary process in Fly using concurrently to avoid unnecessary infrastructure, keeping the project isolated to a single container. The background job performs the following steps:

  1. The background job polls the API endpoint in Next regularly.
  2. When a job is detected, the API will set the status to in progress.
  3. Next will then dispatch a message to the Stability API, which will respond with a base64 encoded image when processing is complete.
  4. That image will be stored in a Tigris bucket.
  5. The database record is set to complete.


See it in action

When a user provides a prompt, a new grid item will appear with an hourglass icon, indicating that it is waiting to be processed.


When the background job picks up the new request, the status will be updated in the database and the grid item will change to a spinner to show that it’s currently being processed.


Once the job is completed and the image is available, hovering over the thumbnail will show you the original prompt, as well as provide options to download the image or copy the pre-signed URL to your clipboard for sharing.


Create a Stability API key

Before you can deploy the application, you’ll need to create an API key that will allow you to programmatically generate images using the Stability API. Start by heading to https://platform.stability.ai and create an account.

Once your account is created, you’ll be able to access your profile where you can create an API key. To do this, click on your avatar in the upper right.


Then click the Create API Key button.


Take note of the API key that is generated as you’ll need it in a later step.

Deploy to Fly.io

Start the deployment process by cloning the repository to your computer. Open a terminal and run the following command to do so:

git clone https://github.com/bmorrisondev/sd-tigris-database.git

Navigate into the sd-tigris-database directory. Since all apps on Fly.io require globally unique names, you’ll need to customize the name of the app in the fly.toml file. You can set it to something manually, or you can run the following command to automatically customize the name:

npm install
node rename.mjs
## Output:
## App name changed to sd-tigris-database-65b013f6fc

Next, run the following to deploy the application and database to Fly.io:

fly launch

Since a fly.toml is stored with the repository, it should contain all of the necessary configurations to launch the app. When asked if you want to copy the configuration, type y to do so.

Next, you’ll be asked to review the app that will be launched:

Organization: YOUR ORGANIZATION               (fly launch defaults to the personal org)
Name:         sd-tigris-database-65b013f6fc   (from your fly.toml)
Region:       Ashburn, Virginia (US)          (from your fly.toml)
App Machines: shared-cpu-1x, 1GB RAM          (from your fly.toml)
Postgres:     (Fly Postgres) 1 Node, shared-cpu-1x, 256MB RAM (1GB RAM), 10GB disk (determined from app source)
Redis:        <none>                          (not requested)
Sentry:       false                           (not requested)

When asked if you want to tweak the settings, type n to proceed. The main part of your app will start deploying. Wait until the deployment is finished and take note of the URL at the end:

Visit your newly deployed app at https://sd-tigris-database-65b013f6fc.fly.dev/

Configure the Postgres database

A Postgres database will be configured as part of the deployment, but you’ll need to create the schema for the application before it will function properly. This will be done using drizzle-kit and the provided schema.ts file.

Scroll up through the output of the deployment and locate the value for DATABASE_URL. It should look something like this:

DATABASE_URL=postgres://sd_tigris_database_65b013f6fc:Eb2tnGHch9m9u90@sd-tigris-database-65b013f6fc-db.flycast:5432/sd_tigris_database_65b013f6fc?sslmode=disable

As it is now, this connection string won’t work locally, but we can tweak it a bit before configuring a proxy using the Fly.io CLI tool. Create a file in the root of the project named .env.local and paste the connection string in it. Replace the hostname with 127.0.0.1. It should look similar to this, but with different credentials:

DATABASE_URL=postgres://sd_tigris_database_65b013f6fc:Eb2tnGHch9m9u90@127.0.0.1:5432/sd_tigris_database_65b013f6fc?sslmode=disable

In the terminal, run fly apps ls to get a list of your applications. Take note of the name ending in -db, as this is the Postgres cluster you'll need to proxy to.

> fly apps ls
NAME OWNER STATUS LATEST DEPLOY
fly-builder-young-water-4407 personal deployed
sd-tigris-database-65b013f6fc personal suspended 8m31s ago
sd-tigris-database-65b013f6fc-db personal deployed

Next, create a proxy using the following command, but replace the sd-tigris-database-65b013f6fc-db with the name of your cluster:

fly proxy -a sd-tigris-database-65b013f6fc-db 5432

The fly proxy will prevent any further commands in that terminal window while it’s running, so open another terminal at the root of the project and run the following command to apply the schema changes:

npm run db:push

You should see a list of changes that will be made to the database; confirm these changes.

> sd-tigris-database@0.1.0 db:push
> drizzle-kit push:pg --config ./drizzle.config.ts

drizzle-kit: v0.20.14
drizzle-orm: v0.30.4

Custom config path was provided, using './drizzle.config.ts'
Reading config file '/Users/brian/Repos/sd-tigris-database/sd-tigris-database/drizzle.config.ts'
postgres://sd_tigris_database_65b013f6fc:Eb2tnGHch9m9u90@127.0.0.1:5432/sd_tigris_database_65b013f6fc?sslmode=disable

Warning You are about to execute current statements:

CREATE TABLE IF NOT EXISTS "jobs" (
"id" serial PRIMARY KEY NOT NULL,
"prompt" text NOT NULL,
"height" integer DEFAULT 500 NOT NULL,
"width" integer DEFAULT 500 NOT NULL,
"status" integer DEFAULT 0 NOT NULL,
"error" text,
"meta" json
);

No, abort
❯ Yes, I want to execute all statements

You can now close the proxy if needed.

Configure the Tigris bucket

Next up, you’ll need to create the Tigris bucket that will be used to store the generated images. To do this, run the following command:

fly storage create

You can leave the prompt blank to generate a name automatically. This command will automatically update the environment variables of the app in Fly to use the bucket, meaning no further action is required.

Add Stability Key environment variable

The last step is to use the Stability API key you generated earlier in this guide and set it as an environment variable in Fly. Once set, Fly will automatically restart the underlying containers so they receive the newest set of variables.

fly secrets set STABILITY_API_KEY={YOUR_KEY_HERE}

After adding the environment variables, you should be able to access the app using the URL you grabbed earlier.

Conclusion

Creating AI-generated images and storing them is just one excellent use case for Tigris. To learn more about what Tigris can do, check out the documentation portal for a more complete list of features!

· 3 min read
Annie Sexton

Tigris is a globally distributed S3-compatible object storage solution that is readily available on Fly.io. In this article, we'll explore how Tigris fits into the existing slate of object storage options and why you might choose one over another.

You don't need a CDN

Probably the most exciting aspect of Tigris is its globally distributed nature. But what does that actually mean?

First, consider a common setup: you want to quickly deliver assets to users from your object storage, so typically you’d need to make use of a content delivery network (CDN) to cache your data in multiple regions, which helps reduce latency. When using Amazon S3, Cloudfront is the CDN most often used.

With Tigris, you don’t need a CDN at all because your data is available in all regions from the start. Global distribution means that the same bucket can store data in many different locations. You can effectively have a CDN-like system for delivering data to any region in a matter of minutes. Unlike a CDN, updates to data can occur in any region, allowing users in that region to see changes faster.

Streamlined developer experience

Getting started with Tigris is as simple as running a single command to create a bucket:

fly storage create

When run in the directory of a deployed Fly App, it will create a new private Tigris bucket and set the following secrets on your app:

BUCKET_NAME
AWS_ENDPOINT_URL_S3
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY

If you were already using the S3 API in your app, there's no need to change your code aside from updating the values of your environment variables/secrets.
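
For an Elixir app using ExAws, for instance, the switch could be as small as a runtime config like this minimal sketch; the environment variable names match the secrets listed above and the host is the standard Tigris endpoint.

# config/runtime.exs — point an existing ExAws setup at Tigris
import Config

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  region: System.get_env("AWS_REGION")

# Matches the host portion of AWS_ENDPOINT_URL_S3
config :ex_aws, :s3,
  scheme: "https://",
  host: "fly.storage.tigris.dev",
  port: 443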

Streamlined authorization

Tigris buckets are private by default but can be made public to allow anyone to read their content. Additionally, Tigris supports Role-Based-Access-Control (RBAC), with a few predefined roles: Admin, Editor, and ReadOnly. A complete list of the permitted operations for each of these roles can be found here. You can use the predefined roles mentioned above, or you can define your own roles using custom IAM policies.

Migrating is easy (without being locked in)

Migrating is made easier by the use of shadow buckets. Tigris shadow buckets allow you to incrementally migrate the data from an existing S3 or a compatible bucket without incurring egress costs or downtime. And if you migrate, you won't be locked in – Tigris doesn't charge for egress. 😀 💸

Conclusion

Tigris is revolutionizing object storage for global apps. For developers seeking an efficient, scalable storage solution without the burden of extra costs or technical overhead, Tigris offers a compelling alternative. Give it a try today on your next app on Fly.io!

· 5 min read
Ovais Tariq

Tigris globally distributed object storage [src: playground.com]

Eighteen years ago today, Amazon completely changed how developers work with data storage by giving us Simple Storage Service (S3).

S3 rewrote the rules of storage and propelled us into a new era of cloud computing. Traditional storage solutions were cumbersome and costly, and they shackled developers to the limitations of the hardware. With S3, Amazon introduced a shift towards Storage as a Service, liberating developers from the burdensome tasks of purchasing, provisioning, and managing physical storage. No longer were they bound by the precarious dance of capacity planning, where overestimating meant wasted resources and underestimating spelled disaster for uptime.

· 4 min read
Ovais Tariq

Hello, world! We're Tigris Data, and today we're announcing the public beta of Tigris. Tigris is a globally distributed object storage service that provides low latency anywhere in the world, enabling developers like you to store and access any amount of data using the S3 libraries you're already using in production. Today, we're launching our public beta on top of Fly.io.

Tigris globally distributed object storage [Midjourney prompt: tiger face, illustrated in binary code, blue and white.]