Configuration reference

TAG can be configured via a YAML configuration file and/or environment variables. Environment variables take precedence over file configuration.

Configuration precedence

  1. Command line flags (highest priority)
  2. Environment variables
  3. Configuration file
  4. Default values (lowest priority)
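The layering above can be sketched as a simple lookup chain. This is an illustrative model of the precedence order, not TAG's actual resolution code:

```python
# Minimal sketch of the precedence chain: flags beat environment variables,
# which beat the config file, which beats built-in defaults.
def resolve(key, flags, env, file_cfg, defaults):
    for layer in (flags, env, file_cfg, defaults):
        if key in layer and layer[key] is not None:
            return layer[key]
    raise KeyError(key)

defaults = {"http_port": 8080, "log_level": "info"}
file_cfg = {"log_level": "warn"}
env      = {"log_level": "debug"}
flags    = {}

print(resolve("log_level", flags, env, file_cfg, defaults))  # debug (env beats file)
print(resolve("http_port", flags, env, file_cfg, defaults))  # 8080 (default)
```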

Environment variables

| Variable | Description | Default |
| --- | --- | --- |
| AWS_ACCESS_KEY_ID | Tigris access key (TAG's own credentials, not client credentials) | (required) |
| AWS_SECRET_ACCESS_KEY | Tigris secret key | (required) |
| TAG_UPSTREAM_ENDPOINT | Tigris S3 endpoint URL | https://t3.storage.dev |
| TAG_MAX_IDLE_CONNS_PER_HOST | HTTP connection pool size per upstream host | 100 |
| TAG_TRANSPARENT_PROXY | Enable transparent proxy mode. Set to false or 0 to use signing mode | true |
| TAG_CACHE_DISABLED | Disable caching (true or 1) | false |
| TAG_CACHE_DISK_PATH | Path to cache data directory | /var/cache/tag |
| TAG_CACHE_MAX_DISK_USAGE | Max disk usage in bytes (0 = unlimited) | 0 |
| TAG_CACHE_NODE_ID | Unique node identifier for cluster mode | (none) |
| TAG_CACHE_CLUSTER_ADDR | Address for memberlist gossip | :7000 |
| TAG_CACHE_GRPC_ADDR | Address for gRPC server | :9000 |
| TAG_CACHE_ADVERTISE_ADDR | Address advertised to other nodes | (defaults to gRPC addr) |
| TAG_CACHE_SEED_NODES | Comma-separated seed nodes for cluster discovery | (none) |
| TAG_CACHE_GRPC_AUTH | Enable gRPC authentication between cluster nodes (disable with false or 0) | true |
| TAG_LOG_LEVEL | Log level: debug, info, warn, error | info |
| TAG_LOG_FORMAT | Log format: json or console | json |
| TAG_TLS_CERT_FILE | Path to TLS certificate file (PEM format) | (none) |
| TAG_TLS_KEY_FILE | Path to TLS private key file (PEM format) | (none) |
| TAG_PPROF_ENABLED | Enable pprof endpoints (true or 1) | false |

AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required. They are TAG's own Tigris credentials and must have read-only access to every bucket served through TAG; clients authenticate with their own credentials directly.
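Several boolean variables in the table accept true or 1 for enabled and false or 0 for disabled. A minimal sketch of that interpretation, with an invented helper name:

```python
import os

# Interpret "true"/"1" as enabled and anything else as disabled, matching
# the accepted values documented above. env_bool is illustrative, not TAG API.
def env_bool(name: str, default: bool) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1")

os.environ["TAG_CACHE_DISABLED"] = "1"
print(env_bool("TAG_CACHE_DISABLED", False))  # True
print(env_bool("TAG_PPROF_ENABLED", False))   # False (unset, falls back to default)
```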

Configuration file

The configuration file uses YAML format. Specify the path with the --config flag:

./tag --config /etc/tag/config.yaml

Full configuration reference

# Server configuration
server:
  # HTTP port for the S3 API
  # Default: 8080
  http_port: 8080

  # IP address to bind to
  # Default: "0.0.0.0" (all interfaces)
  bind_ip: "0.0.0.0"

  # Enable pprof profiling endpoints
  # Default: false (disabled for security)
  pprof_enabled: false

  # Path to TLS certificate file (PEM format)
  # When both tls_cert_file and tls_key_file are set, TAG serves HTTPS
  # Default: "" (TLS disabled, serves HTTP)
  tls_cert_file: ""

  # Path to TLS private key file (PEM format)
  # Must be set together with tls_cert_file
  # Default: "" (TLS disabled, serves HTTP)
  tls_key_file: ""

# Upstream Tigris configuration
upstream:
  # Tigris S3 endpoint URL
  # Default: "https://t3.storage.dev"
  endpoint: "https://t3.storage.dev"

  # AWS region for request signing
  # Default: "auto"
  region: "auto"

  # HTTP connection pool size per upstream host
  # Higher values improve throughput for cache-miss scenarios
  # Default: 100
  max_idle_conns_per_host: 100

  # Enable transparent proxy mode (default: true)
  # When true, client requests are forwarded as-is with proxy headers added.
  # When false, TAG validates and re-signs requests (signing mode).
  transparent_proxy: true

# Cache configuration
cache:
  # Enable caching
  # Default: true
  enabled: true

  # Default TTL for cached objects
  # Default: 60m
  ttl: 60m

  # Maximum object size to cache (in bytes)
  # Objects larger than this are not cached
  # Default: 1073741824 (1 GiB)
  size_threshold: 1073741824

  # Path to cache data directory
  # Default: /var/cache/tag
  disk_path: "/var/cache/tag"

  # Max disk usage in bytes (0 = unlimited)
  # Default: 0
  max_disk_usage_bytes: 0

  # Unique node identifier for cluster mode
  # Required for multi-node deployments
  node_id: "tag-node-1"

  # Address for memberlist gossip protocol
  # Default: :7000
  cluster_addr: ":7000"

  # Address for gRPC server (cache cluster routing)
  # Default: :9000
  grpc_addr: ":9000"

  # Address advertised to other nodes
  # Defaults to grpc_addr if not specified
  advertise_addr: "tag-node-1:9000"

  # Seed nodes for cluster discovery
  # List of cluster addresses for other nodes
  seed_nodes:
    - "tag-node-1:7000"
    - "tag-node-2:7000"
    - "tag-node-3:7000"

# Broadcast configuration (request coalescing)
broadcast:
  # Streaming chunk size in bytes
  # Default: 65536 (64 KiB)
  chunk_size: 65536

  # Buffer size per listener in chunks
  # Total buffer per listener = chunk_size × channel_buffer
  # Default: 32 (~2 MiB with default chunk size)
  channel_buffer: 32

# Logging configuration
log:
  # Log level: debug, info, warn, error
  # Default: "info"
  level: "info"

  # Log format: json (fast) or console (human-readable)
  # Default: "json"
  format: "json"

Additional notes

TLS

When both tls_cert_file and tls_key_file are set, TAG serves HTTPS. See TLS/HTTPS for certificate setup across Docker, Kubernetes, and native deployments.

Transparent vs. Signing mode

When transparent_proxy is true (default), TAG forwards the client's original Authorization header to Tigris unchanged and adds cryptographically signed proxy headers. Tigris validates both the client's signature and TAG's proxy signature in a single round-trip.

When transparent_proxy is false, TAG validates incoming signatures locally and re-signs requests with TAG's own credentials before forwarding. See Security and Access Control for the full authentication flow.
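The difference between the two modes can be sketched as follows. The header name and the HMAC-based "signature" here are illustrative placeholders, not TAG's real wire protocol:

```python
import hashlib
import hmac

# Stand-in for real SigV4 signing; illustration only.
def fake_sign(payload: str, secret: str) -> str:
    return hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()

def forward(headers: dict, path: str, transparent_proxy: bool, tag_secret: str) -> dict:
    out = dict(headers)
    if transparent_proxy:
        # Transparent mode: the client's Authorization header passes through
        # unchanged; TAG adds a signed proxy header alongside it.
        out["X-Proxy-Signature"] = fake_sign(path, tag_secret)
    else:
        # Signing mode: the client's signature is validated locally (omitted
        # here), then replaced with one made from TAG's own credentials.
        out["Authorization"] = "AWS4-HMAC-SHA256 " + fake_sign(path, tag_secret)
    return out

client = {"Authorization": "AWS4-HMAC-SHA256 client-sig"}
print(forward(client, "/bucket/key", True, "s3cret")["Authorization"])  # unchanged
```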

Endpoint validation

The upstream endpoint must match one of the allowed host patterns: localhost, *.tigris.dev, or *.storage.dev. TAG exits at startup if the endpoint does not match.
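A sketch of that startup check, assuming simple glob matching on the endpoint's hostname (the function name is invented for illustration):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Allowed host patterns from the documentation above.
ALLOWED = ("localhost", "*.tigris.dev", "*.storage.dev")

def endpoint_allowed(endpoint: str) -> bool:
    # Compare only the hostname, so ports and schemes don't matter.
    host = urlparse(endpoint).hostname or ""
    return any(fnmatch(host, pattern) for pattern in ALLOWED)

print(endpoint_allowed("https://t3.storage.dev"))  # True
print(endpoint_allowed("https://example.com"))     # False
```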

Cluster mode

For multi-node deployments, configure each node with a unique node_id, the same seed_nodes list, and an advertise_addr reachable from other nodes.
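Those invariants (unique node_id, identical seed_nodes) can be checked with a small script before rollout. This is an illustrative pre-deployment helper, not part of TAG:

```python
# Check the cluster invariants described above: every node_id is unique
# and every node shares the same seed_nodes list.
def check_cluster(nodes: list[dict]) -> list[str]:
    errors = []
    ids = [n["node_id"] for n in nodes]
    if len(set(ids)) != len(ids):
        errors.append("duplicate node_id values: " + ", ".join(ids))
    if len({tuple(n["seed_nodes"]) for n in nodes}) > 1:
        errors.append("seed_nodes lists differ between nodes")
    return errors

nodes = [
    {"node_id": "tag-1", "seed_nodes": ["tag-1:7000", "tag-2:7000"]},
    {"node_id": "tag-2", "seed_nodes": ["tag-1:7000", "tag-2:7000"]},
]
print(check_cluster(nodes))  # [] (no errors)
```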

| Port | Protocol | Purpose |
| --- | --- | --- |
| 8080 | TCP | HTTP API (S3-compatible) |
| 7000 | TCP | Gossip protocol for cluster discovery |
| 9000 | TCP | gRPC for inter-node cache communication |

macOS port conflict

On macOS, port 7000 is used by AirPlay Receiver. Use ports 17000 (gossip) and 19000 (gRPC) instead:

cache:
  cluster_addr: ":17000"
  grpc_addr: ":19000"
  seed_nodes:
    - "node1:17000"

Broadcast memory usage

The total in-flight memory per broadcast is chunk_size × channel_buffer × num_listeners. With defaults, 100 concurrent listeners for the same object consume ~200 MiB. Increase chunk_size or channel_buffer for very high concurrency; decrease them if memory is constrained.
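The arithmetic above as a small worked example:

```python
# Per-broadcast in-flight memory = chunk_size × channel_buffer × num_listeners.
def broadcast_memory_bytes(chunk_size: int, channel_buffer: int, num_listeners: int) -> int:
    return chunk_size * channel_buffer * num_listeners

mem = broadcast_memory_bytes(65536, 32, 100)  # defaults, 100 listeners
print(f"{mem / 2**20:.0f} MiB")               # 200 MiB
```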

Profiling

TAG exposes pprof endpoints for performance profiling when enabled. Disabled by default for security (exposes runtime internals).

TAG_PPROF_ENABLED=true ./tag

Endpoints (when enabled):

  • /debug/pprof/ — Index
  • /debug/pprof/profile?seconds=30 — CPU profile
  • /debug/pprof/heap — Heap profile
  • /debug/pprof/goroutine — Goroutine stacks

Usage with go tool pprof:

go tool pprof "http://localhost:8080/debug/pprof/profile?seconds=30"
go tool pprof http://localhost:8080/debug/pprof/heap

Command line flags

| Flag | Description |
| --- | --- |
| --version | Print version information and exit |
| --config | Path to configuration file |
| --http-port | HTTP listen port (overrides config file and env) |
| --log-level | Log level (overrides config file and env) |
| --log-format | Log format (overrides config file and env) |
| --disable-cache | Disable caching (pass-through mode) |

# Print version
./tag --version

# Use configuration file
./tag --config /etc/tag/config.yaml

# Override port and log level via flags
./tag --http-port 9090 --log-level debug

# Disable caching via flag (overrides config)
./tag --config /etc/tag/config.yaml --disable-cache

# Use environment variables only (no config file)
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy ./tag

Example configurations

Development (standalone)

server:
  http_port: 8080

upstream:
  endpoint: "https://t3.storage.dev"

cache:
  disk_path: "/tmp/tag-cache"
  node_id: "dev-node"

log:
  level: "debug"

Production (single node)

server:
  http_port: 8080
  bind_ip: "0.0.0.0"

upstream:
  endpoint: "https://t3.storage.dev"
  max_idle_conns_per_host: 100

cache:
  disk_path: "/var/cache/tag"
  max_disk_usage_bytes: 429496729600 # 400 GiB
  ttl: 60m
  size_threshold: 1073741824
  node_id: "tag-prod"

log:
  level: "info"
  format: "json"

To add TLS to any of these configs, set tls_cert_file and tls_key_file under server. See TLS/HTTPS for full examples.

Production (cluster mode)

Configure each node with a unique node_id and the same seed_nodes list:

server:
  http_port: 8080

upstream:
  endpoint: "https://t3.storage.dev"
  max_idle_conns_per_host: 100

cache:
  disk_path: "/var/cache/tag"
  max_disk_usage_bytes: 429496729600 # 400 GiB per node
  ttl: 1h
  size_threshold: 1073741824

  # Cluster configuration: node_id and advertise_addr are unique per node
  node_id: "tag-1"
  cluster_addr: ":7000"
  grpc_addr: ":9000"
  advertise_addr: "tag-1.tag-svc.default.svc.cluster.local:9000"
  seed_nodes:
    - "tag-1.tag-svc.default.svc.cluster.local:7000"
    - "tag-2.tag-svc.default.svc.cluster.local:7000"
    - "tag-3.tag-svc.default.svc.cluster.local:7000"

broadcast:
  chunk_size: 131072 # 128 KiB chunks for high-throughput cluster workloads
  channel_buffer: 64

log:
  level: "info"
  format: "json"