Object storage providers can now be switched without breaking existing runs

Self-hosted deployments can migrate between S3-compatible storage backends like Cloudflare R2 and AWS S3 without any downtime or data loss.
When self-hosted Trigger.dev installations need to switch object storage providers—say, moving from Cloudflare R2 to AWS S3—the previous approach required careful coordination to avoid breaking in-flight runs. Data stored in one provider couldn't be accessed after switching configurations.
Storage paths now carry optional protocol prefixes. A path prefixed with s3:// routes to AWS S3, while an r2:// prefix points to Cloudflare R2. Paths without a prefix continue using the default provider, maintaining backward compatibility with existing runs.
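The routing can be pictured as a small resolver that splits an optional protocol prefix off a stored path. This is a sketch, not Trigger.dev's actual internals: the `parseObjectStorePath` helper and the choice of `r2` as the default protocol are illustrative assumptions.

```typescript
// Sketch: resolve a stored path to a provider via its optional protocol
// prefix. Paths without a prefix fall back to the default provider, so
// records written before a migration keep resolving to the original store.

type ResolvedPath = { protocol: string; key: string };

// Assumed default for this sketch; in practice it would come from config.
const DEFAULT_PROTOCOL = "r2";

function parseObjectStorePath(path: string): ResolvedPath {
  const match = /^([a-z0-9]+):\/\/(.+)$/.exec(path);
  if (match) {
    return { protocol: match[1], key: match[2] };
  }
  // No prefix: a legacy path, served by the default store.
  return { protocol: DEFAULT_PROTOCOL, key: path };
}
```

Because unprefixed paths resolve to the default store, nothing stored before the feature shipped needs to be rewritten.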
New environment variables configure additional providers: OBJECT_STORE_S3_* for S3, OBJECT_STORE_R2_* for R2, and so on. The OBJECT_STORE_DEFAULT_PROTOCOL variable controls which provider receives new uploads, letting operators phase in migrations gradually. Existing data stays accessible in its original location while new data flows to the chosen backend.
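Resolving a protocol to its configuration amounts to reading the matching OBJECT_STORE_&lt;PROTOCOL&gt;_* variables. The variable names below come from the documented example; the `readStoreConfig` and `defaultStore` helpers are a sketch of the idea, not Trigger.dev's actual code.

```typescript
// Sketch: build a store config for a protocol from OBJECT_STORE_<PROTO>_*
// env vars, and pick the store that receives new uploads from
// OBJECT_STORE_DEFAULT_PROTOCOL.

type StoreConfig = {
  baseUrl: string;
  accessKeyId?: string;
  secretAccessKey?: string;
  region?: string;
};

type Env = Record<string, string | undefined>;

function readStoreConfig(protocol: string, env: Env): StoreConfig | undefined {
  const prefix = `OBJECT_STORE_${protocol.toUpperCase()}_`;
  const baseUrl = env[`${prefix}BASE_URL`];
  if (!baseUrl) return undefined; // protocol not configured

  return {
    baseUrl,
    accessKeyId: env[`${prefix}ACCESS_KEY_ID`],
    secretAccessKey: env[`${prefix}SECRET_ACCESS_KEY`],
    region: env[`${prefix}REGION`],
  };
}

function defaultStore(env: Env) {
  // "r2" as the fallback is an assumption for this sketch.
  const protocol = env.OBJECT_STORE_DEFAULT_PROTOCOL ?? "r2";
  return { protocol, config: readStoreConfig(protocol, env) };
}
```

Because each protocol's credentials live under its own prefix, both providers can be fully configured at once while only the default protocol decides where new data goes.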
The migration path is documented step-by-step: configure the new provider alongside the existing one, test that both work, then flip the default protocol switch. Once all runs using old data complete, the previous provider can be decommissioned at leisure.
On the infrastructure side, the object store client now supports IAM role-based authentication in addition to static credentials. When access keys aren't provided, the AWS SDK's S3Client automatically pulls credentials from the ECS task role or EC2 instance metadata—useful for Kubernetes deployments where pods assume IAM roles.
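The fallback can be sketched as building the client options conditionally: include static credentials only when both keys are set, and otherwise leave `credentials` undefined so the AWS SDK's default provider chain (environment variables, ECS task role, EC2/EKS instance metadata) takes over. The `buildClientOptions` helper is illustrative, not the actual implementation.

```typescript
// Sketch: pass static credentials to the S3 client only when both keys
// are present; otherwise omit them so the AWS SDK resolves credentials
// from its default provider chain (IAM role, instance metadata, etc.).

type ClientOptions = {
  region: string;
  endpoint?: string;
  credentials?: { accessKeyId: string; secretAccessKey: string };
};

function buildClientOptions(input: {
  region: string;
  endpoint?: string;
  accessKeyId?: string;
  secretAccessKey?: string;
}): ClientOptions {
  const options: ClientOptions = {
    region: input.region,
    endpoint: input.endpoint,
  };
  if (input.accessKeyId && input.secretAccessKey) {
    options.credentials = {
      accessKeyId: input.accessKeyId,
      secretAccessKey: input.secretAccessKey,
    };
  }
  return options;
}

// The result would then be passed to `new S3Client(options)` from
// @aws-sdk/client-s3; with `credentials` left undefined, the SDK
// picks up the IAM role automatically.
```

This is why the same deployment works with either static access keys or pod-level IAM roles without a separate code path.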
In the webapp, large payload processing, batch triggers, and waitpoint completions all route through the new abstraction. A v2 packets API endpoint returns canonical storage paths that include protocol prefixes, ensuring the SDK knows exactly where data lives.
Original GitHub description
This allows seamless migration to a different object storage provider.
Existing runs that have offloaded payloads/outputs will continue to use the default object store (configured using OBJECT_STORE_* env vars).
You can add additional stores by setting new env vars:
- OBJECT_STORE_DEFAULT_PROTOCOL determines where new runs' large payloads will get stored.
- If you set that, you need to set new env vars for that protocol.
Example:
OBJECT_STORE_DEFAULT_PROTOCOL=s3
OBJECT_STORE_S3_BASE_URL=https://s3.us-east-1.amazonaws.com
OBJECT_STORE_S3_ACCESS_KEY_ID=<val>
OBJECT_STORE_S3_SECRET_ACCESS_KEY=<val>
OBJECT_STORE_S3_REGION=us-east-1
OBJECT_STORE_S3_SERVICE=s3