You can view your current limits, quotas, and rate limit usage in real time by visiting the Limits page in the dashboard (accessible from the left sidebar). This page shows current rate limit token availability, quota usage, and plan features for your organization.
Extra concurrency above the Pro tier limit is available via the dashboard. On the Pro plan, open the “Concurrency” page from the left sidebar to purchase more.
You can request a higher rate limit from us if you’re on a paid plan. The most common cause of hitting the API rate limit is calling trigger() on a task in a loop. Instead, use batchTrigger(), which triggers multiple tasks in a single API call. A single batch trigger call can contain up to 1,000 tasks with SDK 4.3.1+ (500 in prior versions).
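To see why the loop hits the rate limit so much faster, here is a minimal sketch that only counts API requests. The `MockApi` class is purely illustrative (the real SDK methods are `trigger()` and `batchTrigger()` on your task or via `tasks`); the 1,000-item cap matches the SDK 4.3.1+ limit above.

```typescript
// Illustrative mock, not SDK code: counts how many API requests each
// triggering style makes against the rate limit.
class MockApi {
  requests = 0;

  trigger(_payload: unknown): void {
    this.requests += 1; // every loop iteration is its own API call
  }

  batchTrigger(items: unknown[]): void {
    if (items.length > 1000) {
      throw new Error("max 1,000 items per batch (SDK 4.3.1+)");
    }
    this.requests += 1; // one API call regardless of item count
  }
}

const api = new MockApi();
for (let i = 0; i < 1000; i++) api.trigger({ i }); // 1,000 API calls
api.batchTrigger(Array.from({ length: 1000 }, (_, i) => ({ i }))); // 1 call
console.log(api.requests); // 1001
```

The loop consumed 1,000 requests against the rate limit; the batch consumed one.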
The maximum number of runs that can be queued per queue (not across all queues in the environment). Each queue can hold up to its limit independently. When a queue hits its limit, new triggers to that queue are rejected.
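The per-queue behavior can be sketched as follows. This is an illustration of the semantics only (each queue has its own independent cap, and a full queue rejects new triggers), not how the platform is implemented.

```typescript
// Illustrative sketch: each queue enforces its own size limit independently.
class BoundedQueue {
  private items: unknown[] = [];

  constructor(private limit: number) {}

  tryEnqueue(run: unknown): boolean {
    if (this.items.length >= this.limit) return false; // trigger rejected
    this.items.push(run);
    return true;
  }

  get size(): number {
    return this.items.length;
  }
}

const queueA = new BoundedQueue(2);
const queueB = new BoundedQueue(2);
queueA.tryEnqueue("run-1");
queueA.tryEnqueue("run-2");
console.log(queueA.tryEnqueue("run-3")); // false — queue A is at its limit
console.log(queueB.tryEnqueue("run-4")); // true — queue B has its own capacity
```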
The limits below apply to Trigger.dev Cloud. If you self-host Trigger.dev, queue size limits are configurable via the MAXIMUM_DEV_QUEUE_SIZE and MAXIMUM_DEPLOYED_QUEUE_SIZE environment variables — see Self-hosting environment variables.
On Trigger.dev Cloud, all runs have an enforced maximum TTL of 14 days. Runs without an explicit TTL automatically receive the 14-day TTL; runs with a TTL longer than 14 days are clamped to 14 days. This prevents queued runs from accumulating indefinitely. If you self-host, you can configure a maximum TTL via the RUN_ENGINE_DEFAULT_MAX_TTL environment variable — see Self-hosting environment variables.
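The defaulting and clamping described above reduces to a simple rule, sketched here (our own illustration, in days, not platform code):

```typescript
const MAX_TTL_DAYS = 14; // enforced maximum on Trigger.dev Cloud

// No explicit TTL -> the 14-day default applies; longer TTLs are clamped.
function effectiveTtlDays(requested?: number): number {
  if (requested === undefined) return MAX_TTL_DAYS;
  return Math.min(requested, MAX_TTL_DAYS);
}

console.log(effectiveTtlDays());   // 14 — default applied
console.log(effectiveTtlDays(30)); // 14 — clamped
console.log(effectiveTtlDays(2));  // 2  — shorter TTLs are kept as-is
```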
Additional bundles above the Pro tier are available for $10/month per 1,000 schedules. Contact us via email or Discord to request more. When attaching schedules to tasks, we strongly recommend adding them in our dashboard if they’re “static”, since that way you can control them easily per environment. If you add them dynamically using code, make sure you set a deduplicationKey so you don’t attach the same schedule to a task multiple times. If you don’t, your task will get triggered multiple times, it will cost you more, and you will hit the limit. If you’re creating schedules for your users, you will definitely need to request more schedules from us.
Each project receives its own concurrency allocation. If you need to support multiple tenants with the same codebase but different environment variables, see the Multi-tenant applications section for a recommended workaround.
Each item can be up to 3MB with SDK 4.3.1+ (prior versions: 1MB total combined).
Task outputs must not exceed 10MB.
Payloads and outputs that exceed 512KB are offloaded to object storage, and a presigned URL to download the data is provided when calling runs.retrieve. You don’t need to handle this in your tasks; we transparently upload and download the data during operation.
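The decision boundary can be sketched as a single threshold check (our own illustration of the behavior described above, not platform code):

```typescript
const OFFLOAD_THRESHOLD_BYTES = 512 * 1024; // 512KB

// Above the threshold, the data lives in object storage and runs.retrieve
// returns a presigned download URL instead of the inline value.
function storagePlan(sizeBytes: number): "inline" | "object-storage" {
  return sizeBytes > OFFLOAD_THRESHOLD_BYTES ? "object-storage" : "inline";
}

console.log(storagePlan(100 * 1024));      // "inline"
console.log(storagePlan(2 * 1024 * 1024)); // "object-storage"
```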
Batch triggering uses a token bucket algorithm to rate limit the number of runs you can trigger per environment. Each run in a batch consumes one token.
| Pricing tier | Bucket size | Refill rate           |
| ------------ | ----------- | --------------------- |
| Free         | 1,200 runs  | 100 runs every 10 sec |
| Hobby        | 5,000 runs  | 500 runs every 5 sec  |
| Pro          | 5,000 runs  | 500 runs every 5 sec  |
How it works: You can burst up to your bucket size, then tokens refill at the specified rate. For example, a Free user can trigger 1,200 runs immediately, then must wait for tokens to refill (100 runs become available every 10 seconds).
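The burst-then-refill behavior can be sketched with a minimal token bucket using the Free tier numbers (bucket size 1,200; 100 tokens every 10 seconds). This is an illustration of the algorithm, not the platform's implementation:

```typescript
// Minimal token-bucket sketch: burst up to capacity, then refill over time.
class TokenBucket {
  constructor(
    private capacity: number,
    private refillAmount: number,
    private refillIntervalSec: number,
    private tokens = capacity, // buckets start full
  ) {}

  tryConsume(n: number): boolean {
    if (n > this.tokens) return false; // rate limited
    this.tokens -= n;
    return true;
  }

  elapse(seconds: number): void {
    // Simulate time passing: one refill per full interval, capped at capacity.
    const refills = Math.floor(seconds / this.refillIntervalSec);
    this.tokens = Math.min(this.capacity, this.tokens + refills * this.refillAmount);
  }

  get available(): number {
    return this.tokens;
  }
}

const bucket = new TokenBucket(1200, 100, 10);
console.log(bucket.tryConsume(1200)); // true — burst up to bucket size
console.log(bucket.tryConsume(1));    // false — bucket is empty
bucket.elapse(30);                    // 3 refill intervals pass
console.log(bucket.available);        // 300
```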
When you hit batch rate limits, the SDK throws a BatchTriggerError with isRateLimited: true.
See Handling batch trigger errors for how to detect and react to rate limits in your code.
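One common reaction is to retry with exponential backoff. The sketch below is hedged: the error shape (`isRateLimited: true`) comes from the description above, but the retry policy, the `batchTriggerWithRetry` helper, and its parameters are our own illustration, not an SDK API.

```typescript
// Hypothetical retry helper: `send` wraps your actual batchTrigger call.
// Resolves with the attempt number that succeeded.
async function batchTriggerWithRetry(
  send: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await send();
      return attempt;
    } catch (err: any) {
      // Only retry rate-limit errors, and only while attempts remain.
      if (!err?.isRateLimited || attempt === maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw new Error("unreachable");
}

// Simulate an API that is rate limited twice, then succeeds:
let calls = 0;
batchTriggerWithRetry(async () => {
  calls += 1;
  if (calls < 3) {
    throw Object.assign(new Error("rate limited"), { isRateLimited: true });
  }
}, 5, 1).then((attempt) => console.log(attempt)); // 3
```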
When you send a batch trigger, we convert it into individual runs. This limit controls the maximum number of batches being converted into runs simultaneously per environment. It is not a limit on how many batch runs can be executing at once.
The default machine is small-1x, which has 0.5 vCPU and 0.5 GB of RAM. You can optionally configure a higher-spec machine, which increases the cost of running the task but can also improve performance if the task is CPU- or memory-bound. See the machine configurations for more details.
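A configuration sketch of selecting a machine on a task is shown below. The preset names come from the machine configurations docs; the exact option shape and import path may differ by SDK version (older versions use `machine: { preset: "..." }` and import from `@trigger.dev/sdk/v3`), so check the docs for your version.

```typescript
import { task } from "@trigger.dev/sdk";

// Sketch: request a larger machine than the default small-1x for a
// CPU- or memory-bound task. "large-1x" is one of the documented presets.
export const heavyTask = task({
  id: "heavy-task",
  machine: "large-1x",
  run: async (payload: { url: string }) => {
    // CPU- or memory-bound work benefits from the extra vCPU and RAM here.
  },
});
```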