Limits
Daytona enforces resource and request limits to ensure fair usage and stability across all organizations.
Daytona Limits ↗ provides an overview of your organization’s resource limits and usage.
Resources
Resources are shared across all running sandboxes. The number of sandboxes you can run at once depends on their individual usage. Organizations are automatically placed into a tier based on verification status and have access to a compute pool consisting of:
- Compute: Total CPU cores available
- Memory: Total RAM available
- Storage: Total disk space available
Limits are applied to your organization’s default region. To unlock higher limits, meet the requirements for a higher tier:
| Tier | Resources (vCPU / RAM / Storage) | Access Requirements |
|---|---|---|
| Tier 1 | 10 / 10GiB / 30GiB | Email verified |
| Tier 2 | 100 / 200GiB / 300GiB | Credit card linked, $25 top-up, GitHub connected |
| Tier 3 | 250 / 500GiB / 2000GiB | Business email verified, $500 top-up |
| Tier 4 | 500 / 1000GiB / 5000GiB | $2000 top-up every 30 days |
| Custom | Custom limits | Contact support@daytona.io |
Once you meet the criteria for a higher tier, upgrade your tier in the Daytona Dashboard ↗.
Resource usage
Daytona supports managing your resource usage by changing the state of your sandboxes. The table below summarizes how each state affects resource usage, and a short SDK sketch follows the table:
| State | vCPU | Memory | Storage | Description |
|---|---|---|---|---|
| Running | ✅ | ✅ | ✅ | Counts against all limits |
| Stopped | ❌ | ❌ | ✅ | Frees CPU & memory, but storage is still used |
| Archived | ❌ | ❌ | ❌ | Data moved to cold storage, no quota impact |
| Deleted | ❌ | ❌ | ❌ | All resources freed |
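For instance, here is a minimal lifecycle sketch with the TypeScript SDK. It assumes the Daytona client from @daytonaio/sdk, an API key in the environment, and the stop/archive/delete sandbox methods referenced in the lifecycle section below; confirm exact method names in the SDK reference.

```typescript
import { Daytona } from '@daytonaio/sdk'

const daytona = new Daytona() // assumes DAYTONA_API_KEY is set in the environment

// Running: counts against vCPU, memory, and storage limits.
const sandbox = await daytona.create({ snapshot: 'my-snapshot' })

// Stopped: frees vCPU and memory, but the disk still counts against storage.
await sandbox.stop()

// Archived: data moves to cold storage and no longer counts against quotas.
await sandbox.archive()

// Deleted: all resources are freed.
await sandbox.delete()
```

Stopping is the cheapest way to pause work you expect to resume soon, while archiving suits sandboxes you want to keep around but don’t plan to restart immediately.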
Rate limits
Rate limits control how many API requests you can make within a specific time window. These limits are applied based on your tier, authentication status, and the type of operation you’re performing. Rate limits for general authenticated requests are tracked per organization.
The following rate limits are applied for each tier:
- General requests
- Sandbox creation
- Sandbox lifecycle
| Tier | General Requests (per min) | Sandbox Creation (per min) | Sandbox Lifecycle (per min) |
|---|---|---|---|
| Tier 1 | 10,000 | 300 | 10,000 |
| Tier 2 | 20,000 | 400 | 20,000 |
| Tier 3 | 40,000 | 500 | 40,000 |
| Tier 4 | 50,000 | 600 | 50,000 |
| Custom | Custom limits | Custom limits | Custom limits |
The general rate limit applies to authenticated API requests that don’t fall under sandbox creation or lifecycle operations, including the following (see the sketch after this list):
- Listing sandboxes
- Getting sandbox details
- Retrieving sandbox regions
- Listing snapshots
- Managing volumes
- Viewing audit logs
- and other read/management operations
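For example, routine read calls through the SDK fall under this general limit rather than the creation or lifecycle limits. The method names below are illustrative assumptions; confirm them in the SDK references.

```typescript
// These read/management operations count against the general request limit only.
// Method names are assumptions here; check the SDK reference for your version.
const sandboxes = await daytona.list()              // listing sandboxes
const details = await daytona.get(sandboxes[0].id)  // getting sandbox details
```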
When you exceed a rate limit, subsequent requests will fail with:
- HTTP Status: 429 Too Many Requests
- Error Response: JSON body with rate limit details
- Retry-After Header: Time to wait before retrying (in seconds)
Understanding these limits helps you build robust applications that handle rate limiting gracefully and avoid service interruptions. For more information, see best practices.
Rate limit headers
Daytona includes rate limit information in API response headers. Header names include a suffix based on which rate limit is triggered (e.g., -anonymous, -authenticated, -sandbox-create, -sandbox-lifecycle):
| Header Pattern | Description |
|---|---|
| X-RateLimit-Limit-{throttler} | Maximum number of requests allowed in the time window |
| X-RateLimit-Remaining-{throttler} | Number of requests remaining in the current window |
| X-RateLimit-Reset-{throttler} | Time in seconds until the rate limit window resets |
| Retry-After-{throttler} | Time in seconds to wait before retrying (included when the limit is exceeded) |
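For example, a minimal sketch that inspects these headers on a raw API response; the base URL and endpoint path are assumptions, so adjust them to your deployment.

```typescript
// General authenticated requests use the "-authenticated" suffix.
const response = await fetch('https://app.daytona.io/api/sandbox', {
  headers: { Authorization: `Bearer ${process.env.DAYTONA_API_KEY}` },
})

const limit = response.headers.get('x-ratelimit-limit-authenticated')
const remaining = response.headers.get('x-ratelimit-remaining-authenticated')
const resetSeconds = response.headers.get('x-ratelimit-reset-authenticated')

console.log(`${remaining}/${limit} general requests left; window resets in ${resetSeconds}s`)
```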
Sandbox creation
This rate limit applies to all sandbox creation methods, whether you create from a snapshot, use a declarative build, or pass any other parameters to daytona.create() (SDK) or POST requests to /api/sandbox (API).
This independent limit prevents resource exhaustion while allowing you to perform lifecycle operations on existing sandboxes without restriction.
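As a hedged sketch, each create call below draws from the sandbox creation limit, while subsequent operations on the resulting sandbox draw from the separate lifecycle limit described next:

```typescript
// Each create call consumes the sandbox-creation limit, whatever its parameters.
const fromSnapshot = await daytona.create({ snapshot: 'my-snapshot' })
const withDefaults = await daytona.create()

// Operations on existing sandboxes are tracked under the lifecycle limit instead.
await withDefaults.stop()
```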
Sandbox lifecycle operations
This rate limit applies to lifecycle and state management operations on existing sandboxes:
- Starting sandboxes (POST /api/sandbox/:id/start)
- Stopping sandboxes (POST /api/sandbox/:id/stop)
- Deleting sandboxes (DELETE /api/sandbox/:id)
- Archiving sandboxes (POST /api/sandbox/:id/archive)
- and all corresponding SDK methods
These operations have a higher limit since they’re often performed more frequently during development workflows.
Rate limit errors
The Daytona Python and TypeScript SDKs raise or throw a DaytonaRateLimitError when you exceed a rate limit: an exception in Python and an error in TypeScript.
All errors include headers and statusCode properties, allowing access to rate limit headers directly from the error object. Headers support case-insensitive access:
```typescript
try {
  await daytona.create()
} catch (error) {
  if (error instanceof DaytonaRateLimitError) {
    console.log(error.headers?.get('x-ratelimit-remaining-sandbox-create'))
    console.log(error.headers?.get('X-RateLimit-Remaining-Sandbox-Create')) // also works
  }
}
```

```python
try:
    daytona.create(snapshot="my-snapshot")
except DaytonaRateLimitError as e:
    print(e.headers['x-ratelimit-remaining-sandbox-create'])
    print(e.headers['X-RateLimit-Remaining-Sandbox-Create'])  # also works
```

For more information, see the Python SDK and TypeScript SDK references.
Rate limit error response
The rate limit error response is a JSON object with the following properties:
- statusCode: The HTTP status code of the error
- message: The error message
- error: The error type
{ "statusCode": 429, "message": "Rate limit exceeded", "error": "Too Many Requests"}Tier upgrade
Unlock more resources and higher rate limits by completing verification steps. For more information, see the Daytona Dashboard ↗.
Best practices
To work effectively within rate limits, always handle 429 errors gracefully with proper retry logic. When you receive a rate limit error, implement exponential backoff—wait progressively longer between retries (1s, 2s, 4s, 8s, etc.) to avoid overwhelming the API.
The following snippet demonstrates how to create a sandbox with retry logic using the TypeScript SDK:
```typescript
async function createSandboxWithRetry() {
  let retries = 0
  const maxRetries = 5

  while (retries < maxRetries) {
    try {
      return await daytona.create({ snapshot: 'my-snapshot' })
    } catch (error) {
      if (error instanceof DaytonaRateLimitError && retries < maxRetries - 1) {
        // Use Retry-After header if available, otherwise exponential backoff
        const retryAfter = error.headers?.get('retry-after-sandbox-create')
        const delay = retryAfter ? parseInt(retryAfter) * 1000 : Math.pow(2, retries) * 1000
        await new Promise(resolve => setTimeout(resolve, delay))
        retries++
      } else {
        throw error
      }
    }
  }
}
```

Monitor rate limit headers (e.g., X-RateLimit-Remaining-{throttler}, X-RateLimit-Reset-{throttler}) to track your consumption and implement proactive throttling before hitting limits. These headers are available on all error objects via the headers property.
Cache API responses that don’t frequently change, such as sandbox lists (when relatively static), available regions, and snapshot information. This reduces unnecessary API calls and helps you stay well within your limits.
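A minimal caching sketch follows, assuming daytona.list() returns your organization’s sandboxes (the method name is an assumption; check the SDK references):

```typescript
// Simple in-memory cache with a time-to-live, to avoid repeating identical reads.
const cache = new Map<string, { value: unknown; expiresAt: number }>()

async function cached<T>(key: string, ttlMs: number, fetcher: () => Promise<T>): Promise<T> {
  const hit = cache.get(key)
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T // served from cache; no API request consumed
  }
  const value = await fetcher()
  cache.set(key, { value, expiresAt: Date.now() + ttlMs })
  return value
}

// Refresh the sandbox list at most once per minute.
const sandboxes = await cached('sandboxes', 60_000, () => daytona.list())
```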
Batch and optimize operations by creating multiple sandboxes in parallel (within rate limits) rather than sequentially. Consider reusing existing sandboxes when possible instead of creating new ones for every task.
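For example, a small batch created concurrently, sized well below the per-minute creation limit for your tier:

```typescript
// Create a handful of sandboxes in parallel instead of one after another.
const batchSize = 5
const sandboxes = await Promise.all(
  Array.from({ length: batchSize }, () => daytona.create({ snapshot: 'my-snapshot' }))
)
```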
Efficiently manage sandbox lifecycle to reduce API calls. Archive sandboxes instead of deleting and recreating them, stop sandboxes when not in use rather than deleting them, and leverage auto-stop intervals to automatically manage running sandboxes without manual intervention.
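A hedged sketch of setting an auto-stop interval at creation time; the autoStopInterval parameter name is an assumption, so confirm it in the SDK reference for your version:

```typescript
// Let the platform stop the sandbox after a period of inactivity instead of
// polling and stopping it manually. Parameter name assumed; see the SDK reference.
const sandbox = await daytona.create({
  snapshot: 'my-snapshot',
  autoStopInterval: 15, // minutes of inactivity before an automatic stop
})
```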
Implement request queuing to prevent bursts that exceed limits, and use webhooks instead of polling for state changes to avoid unnecessary API calls. Set up monitoring and alerts for 429 errors in your application logs so you can proactively address rate limiting issues before they impact your users.
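The sketch below shows one way to cap in-flight requests so bursts stay under the per-minute limits; libraries such as p-limit or bottleneck implement the same pattern with more features.

```typescript
// Minimal request queue: never more than maxConcurrent calls in flight at once.
class RequestQueue {
  private queue: Array<() => void> = []
  private active = 0

  constructor(private readonly maxConcurrent: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.maxConcurrent) {
      // Wait until a running task finishes and wakes us up.
      await new Promise<void>(resolve => this.queue.push(resolve))
    }
    this.active++
    try {
      return await task()
    } finally {
      this.active--
      this.queue.shift()?.() // wake the next waiter, if any
    }
  }
}

// Example: at most 3 sandbox creations in flight at a time.
const queue = new RequestQueue(3)
const sandboxes = await Promise.all(
  Array.from({ length: 10 }, () => queue.run(() => daytona.create({ snapshot: 'my-snapshot' })))
)
```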