Open Source Deployment
This guide will walk you through running Daytona Open Source using Docker Compose locally on your machine or on a server behind a public domain with HTTPS.
The compose file can be found in the `docker` folder of the Daytona repository.
Overview
The Docker Compose configuration includes all the necessary services to run Daytona:
- API: Main Daytona application server
- Proxy: Request proxy service
- Runner: Service that hosts the Daytona Runner
- SSH Gateway: Service that handles sandbox SSH access
- Database: PostgreSQL database for data persistence
- Redis: In-memory data store for caching and sessions
- Dex: OIDC authentication provider
- Registry: Docker image registry with web UI
- MinIO: S3-compatible object storage
- MailDev: Email testing service
- Jaeger: Distributed tracing
- PgAdmin: Database administration interface
Local Deployment Quick Start
1. Clone the Daytona repository.
2. Run the following command (from the root of the Daytona repo) to start all services:

   ```shell
   docker compose -f docker/docker-compose.yaml up -d
   ```

3. Access the services:
   - Daytona Dashboard: http://localhost:3000
     - Access credentials: `dev@daytona.io` / `password`
     - Make sure that the default snapshot is active at http://localhost:3000/dashboard/snapshots
   - PgAdmin: http://localhost:5050
   - Registry UI: http://localhost:5100
   - MinIO Console: http://localhost:9001 (`minioadmin` / `minioadmin`)
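Once the stack is up, a quick reachability check can be scripted. This is a minimal sketch that only assumes the dashboard URL from the list above; any HTTP response (even an error page) means the API container is answering:

```python
import urllib.request
import urllib.error

url = "http://localhost:3000"  # Daytona dashboard from the quick start above
try:
    # Any HTTP status (including redirects to the login page) means the
    # API container is up and bound to the host port.
    with urllib.request.urlopen(url, timeout=5) as resp:
        status = f"dashboard reachable: HTTP {resp.status}"
except (urllib.error.URLError, OSError) as exc:
    status = f"dashboard not reachable yet: {exc}"
print(status)
```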
Domain Deployment Quick Start
This path deploys Daytona on a server behind a public domain with HTTPS, using Caddy as the reverse proxy and Let’s Encrypt for TLS certificates.
Prerequisites
- A server or local machine with at least 4 GB RAM (8 GB recommended). Ubuntu 22.04+, Debian 12+, or Fedora 39+ on any cloud provider or bare-metal host, or macOS 13+ (Ventura or later) with Docker Desktop for local domain deployment.

  macOS behind a router: If the Mac is connected to a router (e.g. a Mac Mini on a home or office network), you must configure port forwarding and a stable local IP before proceeding. Port forwarding exposes the Mac directly to inbound internet traffic, bypassing the router’s NAT firewall.

  - Assign a static local IP. Set a DHCP reservation in the router for the Mac’s MAC address, or configure a static IP on the Mac itself. Port forwards break silently if the Mac’s IP changes after a DHCP lease renewal. Get the current IP with `ipconfig getifaddr en0`.
  - Forward ports 80 and 443 from the router to the Mac’s static local IP. These are required for Caddy to serve HTTPS and complete ACME certificate challenges. If the router serves its own admin panel on port 443 (common on consumer routers), move the admin UI to another port first so it doesn’t intercept HTTPS traffic.
  - Forward port 2222 only if you need remote SSH access to sandboxes. The SSH Gateway bypasses Caddy and exposes a raw TCP listener. If you only access sandboxes from the local network, skip this forward and connect to the Mac’s local IP on port 2222 directly. If you do forward it, restrict the source IP range in your router’s port forwarding rules to known addresses (e.g. your office or VPN exit IP).
  - ISP limitations: Many residential ISPs block inbound traffic on ports 80/443 or use Carrier-Grade NAT (CGNAT), which prevents port forwarding entirely. Run `curl -4 ifconfig.me` and compare the result to the WAN IP in your router’s admin panel; if they differ, you are behind CGNAT and will need a tunnel service (e.g. Cloudflare Tunnel) instead of port forwarding.

  Network isolation (recommended): The Mac will share a network segment with every other device on the LAN. If your router supports VLANs or a guest/DMZ network, place the Mac on an isolated segment to prevent lateral access to other devices in the event of a compromise.
- A registered domain name. This guide uses `daytona.example.com` as a placeholder; replace it everywhere with your actual domain.
- A DNS provider API token for automated wildcard TLS certificate provisioning. Wildcard certificates require a DNS-01 challenge, which means your certificate tool must be able to create DNS TXT records programmatically. See the DNS Provider Reference section for provider-specific instructions.
- Ports 80, 443, and 2222 open on both the server’s host firewall and any cloud-provider-level firewall (e.g., DigitalOcean Cloud Firewalls, AWS Security Groups). On macOS, Docker Desktop handles port binding and the built-in firewall does not block these ports by default, but enabling System Settings → Network → Firewall is recommended if the Mac is exposed to the internet via port forwarding.
- SSH access to the server as root or a user with sudo privileges. On macOS, the script runs as your normal user and prompts for `sudo` only when needed (Caddy binary installation).
1. Clone the Daytona repository.
2. Configure DNS records at your DNS provider (see DNS Configuration). This step must be done manually before proceeding.
3. Run the setup wizard:

   ```shell
   ./scripts/setup-domain-oss-deployment.sh
   ```

4. During the setup process, input your domain, email address, password, and credentials for the databases you wish to use.
5. Access the services:
   - Daytona Dashboard: https://daytona.example.com
     - Username and password: chosen during setup
     - Make sure that the default snapshot is active at https://daytona.example.com/dashboard/snapshots
   - PgAdmin: https://daytona.example.com:5050 (credentials set during setup)
   - Registry UI: https://daytona.example.com:5100 (admin credentials set during setup)
   - MinIO Console: https://daytona.example.com:9001 (credentials set during setup)
DNS Setup for Proxy URLs
Local Development
For local development, you need to resolve `*.proxy.localhost` domains to `127.0.0.1`:
```shell
./scripts/setup-proxy-dns.sh
```

This configures dnsmasq with `address=/proxy.localhost/127.0.0.1`.
Without this setup, SDK examples and direct proxy access won’t work.
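To confirm the resolver is working, you can check that an arbitrary subdomain resolves. This is a sketch; `test` is just an illustrative label, since any subdomain of `proxy.localhost` should map to `127.0.0.1` once dnsmasq is configured:

```python
import socket

host = "test.proxy.localhost"  # arbitrary subdomain chosen for illustration
try:
    result = socket.gethostbyname(host)
except socket.gaierror:
    # Resolution failed: the dnsmasq setup is not active on this machine.
    result = None
print(f"{host} -> {result}")
```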
Domain Deployment
For domain deployments, create DNS records at your DNS provider pointing to your server’s public IP:
| Record Type | Name | Value | Proxy Status |
|---|---|---|---|
| A | daytona.example.com | YOUR_SERVER_IP | See note below |
| A or CNAME | proxy.daytona.example.com | YOUR_SERVER_IP (A) or daytona.example.com (CNAME) | Must be DNS-only |
| A or CNAME (wildcard) | *.proxy.daytona.example.com | YOUR_SERVER_IP (A) or daytona.example.com (CNAME) | Must be DNS-only |
The wildcard `*.proxy.daytona.example.com` covers subdomains like `8080-sandboxid.proxy.daytona.example.com`, but it does not cover the bare `proxy.daytona.example.com` itself. The proxy service uses the bare domain for OIDC callbacks (e.g., `proxy.daytona.example.com/callback`), so it needs its own record.
Either A records or CNAME records work. CNAME records point to another hostname instead of an IP, so if your server IP changes you only update one A record. The Daytona team recommends CNAME for the proxy records.
Cloudflare-Specific Warning
If you use Cloudflare, the wildcard record for `*.proxy.daytona.example.com` must have the proxy toggled OFF (grey cloud / “DNS only”). Cloudflare’s free Universal SSL certificate covers `*.example.com` but does not cover sub-subdomain wildcards like `*.proxy.example.com`. If the orange cloud proxy is enabled, Cloudflare will intercept the traffic but fail the TLS handshake because it has no certificate for that depth.
The base domain record (daytona.example.com) can be either proxied or DNS-only. For simplicity, setting all records to DNS-only is recommended so Caddy handles all TLS uniformly.
This limitation is specific to Cloudflare’s free tier. Other DNS providers that do not act as a TLS-terminating proxy (DigitalOcean DNS, AWS Route 53, Namecheap, etc.) do not have this issue.
Verification
Wait 1-5 minutes for DNS propagation, then:
```shell
# All three should return your server's actual IP (NOT Cloudflare IPs like 104.x.x.x)
dig +short daytona.example.com
dig +short proxy.daytona.example.com
dig +short anything.proxy.daytona.example.com
```

Domain Deployment: What the Setup Script Does
The `setup-domain-oss-deployment.sh` script automates the manual configuration steps required for a domain deployment. The sections below document what it does, for troubleshooting or if you need to customize the configuration beyond what the wizard supports.
Architecture
Three services must be reachable from the internet:
| Service | Internal Port | Purpose |
|---|---|---|
| API | 3000 | Dashboard, REST API |
| Proxy | 4000 | Routes browser traffic to sandbox preview ports |
| SSH Gateway | 2222 | SSH access to sandboxes |
Caddy sits in front of the API and Proxy, terminates TLS, and routes requests based on hostname. Dex (the OIDC provider) is proxied through Caddy at the `/dex/*` path on the main domain. The SSH Gateway on port 2222 is TCP (not HTTP) and bypasses Caddy; it is exposed directly through the firewall.
Sandbox preview URLs use the pattern `{{PORT}}-{{sandboxId}}.proxy.daytona.example.com`, which is why a wildcard DNS record is required.
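For illustration, filling in the placeholders produces a concrete preview hostname. The port and sandbox ID below are made-up example values, not real identifiers:

```python
# Template in the same shape as the PROXY_TEMPLATE_URL pattern above,
# filled in for a hypothetical sandbox.
template = "https://{{PORT}}-{{sandboxId}}.proxy.daytona.example.com"
url = template.replace("{{PORT}}", "8080").replace("{{sandboxId}}", "abc123")
print(url)  # https://8080-abc123.proxy.daytona.example.com
```

Because the sandbox ID and port form a single DNS label, only the wildcard record can match every generated hostname.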
Security Credentials
The default Docker Compose file ships with placeholder secrets that are not safe for any non-localhost deployment. The setup script generates unique values for:
- `ENCRYPTION_KEY` and `ENCRYPTION_SALT`
- `PROXY_API_KEY`
- `RUNNER_API_KEY` (used as both `DEFAULT_RUNNER_API_KEY` and `DAYTONA_RUNNER_TOKEN`)
- `SSH_GATEWAY_API_KEY`
To generate these manually:
```shell
openssl rand -hex 16
```

Dex Configuration
The script updates `docker/dex/config.yaml` to set:
- `issuer` to `https://YOUR_DOMAIN/dex`
- `redirectURIs` to use `https://YOUR_DOMAIN` instead of `localhost`
- `staticPasswords` with your chosen email and bcrypt-hashed password
To generate a bcrypt password hash manually:
```shell
echo 'YOUR_PASSWORD' | htpasswd -BinC 10 admin | cut -d: -f2
```

Docker Compose Environment Variables
The script updates several environment variables in `docker/docker-compose.yaml` across multiple services: switching URLs from localhost to your domain, protocols from http to https, and replacing placeholder secrets with generated values. Key changes include:
- API: `PROXY_DOMAIN`, `PROXY_PROTOCOL`, `PROXY_TEMPLATE_URL`, `DASHBOARD_URL`, `DASHBOARD_BASE_API_URL`, `PUBLIC_OIDC_DOMAIN`, `SSH_GATEWAY_URL`, `SSH_GATEWAY_COMMAND`, and all security key variables
- Proxy: `PROXY_PROTOCOL`, `COOKIE_DOMAIN`, `OIDC_PUBLIC_DOMAIN`, and `PROXY_API_KEY`
- Runner: `DAYTONA_RUNNER_TOKEN`
- SSH Gateway: `API_KEY`
Caddy Installation and Configuration
The script installs Caddy with your DNS provider’s module (needed for wildcard TLS via DNS-01 challenge), creates a Caddyfile, sets up DNS provider credentials, and configures a service (systemd on Linux, a launchd LaunchAgent on macOS).
Caddy handles certificate renewal automatically; no cron jobs are needed.
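As an illustrative sketch only (the script generates the actual file, and the `dns` stanza depends on which provider module you installed; Cloudflare is assumed here), the resulting Caddyfile follows this general shape:

```
# Main domain: dashboard and API (sketch; not the exact file the script writes)
daytona.example.com {
    reverse_proxy localhost:3000
}

# Bare proxy domain plus wildcard for sandbox preview URLs.
# Wildcard certificates require the DNS-01 challenge via the provider module.
proxy.daytona.example.com, *.proxy.daytona.example.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy localhost:4000
}
```

The key design point is that the wildcard site block uses the DNS challenge, while the main domain can use the default HTTP/TLS-ALPN challenges over ports 80/443.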
Firewall Configuration
The script opens ports 22, 80, 443, and 2222 on Linux (via ufw or firewalld). If your cloud provider has an external firewall (DigitalOcean Cloud Firewalls, AWS Security Groups, etc.), you must create matching inbound rules there as well; the host firewall and cloud firewall are independent. On macOS, firewall configuration is skipped.
DNS Provider Reference
Cloudflare
API Token: Cloudflare Dashboard > My Profile > API Tokens > Create Token. Use the “Edit zone DNS” template, or create a custom token with Zone: DNS: Edit permissions scoped to your domain.
Caddy module: `github.com/caddy-dns/cloudflare`

DNS record note: The wildcard record `*.proxy.yourdomain.com` must be set to DNS-only (grey cloud). See Cloudflare-Specific Warning.
DigitalOcean
API Token: DigitalOcean Control Panel > API > Tokens > Generate New Token with read and write scopes.
Caddy module: `github.com/caddy-dns/digitalocean`
AWS Route 53
Section titled “AWS Route 53”Credentials: Create an IAM user with route53:ChangeResourceRecordSets, route53:GetChange, and route53:ListHostedZonesByName permissions. Generate an access key.
Caddy module: `github.com/caddy-dns/route53`
Hetzner
API Token: Hetzner DNS Console > API Tokens > Create Token.
Caddy module: `github.com/caddy-dns/hetzner`
Domain Deployment Troubleshooting
Browser redirects to localhost:5556 during login
The Dex config file still has `issuer: http://localhost:5556/dex`. Update it to `issuer: https://yourdomain.com/dex` and restart Dex, then the API and Proxy services.
502 Bad Gateway on dashboard or sandbox preview URLs
Caddy cannot reach the API (port 3000) or Proxy (port 4000). Verify the containers are running with `docker compose ps` and that ports are published to the host.
TLS certificate check fails for wildcard
Confirm the wildcard DNS records are DNS-only (not proxied through Cloudflare). Check Caddy logs for ACME errors:
```shell
# Linux
sudo journalctl -u caddy --since "30 min ago" --no-pager | grep -i "error\|acme\|cert"

# macOS
cat /usr/local/var/log/caddy/error.log | grep -i "error\|acme\|cert"
```

A common issue is leaving a placeholder email in the Caddyfile: Let’s Encrypt rejects `example.com` as a contact domain.
If Caddy logs show HTTP 429 rateLimited, you have hit Let’s Encrypt’s rate limit of 5 duplicate certificates per 168 hours. This typically happens when re-deploying to multiple servers within a short period. Caddy will retry automatically — leave it running and certificates will be issued once the limit resets. Certificates are stored locally and reused across re-runs on the same server, so re-running the setup script on the same machine does not count against the limit.
Sandbox URLs show http:// instead of https://
`PROXY_PROTOCOL` is still set to `http` in the API or Proxy service. Set it to `https` in both and restart.
Development Notes
- The setup uses shared networking for simplified service communication
- Database and storage data is persisted in Docker volumes
- The registry is configured to allow image deletion for testing
- Sandbox resource limits are disabled because cgroups cannot be partitioned in a Docker-in-Docker environment where the Docker socket is not mounted
Security Considerations
The `INTER_SANDBOX_NETWORK_ENABLED` runner environment variable controls whether sandboxes on the same runner can communicate over the network. In the Docker Compose configuration this defaults to `false`, creating an isolated bridge network with inter-container communication disabled.
If you are deploying runners outside of the provided Docker Compose setup, ensure `INTER_SANDBOX_NETWORK_ENABLED` is explicitly set to `false` unless your use case requires inter-sandbox communication.
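A hedged sketch of how that might look in a compose file for a standalone runner (the service name `runner` and the surrounding structure are assumptions; only the environment variable itself comes from this guide):

```yaml
# Sketch of a standalone runner definition with inter-sandbox
# networking explicitly disabled.
services:
  runner:
    environment:
      - INTER_SANDBOX_NETWORK_ENABLED=false
```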
For domain deployments, all placeholder secrets in the default Docker Compose file (`supersecretkey`, `super_secret_key`, `secret_api_token`, etc.) must be replaced with generated values. The `setup-domain-oss-deployment.sh` script handles this automatically.
Auxiliary Service Ports
The setup script binds auxiliary service ports (PgAdmin, MinIO Console, Registry UI) to `127.0.0.1` in Docker Compose so they cannot be reached directly over the network. Caddy reverse-proxies each service on its original port with TLS and HTTP Basic Auth:
| Service | URL | Authentication |
|---|---|---|
| PgAdmin | https://yourdomain.com:5050 | Caddy Basic Auth (PgAdmin email + password set during setup), then PgAdmin native login |
| MinIO Console | https://yourdomain.com:9001 | MinIO native login page (credentials set during setup) |
| Registry UI | https://yourdomain.com:5100 | Caddy Basic Auth (registry username and password set during setup) |
PgAdmin and the Registry UI are protected by Caddy’s HTTP Basic Auth, so nothing is visible until you authenticate. The MinIO Console instead uses its own native login page, because its SPA architecture (JavaScript API calls, WebSocket connections) is incompatible with HTTP Basic Auth.
Additional Network Options
Section titled “Additional Network Options”HTTP Proxy
To configure an outbound HTTP proxy for the Daytona services, you can set the following environment variables in the `docker-compose.yaml` file for each service that requires proxy access (the API service is the only one that requires outbound access to pull images):
- `HTTP_PROXY`: URL of the HTTP proxy server
- `HTTPS_PROXY`: URL of the HTTPS proxy server
- `NO_PROXY`: Comma-separated list of hostnames or IP addresses that should bypass the proxy
The baseline configuration for the API service should be as follows:
```yaml
environment:
  - HTTP_PROXY=<your-proxy>
  - HTTPS_PROXY=<your-proxy>
  - NO_PROXY=localhost,runner,dex,registry,minio,jaeger,otel-collector,<your-proxy>
```

Extra CA Certificates
To configure extra CA certificates (for example, paired with `DB_TLS` env vars), set the following environment variable in the API service:
```yaml
environment:
  - NODE_EXTRA_CA_CERTS=/path/to/your/cert-bundle.pem
```

The provided file is a certificate bundle, meaning it can contain multiple CA certificates in PEM format.
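A PEM bundle is simply multiple certificates concatenated in one file. As a quick sanity check, you can count the `BEGIN CERTIFICATE` markers; this sketch uses inline dummy data rather than reading a real bundle file:

```python
# Dummy two-certificate bundle; in practice, read the file named by
# NODE_EXTRA_CA_CERTS instead of using an inline string.
bundle = """-----BEGIN CERTIFICATE-----
...base64 data...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...base64 data...
-----END CERTIFICATE-----
"""
count = bundle.count("-----BEGIN CERTIFICATE-----")
print(f"bundle contains {count} certificate(s)")  # bundle contains 2 certificate(s)
```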
Environment Variables
You can customize the deployment by modifying environment variables in the `docker-compose.yaml` file.
Below is a full list of environment variables with their default values:
API Service
| Variable | Type | Default Value | Description |
|---|---|---|---|
ENCRYPTION_KEY | string | supersecretkey | Encryption key for sensitive data (User must override outside of Compose) |
ENCRYPTION_SALT | string | supersecretsalt | Encryption salt for sensitive data (User must override outside of Compose) |
PORT | number | 3000 | API service port |
DB_HOST | string | db | PostgreSQL database hostname |
DB_PORT | number | 5432 | PostgreSQL database port |
DB_USERNAME | string | user | PostgreSQL database username |
DB_PASSWORD | string | pass | PostgreSQL database password |
DB_DATABASE | string | daytona | PostgreSQL database name |
DB_TLS_ENABLED | boolean | false | Enable TLS for database connection |
DB_TLS_REJECT_UNAUTHORIZED | boolean | true | Reject unauthorized TLS certificates |
REDIS_HOST | string | redis | Redis server hostname |
REDIS_PORT | number | 6379 | Redis server port |
OIDC_CLIENT_ID | string | daytona | OIDC client identifier |
OIDC_ISSUER_BASE_URL | string | http://dex:5556/dex | OIDC issuer base URL |
PUBLIC_OIDC_DOMAIN | string | http://localhost:5556/dex | Public OIDC domain |
OIDC_AUDIENCE | string | daytona | OIDC audience identifier |
OIDC_MANAGEMENT_API_ENABLED | boolean | (empty) | Enable OIDC management API |
OIDC_MANAGEMENT_API_CLIENT_ID | string | (empty) | OIDC management API client ID |
OIDC_MANAGEMENT_API_CLIENT_SECRET | string | (empty) | OIDC management API client secret |
OIDC_MANAGEMENT_API_AUDIENCE | string | (empty) | OIDC management API audience |
DEFAULT_SNAPSHOT | string | daytonaio/sandbox:0.4.3 | Default sandbox snapshot image |
DASHBOARD_URL | string | http://localhost:3000/dashboard | Dashboard URL |
DASHBOARD_BASE_API_URL | string | http://localhost:3000 | Dashboard base API URL |
POSTHOG_API_KEY | string | phc_bYtEsdMDrNLydXPD4tufkBrHKgfO2zbycM30LOowYNv | PostHog API key for analytics |
POSTHOG_HOST | string | https://d18ag4dodbta3l.cloudfront.net | PostHog host URL |
POSTHOG_ENVIRONMENT | string | local | PostHog environment identifier |
TRANSIENT_REGISTRY_URL | string | http://registry:6000 | Transient registry URL |
TRANSIENT_REGISTRY_ADMIN | string | admin | Transient registry admin username |
TRANSIENT_REGISTRY_PASSWORD | string | password | Transient registry admin password |
TRANSIENT_REGISTRY_PROJECT_ID | string | daytona | Transient registry project ID |
INTERNAL_REGISTRY_URL | string | http://registry:6000 | Internal registry URL |
INTERNAL_REGISTRY_ADMIN | string | admin | Internal registry admin username |
INTERNAL_REGISTRY_PASSWORD | string | password | Internal registry admin password |
INTERNAL_REGISTRY_PROJECT_ID | string | daytona | Internal registry project ID |
SMTP_HOST | string | maildev | SMTP server hostname |
SMTP_PORT | number | 1025 | SMTP server port |
SMTP_USER | string | (empty) | SMTP username |
SMTP_PASSWORD | string | (empty) | SMTP password |
SMTP_SECURE | boolean | (empty) | Enable SMTP secure connection |
SMTP_EMAIL_FROM | string | "Daytona Team <no-reply@daytona.io>" | SMTP sender email address |
S3_ENDPOINT | string | http://minio:9000 | S3-compatible storage endpoint |
S3_STS_ENDPOINT | string | http://minio:9000/minio/v1/assume-role | S3 STS endpoint |
S3_REGION | string | us-east-1 | S3 region |
S3_ACCESS_KEY | string | minioadmin | S3 access key |
S3_SECRET_KEY | string | minioadmin | S3 secret key |
S3_DEFAULT_BUCKET | string | daytona | S3 default bucket name |
S3_ACCOUNT_ID | string | / | S3 account ID |
S3_ROLE_NAME | string | / | S3 role name |
ENVIRONMENT | string | dev | Application environment |
MAX_AUTO_ARCHIVE_INTERVAL | number | 43200 | Maximum auto-archive interval (seconds) |
OTEL_ENABLED | boolean | true | Enable OpenTelemetry tracing |
OTEL_COLLECTOR_URL | string | http://jaeger:4318/v1/traces | OpenTelemetry collector URL |
MAINTENANCE_MODE | boolean | false | Enable maintenance mode |
PROXY_DOMAIN | string | proxy.localhost:4000 | Proxy domain |
PROXY_PROTOCOL | string | http | Proxy protocol |
PROXY_API_KEY | string | super_secret_key | Proxy API key |
PROXY_TEMPLATE_URL | string | http://{{PORT}}-{{sandboxId}}.proxy.localhost:4000 | Proxy template URL pattern |
PROXY_TOOLBOX_BASE_URL | string | {PROXY_PROTOCOL}://{PROXY_DOMAIN} | Proxy base URL for toolbox requests |
DEFAULT_RUNNER_DOMAIN | string | runner:3003 | Default runner domain |
DEFAULT_RUNNER_API_URL | string | http://runner:3003 | Default runner API URL |
DEFAULT_RUNNER_PROXY_URL | string | http://runner:3003 | Default runner proxy URL |
DEFAULT_RUNNER_API_KEY | string | secret_api_token | Default runner API key |
DEFAULT_RUNNER_CPU | number | 4 | Default runner CPU allocation |
DEFAULT_RUNNER_MEMORY | number | 8 | Default runner memory allocation (GB) |
DEFAULT_RUNNER_DISK | number | 50 | Default runner disk allocation (GB) |
DEFAULT_RUNNER_API_VERSION | string | 0 | Default runner API version |
DEFAULT_ORG_QUOTA_TOTAL_CPU_QUOTA | number | 10000 | Default organization total CPU quota |
DEFAULT_ORG_QUOTA_TOTAL_MEMORY_QUOTA | number | 10000 | Default organization total memory quota |
DEFAULT_ORG_QUOTA_TOTAL_DISK_QUOTA | number | 100000 | Default organization total disk quota |
DEFAULT_ORG_QUOTA_MAX_CPU_PER_SANDBOX | number | 100 | Default organization max CPU per sandbox |
DEFAULT_ORG_QUOTA_MAX_MEMORY_PER_SANDBOX | number | 100 | Default organization max memory per sandbox |
DEFAULT_ORG_QUOTA_MAX_DISK_PER_SANDBOX | number | 1000 | Default organization max disk per sandbox |
DEFAULT_ORG_QUOTA_SNAPSHOT_QUOTA | number | 1000 | Default organization snapshot quota |
DEFAULT_ORG_QUOTA_MAX_SNAPSHOT_SIZE | number | 1000 | Default organization max snapshot size |
DEFAULT_ORG_QUOTA_VOLUME_QUOTA | number | 10000 | Default organization volume quota |
SSH_GATEWAY_API_KEY | string | ssh_secret_api_token | SSH gateway API key |
SSH_GATEWAY_COMMAND | string | ssh -p 2222 {{TOKEN}}@localhost | SSH gateway command template |
SSH_GATEWAY_PUBLIC_KEY | string | (Base64-encoded OpenSSH public key) | SSH gateway public key for authentication |
SSH_GATEWAY_URL | string | localhost:2222 | SSH gateway URL |
RUNNER_DECLARATIVE_BUILD_SCORE_THRESHOLD | number | 10 | Runner declarative build score threshold |
RUNNER_AVAILABILITY_SCORE_THRESHOLD | number | 10 | Runner availability score threshold |
RUNNER_HEALTH_TIMEOUT_SECONDS | number | 3 | Runner health-check timeout in seconds |
RUNNER_START_SCORE_THRESHOLD | number | 3 | Runner start score threshold |
BUILD_INFO_MAX_SANDBOXES_PER_RUNNER | number | 30 | Max active sandboxes per runner for the same declarative build (0 disables the cap) |
RUN_MIGRATIONS | boolean | true | Enable database migrations on startup |
ADMIN_API_KEY | string | (empty) | Admin API key, auto-generated if empty, used only upon initial setup, not recommended for production |
ADMIN_TOTAL_CPU_QUOTA | number | 0 | Admin total CPU quota, used only upon initial setup |
ADMIN_TOTAL_MEMORY_QUOTA | number | 0 | Admin total memory quota, used only upon initial setup |
ADMIN_TOTAL_DISK_QUOTA | number | 0 | Admin total disk quota, used only upon initial setup |
ADMIN_MAX_CPU_PER_SANDBOX | number | 0 | Admin max CPU per sandbox, used only upon initial setup |
ADMIN_MAX_MEMORY_PER_SANDBOX | number | 0 | Admin max memory per sandbox, used only upon initial setup |
ADMIN_MAX_DISK_PER_SANDBOX | number | 0 | Admin max disk per sandbox, used only upon initial setup |
ADMIN_SNAPSHOT_QUOTA | number | 100 | Admin snapshot quota, used only upon initial setup |
ADMIN_MAX_SNAPSHOT_SIZE | number | 100 | Admin max snapshot size, used only upon initial setup |
ADMIN_VOLUME_QUOTA | number | 0 | Admin volume quota, used only upon initial setup |
SKIP_USER_EMAIL_VERIFICATION | boolean | true | Skip user email verification process |
RATE_LIMIT_ANONYMOUS_TTL | number | (empty) | Anonymous rate limit time-to-live (seconds, empty - rate limit is disabled) |
RATE_LIMIT_ANONYMOUS_LIMIT | number | (empty) | Anonymous rate limit (requests per TTL, empty - rate limit is disabled) |
RATE_LIMIT_AUTHENTICATED_TTL | number | (empty) | Authenticated rate limit time-to-live (seconds, empty - rate limit is disabled) |
RATE_LIMIT_AUTHENTICATED_LIMIT | number | (empty) | Authenticated rate limit (requests per TTL, empty - rate limit is disabled) |
RATE_LIMIT_SANDBOX_CREATE_TTL | number | (empty) | Sandbox create rate limit time-to-live (seconds, empty - rate limit is disabled) |
RATE_LIMIT_SANDBOX_CREATE_LIMIT | number | (empty) | Sandbox create rate limit (requests per TTL, empty - rate limit is disabled) |
RATE_LIMIT_SANDBOX_LIFECYCLE_TTL | number | (empty) | Sandbox lifecycle rate limit time-to-live (seconds, empty - rate limit is disabled) |
RATE_LIMIT_SANDBOX_LIFECYCLE_LIMIT | number | (empty) | Sandbox lifecycle rate limit (requests per TTL, empty - rate limit is disabled) |
RATE_LIMIT_FAILED_AUTH_TTL | number | (empty) | Failed authentication rate limit time-to-live (seconds, empty - rate limit is disabled) |
RATE_LIMIT_FAILED_AUTH_LIMIT | number | (empty) | Failed authentication rate limit (requests per TTL, empty - rate limit is disabled) |
DEFAULT_REGION_ID | string | us | Default region ID |
DEFAULT_REGION_NAME | string | us | Default region name |
DEFAULT_REGION_ENFORCE_QUOTAS | boolean | false | Enable region-based resource limits for default region |
OTEL_COLLECTOR_API_KEY | string | otel_collector_api_key | OpenTelemetry collector API key for authentication (only needed if otel collector is deployed) |
CLICKHOUSE_HOST | string | (empty) | ClickHouse host for querying sandbox otel |
CLICKHOUSE_DATABASE | string | otel | ClickHouse database for querying sandbox otel |
CLICKHOUSE_PORT | number | 8123 | ClickHouse port |
CLICKHOUSE_USERNAME | string | (empty) | ClickHouse username |
CLICKHOUSE_PASSWORD | string | (empty) | ClickHouse password |
CLICKHOUSE_PROTOCOL | string | https | ClickHouse protocol (e.g., http or https) |
OTEL_COLLECTOR_ENDPOINT_URL | string | (empty) | OpenTelemetry collector endpoint URL for sandbox telemetry and organization metrics (also accepts SANDBOX_OTEL_ENDPOINT_URL for backward compatibility) |
HEALTH_CHECK_API_KEY | string | supersecretkey | Authentication key for the readiness health-check route. |
NOTIFICATION_GATEWAY_DISABLED | boolean | false | Disable notification gateway service |
SANDBOX_SNAPSHOTTING_TIMEOUT_MIN | number | 60 | Minutes before a sandbox stuck in SNAPSHOTTING state is considered stale and recovered |
FAILED_SNAPSHOT_RUNNER_RETENTION_HOURS | number | 3 | Hours to retain failed snapshot runner records before cleanup |
BUILDINFO_SNAPSHOT_RUNNER_STALENESS_DAYS | number | 7 | Days of inactivity before a snapshot runner is considered stale and eligible for cleanup |
Runner
| Variable | Type | Default Value | Description |
|---|---|---|---|
DAYTONA_API_URL | string | http://api:3000/api | Daytona API URL |
DAYTONA_RUNNER_TOKEN | string | secret_api_token | Runner API authentication token |
VERSION | string | 0.0.1 | Runner service version |
OTEL_LOGGING_ENABLED | boolean | false | Runner OpenTelemetry logging enabled |
OTEL_TRACING_ENABLED | boolean | false | Runner OpenTelemetry tracing enabled |
OTEL_EXPORTER_OTLP_ENDPOINT | string | (empty) | Runner OpenTelemetry OTLP exporter endpoint |
OTEL_EXPORTER_OTLP_HEADERS | string | (empty) | Runner OpenTelemetry OTLP exporter headers |
ENVIRONMENT | string | development | Application environment |
API_PORT | number | 3003 | Runner API service port |
LOG_FILE_PATH | string | /home/daytona/runner/runner.log | Path to runner log file |
RESOURCE_LIMITS_DISABLED | boolean | true | Disable resource limits for sandboxes |
BUILD_TIMEOUT_MIN | number | 120 | Build timeout in minutes (minimum: 1) |
BUILD_CPU_CORES | number | 4 | CPU cores allocated per build (minimum: 1) |
BUILD_MEMORY_GB | number | 8 | Memory in GB allocated per build (minimum: 1) |
AWS_ENDPOINT_URL | string | http://minio:9000 | AWS S3-compatible storage endpoint |
AWS_REGION | string | us-east-1 | AWS region |
AWS_ACCESS_KEY_ID | string | minioadmin | AWS access key ID |
AWS_SECRET_ACCESS_KEY | string | minioadmin | AWS secret access key |
AWS_DEFAULT_BUCKET | string | daytona | AWS default bucket name |
DAEMON_START_TIMEOUT_SEC | number | 60 | Daemon start timeout in seconds |
SANDBOX_START_TIMEOUT_SEC | number | 30 | Sandbox start timeout in seconds |
USE_SNAPSHOT_ENTRYPOINT | boolean | false | Use snapshot entrypoint for sandbox |
RUNNER_DOMAIN | string | (none) | Runner domain name (hostname for runner URLs) |
VOLUME_CLEANUP_INTERVAL | number | 30s | Volume cleanup interval in seconds (minimum: 10s) |
COLLECTOR_WINDOW_SIZE | number | 60 | Metrics collector window size (number of samples) |
CPU_USAGE_SNAPSHOT_INTERVAL | string | 5s | CPU usage snapshot interval duration (minimum: 1s) |
ALLOCATED_RESOURCES_SNAPSHOT_INTERVAL | string | 5s | Allocated resources snapshot interval (minimum: 1s) |
POLL_TIMEOUT | string | 30s | Poller service timeout duration (e.g., 30s, 1m) |
POLL_LIMIT | number | 10 | Maximum poll attempts per request (min: 1, max: 100) |
HEALTHCHECK_INTERVAL | string | 30s | Interval between health checks (minimum: 10s) |
HEALTHCHECK_TIMEOUT | string | 10s | Health check timeout duration |
API_VERSION | number | 2 | Runner API version (default: 2) |
SNAPSHOT_ERROR_CACHE_RETENTION | string | 10m | Snapshot error cache retention duration (minimum: 5m) |
CONTAINER_NETWORK | string | (none) | Custom docker network for sandboxes |
INTER_SANDBOX_NETWORK_ENABLED | boolean | false | Enable or disable inter-sandbox network connectivity |
BUILD_ENGINE | string | buildkit | Docker image build engine (buildkit or legacy) |
SSH Gateway
Section titled “SSH Gateway”

| Variable | Type | Default Value | Description |
|---|---|---|---|
| API_URL | string | http://api:3000/api | Daytona API URL |
| API_KEY | string | ssh_secret_api_token | API authentication key |
| SSH_PRIVATE_KEY | string | (Base64-encoded OpenSSH private key) | SSH private key for auth |
| SSH_HOST_KEY | string | (Base64-encoded OpenSSH host key) | SSH host key for server |
| SSH_GATEWAY_PORT | number | 2222 | SSH gateway listening port |
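The two key variables hold ordinary OpenSSH key files encoded as a single Base64 line. A minimal sketch of the encoding step follows; the demo file below stands in for a real key you would generate with `ssh-keygen`:

```shell
# Sketch: SSH_PRIVATE_KEY / SSH_HOST_KEY are Base64-encoded OpenSSH key files.
# In a real deployment, generate one first, e.g.:
#   ssh-keygen -t ed25519 -N '' -f ssh_host_key
# Here a placeholder file stands in for the actual key material.
printf '%s\n' \
  '-----BEGIN OPENSSH PRIVATE KEY-----' \
  'demo-key-material' \
  '-----END OPENSSH PRIVATE KEY-----' > ssh_host_key.demo
SSH_HOST_KEY=$(base64 < ssh_host_key.demo | tr -d '\n')  # single-line Base64
echo "$SSH_HOST_KEY" | base64 -d                         # round-trips to the original file
```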
Proxy
Section titled “Proxy”

| Variable | Type | Default Value | Description |
|---|---|---|---|
| DAYTONA_API_URL | string | http://api:3000/api | Daytona API URL |
| PROXY_PORT | number | 4000 | Proxy service port |
| PROXY_API_KEY | string | super_secret_key | Proxy API authentication key |
| PROXY_PROTOCOL | string | http | Proxy protocol (http or https) |
| COOKIE_DOMAIN | string | $PROXY_DOMAIN | Cookie domain for proxy cookies |
| OIDC_CLIENT_ID | string | daytona | OIDC client identifier |
| OIDC_CLIENT_SECRET | string | (empty) | OIDC client secret |
| OIDC_DOMAIN | string | http://dex:5556/dex | OIDC domain |
| OIDC_PUBLIC_DOMAIN | string | http://localhost:5556/dex | OIDC public domain |
| OIDC_AUDIENCE | string | daytona | OIDC audience identifier |
| REDIS_HOST | string | redis | Redis server hostname |
| REDIS_PORT | number | 6379 | Redis server port |
| TOOLBOX_ONLY_MODE | boolean | false | Allow only toolbox requests |
| PREVIEW_WARNING_ENABLED | boolean | false | Enable browser preview warning |
| SHUTDOWN_TIMEOUT_SEC | number | 3600 | Shutdown timeout in seconds |
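To change any of these without editing the main compose file, a standard Compose override file works. This is only a sketch: the service name `proxy` and the `docker/` layout are assumptions based on this guide, so verify them against `docker/docker-compose.yaml`.

```shell
# Sketch: enable the browser preview warning for the proxy service via a
# Compose override file. Service name "proxy" is an assumption from this guide.
mkdir -p docker
cat > docker/docker-compose.override.yaml <<'EOF'
services:
  proxy:
    environment:
      PREVIEW_WARNING_ENABLED: "true"
      SHUTDOWN_TIMEOUT_SEC: "600"
EOF
cat docker/docker-compose.override.yaml
# Apply both files together:
# docker compose -f docker/docker-compose.yaml -f docker/docker-compose.override.yaml up -d proxy
```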
[OPTIONAL] Configure Auth0 for Authentication
Section titled “[OPTIONAL] Configure Auth0 for Authentication”

The default compose setup uses a local Dex OIDC provider for authentication. However, you can configure Auth0 as an alternative OIDC provider by following these steps:
Step 1: Create Your Auth0 Tenant
Section titled “Step 1: Create Your Auth0 Tenant”

Begin by navigating to https://auth0.com/signup and start the signup process. Choose your account type based on your use case: select Company for business applications or Personal for individual projects.
On the “Let’s get setup” page, you’ll need to enter your application name such as My Daytona and select Single Page Application (SPA) as the application type. For authentication methods, you can start with Email and Password since additional social providers like Google, GitHub, or Facebook can be added later. Once you’ve configured these settings, click Create Application in the bottom right corner.
Step 2: Configure Your Single Page Application
Section titled “Step 2: Configure Your Single Page Application”

Navigate to Applications > Applications in the left sidebar and select the application you just created. Click the Settings tab and scroll down to find the Application URIs section, where you’ll configure the callback and origin URLs.
In the Allowed Callback URLs field, add the following URLs:

- http://localhost:3000
- http://localhost:3000/api/oauth2-redirect.html
- http://localhost:4000/callback
- http://proxy.localhost:4000/callback

For Allowed Logout URLs, add:

- http://localhost:3000

And for Allowed Web Origins, add:

- http://localhost:3000

Remember to click Save Changes at the bottom of the page to apply these configurations.
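If you prefer to script this step, the same URLs can also be applied through Auth0’s Management API, where a client’s `callbacks`, `allowed_logout_urls`, and `web_origins` fields are patchable. A sketch follows; the tenant domain, client ID, and token are placeholders, and the request requires a Management API token with the `update:clients` scope.

```shell
# Sketch: set the SPA's URLs via the Auth0 Management API instead of the UI.
# AUTH0_DOMAIN, CLIENT_ID, and MGMT_TOKEN are placeholders.
AUTH0_DOMAIN="YOUR_TENANT.auth0.com"
CLIENT_ID="your_spa_app_client_id"
BODY='{
  "callbacks": [
    "http://localhost:3000",
    "http://localhost:3000/api/oauth2-redirect.html",
    "http://localhost:4000/callback",
    "http://proxy.localhost:4000/callback"
  ],
  "allowed_logout_urls": ["http://localhost:3000"],
  "web_origins": ["http://localhost:3000"]
}'
echo "$BODY"
# Uncomment with a real Management API token (update:clients scope):
# curl -fsS -X PATCH "https://$AUTH0_DOMAIN/api/v2/clients/$CLIENT_ID" \
#   -H "authorization: Bearer $MGMT_TOKEN" \
#   -H 'content-type: application/json' -d "$BODY"
```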
Step 3: Create Machine-to-Machine Application
Section titled “Step 3: Create Machine-to-Machine Application”

You’ll need a Machine-to-Machine application to interact with Auth0’s Management API. Go to Applications > Applications and click Create Application. Choose Machine to Machine Applications as the type and provide a descriptive name like My Management API M2M.
After creating the application, navigate to the APIs tab within your new M2M application. Find and authorize the Auth0 Management API by clicking the toggle or authorize button.
Once authorized, click the dropdown arrow next to the Management API to configure permissions. Grant the following permissions to your M2M application:
- read:users
- update:users
- read:connections
- create:guardian_enrollment_tickets
- read:connections_options

Click Save to apply these permission changes.
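With those grants in place, the M2M application obtains Management API tokens through the standard OAuth2 client-credentials flow against the tenant’s `/oauth/token` endpoint. A sketch of the request (tenant domain and credentials are placeholders):

```shell
# Sketch: exchange M2M credentials for an Auth0 Management API token.
# YOUR_TENANT and both client credentials below are placeholders.
AUTH0_DOMAIN="YOUR_TENANT.auth0.com"
PAYLOAD=$(printf '{"grant_type":"client_credentials","client_id":"%s","client_secret":"%s","audience":"https://%s/api/v2/"}' \
  "your_m2m_app_client_id" "your_m2m_app_client_secret" "$AUTH0_DOMAIN")
echo "$PAYLOAD"
# Uncomment with real values to fetch a token:
# curl -fsS -X POST "https://$AUTH0_DOMAIN/oauth/token" \
#   -H 'content-type: application/json' -d "$PAYLOAD"
```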
Step 4: Set Up Custom API
Section titled “Step 4: Set Up Custom API”

Your Daytona application will need a custom API to handle authentication and authorization. Navigate to Applications > APIs in the left sidebar and click Create API. Enter a descriptive name such as My Daytona API and provide an identifier like my-daytona-api. The identifier should be a unique string that will be used in your application configuration.
After creating the API, go to the Permissions tab to define the scopes your application will use. Add each of the following permissions with their corresponding descriptions:
| Permission | Description |
|---|---|
| read:node | Get workspace node info |
| create:node | Create new workspace node record |
| create:user | Create user account |
| read:users | Get all user accounts |
| regenerate-key-pair:users | Regenerate user SSH key-pair |
| read:workspaces | Read workspaces (user scope) |
| create:registry | Create a new Docker registry auth record |
| read:registries | Get all Docker registry records |
| read:registry | Get Docker registry record |
| write:registry | Create or update Docker registry record |
Step 5: Configure Environment Variables
Section titled “Step 5: Configure Environment Variables”

Once you’ve completed all the Auth0 setup steps, you’ll need to configure environment variables in your Daytona deployment. These variables connect your application to the Auth0 services you’ve just configured.
Finding Your Configuration Values
Section titled “Finding Your Configuration Values”

You can find the necessary values in the Auth0 dashboard. For your SPA application settings, go to Applications > Applications, select your SPA app, and click the Settings tab. For your M2M application, follow the same path but select your Machine-to-Machine app instead. Custom API settings are located under Applications > APIs; select your custom API and go to Settings.
API Service Configuration
Section titled “API Service Configuration”

Configure the following environment variables for your API service:
- OIDC_CLIENT_ID=your_spa_app_client_id
- OIDC_ISSUER_BASE_URL=your_spa_app_domain
- OIDC_AUDIENCE=your_custom_api_identifier
- OIDC_MANAGEMENT_API_ENABLED=true
- OIDC_MANAGEMENT_API_CLIENT_ID=your_m2m_app_client_id
- OIDC_MANAGEMENT_API_CLIENT_SECRET=your_m2m_app_client_secret
- OIDC_MANAGEMENT_API_AUDIENCE=your_auth0_management_api_identifier

Proxy Service Configuration
Section titled “Proxy Service Configuration”

For your proxy service, configure these environment variables:

- OIDC_CLIENT_ID=your_spa_app_client_id
- OIDC_CLIENT_SECRET=
- OIDC_DOMAIN=your_spa_app_domain
- OIDC_AUDIENCE=your_custom_api_identifier (with trailing slash)

Note that OIDC_CLIENT_SECRET should remain empty for your proxy environment.
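A small preflight check can catch missing values before restarting the stack. This is only a sketch using the placeholder values from the steps above; substitute your real Auth0 values.

```shell
# Sketch: verify the Auth0-related variables are present before starting.
# The values are this guide's placeholders, not real credentials.
export OIDC_CLIENT_ID=your_spa_app_client_id
export OIDC_DOMAIN=your_spa_app_domain
export OIDC_AUDIENCE=your_custom_api_identifier/
for v in OIDC_CLIENT_ID OIDC_DOMAIN OIDC_AUDIENCE; do
  if [ -n "$(printenv "$v")" ]; then
    echo "ok: $v"
  else
    echo "missing: $v"
  fi
done
```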