Declarative Builder
Daytona’s declarative builder provides a powerful, code-first approach to defining dependencies for Sandboxes. Instead of importing images from a container registry, you can programmatically define them using the SDK.
Overview
The declarative builder system supports two primary workflows:
- Dynamic Images: Build images with varying dependencies on demand when creating Sandboxes
- Pre-built Images: Create and register reusable images that can be shared across multiple Sandboxes
It provides the following capabilities:
Base Image Selection
- Debian-based environments with Python and essential build tools preinstalled
- Custom base images from any Docker registry or existing container image
- Dockerfile integration to import and enhance existing Dockerfiles
Package Management
- Python package installation with support for `pip`, `requirements.txt`, and `pyproject.toml`
- Advanced pip options including custom indexes, find-links, and optional dependencies
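As an illustration of how such advanced pip options typically map onto command-line flags, here is a generic sketch. The `build_pip_command` helper is hypothetical and not part of the Daytona SDK:

```python
def build_pip_command(packages, index_url=None, find_links=None, extras=None):
    """Assemble a pip install command from declarative options (illustrative only)."""
    args = ["pip", "install"]
    if index_url:
        # Custom package index, e.g. a private mirror
        args += ["--index-url", index_url]
    if find_links:
        # Additional location to search for distributions
        args += ["--find-links", find_links]
    for pkg in packages:
        # Optional dependencies use the extras syntax: package[extra1,extra2]
        args.append(f"{pkg}[{','.join(extras)}]" if extras else pkg)
    return " ".join(args)

print(build_pip_command(["requests"], index_url="https://pypi.example.org/simple", extras=["socks"]))
```

The builder methods accept these options declaratively so you never assemble such command lines by hand.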
File System Operations
- File copying from the local development environment to the image
- Directory copying for bulk file transfers and project setup
- Working directory configuration to set the default execution context
Environment Configuration
- Environment variables for application configuration and secrets
- Shell command execution during the image build process
- Container runtime settings including an entrypoint and default commands
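Conceptually, a declarative builder of this kind translates chained method calls into Dockerfile instructions. The following toy `SketchImage` class is a hypothetical illustration of that idea, not the SDK's actual implementation:

```python
class SketchImage:
    """Toy fluent builder that accumulates Dockerfile instructions (illustrative only)."""

    def __init__(self, base):
        self.lines = [f"FROM {base}"]

    def run_commands(self, commands):
        self.lines += [f"RUN {cmd}" for cmd in commands]
        return self  # returning self is what enables method chaining

    def workdir(self, path):
        self.lines.append(f"WORKDIR {path}")
        return self

    def env(self, variables):
        self.lines += [f"ENV {k}={v}" for k, v in variables.items()]
        return self

    def to_dockerfile(self):
        return "\n".join(self.lines)

dockerfile = (
    SketchImage("debian:stable-slim")
    .run_commands(["apt-get update && apt-get install -y git"])
    .workdir("/home/daytona/project")
    .env({"ENV_VAR": "value"})
    .to_dockerfile()
)
print(dockerfile)
```

Each method returns the builder itself, which is why the examples below can chain calls in a single expression.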
For detailed method signatures and usage examples, refer to the Python and TypeScript SDK references.
Dynamic Image Building
Create images on the fly when creating Sandboxes. This is useful when you need a Sandbox with specific dependencies that are not part of any existing image.
You can either define an entirely new image or append specific dependencies to an existing one, e.g. a `pip` package or an `apt-get install` command.
This eliminates the need to use your own compute for the build process, offloading it instead to Daytona's infrastructure.
```python
# Define the dynamic image
dynamic_image = (
    Image.debian_slim("3.12")
    .pip_install(["pytest", "pytest-cov", "mypy", "ruff", "black", "gunicorn"])
    .run_commands(["apt-get update && apt-get install -y git curl", "mkdir -p /home/daytona/project"])
    .workdir("/home/daytona/project")
    .env({"ENV_VAR": "My Environment Variable"})
    .add_local_file("file_example.txt", "/home/daytona/project/file_example.txt")
)

# Create a new Sandbox with the dynamic image and stream the build logs
sandbox = daytona.create(
    CreateSandboxParams(image=dynamic_image),
    timeout=0,
    on_image_build_logs=lambda log_line: print(log_line, end=''),
)
```
```typescript
// Define the dynamic image
const dynamicImage = Image.debianSlim('3.13')
  .pipInstall(['pytest', 'pytest-cov', 'black', 'isort', 'mypy', 'ruff'])
  .runCommands(['apt-get update && apt-get install -y git', 'mkdir -p /home/daytona/project'])
  .workdir('/home/daytona/project')
  .env({
    NODE_ENV: 'development',
  })
  .addLocalFile('file_example.txt', '/home/daytona/project/file_example.txt')

// Create a new Sandbox with the dynamic image and stream the build logs
const sandbox = await daytona.create(
  {
    image: dynamicImage,
  },
  {
    timeout: 0,
    onImageBuildLogs: (msg) => process.stdout.write(msg),
  }
)
```
Creating Pre-built Images
If you want to prepare a new Daytona image with specific dependencies and reuse it across multiple Sandboxes, you can create a pre-built image.
The image remains visible in the Daytona dashboard and is permanently cached, so it never needs to be rebuilt.
```python
# Generate a unique name for the image
image_name = f"python-example:{int(time.time())}"

# Create a local file with some data to add to the image
with open("file_example.txt", "w") as f:
    f.write("Hello, World!")

# Create a Python image with common data science packages
image = (
    Image.debian_slim("3.12")
    .pip_install(["numpy", "pandas", "matplotlib", "scipy", "scikit-learn", "jupyter"])
    .run_commands(
        [
            "apt-get update && apt-get install -y git",
            "groupadd -r daytona && useradd -r -g daytona -m daytona",
            "mkdir -p /home/daytona/workspace",
        ]
    )
    .dockerfile_commands(["USER daytona"])
    .workdir("/home/daytona/workspace")
    .env({"MY_ENV_VAR": "My Environment Variable"})
    .add_local_file("file_example.txt", "/home/daytona/workspace/file_example.txt")
)

# Create the image and stream the build logs
print(f"=== Creating Image: {image_name} ===")
daytona.create_image(image_name, image, on_logs=lambda log_line: print(log_line, end=''))

# Create a new Sandbox using the pre-built image
sandbox = daytona.create(CreateSandboxParams(image_name=image_name, os_user="daytona"))
```
```typescript
// Generate a unique name for the image
const imageName = `node-example:${Date.now()}`
console.log(`Creating image with name: ${imageName}`)

// Create a local file with some data to add to the image
const localFilePath = 'file_example.txt'
const localFileContent = 'Hello, World!'
fs.writeFileSync(localFilePath, localFileContent)

// Create a Python image with common data science packages
const image = Image.debianSlim('3.12')
  .pipInstall(['numpy', 'pandas', 'matplotlib', 'scipy', 'scikit-learn'])
  .runCommands(['apt-get update && apt-get install -y git', 'mkdir -p /home/daytona/workspace'])
  .dockerfileCommands(['USER daytona'])
  .workdir('/home/daytona/workspace')
  .env({
    MY_ENV_VAR: 'My Environment Variable',
  })
  .addLocalFile(localFilePath, '/home/daytona/workspace/file_example.txt')

// Create the image and stream the build logs
console.log(`=== Creating Image: ${imageName} ===`)
await daytona.createImage(imageName, image, { onLogs: (msg) => process.stdout.write(msg) })

// Create a new Sandbox using the pre-built image
const sandbox1 = await daytona.create({
  image: imageName,
})
```
Using an Existing Dockerfile
If you have an existing Dockerfile that you want to use as the base for your image, you can import it in the following way:
```python
image = Image.from_dockerfile("app/Dockerfile").pip_install(["numpy"])
```
```typescript
const image = Image.fromDockerfile('app/Dockerfile').pipInstall(['numpy'])
```
Best Practices
- Layer Optimization: Group related operations to minimize Docker layers
- Cache Utilization: Builds with identical commands and context are cached, making subsequent builds nearly instant
- Security: Create non-root users for application workloads
- Resource Efficiency: Use slim base images when appropriate
- Context Minimization: Only include necessary files in the build context
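The cache-utilization point can be made concrete: build caches typically key on a digest of the build instructions plus the contents of the files in the build context, so identical inputs yield the same key and a cache hit, while any change invalidates it. The sketch below is a generic illustration, not Daytona's actual cache mechanism:

```python
import hashlib

def build_cache_key(commands, context_files):
    """Derive a cache key from build commands and build-context contents (illustrative only)."""
    h = hashlib.sha256()
    for cmd in commands:
        h.update(cmd.encode())
    # Sort entries so the key does not depend on file enumeration order
    for name, content in sorted(context_files.items()):
        h.update(name.encode())
        h.update(content)
    return h.hexdigest()

key_a = build_cache_key(["pip install numpy"], {"file_example.txt": b"Hello, World!"})
key_b = build_cache_key(["pip install numpy"], {"file_example.txt": b"Hello, World!"})
key_c = build_cache_key(["pip install numpy"], {"file_example.txt": b"changed"})
assert key_a == key_b  # identical commands and context reuse the cached build
assert key_a != key_c  # any context change produces a new key, forcing a rebuild
```

This is also why context minimization matters: every file included in the context contributes to the key, so unnecessary files cause spurious cache misses.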
The declarative builder streamlines the development workflow by providing a programmatic, maintainable approach to container image creation while preserving the full power and flexibility of Docker.