Mount object storage (Amazon S3, Cloudflare R2, Google Cloud Storage, Azure Blob) and filesystems like MesaFS into a Daytona sandbox as a regular directory. The sandbox reads from and writes to the bucket as if it were a local directory, so existing tools, scripts, and agents work without changes. This is useful for bringing in datasets, model weights, or build artifacts that already live in your own cloud account.
External storage mounts and Daytona Volumes are complementary FUSE-based mechanisms — both expose remote object storage as a regular sandbox directory, both can be shared across sandboxes, and both persist beyond any individual sandbox’s lifetime. The main distinction is where the data physically lives: Daytona Volumes are hosted on Daytona’s own S3-compatible object store, while external mounts connect to a bucket or filesystem hosted on another provider (Amazon S3, Cloudflare R2, GCS, Azure Blob, MesaFS).
External storage is mounted using FUSE. Daytona supports two approaches, and each provider section below shows both — pick whichever fits your workflow:
- Pre-built snapshot: build a snapshot once with the FUSE tool (mount-s3, gcsfuse, or blobfuse2) built in, then launch every sandbox from that snapshot. Cold starts are fast and predictable. Best for production.
- Runtime install: launch a default sandbox and apt-get install the FUSE tool when the sandbox starts. Adds time to sandbox startup, but you don’t manage snapshots. Best for quick experiments.
Both approaches end with the same mount command and the same usage — the only difference is when the FUSE tool gets installed.
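As a reference point, the in-sandbox steps that both paths converge on look like this. The bucket name and mount path are placeholders, and the commands assume the FUSE tool (mount-s3 here) is already installed:

```shell
MOUNT_PATH=/home/daytona/s3
mkdir -p "$MOUNT_PATH"             # create the mount point
mount-s3 my-bucket "$MOUNT_PATH"   # daemonizes by default; returns once mounted
ls "$MOUNT_PATH"                   # bucket objects now read like local files
```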
Mount an Amazon S3 bucket
Mount an S3 bucket using Mountpoint for Amazon S3 ↗, AWS’s official FUSE client, optimized for high throughput on S3.
Credentials — set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your local environment. The snippets below pass them into the sandbox via envVars, and mount-s3 reads them from there.
Pre-built snapshot
Build a snapshot with mount-s3 preinstalled, then launch all S3-enabled sandboxes from that snapshot. This removes per-sandbox package install work, keeps cold starts predictable, and gives you a reusable baseline image for production workloads.
Build a snapshot
Create a reusable snapshot that installs mount-s3 and its system dependencies. After it finishes, every sandbox launched from fuse-s3 already has the mount binary available.
```python
from daytona import CreateSnapshotParams, Daytona, Image

daytona = Daytona()

image = (
    Image.base("daytonaio/sandbox")
    .run_commands(
        "sudo apt-get update "
        "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget",
        'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" '
        '&& wget -O /tmp/mount-s3.deb '
        '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" '
        "&& sudo apt-get install -y /tmp/mount-s3.deb "
        "&& rm /tmp/mount-s3.deb",
    )
)

daytona.snapshot.create(
    CreateSnapshotParams(name="fuse-s3", image=image),
    on_logs=print,
)
```

```typescript
import { Daytona, Image } from '@daytona/sdk'

const daytona = new Daytona()

const image = Image.base('daytonaio/sandbox').runCommands(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
  'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' +
    '&& wget -O /tmp/mount-s3.deb ' +
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' +
    '&& sudo apt-get install -y /tmp/mount-s3.deb ' +
    '&& rm /tmp/mount-s3.deb',
)

await daytona.snapshot.create(
  { name: 'fuse-s3', image },
  { onLogs: console.log },
)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

image = Daytona::Image
  .base('daytonaio/sandbox')
  .run_commands(
    'sudo apt-get update ' \
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
    'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' \
    '&& wget -O /tmp/mount-s3.deb ' \
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' \
    '&& sudo apt-get install -y /tmp/mount-s3.deb ' \
    '&& rm /tmp/mount-s3.deb'
  )

daytona.snapshot.create(
  Daytona::CreateSnapshotParams.new(name: 'fuse-s3', image: image),
  on_logs: proc { |chunk| print(chunk) }
)
```

```go
import (
	"context"
	"fmt"
	"log"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

image := daytona.Base("daytonaio/sandbox").
	Run("sudo apt-get update && sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget").
	Run(`arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" && ` +
		`wget -O /tmp/mount-s3.deb "https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" && ` +
		`sudo apt-get install -y /tmp/mount-s3.deb && rm /tmp/mount-s3.deb`)

_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
	Name:  "fuse-s3",
	Image: image,
})
if err != nil {
	log.Fatal(err)
}
for line := range logChan {
	fmt.Print(line)
}
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Image;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            Image image = Image.base("daytonaio/sandbox")
                .runCommands(
                    "sudo apt-get update "
                        + "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget",
                    "arch=\"$(dpkg --print-architecture | sed s/amd64/x86_64/)\" "
                        + "&& wget -O /tmp/mount-s3.deb "
                        + "\"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb\" "
                        + "&& sudo apt-get install -y /tmp/mount-s3.deb "
                        + "&& rm /tmp/mount-s3.deb"
                );

            daytona.snapshot().create("fuse-s3", image, System.out::println);
        }
    }
}
```

Launch and mount
Pass AWS credentials as environment variables on sandbox creation. mount-s3 reads them automatically.
```python
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona

daytona = Daytona()

sandbox = daytona.create(
    CreateSandboxFromSnapshotParams(
        snapshot="fuse-s3",
        env_vars={
            "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
            "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
        },
    )
)

mount_path = "/home/daytona/s3"

# mount-s3 daemonizes by default and reads AWS_* from the environment
sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(f"mount-s3 my-bucket {mount_path}")

# Read and write through the mount as if it were a local directory
sandbox.process.exec(f"echo 'hello from Daytona' > {mount_path}/hello.txt")
response = sandbox.process.exec(f"cat {mount_path}/hello.txt")
print(response.result)
```

```typescript
import { Daytona } from '@daytona/sdk'

const daytona = new Daytona()

const sandbox = await daytona.create({
  snapshot: 'fuse-s3',
  envVars: {
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID!,
    AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY!,
  },
})

const mountPath = '/home/daytona/s3'

// mount-s3 daemonizes by default and reads AWS_* from the environment
await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(`mount-s3 my-bucket ${mountPath}`)

// Read and write through the mount as if it were a local directory
await sandbox.process.executeCommand(`echo 'hello from Daytona' > ${mountPath}/hello.txt`)
const response = await sandbox.process.executeCommand(`cat ${mountPath}/hello.txt`)
console.log(response.result)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

sandbox = daytona.create(
  Daytona::CreateSandboxFromSnapshotParams.new(
    snapshot: 'fuse-s3',
    env_vars: {
      'AWS_ACCESS_KEY_ID' => ENV.fetch('AWS_ACCESS_KEY_ID'),
      'AWS_SECRET_ACCESS_KEY' => ENV.fetch('AWS_SECRET_ACCESS_KEY')
    }
  )
)

mount_path = '/home/daytona/s3'

# mount-s3 daemonizes by default and reads AWS_* from the environment
sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(command: "mount-s3 my-bucket #{mount_path}")

# Read and write through the mount as if it were a local directory
sandbox.process.exec(command: "echo 'hello from Daytona' > #{mount_path}/hello.txt")
response = sandbox.process.exec(command: "cat #{mount_path}/hello.txt")
puts response.result
```

```go
import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

sandbox, err := client.Create(ctx, types.SnapshotParams{
	Snapshot: "fuse-s3",
	SandboxBaseParams: types.SandboxBaseParams{
		EnvVars: map[string]string{
			"AWS_ACCESS_KEY_ID":     os.Getenv("AWS_ACCESS_KEY_ID"),
			"AWS_SECRET_ACCESS_KEY": os.Getenv("AWS_SECRET_ACCESS_KEY"),
		},
	},
})
if err != nil {
	log.Fatal(err)
}

mountPath := "/home/daytona/s3"

// mount-s3 daemonizes by default and reads AWS_* from the environment
if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "mount-s3 my-bucket "+mountPath); err != nil {
	log.Fatal(err)
}

// Read and write through the mount as if it were a local directory
if _, err := sandbox.Process.ExecuteCommand(ctx, "echo 'hello from Daytona' > "+mountPath+"/hello.txt"); err != nil {
	log.Fatal(err)
}
response, err := sandbox.Process.ExecuteCommand(ctx, "cat "+mountPath+"/hello.txt")
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;

import java.util.Map;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
            params.setSnapshot("fuse-s3");
            params.setEnvVars(Map.of(
                "AWS_ACCESS_KEY_ID", System.getenv("AWS_ACCESS_KEY_ID"),
                "AWS_SECRET_ACCESS_KEY", System.getenv("AWS_SECRET_ACCESS_KEY")
            ));
            Sandbox sandbox = daytona.create(params);

            String mountPath = "/home/daytona/s3";

            // mount-s3 daemonizes by default and reads AWS_* from the environment
            sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
            sandbox.getProcess().executeCommand("mount-s3 my-bucket " + mountPath);

            // Read and write through the mount as if it were a local directory
            sandbox.getProcess().executeCommand(
                "echo 'hello from Daytona' > " + mountPath + "/hello.txt");
            ExecuteResponse response = sandbox.getProcess().executeCommand(
                "cat " + mountPath + "/hello.txt");
            System.out.println(response.getResult());
        }
    }
}
```

Runtime install
Start from a default sandbox and install mount-s3 during startup before running the mount command. This is useful for quick testing and temporary environments where you do not want to maintain a custom snapshot, with the tradeoff of slower cold starts.
```python
import os

from daytona import CreateSandboxBaseParams, Daytona

daytona = Daytona()

sandbox = daytona.create(
    CreateSandboxBaseParams(
        env_vars={
            "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
            "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
        },
    )
)

# Install mount-s3 at runtime
sandbox.process.exec(
    "sudo apt-get update "
    "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget"
)
sandbox.process.exec(
    'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" '
    '&& wget -O /tmp/mount-s3.deb '
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" '
    "&& sudo apt-get install -y /tmp/mount-s3.deb"
)

# Mount and use
mount_path = "/home/daytona/s3"
sandbox.process.exec(f"mkdir -p {mount_path} && mount-s3 my-bucket {mount_path}")
response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
```

```typescript
import { Daytona } from '@daytona/sdk'

const daytona = new Daytona()

const sandbox = await daytona.create({
  envVars: {
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID!,
    AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY!,
  },
})

// Install mount-s3 at runtime
await sandbox.process.executeCommand(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
)
await sandbox.process.executeCommand(
  'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' +
    '&& wget -O /tmp/mount-s3.deb ' +
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' +
    '&& sudo apt-get install -y /tmp/mount-s3.deb',
)

// Mount and use
const mountPath = '/home/daytona/s3'
await sandbox.process.executeCommand(`mkdir -p ${mountPath} && mount-s3 my-bucket ${mountPath}`)
const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

sandbox = daytona.create(
  Daytona::CreateSandboxBaseParams.new(
    env_vars: {
      'AWS_ACCESS_KEY_ID' => ENV.fetch('AWS_ACCESS_KEY_ID'),
      'AWS_SECRET_ACCESS_KEY' => ENV.fetch('AWS_SECRET_ACCESS_KEY')
    }
  )
)

# Install mount-s3 at runtime
sandbox.process.exec(
  command: 'sudo apt-get update ' \
           '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget'
)
sandbox.process.exec(
  command: 'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' \
           '&& wget -O /tmp/mount-s3.deb ' \
           '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' \
           '&& sudo apt-get install -y /tmp/mount-s3.deb'
)

# Mount and use
mount_path = '/home/daytona/s3'
sandbox.process.exec(command: "mkdir -p #{mount_path} && mount-s3 my-bucket #{mount_path}")
response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
```

```go
import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

sandbox, err := client.Create(ctx, types.SnapshotParams{
	SandboxBaseParams: types.SandboxBaseParams{
		EnvVars: map[string]string{
			"AWS_ACCESS_KEY_ID":     os.Getenv("AWS_ACCESS_KEY_ID"),
			"AWS_SECRET_ACCESS_KEY": os.Getenv("AWS_SECRET_ACCESS_KEY"),
		},
	},
})
if err != nil {
	log.Fatal(err)
}

// Install mount-s3 at runtime
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"sudo apt-get update && sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget"); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
	`arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" && `+
		`wget -O /tmp/mount-s3.deb "https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" && `+
		`sudo apt-get install -y /tmp/mount-s3.deb`); err != nil {
	log.Fatal(err)
}

// Mount and use
mountPath := "/home/daytona/s3"
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"mkdir -p "+mountPath+" && mount-s3 my-bucket "+mountPath); err != nil {
	log.Fatal(err)
}
response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;

import java.util.Map;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
            params.setEnvVars(Map.of(
                "AWS_ACCESS_KEY_ID", System.getenv("AWS_ACCESS_KEY_ID"),
                "AWS_SECRET_ACCESS_KEY", System.getenv("AWS_SECRET_ACCESS_KEY")
            ));
            Sandbox sandbox = daytona.create(params);

            // Install mount-s3 at runtime
            sandbox.getProcess().executeCommand(
                "sudo apt-get update "
                    + "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget");
            sandbox.getProcess().executeCommand(
                "arch=\"$(dpkg --print-architecture | sed s/amd64/x86_64/)\" "
                    + "&& wget -O /tmp/mount-s3.deb "
                    + "\"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb\" "
                    + "&& sudo apt-get install -y /tmp/mount-s3.deb");

            // Mount and use
            String mountPath = "/home/daytona/s3";
            sandbox.getProcess().executeCommand(
                "mkdir -p " + mountPath + " && mount-s3 my-bucket " + mountPath);
            ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
            System.out.println(response.getResult());
        }
    }
}
```

Mount a Cloudflare R2 bucket
Cloudflare R2 is S3-compatible, so the same mount-s3 tool works. Pass an explicit --endpoint-url pointing at your R2 account.
Credentials — set R2_ACCOUNT_ID, R2_ACCESS_KEY_ID, and R2_SECRET_ACCESS_KEY in your local environment. R2 is S3-compatible, so the snippets below pass your R2 keys into the sandbox via envVars under the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY names that mount-s3 expects.
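To make the endpoint concrete, the URL mount-s3 needs is derived from the account ID alone (the ID below is a made-up placeholder):

```shell
# Hypothetical account ID; copy yours from the Cloudflare dashboard.
R2_ACCOUNT_ID="0123456789abcdef0123456789abcdef"
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
echo "$R2_ENDPOINT"
```

Every bucket in the account is served from this one endpoint; the bucket name is passed separately as mount-s3’s first positional argument.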
Pre-built snapshot
Build a snapshot with mount-s3 preinstalled, then launch all R2-enabled sandboxes from that snapshot. The mount flow stays identical to S3 except for the R2 --endpoint-url, and startup remains fast because installation is done once at snapshot build time.
Build a snapshot
Create a reusable snapshot that installs the same mount-s3 tool used for S3. Because R2 is S3-compatible, this snapshot is identical to the S3 one; only the runtime mount command changes.
```python
from daytona import CreateSnapshotParams, Daytona, Image

daytona = Daytona()

image = (
    Image.base("daytonaio/sandbox")
    .run_commands(
        "sudo apt-get update "
        "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget",
        'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" '
        '&& wget -O /tmp/mount-s3.deb '
        '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" '
        "&& sudo apt-get install -y /tmp/mount-s3.deb "
        "&& rm /tmp/mount-s3.deb",
    )
)

daytona.snapshot.create(
    CreateSnapshotParams(name="fuse-r2", image=image),
    on_logs=print,
)
```

```typescript
import { Daytona, Image } from '@daytona/sdk'

const daytona = new Daytona()

const image = Image.base('daytonaio/sandbox').runCommands(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
  'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' +
    '&& wget -O /tmp/mount-s3.deb ' +
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' +
    '&& sudo apt-get install -y /tmp/mount-s3.deb ' +
    '&& rm /tmp/mount-s3.deb',
)

await daytona.snapshot.create(
  { name: 'fuse-r2', image },
  { onLogs: console.log },
)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

image = Daytona::Image
  .base('daytonaio/sandbox')
  .run_commands(
    'sudo apt-get update ' \
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
    'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' \
    '&& wget -O /tmp/mount-s3.deb ' \
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' \
    '&& sudo apt-get install -y /tmp/mount-s3.deb ' \
    '&& rm /tmp/mount-s3.deb'
  )

daytona.snapshot.create(
  Daytona::CreateSnapshotParams.new(name: 'fuse-r2', image: image),
  on_logs: proc { |chunk| print(chunk) }
)
```

```go
import (
	"context"
	"fmt"
	"log"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

image := daytona.Base("daytonaio/sandbox").
	Run("sudo apt-get update && sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget").
	Run(`arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" && ` +
		`wget -O /tmp/mount-s3.deb "https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" && ` +
		`sudo apt-get install -y /tmp/mount-s3.deb && rm /tmp/mount-s3.deb`)

_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
	Name:  "fuse-r2",
	Image: image,
})
if err != nil {
	log.Fatal(err)
}
for line := range logChan {
	fmt.Print(line)
}
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Image;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            Image image = Image.base("daytonaio/sandbox")
                .runCommands(
                    "sudo apt-get update "
                        + "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget",
                    "arch=\"$(dpkg --print-architecture | sed s/amd64/x86_64/)\" "
                        + "&& wget -O /tmp/mount-s3.deb "
                        + "\"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb\" "
                        + "&& sudo apt-get install -y /tmp/mount-s3.deb "
                        + "&& rm /tmp/mount-s3.deb"
                );

            daytona.snapshot().create("fuse-r2", image, System.out::println);
        }
    }
}
```

Launch and mount
Pass your R2 credentials into the sandbox as AWS_* environment variables and mount with the R2 endpoint URL. This keeps the authentication flow compatible with mount-s3 while targeting your Cloudflare account.
```python
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona

daytona = Daytona()

# R2 credentials live in your Cloudflare dashboard under R2 > Manage API Tokens
account_id = os.environ["R2_ACCOUNT_ID"]

sandbox = daytona.create(
    CreateSandboxFromSnapshotParams(
        snapshot="fuse-r2",
        env_vars={
            "AWS_ACCESS_KEY_ID": os.environ["R2_ACCESS_KEY_ID"],
            "AWS_SECRET_ACCESS_KEY": os.environ["R2_SECRET_ACCESS_KEY"],
        },
    )
)

mount_path = "/home/daytona/r2"

sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(
    f"mount-s3 --endpoint-url https://{account_id}.r2.cloudflarestorage.com "
    f"my-r2-bucket {mount_path}"
)

response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
```

```typescript
import { Daytona } from '@daytona/sdk'

const daytona = new Daytona()

// R2 credentials live in your Cloudflare dashboard under R2 > Manage API Tokens
const accountId = process.env.R2_ACCOUNT_ID!

const sandbox = await daytona.create({
  snapshot: 'fuse-r2',
  envVars: {
    AWS_ACCESS_KEY_ID: process.env.R2_ACCESS_KEY_ID!,
    AWS_SECRET_ACCESS_KEY: process.env.R2_SECRET_ACCESS_KEY!,
  },
})

const mountPath = '/home/daytona/r2'

await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(
  `mount-s3 --endpoint-url https://${accountId}.r2.cloudflarestorage.com ` +
    `my-r2-bucket ${mountPath}`,
)

const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

# R2 credentials live in your Cloudflare dashboard under R2 > Manage API Tokens
account_id = ENV.fetch('R2_ACCOUNT_ID')

sandbox = daytona.create(
  Daytona::CreateSandboxFromSnapshotParams.new(
    snapshot: 'fuse-r2',
    env_vars: {
      'AWS_ACCESS_KEY_ID' => ENV.fetch('R2_ACCESS_KEY_ID'),
      'AWS_SECRET_ACCESS_KEY' => ENV.fetch('R2_SECRET_ACCESS_KEY')
    }
  )
)

mount_path = '/home/daytona/r2'

sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(
  command: "mount-s3 --endpoint-url https://#{account_id}.r2.cloudflarestorage.com " \
           "my-r2-bucket #{mount_path}"
)

response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
```

```go
import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

// R2 credentials live in your Cloudflare dashboard under R2 > Manage API Tokens
accountID := os.Getenv("R2_ACCOUNT_ID")

sandbox, err := client.Create(ctx, types.SnapshotParams{
	Snapshot: "fuse-r2",
	SandboxBaseParams: types.SandboxBaseParams{
		EnvVars: map[string]string{
			"AWS_ACCESS_KEY_ID":     os.Getenv("R2_ACCESS_KEY_ID"),
			"AWS_SECRET_ACCESS_KEY": os.Getenv("R2_SECRET_ACCESS_KEY"),
		},
	},
})
if err != nil {
	log.Fatal(err)
}

mountPath := "/home/daytona/r2"

if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"mount-s3 --endpoint-url https://"+accountID+".r2.cloudflarestorage.com "+
		"my-r2-bucket "+mountPath); err != nil {
	log.Fatal(err)
}

response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;

import java.util.Map;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            // R2 credentials live in your Cloudflare dashboard under R2 > Manage API Tokens
            String accountId = System.getenv("R2_ACCOUNT_ID");

            CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
            params.setSnapshot("fuse-r2");
            params.setEnvVars(Map.of(
                "AWS_ACCESS_KEY_ID", System.getenv("R2_ACCESS_KEY_ID"),
                "AWS_SECRET_ACCESS_KEY", System.getenv("R2_SECRET_ACCESS_KEY")
            ));
            Sandbox sandbox = daytona.create(params);

            String mountPath = "/home/daytona/r2";

            sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
            sandbox.getProcess().executeCommand(
                "mount-s3 --endpoint-url https://" + accountId + ".r2.cloudflarestorage.com "
                    + "my-r2-bucket " + mountPath);

            ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
            System.out.println(response.getResult());
        }
    }
}
```

Runtime install
Start from a default sandbox and install mount-s3 during startup, then mount your bucket with the R2 --endpoint-url. This path is convenient for prototyping or one-off tasks, but each new sandbox pays the package installation cost.
```python
import os

from daytona import CreateSandboxBaseParams, Daytona

daytona = Daytona()

account_id = os.environ["R2_ACCOUNT_ID"]

sandbox = daytona.create(
    CreateSandboxBaseParams(
        env_vars={
            "AWS_ACCESS_KEY_ID": os.environ["R2_ACCESS_KEY_ID"],
            "AWS_SECRET_ACCESS_KEY": os.environ["R2_SECRET_ACCESS_KEY"],
        },
    )
)

# Install mount-s3
sandbox.process.exec(
    "sudo apt-get update "
    "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget"
)
sandbox.process.exec(
    'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" '
    '&& wget -O /tmp/mount-s3.deb '
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" '
    "&& sudo apt-get install -y /tmp/mount-s3.deb"
)

# Mount with R2 endpoint
mount_path = "/home/daytona/r2"
sandbox.process.exec(
    f"mkdir -p {mount_path} && "
    f"mount-s3 --endpoint-url https://{account_id}.r2.cloudflarestorage.com "
    f"my-r2-bucket {mount_path}"
)
response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
```

```typescript
import { Daytona } from '@daytona/sdk'

const daytona = new Daytona()

const accountId = process.env.R2_ACCOUNT_ID!

const sandbox = await daytona.create({
  envVars: {
    AWS_ACCESS_KEY_ID: process.env.R2_ACCESS_KEY_ID!,
    AWS_SECRET_ACCESS_KEY: process.env.R2_SECRET_ACCESS_KEY!,
  },
})

// Install mount-s3
await sandbox.process.executeCommand(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget',
)
await sandbox.process.executeCommand(
  'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' +
    '&& wget -O /tmp/mount-s3.deb ' +
    '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' +
    '&& sudo apt-get install -y /tmp/mount-s3.deb',
)

// Mount with R2 endpoint
const mountPath = '/home/daytona/r2'
await sandbox.process.executeCommand(
  `mkdir -p ${mountPath} && ` +
    `mount-s3 --endpoint-url https://${accountId}.r2.cloudflarestorage.com ` +
    `my-r2-bucket ${mountPath}`,
)
const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
```

```ruby
require 'daytona'

daytona = Daytona::Daytona.new

account_id = ENV.fetch('R2_ACCOUNT_ID')

sandbox = daytona.create(
  Daytona::CreateSandboxBaseParams.new(
    env_vars: {
      'AWS_ACCESS_KEY_ID' => ENV.fetch('R2_ACCESS_KEY_ID'),
      'AWS_SECRET_ACCESS_KEY' => ENV.fetch('R2_SECRET_ACCESS_KEY')
    }
  )
)

# Install mount-s3
sandbox.process.exec(
  command: 'sudo apt-get update ' \
           '&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget'
)
sandbox.process.exec(
  command: 'arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" ' \
           '&& wget -O /tmp/mount-s3.deb ' \
           '"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" ' \
           '&& sudo apt-get install -y /tmp/mount-s3.deb'
)

# Mount with R2 endpoint
mount_path = '/home/daytona/r2'
sandbox.process.exec(
  command: "mkdir -p #{mount_path} && " \
           "mount-s3 --endpoint-url https://#{account_id}.r2.cloudflarestorage.com " \
           "my-r2-bucket #{mount_path}"
)
response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
```

```go
import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
	log.Fatal(err)
}

accountID := os.Getenv("R2_ACCOUNT_ID")

sandbox, err := client.Create(ctx, types.SnapshotParams{
	SandboxBaseParams: types.SandboxBaseParams{
		EnvVars: map[string]string{
			"AWS_ACCESS_KEY_ID":     os.Getenv("R2_ACCESS_KEY_ID"),
			"AWS_SECRET_ACCESS_KEY": os.Getenv("R2_SECRET_ACCESS_KEY"),
		},
	},
})
if err != nil {
	log.Fatal(err)
}

// Install mount-s3
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"sudo apt-get update && sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget"); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
	`arch="$(dpkg --print-architecture | sed s/amd64/x86_64/)" && `+
		`wget -O /tmp/mount-s3.deb "https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb" && `+
		`sudo apt-get install -y /tmp/mount-s3.deb`); err != nil {
	log.Fatal(err)
}

// Mount with R2 endpoint
mountPath := "/home/daytona/r2"
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"mkdir -p "+mountPath+" && "+
		"mount-s3 --endpoint-url https://"+accountID+".r2.cloudflarestorage.com "+
		"my-r2-bucket "+mountPath); err != nil {
	log.Fatal(err)
}
response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
```

```java
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;

import java.util.Map;

public class App {
    public static void main(String[] args) {
        try (Daytona daytona = new Daytona()) {
            String accountId = System.getenv("R2_ACCOUNT_ID");

            CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
            params.setEnvVars(Map.of(
                "AWS_ACCESS_KEY_ID", System.getenv("R2_ACCESS_KEY_ID"),
                "AWS_SECRET_ACCESS_KEY", System.getenv("R2_SECRET_ACCESS_KEY")
            ));
            Sandbox sandbox = daytona.create(params);

            // Install mount-s3
            sandbox.getProcess().executeCommand(
                "sudo apt-get update "
                    + "&& sudo apt-get install -y --no-install-recommends libfuse2 ca-certificates wget");
            sandbox.getProcess().executeCommand(
                "arch=\"$(dpkg --print-architecture | sed s/amd64/x86_64/)\" "
                    + "&& wget -O /tmp/mount-s3.deb "
                    + "\"https://s3.amazonaws.com/mountpoint-s3-release/latest/${arch}/mount-s3.deb\" "
                    + "&& sudo apt-get install -y /tmp/mount-s3.deb");

            // Mount with R2 endpoint
            String mountPath = "/home/daytona/r2";
            sandbox.getProcess().executeCommand(
                "mkdir -p " + mountPath + " && "
                    + "mount-s3 --endpoint-url https://" + accountId + ".r2.cloudflarestorage.com "
                    + "my-r2-bucket " + mountPath);

            ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
            System.out.println(response.getResult());
        }
    }
}
```

Mount a Google Cloud Storage bucket
Mount a GCS bucket using gcsfuse ↗, Google’s official FUSE client.
Credentials — gcsfuse reads a service account JSON key file. The snippets below read the key from a local path on your host and upload it into the sandbox via sandbox.fs.
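Once the key file is inside the sandbox, the mount itself is a single gcsfuse invocation. A sketch with placeholder paths and bucket name; gcsfuse can also pick up the key from the GOOGLE_APPLICATION_CREDENTIALS environment variable instead of the --key-file flag:

```shell
KEY_FILE=/home/daytona/gcs-key.json   # uploaded via sandbox.fs
MOUNT_PATH=/home/daytona/gcs
mkdir -p "$MOUNT_PATH"
gcsfuse --key-file "$KEY_FILE" my-gcs-bucket "$MOUNT_PATH"
```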
Pre-built snapshot
Build a snapshot with gcsfuse preinstalled, then launch all GCS-enabled sandboxes from that snapshot. This avoids repeating apt repository setup and package installation for every sandbox, which makes startup behavior more consistent.
Build a snapshot
Create a reusable snapshot that installs gcsfuse plus its apt repository configuration. After this step, GCS-enabled sandboxes can mount immediately without repeating package setup.
from daytona import CreateSnapshotParams, Daytona, Image
daytona = Daytona()
image = (
    Image.base("daytonaio/sandbox")
    .run_commands(
        "sudo apt-get update "
        "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg",
        "sudo mkdir -p /etc/apt/keyrings "
        "&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg "
        "| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg",
        'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] '
        'https://packages.cloud.google.com/apt gcsfuse-bookworm main" '
        "| sudo tee /etc/apt/sources.list.d/gcsfuse.list",
        "sudo apt-get update && sudo apt-get install -y gcsfuse",
    )
)

daytona.snapshot.create(
    CreateSnapshotParams(name="fuse-gcs", image=image),
    on_logs=print,
)
import { Daytona, Image } from '@daytona/sdk'
const daytona = new Daytona()
const image = Image.base('daytonaio/sandbox').runCommands(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg',
  'sudo mkdir -p /etc/apt/keyrings ' +
    '&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg ' +
    '| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg',
  'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] ' +
    'https://packages.cloud.google.com/apt gcsfuse-bookworm main" ' +
    '| sudo tee /etc/apt/sources.list.d/gcsfuse.list',
  'sudo apt-get update && sudo apt-get install -y gcsfuse',
)

await daytona.snapshot.create(
  { name: 'fuse-gcs', image },
  { onLogs: console.log },
)
require 'daytona'
daytona = Daytona::Daytona.new
image = Daytona::Image
        .base('daytonaio/sandbox')
        .run_commands(
          'sudo apt-get update ' \
          '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg',
          'sudo mkdir -p /etc/apt/keyrings ' \
          '&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg ' \
          '| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg',
          'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] ' \
          'https://packages.cloud.google.com/apt gcsfuse-bookworm main" ' \
          '| sudo tee /etc/apt/sources.list.d/gcsfuse.list',
          'sudo apt-get update && sudo apt-get install -y gcsfuse'
        )

daytona.snapshot.create(
  Daytona::CreateSnapshotParams.new(name: 'fuse-gcs', image: image),
  on_logs: proc { |chunk| print(chunk) }
)
import (
	"context"
	"fmt"
	"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona" "github.com/daytonaio/daytona/libs/sdk-go/pkg/types")
ctx := context.Background()client, err := daytona.NewClient()if err != nil { log.Fatal(err)}
image := daytona.Base("daytonaio/sandbox").
	Run("sudo apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg").
	Run("sudo mkdir -p /etc/apt/keyrings && "+
		"curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | "+
		"sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg").
	Run(`echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] `+
		`https://packages.cloud.google.com/apt gcsfuse-bookworm main" | `+
		`sudo tee /etc/apt/sources.list.d/gcsfuse.list`).
	Run("sudo apt-get update && sudo apt-get install -y gcsfuse")

_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
	Name:  "fuse-gcs",
	Image: image,
})
if err != nil {
	log.Fatal(err)
}
for line := range logChan {
	fmt.Print(line)
}
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Image;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      Image image = Image.base("daytonaio/sandbox")
          .runCommands(
              "sudo apt-get update "
                  + "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg",
              "sudo mkdir -p /etc/apt/keyrings "
                  + "&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg "
                  + "| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg",
              "echo \"deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] "
                  + "https://packages.cloud.google.com/apt gcsfuse-bookworm main\" "
                  + "| sudo tee /etc/apt/sources.list.d/gcsfuse.list",
              "sudo apt-get update && sudo apt-get install -y gcsfuse"
          );

      daytona.snapshot().create("fuse-gcs", image, System.out::println);
    }
  }
}
Launch and mount
gcsfuse authenticates to GCS with a service account JSON key. Upload it into the sandbox via sandbox.fs and point gcsfuse at it with --key-file.
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona

daytona = Daytona()

# GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
service_account_key = os.environ["GCS_SERVICE_ACCOUNT_KEY"].encode()

sandbox = daytona.create(CreateSandboxFromSnapshotParams(snapshot="fuse-gcs"))

mount_path = "/home/daytona/gcs"
key_path = "/home/daytona/.gcs-key.json"

# Upload the key file into the sandbox
sandbox.fs.upload_file(service_account_key, key_path)
sandbox.process.exec(f"chmod 600 {key_path}")

# Mount the bucket
sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(f"gcsfuse --key-file={key_path} my-gcs-bucket {mount_path}")

# Use the mount
response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()
// GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
const serviceAccountKey = Buffer.from(process.env.GCS_SERVICE_ACCOUNT_KEY!)

const sandbox = await daytona.create({ snapshot: 'fuse-gcs' })

const mountPath = '/home/daytona/gcs'
const keyPath = '/home/daytona/.gcs-key.json'

// Upload the key file into the sandbox
await sandbox.fs.uploadFile(serviceAccountKey, keyPath)
await sandbox.process.executeCommand(`chmod 600 ${keyPath}`)

// Mount the bucket
await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(`gcsfuse --key-file=${keyPath} my-gcs-bucket ${mountPath}`)

// Use the mount
const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
require 'daytona'
daytona = Daytona::Daytona.new
# GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
service_account_key = ENV.fetch('GCS_SERVICE_ACCOUNT_KEY')

sandbox = daytona.create(
  Daytona::CreateSandboxFromSnapshotParams.new(snapshot: 'fuse-gcs')
)

mount_path = '/home/daytona/gcs'
key_path = '/home/daytona/.gcs-key.json'

# Upload the key file into the sandbox
sandbox.fs.upload_file(service_account_key, key_path)
sandbox.process.exec(command: "chmod 600 #{key_path}")

# Mount the bucket
sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(command: "gcsfuse --key-file=#{key_path} my-gcs-bucket #{mount_path}")

# Use the mount
response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
import (
	"context"
	"fmt"
	"log"
	"os"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona" "github.com/daytonaio/daytona/libs/sdk-go/pkg/types")
ctx := context.Background()client, err := daytona.NewClient()if err != nil { log.Fatal(err)}
// GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
serviceAccountKey := []byte(os.Getenv("GCS_SERVICE_ACCOUNT_KEY"))

sandbox, err := client.Create(ctx, types.SnapshotParams{
	Snapshot: "fuse-gcs",
})
if err != nil {
	log.Fatal(err)
}

mountPath := "/home/daytona/gcs"
keyPath := "/home/daytona/.gcs-key.json"

// Upload the key file into the sandbox
if err := sandbox.FileSystem.UploadFile(ctx, serviceAccountKey, keyPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "chmod 600 "+keyPath); err != nil {
	log.Fatal(err)
}

// Mount the bucket
if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "gcsfuse --key-file="+keyPath+" my-gcs-bucket "+mountPath); err != nil {
	log.Fatal(err)
}

// Use the mount
response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      // GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
      byte[] serviceAccountKey = System.getenv("GCS_SERVICE_ACCOUNT_KEY")
          .getBytes(StandardCharsets.UTF_8);

      CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
      params.setSnapshot("fuse-gcs");
      Sandbox sandbox = daytona.create(params);

      String mountPath = "/home/daytona/gcs";
      String keyPath = "/home/daytona/.gcs-key.json";

      // Upload the key file into the sandbox
      sandbox.fs.uploadFile(serviceAccountKey, keyPath);
      sandbox.getProcess().executeCommand("chmod 600 " + keyPath);

      // Mount the bucket
      sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
      sandbox.getProcess().executeCommand(
          "gcsfuse --key-file=" + keyPath + " my-gcs-bucket " + mountPath);

      // Use the mount
      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Runtime install
Start from a default sandbox and install gcsfuse when the sandbox starts, then upload the service account key and mount the bucket. This is the fastest way to iterate on setup, but every sandbox repeats the install and key-staging steps.
import os

from daytona import CreateSandboxBaseParams, Daytona

daytona = Daytona()

# GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
service_account_key = os.environ["GCS_SERVICE_ACCOUNT_KEY"].encode()

sandbox = daytona.create(CreateSandboxBaseParams())

# Install gcsfuse from the bookworm repo (works on Trixie)
sandbox.process.exec(
    "sudo apt-get update "
    "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg")
sandbox.process.exec(
    "sudo mkdir -p /etc/apt/keyrings "
    "&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg "
    "| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg")
sandbox.process.exec(
    'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] '
    'https://packages.cloud.google.com/apt gcsfuse-bookworm main" '
    "| sudo tee /etc/apt/sources.list.d/gcsfuse.list "
    "&& sudo apt-get update && sudo apt-get install -y gcsfuse")

# Upload the key and mount
mount_path = "/home/daytona/gcs"
key_path = "/home/daytona/.gcs-key.json"
sandbox.fs.upload_file(service_account_key, key_path)
sandbox.process.exec(f"chmod 600 {key_path}")
sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(f"gcsfuse --key-file={key_path} my-gcs-bucket {mount_path}")

response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()
// GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
const serviceAccountKey = Buffer.from(process.env.GCS_SERVICE_ACCOUNT_KEY!)

const sandbox = await daytona.create()

// Install gcsfuse from the bookworm repo (works on Trixie)
await sandbox.process.executeCommand(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg',
)
await sandbox.process.executeCommand(
  'sudo mkdir -p /etc/apt/keyrings ' +
    '&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg ' +
    '| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg',
)
await sandbox.process.executeCommand(
  'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] ' +
    'https://packages.cloud.google.com/apt gcsfuse-bookworm main" ' +
    '| sudo tee /etc/apt/sources.list.d/gcsfuse.list ' +
    '&& sudo apt-get update && sudo apt-get install -y gcsfuse',
)

// Upload the key and mount
const mountPath = '/home/daytona/gcs'
const keyPath = '/home/daytona/.gcs-key.json'
await sandbox.fs.uploadFile(serviceAccountKey, keyPath)
await sandbox.process.executeCommand(`chmod 600 ${keyPath}`)
await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(`gcsfuse --key-file=${keyPath} my-gcs-bucket ${mountPath}`)

const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
require 'daytona'
daytona = Daytona::Daytona.new
# GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
service_account_key = ENV.fetch('GCS_SERVICE_ACCOUNT_KEY')

sandbox = daytona.create(Daytona::CreateSandboxBaseParams.new)

# Install gcsfuse from the bookworm repo (works on Trixie)
sandbox.process.exec(
  command: 'sudo apt-get update ' \
           '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg'
)
sandbox.process.exec(
  command: 'sudo mkdir -p /etc/apt/keyrings ' \
           '&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg ' \
           '| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg'
)
sandbox.process.exec(
  command: 'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] ' \
           'https://packages.cloud.google.com/apt gcsfuse-bookworm main" ' \
           '| sudo tee /etc/apt/sources.list.d/gcsfuse.list ' \
           '&& sudo apt-get update && sudo apt-get install -y gcsfuse'
)

# Upload the key and mount
mount_path = '/home/daytona/gcs'
key_path = '/home/daytona/.gcs-key.json'
sandbox.fs.upload_file(service_account_key, key_path)
sandbox.process.exec(command: "chmod 600 #{key_path}")
sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(command: "gcsfuse --key-file=#{key_path} my-gcs-bucket #{mount_path}")

response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
import (
	"context"
	"fmt"
	"log"
	"os"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona" "github.com/daytonaio/daytona/libs/sdk-go/pkg/types")
ctx := context.Background()client, err := daytona.NewClient()if err != nil { log.Fatal(err)}
// GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
serviceAccountKey := []byte(os.Getenv("GCS_SERVICE_ACCOUNT_KEY"))

sandbox, err := client.Create(ctx, types.SnapshotParams{})
if err != nil {
	log.Fatal(err)
}

// Install gcsfuse from the bookworm repo (works on Trixie)
if _, err := sandbox.Process.ExecuteCommand(ctx, "sudo apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg"); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
	"sudo mkdir -p /etc/apt/keyrings && "+
		"curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | "+
		"sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg"); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
	`echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] `+
		`https://packages.cloud.google.com/apt gcsfuse-bookworm main" | `+
		`sudo tee /etc/apt/sources.list.d/gcsfuse.list && `+
		`sudo apt-get update && sudo apt-get install -y gcsfuse`); err != nil {
	log.Fatal(err)
}

// Upload the key and mount
mountPath := "/home/daytona/gcs"
keyPath := "/home/daytona/.gcs-key.json"
if err := sandbox.FileSystem.UploadFile(ctx, serviceAccountKey, keyPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "chmod 600 "+keyPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil {
	log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "gcsfuse --key-file="+keyPath+" my-gcs-bucket "+mountPath); err != nil {
	log.Fatal(err)
}

response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
	log.Fatal(err)
}
fmt.Println(response.Result)
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      // GCS_SERVICE_ACCOUNT_KEY holds the full service account JSON as a string
      byte[] serviceAccountKey = System.getenv("GCS_SERVICE_ACCOUNT_KEY")
          .getBytes(StandardCharsets.UTF_8);

      Sandbox sandbox = daytona.create(new CreateSandboxFromSnapshotParams());

      // Install gcsfuse from the bookworm repo (works on Trixie)
      sandbox.getProcess().executeCommand(
          "sudo apt-get update "
              + "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg");
      sandbox.getProcess().executeCommand(
          "sudo mkdir -p /etc/apt/keyrings "
              + "&& curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg "
              + "| sudo gpg --dearmor -o /etc/apt/keyrings/gcsfuse.gpg");
      sandbox.getProcess().executeCommand(
          "echo \"deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] "
              + "https://packages.cloud.google.com/apt gcsfuse-bookworm main\" "
              + "| sudo tee /etc/apt/sources.list.d/gcsfuse.list "
              + "&& sudo apt-get update && sudo apt-get install -y gcsfuse");

      // Upload the key and mount
      String mountPath = "/home/daytona/gcs";
      String keyPath = "/home/daytona/.gcs-key.json";
      sandbox.fs.uploadFile(serviceAccountKey, keyPath);
      sandbox.getProcess().executeCommand("chmod 600 " + keyPath);
      sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
      sandbox.getProcess().executeCommand(
          "gcsfuse --key-file=" + keyPath + " my-gcs-bucket " + mountPath);

      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Mount an Azure Blob container
Mount an Azure Blob container using blobfuse2 ↗ — Microsoft’s official FUSE client.
Credentials — set AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER, and AZURE_STORAGE_ACCOUNT_KEY in your local environment. The snippets below pass them into the sandbox via envVars, and blobfuse2 reads them from its YAML config.
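Since all three values come from your local environment and an unset one only shows up later as an empty field in the YAML config, it can help to fail fast locally first. A hypothetical pre-flight helper (not part of any SDK):

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising one clear error if any is unset."""
    missing = [name for name in names if not os.environ.get(name)]
    if missing:
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {name: os.environ[name] for name in names}

# usage, before building the blobfuse2 config:
# creds = require_env("AZURE_STORAGE_ACCOUNT", "AZURE_STORAGE_CONTAINER", "AZURE_STORAGE_ACCOUNT_KEY")
```

Without this, a missing credential surfaces only as a blobfuse2 mount failure inside the sandbox.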
Pre-built snapshot
Build a snapshot with blobfuse2 and the required FUSE compatibility setup preinstalled, then launch all Azure-enabled sandboxes from that snapshot. This is the recommended path for stable environments because the dependency and compatibility work runs once, during snapshot creation.
Build a snapshot
Create a reusable snapshot that installs blobfuse2, configures the required FUSE dependencies, and applies the Trixie compatibility steps. This ensures Azure mounts work out of the box in sandboxes launched from fuse-azure.
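One non-obvious step in this snapshot is the compat symlink: it picks the newest libfuse3.so.3.* on disk with sort -V | tail -1, a numeric-aware version sort. A sketch of the same selection logic in Python (the file paths are illustrative):

```python
import re

def highest_version(paths: list[str]) -> str:
    """Pick the highest-versioned path, mimicking `sort -V | tail -1`."""
    def version_key(path: str) -> tuple:
        # Split digit runs out of the string and compare them numerically,
        # so "3.10" correctly sorts above "3.9".
        return tuple(int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", path))
    return max(paths, key=version_key)
```

A plain lexicographic sort would rank 3.9.1 above 3.10.5, which is why the shell step uses sort -V rather than sort.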
from daytona import CreateSnapshotParams, Daytona, Image
daytona = Daytona()
image = (
    Image.base("daytonaio/sandbox")
    .run_commands(
        "sudo apt-get update "
        "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget",
        # Microsoft's apt repo (use bookworm packages on Trixie)
        "wget -qO- https://packages.microsoft.com/keys/microsoft.asc "
        "| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg",
        'echo "deb [arch=$(dpkg --print-architecture) '
        'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] '
        'https://packages.microsoft.com/debian/12/prod bookworm main" '
        "| sudo tee /etc/apt/sources.list.d/microsoft-prod.list",
        "sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3",
        # libfuse3.so.3 compat symlink for Trixie (see :::caution above)
        'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null '
        "| sort -V | tail -1) "
        '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" '
        "&& sudo ldconfig",
        "sudo touch /etc/fuse.conf "
        '&& grep -qxF "user_allow_other" /etc/fuse.conf '
        '|| echo "user_allow_other" | sudo tee -a /etc/fuse.conf',
    )
)

daytona.snapshot.create(
    CreateSnapshotParams(name="fuse-azure", image=image),
    on_logs=print,
)
import { Daytona, Image } from '@daytona/sdk'
const daytona = new Daytona()
const image = Image.base('daytonaio/sandbox').runCommands(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget',
  // Microsoft's apt repo (use bookworm packages on Trixie)
  'wget -qO- https://packages.microsoft.com/keys/microsoft.asc ' +
    '| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg',
  'echo "deb [arch=$(dpkg --print-architecture) ' +
    'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] ' +
    'https://packages.microsoft.com/debian/12/prod bookworm main" ' +
    '| sudo tee /etc/apt/sources.list.d/microsoft-prod.list',
  'sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3',
  // libfuse3.so.3 compat symlink for Trixie (see :::caution above)
  'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null ' +
    '| sort -V | tail -1) ' +
    '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" ' +
    '&& sudo ldconfig',
  'sudo touch /etc/fuse.conf ' +
    '&& grep -qxF "user_allow_other" /etc/fuse.conf ' +
    '|| echo "user_allow_other" | sudo tee -a /etc/fuse.conf',
)

await daytona.snapshot.create(
  { name: 'fuse-azure', image },
  { onLogs: console.log },
)
require 'daytona'
daytona = Daytona::Daytona.new
image = Daytona::Image
        .base('daytonaio/sandbox')
        .run_commands(
          'sudo apt-get update ' \
          '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget',
          # Microsoft's apt repo (use bookworm packages on Trixie)
          'wget -qO- https://packages.microsoft.com/keys/microsoft.asc ' \
          '| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg',
          'echo "deb [arch=$(dpkg --print-architecture) ' \
          'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] ' \
          'https://packages.microsoft.com/debian/12/prod bookworm main" ' \
          '| sudo tee /etc/apt/sources.list.d/microsoft-prod.list',
          'sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3',
          # libfuse3.so.3 compat symlink for Trixie
          'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null ' \
          '| sort -V | tail -1) ' \
          '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" ' \
          '&& sudo ldconfig',
          'sudo touch /etc/fuse.conf ' \
          '&& grep -qxF "user_allow_other" /etc/fuse.conf ' \
          '|| echo "user_allow_other" | sudo tee -a /etc/fuse.conf'
        )

daytona.snapshot.create(
  Daytona::CreateSnapshotParams.new(name: 'fuse-azure', image: image),
  on_logs: proc { |chunk| print(chunk) }
)
import (
	"context"
	"fmt"
	"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona" "github.com/daytonaio/daytona/libs/sdk-go/pkg/types")
ctx := context.Background()client, err := daytona.NewClient()if err != nil { log.Fatal(err)}
image := daytona.Base("daytonaio/sandbox").
	Run("sudo apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget").
	// Microsoft's apt repo (use bookworm packages on Trixie)
	Run("wget -qO- https://packages.microsoft.com/keys/microsoft.asc | "+
		"sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg").
	Run(`echo "deb [arch=$(dpkg --print-architecture) `+
		`signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] `+
		`https://packages.microsoft.com/debian/12/prod bookworm main" | `+
		`sudo tee /etc/apt/sources.list.d/microsoft-prod.list`).
	Run("sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3").
	// libfuse3.so.3 compat symlink for Trixie
	Run(`src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null | sort -V | tail -1) && `+
		`sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" && sudo ldconfig`).
	Run(`sudo touch /etc/fuse.conf && grep -qxF "user_allow_other" /etc/fuse.conf || `+
		`echo "user_allow_other" | sudo tee -a /etc/fuse.conf`)

_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
	Name:  "fuse-azure",
	Image: image,
})
if err != nil {
	log.Fatal(err)
}
for line := range logChan {
	fmt.Print(line)
}
import io.daytona.sdk.Daytona;
import io.daytona.sdk.Image;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      Image image = Image.base("daytonaio/sandbox")
          .runCommands(
              "sudo apt-get update "
                  + "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget",
              // Microsoft's apt repo (use bookworm packages on Trixie)
              "wget -qO- https://packages.microsoft.com/keys/microsoft.asc "
                  + "| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg",
              "echo \"deb [arch=$(dpkg --print-architecture) "
                  + "signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] "
                  + "https://packages.microsoft.com/debian/12/prod bookworm main\" "
                  + "| sudo tee /etc/apt/sources.list.d/microsoft-prod.list",
              "sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3",
              // libfuse3.so.3 compat symlink for Trixie
              "src=$(find /usr/lib /lib -name \"libfuse3.so.3.*\" -type f 2>/dev/null "
                  + "| sort -V | tail -1) "
                  + "&& sudo ln -sfn \"$src\" \"$(dirname \"$src\")/libfuse3.so.3\" "
                  + "&& sudo ldconfig",
              "sudo touch /etc/fuse.conf "
                  + "&& grep -qxF \"user_allow_other\" /etc/fuse.conf "
                  + "|| echo \"user_allow_other\" | sudo tee -a /etc/fuse.conf"
          );

      daytona.snapshot().create("fuse-azure", image, System.out::println);
    }
  }
}
Launch and mount
blobfuse2 reads its configuration from a YAML file. Build it with your account credentials and upload it into the sandbox.
The YAML below tells blobfuse2 three things: where to connect (the azstorage: block — your storage account, the container you want to mount, the endpoint URL, and the auth method), what to enable (the components: list — the FUSE interface itself, a content cache, a metadata cache, and the Azure backend), and how to log. The cache components use sensible defaults when listed without their own top-level config blocks; add explicit block_cache: / attr_cache: blocks later if you need to tune cache sizes or timeouts. Note that in Azure terminology, a “container” is the equivalent of an S3 bucket — it’s specified inside the YAML rather than passed as a command-line argument.
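Before uploading, you can sanity-check the generated YAML for the pieces described above. This rough lint uses only substring checks rather than a YAML parser, and the required-key list is an assumption based on the config shown in these snippets, not an official blobfuse2 schema:

```python
REQUIRED_COMPONENTS = ("libfuse", "azstorage")
REQUIRED_AZSTORAGE_KEYS = ("account-name:", "container:", "endpoint:", "auth-type:", "account-key:")

def check_blobfuse2_config(config: str) -> list[str]:
    """Return a list of problems found in a blobfuse2 YAML config, empty if it looks sane."""
    problems = []
    for component in REQUIRED_COMPONENTS:
        if f"- {component}" not in config:
            problems.append(f"components list is missing {component}")
    for key in REQUIRED_AZSTORAGE_KEYS:
        if key not in config:
            problems.append(f"azstorage block is missing {key.rstrip(':')}")
    return problems
```

A config that fails these checks would otherwise surface only as an opaque mount error inside the sandbox.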
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona

daytona = Daytona()

sandbox = daytona.create(CreateSandboxFromSnapshotParams(snapshot="fuse-azure"))

mount_path = "/home/daytona/azure"
config_path = "/home/daytona/.blobfuse2.yaml"

account = os.environ["AZURE_STORAGE_ACCOUNT"]
container = os.environ["AZURE_STORAGE_CONTAINER"]
account_key = os.environ["AZURE_STORAGE_ACCOUNT_KEY"]

config = f"""\
allow-other: true
logging:
  type: syslog
  level: log_warning
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage
azstorage:
  type: block
  account-name: {account}
  container: {container}
  endpoint: https://{account}.blob.core.windows.net
  auth-type: key
  account-key: {account_key}"""

sandbox.fs.upload_file(config.encode(), config_path)
sandbox.process.exec(f"chmod 600 {config_path}")

# Mount the container
sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(f"blobfuse2 mount --config-file={config_path} {mount_path}")

# Use the mount
response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()
const sandbox = await daytona.create({ snapshot: 'fuse-azure' })
const mountPath = '/home/daytona/azure'
const configPath = '/home/daytona/.blobfuse2.yaml'

const account = process.env.AZURE_STORAGE_ACCOUNT!
const container = process.env.AZURE_STORAGE_CONTAINER!
const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY!

const config = `allow-other: true
logging:
  type: syslog
  level: log_warning
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage
azstorage:
  type: block
  account-name: ${account}
  container: ${container}
  endpoint: https://${account}.blob.core.windows.net
  auth-type: key
  account-key: ${accountKey}`

await sandbox.fs.uploadFile(Buffer.from(config), configPath)
await sandbox.process.executeCommand(`chmod 600 ${configPath}`)

// Mount the container
await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(`blobfuse2 mount --config-file=${configPath} ${mountPath}`)

// Use the mount
const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create(
  Daytona::CreateSandboxFromSnapshotParams.new(snapshot: 'fuse-azure')
)
mount_path = '/home/daytona/azure'
config_path = '/home/daytona/.blobfuse2.yaml'

account = ENV.fetch('AZURE_STORAGE_ACCOUNT')
container = ENV.fetch('AZURE_STORAGE_CONTAINER')
account_key = ENV.fetch('AZURE_STORAGE_ACCOUNT_KEY')

config = <<~YAML
  allow-other: true
  logging:
    type: syslog
    level: log_warning
  components:
    - libfuse
    - block_cache
    - attr_cache
    - azstorage
  azstorage:
    type: block
    account-name: #{account}
    container: #{container}
    endpoint: https://#{account}.blob.core.windows.net
    auth-type: key
    account-key: #{account_key}
YAML

sandbox.fs.upload_file(config, config_path)
sandbox.process.exec(command: "chmod 600 #{config_path}")

# Mount the container
sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(command: "blobfuse2 mount --config-file=#{config_path} #{mount_path}")

# Use the mount
response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result
import (
	"context"
	"fmt"
	"log"
	"os"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona" "github.com/daytonaio/daytona/libs/sdk-go/pkg/types")
ctx := context.Background()client, err := daytona.NewClient()if err != nil { log.Fatal(err)}
sandbox, err := client.Create(ctx, types.SnapshotParams{ Snapshot: "fuse-azure",})if err != nil { log.Fatal(err)}
mountPath := "/home/daytona/azure"configPath := "/home/daytona/.blobfuse2.yaml"
account := os.Getenv("AZURE_STORAGE_ACCOUNT")container := os.Getenv("AZURE_STORAGE_CONTAINER")accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
config := fmt.Sprintf(`allow-other: truelogging: type: syslog level: log_warningcomponents: - libfuse - block_cache - attr_cache - azstorageazstorage: type: block account-name: %s container: %s endpoint: https://%s.blob.core.windows.net auth-type: key account-key: %s`, account, container, account, accountKey)
if err := sandbox.FileSystem.UploadFile(ctx, []byte(config), configPath); err != nil { log.Fatal(err)}if _, err := sandbox.Process.ExecuteCommand(ctx, "chmod 600 "+configPath); err != nil { log.Fatal(err)}
// Mount the containerif _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil { log.Fatal(err)}if _, err := sandbox.Process.ExecuteCommand(ctx, "blobfuse2 mount --config-file="+configPath+" "+mountPath); err != nil { log.Fatal(err)}
// Use the mountresponse, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)if err != nil { log.Fatal(err)}fmt.Println(response.Result)import io.daytona.sdk.Daytona;import io.daytona.sdk.Sandbox;import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
      params.setSnapshot("fuse-azure");
      Sandbox sandbox = daytona.create(params);

      String mountPath = "/home/daytona/azure";
      String configPath = "/home/daytona/.blobfuse2.yaml";

      String account = System.getenv("AZURE_STORAGE_ACCOUNT");
      String container = System.getenv("AZURE_STORAGE_CONTAINER");
      String accountKey = System.getenv("AZURE_STORAGE_ACCOUNT_KEY");

      String config = "allow-other: true\n"
          + "logging:\n"
          + "  type: syslog\n"
          + "  level: log_warning\n"
          + "components:\n"
          + "  - libfuse\n"
          + "  - block_cache\n"
          + "  - attr_cache\n"
          + "  - azstorage\n"
          + "azstorage:\n"
          + "  type: block\n"
          + "  account-name: " + account + "\n"
          + "  container: " + container + "\n"
          + "  endpoint: https://" + account + ".blob.core.windows.net\n"
          + "  auth-type: key\n"
          + "  account-key: " + accountKey + "\n";

      sandbox.fs.uploadFile(config.getBytes(StandardCharsets.UTF_8), configPath);
      sandbox.getProcess().executeCommand("chmod 600 " + configPath);

      // Mount the container
      sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
      sandbox.getProcess().executeCommand(
          "blobfuse2 mount --config-file=" + configPath + " " + mountPath);

      // Use the mount
      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Runtime install
Start from a default sandbox and install blobfuse2 during startup, then write the config and mount the container. This is useful for quick validation and experiments, at the cost of slower cold starts and repeated setup on every sandbox launch.
import os

from daytona import CreateSandboxBaseParams, Daytona

daytona = Daytona()

sandbox = daytona.create(CreateSandboxBaseParams())

# Install blobfuse2
sandbox.process.exec(
    "sudo apt-get update "
    "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget")
sandbox.process.exec(
    "wget -qO- https://packages.microsoft.com/keys/microsoft.asc "
    "| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg")
sandbox.process.exec(
    'echo "deb [arch=$(dpkg --print-architecture) '
    'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] '
    'https://packages.microsoft.com/debian/12/prod bookworm main" '
    "| sudo tee /etc/apt/sources.list.d/microsoft-prod.list "
    "&& sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3")
# libfuse3.so.3 compat symlink for Trixie
sandbox.process.exec(
    'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null '
    "| sort -V | tail -1) "
    '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" '
    "&& sudo ldconfig")

# Build config and mount
mount_path = "/home/daytona/azure"
config_path = "/home/daytona/.blobfuse2.yaml"
account = os.environ["AZURE_STORAGE_ACCOUNT"]
container = os.environ["AZURE_STORAGE_CONTAINER"]
account_key = os.environ["AZURE_STORAGE_ACCOUNT_KEY"]

config = f"""\
allow-other: true
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage
azstorage:
  type: block
  account-name: {account}
  container: {container}
  endpoint: https://{account}.blob.core.windows.net
  auth-type: key
  account-key: {account_key}"""

sandbox.fs.upload_file(config.encode(), config_path)
sandbox.process.exec(f"chmod 600 {config_path}")
sandbox.process.exec(f"mkdir -p {mount_path}")
sandbox.process.exec(f"blobfuse2 mount --config-file={config_path} {mount_path}")

response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)
import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()

const sandbox = await daytona.create()

// Install blobfuse2
await sandbox.process.executeCommand(
  'sudo apt-get update ' +
    '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget',
)
await sandbox.process.executeCommand(
  'wget -qO- https://packages.microsoft.com/keys/microsoft.asc ' +
    '| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg',
)
await sandbox.process.executeCommand(
  'echo "deb [arch=$(dpkg --print-architecture) ' +
    'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] ' +
    'https://packages.microsoft.com/debian/12/prod bookworm main" ' +
    '| sudo tee /etc/apt/sources.list.d/microsoft-prod.list ' +
    '&& sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3',
)
// libfuse3.so.3 compat symlink for Trixie
await sandbox.process.executeCommand(
  'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null ' +
    '| sort -V | tail -1) ' +
    '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" ' +
    '&& sudo ldconfig',
)

// Build config and mount
const mountPath = '/home/daytona/azure'
const configPath = '/home/daytona/.blobfuse2.yaml'
const account = process.env.AZURE_STORAGE_ACCOUNT!
const container = process.env.AZURE_STORAGE_CONTAINER!
const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY!

const config = `allow-other: true
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage
azstorage:
  type: block
  account-name: ${account}
  container: ${container}
  endpoint: https://${account}.blob.core.windows.net
  auth-type: key
  account-key: ${accountKey}`

await sandbox.fs.uploadFile(Buffer.from(config), configPath)
await sandbox.process.executeCommand(`chmod 600 ${configPath}`)
await sandbox.process.executeCommand(`mkdir -p ${mountPath}`)
await sandbox.process.executeCommand(`blobfuse2 mount --config-file=${configPath} ${mountPath}`)

const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)

require 'daytona'
daytona = Daytona::Daytona.new

sandbox = daytona.create(Daytona::CreateSandboxBaseParams.new)

# Install blobfuse2
sandbox.process.exec(
  command: 'sudo apt-get update ' \
           '&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget'
)
sandbox.process.exec(
  command: 'wget -qO- https://packages.microsoft.com/keys/microsoft.asc ' \
           '| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg'
)
sandbox.process.exec(
  command: 'echo "deb [arch=$(dpkg --print-architecture) ' \
           'signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] ' \
           'https://packages.microsoft.com/debian/12/prod bookworm main" ' \
           '| sudo tee /etc/apt/sources.list.d/microsoft-prod.list ' \
           '&& sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3'
)
# libfuse3.so.3 compat symlink for Trixie
sandbox.process.exec(
  command: 'src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null ' \
           '| sort -V | tail -1) ' \
           '&& sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" ' \
           '&& sudo ldconfig'
)

# Build config and mount
mount_path = '/home/daytona/azure'
config_path = '/home/daytona/.blobfuse2.yaml'
account = ENV.fetch('AZURE_STORAGE_ACCOUNT')
container = ENV.fetch('AZURE_STORAGE_CONTAINER')
account_key = ENV.fetch('AZURE_STORAGE_ACCOUNT_KEY')

config = <<~YAML
  allow-other: true
  components:
    - libfuse
    - block_cache
    - attr_cache
    - azstorage
  azstorage:
    type: block
    account-name: #{account}
    container: #{container}
    endpoint: https://#{account}.blob.core.windows.net
    auth-type: key
    account-key: #{account_key}
YAML

sandbox.fs.upload_file(config, config_path)
sandbox.process.exec(command: "chmod 600 #{config_path}")
sandbox.process.exec(command: "mkdir -p #{mount_path}")
sandbox.process.exec(command: "blobfuse2 mount --config-file=#{config_path} #{mount_path}")

response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result

import (
    "context"
    "fmt"
    "log"
    "os"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
    log.Fatal(err)
}

sandbox, err := client.Create(ctx, types.SnapshotParams{})
if err != nil {
    log.Fatal(err)
}

// Install blobfuse2
if _, err := sandbox.Process.ExecuteCommand(ctx,
    "sudo apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget"); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
    "wget -qO- https://packages.microsoft.com/keys/microsoft.asc | "+
        "sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg"); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx,
    `echo "deb [arch=$(dpkg --print-architecture) `+
        `signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] `+
        `https://packages.microsoft.com/debian/12/prod bookworm main" | `+
        `sudo tee /etc/apt/sources.list.d/microsoft-prod.list && `+
        `sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3`); err != nil {
    log.Fatal(err)
}
// libfuse3.so.3 compat symlink for Trixie
if _, err := sandbox.Process.ExecuteCommand(ctx,
    `src=$(find /usr/lib /lib -name "libfuse3.so.3.*" -type f 2>/dev/null | sort -V | tail -1) && `+
        `sudo ln -sfn "$src" "$(dirname "$src")/libfuse3.so.3" && sudo ldconfig`); err != nil {
    log.Fatal(err)
}

// Build config and mount
mountPath := "/home/daytona/azure"
configPath := "/home/daytona/.blobfuse2.yaml"
account := os.Getenv("AZURE_STORAGE_ACCOUNT")
container := os.Getenv("AZURE_STORAGE_CONTAINER")
accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")

config := fmt.Sprintf(`allow-other: true
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage
azstorage:
  type: block
  account-name: %s
  container: %s
  endpoint: https://%s.blob.core.windows.net
  auth-type: key
  account-key: %s`, account, container, account, accountKey)

if err := sandbox.FileSystem.UploadFile(ctx, []byte(config), configPath); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "chmod 600 "+configPath); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p "+mountPath); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "blobfuse2 mount --config-file="+configPath+" "+mountPath); err != nil {
    log.Fatal(err)
}

response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Result)

import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;

public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      Sandbox sandbox = daytona.create(new CreateSandboxFromSnapshotParams());

      // Install blobfuse2
      sandbox.getProcess().executeCommand(
          "sudo apt-get update "
              + "&& sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg wget");
      sandbox.getProcess().executeCommand(
          "wget -qO- https://packages.microsoft.com/keys/microsoft.asc "
              + "| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/microsoft.gpg");
      sandbox.getProcess().executeCommand(
          "echo \"deb [arch=$(dpkg --print-architecture) "
              + "signed-by=/etc/apt/trusted.gpg.d/microsoft.gpg] "
              + "https://packages.microsoft.com/debian/12/prod bookworm main\" "
              + "| sudo tee /etc/apt/sources.list.d/microsoft-prod.list "
              + "&& sudo apt-get update && sudo apt-get install -y blobfuse2 fuse3");
      // libfuse3.so.3 compat symlink for Trixie
      sandbox.getProcess().executeCommand(
          "src=$(find /usr/lib /lib -name \"libfuse3.so.3.*\" -type f 2>/dev/null "
              + "| sort -V | tail -1) "
              + "&& sudo ln -sfn \"$src\" \"$(dirname \"$src\")/libfuse3.so.3\" "
              + "&& sudo ldconfig");

      // Build config and mount
      String mountPath = "/home/daytona/azure";
      String configPath = "/home/daytona/.blobfuse2.yaml";
      String account = System.getenv("AZURE_STORAGE_ACCOUNT");
      String container = System.getenv("AZURE_STORAGE_CONTAINER");
      String accountKey = System.getenv("AZURE_STORAGE_ACCOUNT_KEY");

      String config = "allow-other: true\n"
          + "components:\n"
          + "  - libfuse\n"
          + "  - block_cache\n"
          + "  - attr_cache\n"
          + "  - azstorage\n"
          + "azstorage:\n"
          + "  type: block\n"
          + "  account-name: " + account + "\n"
          + "  container: " + container + "\n"
          + "  endpoint: https://" + account + ".blob.core.windows.net\n"
          + "  auth-type: key\n"
          + "  account-key: " + accountKey + "\n";

      sandbox.fs.uploadFile(config.getBytes(StandardCharsets.UTF_8), configPath);
      sandbox.getProcess().executeCommand("chmod 600 " + configPath);
      sandbox.getProcess().executeCommand("mkdir -p " + mountPath);
      sandbox.getProcess().executeCommand(
          "blobfuse2 mount --config-file=" + configPath + " " + mountPath);

      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Mount a MesaFS filesystem
MesaFS ↗ is an agent-native versioned filesystem from Mesa, purpose-built for the same workloads Daytona sandboxes run — parallel agent swarms, shared working memory, structured artifacts, and long-lived state across runs. With MesaFS, instead of mounting a cloud bucket, you mount a Mesa repository: a Git-compatible versioned working directory with sub-50ms reads/writes, instant fork/branch operations, and unlimited concurrent writers.
The Mesa setup follows the same pattern as the bucket providers but uses the Mesa CLI rather than a FUSE-specific tool: install the CLI in your sandbox, authenticate with your API key, and run mesa mount --daemonize to mount your repos at /home/daytona/mesa/mnt/<org>/<repo>.
Credentials — set MESA_API_KEY and MESA_ORG (your Mesa organization slug) in your local environment. The snippets below pass them into the sandbox via envVars, and the Mesa CLI reads them from there.
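Before running any of the snippets below, export both variables in the shell that launches your host process. A minimal sketch — the values shown here are placeholders, not real credentials:

```shell
# Placeholders only — substitute your real Mesa API key and org slug
export MESA_API_KEY="mesa_key_placeholder"
export MESA_ORG="acme"
```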
Pre-built snapshot
Build a snapshot with the Mesa CLI preinstalled, then launch Mesa-enabled sandboxes from that snapshot. You still authenticate and mount at runtime, but installation is no longer part of each sandbox startup sequence.
Build a snapshot
Create a reusable snapshot that installs the Mesa CLI and enables the FUSE user_allow_other setting. Sandboxes launched from fuse-mesa can then authenticate and mount repos without repeating install work.
from daytona import CreateSnapshotParams, Daytona, Image

daytona = Daytona()

image = (
    Image.base("daytonaio/sandbox")
    .run_commands(
        "curl -fsSL https://mesa.dev/install.sh | sh",
        "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf",
    )
)

daytona.snapshot.create(
    CreateSnapshotParams(name="fuse-mesa", image=image),
    on_logs=lambda chunk: print(chunk, end="", flush=True),
)

import { Daytona, Image } from '@daytona/sdk'
const daytona = new Daytona()

const image = Image.base('daytonaio/sandbox').runCommands(
  'curl -fsSL https://mesa.dev/install.sh | sh',
  "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf",
)

await daytona.snapshot.create(
  { name: 'fuse-mesa', image },
  { onLogs: console.log },
)

require 'daytona'
daytona = Daytona::Daytona.new

image = Daytona::Image
  .base('daytonaio/sandbox')
  .run_commands(
    'curl -fsSL https://mesa.dev/install.sh | sh',
    "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf"
  )

daytona.snapshot.create(
  Daytona::CreateSnapshotParams.new(name: 'fuse-mesa', image: image),
  on_logs: proc { |chunk| print(chunk) }
)

import (
    "context"
    "fmt"
    "log"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
    log.Fatal(err)
}

image := daytona.Base("daytonaio/sandbox").
    Run("curl -fsSL https://mesa.dev/install.sh | sh").
    Run(`sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf`)

_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
    Name:  "fuse-mesa",
    Image: image,
})
if err != nil {
    log.Fatal(err)
}
for line := range logChan {
    fmt.Print(line)
}

import io.daytona.sdk.Daytona;
import io.daytona.sdk.Image;
public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      Image image = Image.base("daytonaio/sandbox")
          .runCommands(
              "curl -fsSL https://mesa.dev/install.sh | sh",
              "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf"
          );

      daytona.snapshot().create("fuse-mesa", image, System.out::println);
    }
  }
}
Launch and mount
Pass MESA_API_KEY and your Mesa organization slug to the sandbox via envVars. Your code then writes a TOML config into the sandbox, authenticates the Mesa CLI, and mounts your repos at /home/daytona/mesa/mnt/<org>/<repo>.
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona

daytona = Daytona()

org = os.environ["MESA_ORG"]
repo = "my-workspace"
mount_path = f"/home/daytona/mesa/mnt/{org}/{repo}"
config_path = "/home/daytona/.config/mesa/config.toml"

sandbox = daytona.create(
    CreateSandboxFromSnapshotParams(
        snapshot="fuse-mesa",
        env_vars={
            "MESA_API_KEY": os.environ["MESA_API_KEY"],
            "MESA_ORG": org,
        },
    )
)

config = f'''mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.{org}]
'''

sandbox.process.exec(f"mkdir -p $(dirname {config_path})")
sandbox.fs.upload_file(config.encode(), config_path)

sandbox.process.exec("mesa auth set-key --org $MESA_ORG $MESA_API_KEY")
sandbox.process.exec("mesa mount --daemonize")

response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)

import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()

const org = process.env.MESA_ORG!
const repo = 'my-workspace'
const mountPath = `/home/daytona/mesa/mnt/${org}/${repo}`
const configPath = '/home/daytona/.config/mesa/config.toml'

const sandbox = await daytona.create({
  snapshot: 'fuse-mesa',
  envVars: {
    MESA_API_KEY: process.env.MESA_API_KEY!,
    MESA_ORG: org,
  },
})

const config = `mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.${org}]`

await sandbox.process.executeCommand(`mkdir -p $(dirname ${configPath})`)
await sandbox.fs.uploadFile(Buffer.from(config), configPath)

await sandbox.process.executeCommand('mesa auth set-key --org $MESA_ORG $MESA_API_KEY')
await sandbox.process.executeCommand('mesa mount --daemonize')

const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)

require 'daytona'
daytona = Daytona::Daytona.new

org = ENV.fetch('MESA_ORG')
repo = 'my-workspace'
mount_path = "/home/daytona/mesa/mnt/#{org}/#{repo}"
config_path = '/home/daytona/.config/mesa/config.toml'

sandbox = daytona.create(
  Daytona::CreateSandboxFromSnapshotParams.new(
    snapshot: 'fuse-mesa',
    env_vars: {
      'MESA_API_KEY' => ENV.fetch('MESA_API_KEY'),
      'MESA_ORG' => org
    }
  )
)

config = <<~TOML
  mount-point = "/home/daytona/mesa/mnt"

  [secrets]
  backend = "plaintext-file"

  [organizations.#{org}]
TOML

sandbox.process.exec(command: "mkdir -p $(dirname #{config_path})")
sandbox.fs.upload_file(config, config_path)

sandbox.process.exec(command: 'mesa auth set-key --org $MESA_ORG $MESA_API_KEY')
sandbox.process.exec(command: 'mesa mount --daemonize')

response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result

import (
    "context"
    "fmt"
    "log"
    "os"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
    log.Fatal(err)
}

org := os.Getenv("MESA_ORG")
repo := "my-workspace"
mountPath := fmt.Sprintf("/home/daytona/mesa/mnt/%s/%s", org, repo)
configPath := "/home/daytona/.config/mesa/config.toml"

sandbox, err := client.Create(ctx, types.SnapshotParams{
    Snapshot: "fuse-mesa",
    SandboxBaseParams: types.SandboxBaseParams{
        EnvVars: map[string]string{
            "MESA_API_KEY": os.Getenv("MESA_API_KEY"),
            "MESA_ORG":     org,
        },
    },
})
if err != nil {
    log.Fatal(err)
}

config := fmt.Sprintf(`mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.%s]`, org)

if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p $(dirname "+configPath+")"); err != nil {
    log.Fatal(err)
}
if err := sandbox.FileSystem.UploadFile(ctx, []byte(config), configPath); err != nil {
    log.Fatal(err)
}

if _, err := sandbox.Process.ExecuteCommand(ctx, "mesa auth set-key --org $MESA_ORG $MESA_API_KEY"); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "mesa mount --daemonize"); err != nil {
    log.Fatal(err)
}

response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Result)

import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      String org = System.getenv("MESA_ORG");
      String repo = "my-workspace";
      String mountPath = "/home/daytona/mesa/mnt/" + org + "/" + repo;
      String configPath = "/home/daytona/.config/mesa/config.toml";

      CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
      params.setSnapshot("fuse-mesa");
      params.setEnvVars(Map.of(
          "MESA_API_KEY", System.getenv("MESA_API_KEY"),
          "MESA_ORG", org
      ));
      Sandbox sandbox = daytona.create(params);

      String config = "mount-point = \"/home/daytona/mesa/mnt\"\n\n"
          + "[secrets]\n"
          + "backend = \"plaintext-file\"\n\n"
          + "[organizations." + org + "]\n";

      sandbox.getProcess().executeCommand("mkdir -p $(dirname " + configPath + ")");
      sandbox.fs.uploadFile(config.getBytes(StandardCharsets.UTF_8), configPath);

      sandbox.getProcess().executeCommand(
          "mesa auth set-key --org $MESA_ORG $MESA_API_KEY");
      sandbox.getProcess().executeCommand("mesa mount --daemonize");

      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Runtime install
Start from a default sandbox and install the Mesa CLI during startup before configuring auth and running mesa mount --daemonize. This is useful when iterating quickly on mount behavior, with the tradeoff of slower cold starts for each sandbox.
import os

from daytona import CreateSandboxBaseParams, Daytona

daytona = Daytona()

org = os.environ["MESA_ORG"]
repo = "my-workspace"
mount_path = f"/home/daytona/mesa/mnt/{org}/{repo}"
config_path = "/home/daytona/.config/mesa/config.toml"

sandbox = daytona.create(
    CreateSandboxBaseParams(
        env_vars={
            "MESA_API_KEY": os.environ["MESA_API_KEY"],
            "MESA_ORG": org,
        },
    )
)

sandbox.process.exec("curl -fsSL https://mesa.dev/install.sh | sh")
sandbox.process.exec(
    "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf"
)

config = f'''mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.{org}]
'''

sandbox.process.exec(f"mkdir -p $(dirname {config_path})")
sandbox.fs.upload_file(config.encode(), config_path)

sandbox.process.exec("mesa auth set-key --org $MESA_ORG $MESA_API_KEY")
sandbox.process.exec("mesa mount --daemonize")

response = sandbox.process.exec(f"ls {mount_path}")
print(response.result)

import { Daytona } from '@daytona/sdk'
const daytona = new Daytona()

const org = process.env.MESA_ORG!
const repo = 'my-workspace'
const mountPath = `/home/daytona/mesa/mnt/${org}/${repo}`
const configPath = '/home/daytona/.config/mesa/config.toml'

const sandbox = await daytona.create({
  envVars: {
    MESA_API_KEY: process.env.MESA_API_KEY!,
    MESA_ORG: org,
  },
})

await sandbox.process.executeCommand('curl -fsSL https://mesa.dev/install.sh | sh')
await sandbox.process.executeCommand(
  "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf",
)

const config = `mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.${org}]`

await sandbox.process.executeCommand(`mkdir -p $(dirname ${configPath})`)
await sandbox.fs.uploadFile(Buffer.from(config), configPath)

await sandbox.process.executeCommand('mesa auth set-key --org $MESA_ORG $MESA_API_KEY')
await sandbox.process.executeCommand('mesa mount --daemonize')

const response = await sandbox.process.executeCommand(`ls ${mountPath}`)
console.log(response.result)

require 'daytona'
daytona = Daytona::Daytona.new

org = ENV.fetch('MESA_ORG')
repo = 'my-workspace'
mount_path = "/home/daytona/mesa/mnt/#{org}/#{repo}"
config_path = '/home/daytona/.config/mesa/config.toml'

sandbox = daytona.create(
  Daytona::CreateSandboxBaseParams.new(
    env_vars: {
      'MESA_API_KEY' => ENV.fetch('MESA_API_KEY'),
      'MESA_ORG' => org
    }
  )
)

sandbox.process.exec(command: 'curl -fsSL https://mesa.dev/install.sh | sh')
sandbox.process.exec(
  command: "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf"
)

config = <<~TOML
  mount-point = "/home/daytona/mesa/mnt"

  [secrets]
  backend = "plaintext-file"

  [organizations.#{org}]
TOML

sandbox.process.exec(command: "mkdir -p $(dirname #{config_path})")
sandbox.fs.upload_file(config, config_path)

sandbox.process.exec(command: 'mesa auth set-key --org $MESA_ORG $MESA_API_KEY')
sandbox.process.exec(command: 'mesa mount --daemonize')

response = sandbox.process.exec(command: "ls #{mount_path}")
puts response.result

import (
    "context"
    "fmt"
    "log"
    "os"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

ctx := context.Background()
client, err := daytona.NewClient()
if err != nil {
    log.Fatal(err)
}

org := os.Getenv("MESA_ORG")
repo := "my-workspace"
mountPath := fmt.Sprintf("/home/daytona/mesa/mnt/%s/%s", org, repo)
configPath := "/home/daytona/.config/mesa/config.toml"

sandbox, err := client.Create(ctx, types.SnapshotParams{
    SandboxBaseParams: types.SandboxBaseParams{
        EnvVars: map[string]string{
            "MESA_API_KEY": os.Getenv("MESA_API_KEY"),
            "MESA_ORG":     org,
        },
    },
})
if err != nil {
    log.Fatal(err)
}

if _, err := sandbox.Process.ExecuteCommand(ctx, "curl -fsSL https://mesa.dev/install.sh | sh"); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, `sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf`); err != nil {
    log.Fatal(err)
}

config := fmt.Sprintf(`mount-point = "/home/daytona/mesa/mnt"

[secrets]
backend = "plaintext-file"

[organizations.%s]`, org)

if _, err := sandbox.Process.ExecuteCommand(ctx, "mkdir -p $(dirname "+configPath+")"); err != nil {
    log.Fatal(err)
}
if err := sandbox.FileSystem.UploadFile(ctx, []byte(config), configPath); err != nil {
    log.Fatal(err)
}

if _, err := sandbox.Process.ExecuteCommand(ctx, "mesa auth set-key --org $MESA_ORG $MESA_API_KEY"); err != nil {
    log.Fatal(err)
}
if _, err := sandbox.Process.ExecuteCommand(ctx, "mesa mount --daemonize"); err != nil {
    log.Fatal(err)
}

response, err := sandbox.Process.ExecuteCommand(ctx, "ls "+mountPath)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Result)

import io.daytona.sdk.Daytona;
import io.daytona.sdk.Sandbox;
import io.daytona.sdk.model.CreateSandboxFromSnapshotParams;
import io.daytona.sdk.model.ExecuteResponse;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class App {
  public static void main(String[] args) {
    try (Daytona daytona = new Daytona()) {
      String org = System.getenv("MESA_ORG");
      String repo = "my-workspace";
      String mountPath = "/home/daytona/mesa/mnt/" + org + "/" + repo;
      String configPath = "/home/daytona/.config/mesa/config.toml";

      CreateSandboxFromSnapshotParams params = new CreateSandboxFromSnapshotParams();
      params.setEnvVars(Map.of(
          "MESA_API_KEY", System.getenv("MESA_API_KEY"),
          "MESA_ORG", org
      ));
      Sandbox sandbox = daytona.create(params);

      sandbox.getProcess().executeCommand(
          "curl -fsSL https://mesa.dev/install.sh | sh");
      sandbox.getProcess().executeCommand(
          "sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf");

      String config = "mount-point = \"/home/daytona/mesa/mnt\"\n\n"
          + "[secrets]\n"
          + "backend = \"plaintext-file\"\n\n"
          + "[organizations." + org + "]\n";

      sandbox.getProcess().executeCommand("mkdir -p $(dirname " + configPath + ")");
      sandbox.fs.uploadFile(config.getBytes(StandardCharsets.UTF_8), configPath);

      sandbox.getProcess().executeCommand(
          "mesa auth set-key --org $MESA_ORG $MESA_API_KEY");
      sandbox.getProcess().executeCommand("mesa mount --daemonize");

      ExecuteResponse response = sandbox.getProcess().executeCommand("ls " + mountPath);
      System.out.println(response.getResult());
    }
  }
}
Production: scoped ephemeral keys
For non-test workloads, Mesa recommends minting a short-lived, scoped API key per sandbox session rather than passing your long-lived MESA_API_KEY into the sandbox. Use the Mesa SDK ↗ on your trusted host to derive an ephemeral key from your long-lived one — the long-lived key never leaves your host process. Mesa SDKs are available for TypeScript, Python, and Rust; for other languages, use the Mesa REST API ↗ directly.
import asyncio
import os

from daytona import CreateSandboxFromSnapshotParams, Daytona
from mesa_sdk import Mesa

async def mint_ephemeral_key() -> str:
    async with Mesa(api_key=os.environ["MESA_API_KEY"], org=os.environ["MESA_ORG"]) as mesa:
        key = await mesa.api_keys.create(
            name="sandbox-session",
            scopes=["read", "write"],
            expires_in_seconds=3600,
        )
        return key.key

ephemeral_key = asyncio.run(mint_ephemeral_key())

daytona = Daytona()
sandbox = daytona.create(
    CreateSandboxFromSnapshotParams(
        snapshot="fuse-mesa",
        env_vars={
            "MESA_API_KEY": ephemeral_key,
            "MESA_ORG": os.environ["MESA_ORG"],
        },
    )
)

import { Daytona } from '@daytona/sdk'
import { Mesa } from '@mesadev/sdk'

const mesa = new Mesa({ apiKey: process.env.MESA_API_KEY!, org: process.env.MESA_ORG! })

const ephemeralKey = await mesa.apiKeys.create({
  name: 'sandbox-session',
  scopes: ['read', 'write'],
  expires_in_seconds: 3600,
})

const daytona = new Daytona()
const sandbox = await daytona.create({
  snapshot: 'fuse-mesa',
  envVars: {
    MESA_API_KEY: ephemeralKey.key,
    MESA_ORG: process.env.MESA_ORG!,
  },
})

The rest of the launch flow (writing the TOML config, mesa auth set-key, mesa mount --daemonize) is unchanged — the sandbox doesn’t know whether the key it received is long-lived or ephemeral.
For repo-scoped or path-scoped keys, see Mesa’s auth and permissions guide ↗. For the full integration recipe, see Mesa’s Daytona guide ↗.
Unmount
When a sandbox is deleted via daytona.delete(sandbox), the container teardown automatically removes any active FUSE mounts and shuts down their daemons. For normal cleanup, this is all you need — no manual unmount required.
To free a mount path during a sandbox’s lifetime (for example, to remount with different credentials or before persisting a workspace archive), relocate the mount onto a throwaway path:
sudo mkdir -p /tmp/.fuse-defunct-$$
sudo mount --move <your-mount-path> /tmp/.fuse-defunct-$$

After this, your original mount path is free for remounting. The FUSE daemon stays alive serving the mount at the new path; both the relocated mount and the daemon are cleaned up automatically when the sandbox is deleted.
This works for any FUSE-based mount — verified against mount-s3, gcsfuse, and blobfuse2.
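When remounting during a sandbox's lifetime, it can help to confirm whether a path is currently a mount point before acting on it. A minimal sketch using the standard `mountpoint` utility from util-linux (assumed present in the sandbox image); `check_mounted` is a hypothetical helper name, not part of any SDK:

```shell
# Hypothetical helper: report whether a path is an active mount point.
# mountpoint -q exits 0 for a mount point, non-zero otherwise.
check_mounted() {
  if mountpoint -q "$1"; then
    echo "mounted"
  else
    echo "not mounted"
  fi
}

check_mounted /home/daytona/azure
```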