
© 2025 Foundry Technologies, Inc.


File shares

Provisioning file shares for your instances

File share storage is currently in private preview and requires a quota to be added to your account. For access, contact your account team via Slack or email support@mlfoundry.com.

File shares provide convenient, high-performance file storage for your instances. File shares selected on instance creation are automatically mounted to your instances as a directory under /mnt, and are readable and writable from multiple instances concurrently.

File shares are read-optimized, so in practice you are likely to see higher read speeds than write speeds.

Provisioning file share storage

To provision file share storage, go to Storage, click + Create storage and select File share for the type field.

  1. Select the region you would like to provision in.

  2. Select the size for your disk. Currently, each file share has a maximum size of 15 TB.

When creating a file share, you must provide a unique Name, which is used as the name of the folder where the share is automatically mounted inside your instance. For example, if you name your file share "training-model", it will be mounted at /mnt/training-model after your instance launches.

Region           GPUs available
us-central1-a    NVIDIA H100
na-east1-a       NVIDIA A40
na-east1-b       NVIDIA A5000
eu-central1-a    NVIDIA A100

Attach storage to new reservations and spot bids

Provisioned storage is available for selection when creating a reservation or spot bid. You can attach as many file shares as needed from the same region; storage provisioned in other regions will not appear as an option.

Currently, it's not possible to shrink or expand a file share after it is created.

Attaching new file shares to existing instances

You can attach new file shares to existing instances while they are in a Paused state. If the instance has already booted at least once, the new file shares will not be mounted automatically. To mount them in the standard format, run the following commands:

# Name of the file share you attached (as shown in the console)
FILESHARE_NAME=<insert-name-here>
# Create the mount point and make it writable by all users
sudo mkdir /mnt/$FILESHARE_NAME
sudo chmod 777 /mnt/$FILESHARE_NAME
# Register the share in /etc/fstab so it mounts on every boot
echo "$FILESHARE_NAME /mnt/$FILESHARE_NAME virtiofs defaults,nofail 0 1" | sudo tee -a /etc/fstab
# Mount everything listed in /etc/fstab
sudo mount -a
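Before running these commands on a live instance, it can help to preview the exact fstab entry they would append. A minimal sketch, using a hypothetical file share named training-model:

```shell
# "training-model" is a hypothetical name; substitute your own file share's Name.
FILESHARE_NAME=training-model
MOUNT_POINT="/mnt/$FILESHARE_NAME"
# The line that would be appended to /etc/fstab:
FSTAB_LINE="$FILESHARE_NAME $MOUNT_POINT virtiofs defaults,nofail 0 1"
echo "$FSTAB_LINE"
```

After running mount -a on the instance itself, `findmnt /mnt/$FILESHARE_NAME` confirms the share is mounted.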

Accessing the file share from your instance

You can list all file shares attached to your instance with ls:

$ ls /mnt

You can read and write to any attached file share folders.
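Once mounted, a file share behaves like any local directory. A minimal Python sketch that writes a file and reads it back (using a temporary directory as a stand-in for a /mnt/<name> path, so it runs anywhere):

```python
import os
import tempfile

# Stand-in for a mounted file share directory such as /mnt/training-model;
# a temporary directory is used so this sketch runs on any machine.
share_dir = tempfile.mkdtemp()

checkpoint = os.path.join(share_dir, "checkpoint.txt")
with open(checkpoint, "w") as f:
    f.write("step=1000\n")

# Any instance with the same file share attached would see the same file.
with open(checkpoint) as f:
    contents = f.read().strip()

print(contents)  # step=1000
```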

Performance & benchmarks

Actual performance for your file share is likely to vary greatly depending on your workload and instance configuration; however, below are representative benchmarks for a single 8x H100 instance:

Sequential Read BW    Read IOPS (4 KiB)    Sequential Write BW    Write IOPS (4 KiB)
12 Gbps               12,000               5 Gbps                 4,500

The numbers above are a snapshot meant for guidance. We are constantly making improvements to optimize performance. You can also run the benchmarks yourself from your instances using the following:

# Sequential write bandwidth
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size=20G --numjobs=32 --direct=1 --iodepth=8 --rw=write --bs=1m
# Sequential read bandwidth
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size=20G --numjobs=32 --direct=1 --iodepth=8 --rw=read --bs=1m
# Write IOPS (4 KiB blocks)
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size=20G --numjobs=32 --direct=1 --iodepth=8 --rw=write --bs=4k
# Read IOPS (4 KiB blocks)
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size=20G --numjobs=32 --direct=1 --iodepth=8 --rw=read --bs=4k
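Note that the benchmark table reports bandwidth in gigabits per second (Gbps); dividing by 8 gives the more familiar gigabytes per second. A quick conversion of the representative figures:

```python
# Convert the table's Gbps figures to GB/s (8 bits per byte).
read_bw_gbps = 12
write_bw_gbps = 5

read_bw_gb_per_s = read_bw_gbps / 8    # 1.5 GB/s
write_bw_gb_per_s = write_bw_gbps / 8  # 0.625 GB/s

print(read_bw_gb_per_s, write_bw_gb_per_s)  # 1.5 0.625
```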

Quotas

Each project has a total storage capacity quota, aggregated across all regions. Contact your account team via Slack or email support@mlfoundry.com to increase your quota.

