Serving files is the bread and butter of Cachix. Fetching store paths from Cachix is quick: downloads are served directly from fast distributed cloud storage. Uploads, however, have been a different story.
Until now, each upload had to pass through one of our servers, making our ingest bandwidth a scarce and highly contested resource during peak times.
Many of our users rely on Cachix in their CI pipelines, where slower uploads mean longer CI runs and a worse developer experience. That's why we've been busy reorganizing our internals to remove this bottleneck.
Starting with Cachix v1.3, we’re enabling multipart uploads directly to storage. You’ll enjoy:
- Virtually unlimited bandwidth for uploads.
- Improved throughput for large files, which can now be uploaded in multiple parts in parallel.
- Improved resiliency to intermittent network errors, especially for large single uploads.
- Improved geographic performance using Cloudflare’s edge network.
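To give a rough idea of what this looks like under the hood, here's a minimal sketch of the general multipart-upload pattern in Python: the file is split into fixed-size parts, each part is PUT to a presigned storage URL in parallel, and the resulting ETags are used to finalize the upload. The part size, `part_urls`, and helper names below are illustrative assumptions, not the actual Cachix client or API.

```python
# Illustrative sketch only: the general shape of a multipart upload, not the
# actual Cachix client. `part_urls` stands in for presigned part URLs that a
# backend would hand out.
from concurrent.futures import ThreadPoolExecutor

import requests

CHUNK_SIZE = 64 * 1024 * 1024  # hypothetical 64 MiB part size


def read_parts(path: str, chunk_size: int = CHUNK_SIZE):
    """Yield the file as fixed-size chunks."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk


def upload_part(url: str, data: bytes) -> str:
    """PUT one part to its presigned URL and return the ETag used to finalize the upload."""
    response = requests.put(url, data=data)
    response.raise_for_status()
    return response.headers["ETag"]


def upload_file(path: str, part_urls: list[str], jobs: int = 8) -> list[str]:
    """Upload all parts in parallel and collect their ETags."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(upload_part, part_urls, read_parts(path)))
```

Because each part is an independent request, a failed part can be retried on its own instead of restarting the whole upload, which is where the improved resilience to intermittent network errors comes from.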
To help everyone take advantage of the improved upload speeds, we're conservatively increasing the default number of upload jobs in the Cachix CLI to 8. You can always push that number even higher with the `--jobs` option.
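For example, assuming a binary cache named `mycache` and a build output at `./result`, you could run something like `cachix push --jobs 16 mycache ./result` to use 16 parallel upload jobs.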
So how big of an improvement are we talking about? Here are a couple of tests recorded on a 4 vCPU Hetzner machine.

Pushing a single 5 GB file:

| Before | After |
|---|---|
| 14m 9s | 2m 13s |
Pushing a build of Firefox and its dependencies (1 GB closure size):

| Before | After |
|---|---|
| 1m 54s | 1m 0s |
To get the benefits of faster uploads, install Cachix 1.3.
Happy Nixing!
Sander