Moving S3 from Backblaze B2 to Scaleway

After starting to host my own private Mastodon instance it became clear that disk usage was going to be a problem. I eventually moved all the assets to an S3-compatible Object Storage.

Now I'm moving away from Backblaze B2 as an S3 provider to Scaleway. I wanted to highlight why and how I did it, which is trivial once you know what to do.

Why?

It's funny that it all came down to a currency and payment issue (not pricing). Both providers are super cheap and have generous free tiers.

  • Backblaze charges in USD ❌
  • Scaleway charges in EUR ✅
  • Backblaze speed for deleting objects: slow ❌
  • Scaleway speed for deleting objects: faster ✅
  • Both providers have data centers in the EU ✅✅

My credit card is linked to a Spanish bank and they charge me a fee for every purchase I make in currencies other than EUR. So the actual S3 Object Storage cost was being inflated by my bank's fees. That's no good.
In hindsight, I should've known this when I signed up for Backblaze.

Want to know how I migrated? Here's how:

1. Create a snapshot of the bucket

From B2 you can create a zip (snapshot) of the bucket, then just download it. It might take a few hours to generate depending on your bucket size, though. Mine was about 50 GB and took around 5 hours.
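
If you'd rather avoid the zip, B2 also exposes an S3-compatible endpoint, so in principle the AWS CLI can pull the bucket straight down to a local folder. A minimal sketch, assuming a hypothetical endpoint (check your bucket details for the real one):

# Pull every object from the B2 bucket into a local folder
aws s3 sync s3://my-bucket ./local-backup --endpoint-url https://s3.us-west-004.backblazeb2.com

I went with the zip snapshot, though, since it was a one-off download.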

2. Point Mastodon to the new S3 bucket

Since mine is a single-user private instance I could do this step right away. While waiting for the zip with the existing assets to be generated, I updated ~/live/.env.production with the new values from the Scaleway bucket (as well as the new API credentials generated within Scaleway, so Mastodon can upload objects) (More on this here).

โš ๏ธ There's a difference on how to set the S3 configuration on your Mastodon instance for Scaleway, here's how it should look like (you may change nl-ams for your bucket region):

S3_ENDPOINT=https://s3.nl-ams.scw.cloud
S3_HOSTNAME=s3.nl-ams.scw.cloud
S3_BUCKET=your-bucket-name
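
For context, here's a minimal sketch of what the full S3 block in .env.production could look like, with placeholder bucket name and credentials (the exact set of variables may differ slightly depending on your Mastodon version):

S3_ENABLED=true
S3_BUCKET=your-bucket-name
S3_REGION=nl-ams
S3_ENDPOINT=https://s3.nl-ams.scw.cloud
S3_HOSTNAME=s3.nl-ams.scw.cloud
AWS_ACCESS_KEY_ID=your-scaleway-access-key
AWS_SECRET_ACCESS_KEY=your-scaleway-secret-key

After saving the file, the Mastodon web and Sidekiq services need a restart to pick up the new values.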

By doing this, all new avatars and headers fetched by Mastodon from this point on started being uploaded to the new bucket.

3. AWS S3 CLI to the rescue

Now that you have all your assets on your machine (the zip file), it's time to unzip them and upload them to the new provider.

Since all these Object Storage providers support the Amazon S3 API you can make use of AWS's CLI to do all the hard work for you.

Here's the command I used, which syncs a local folder on my computer to the S3 bucket while setting the policy to public. ⚠️ This last part is very important because in Scaleway, even if the bucket is public, an uploaded file is set to private by default. Not sure why, but that's how it works.

aws s3 sync . s3://my-bucket --acl public-read

I used Scaleway's documentation to configure the aws client on my Mac. I only changed the default max_concurrent_requests to 50 because otherwise my Mac would complain "too many files open" 🤷‍♂️
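
For reference, this is roughly what that tweak looks like, assuming the nl-ams region and placeholder credentials (adjust for your own setup):

# ~/.aws/config
[default]
region = nl-ams
s3 =
  max_concurrent_requests = 50

# ~/.aws/credentials
[default]
aws_access_key_id = your-scaleway-access-key
aws_secret_access_key = your-scaleway-secret-key

If your CLI isn't already pointed at Scaleway, you can also pass the endpoint explicitly on each command, e.g. aws s3 sync . s3://my-bucket --acl public-read --endpoint-url https://s3.nl-ams.scw.cloud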

4. Update the CDN's origin

Because I'm using the fantastic (and free, thanks to its insanely high free tier) AWS CloudFront, I just needed to go into the distribution settings and change the origin to the new S3 bucket.

Done, now my CDN starts pulling from the new bucket.
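
I did this through the console, but for completeness the same change can in principle be scripted with the AWS CLI; a rough sketch (the distribution ID and file names are placeholders):

# Fetch the current config; the response includes an ETag plus the DistributionConfig
aws cloudfront get-distribution-config --id E1234567890ABC > dist.json
# Edit the origin DomainName inside DistributionConfig to the new bucket,
# e.g. your-bucket-name.s3.nl-ams.scw.cloud, save that object alone as dist-config.json,
# then push it back using the ETag from the first call:
aws cloudfront update-distribution --id E1234567890ABC \
  --distribution-config file://dist-config.json \
  --if-match ETAG_FROM_GET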

Conclusion

I was a little challenged at the beginning, but after some digging it turns out it's not so bad. The longest part is, of course, syncing all the assets from your local machine to S3, which can take hours.

Otherwise I'm very happy with the process and the change.

Comments

  1. @rolle thank you for the feedback! I don't fully know how that storage mount works, but you should be able to use a CLI to sync it to an S3 bucket. I've used the AWS CLI and had no problems moving from local to S3 and the other way around.

  2. @ricard I'm using Hetzner storagebox. I actually read your other blog post just now, great stuff! I'm not exactly sure how to move to S3 if I'm currently using a storagebox mount… Switching would lead to a lot of broken files? I'm not sure if there's a wise way to move/re-fetch media etc.? Any tips appreciated.

  3. Waqas says:

    Backblaze is painfully slow. I hosted images there and HTTPS requests were super slow. It took almost 1s to load an image.

    1. I never noticed as I had Amazon’s CDN in front of it.
      What I did notice however is how slow it was to delete objects. Although now that I’m at Scaleway it doesn’t seem to be fast either.

      On the same note there’s a PR to do batch objects deletion in the works: https://github.com/mastodon/mastodon/pull/23302

  4. Mastodon’s built-in CLI gives you the ability to clean attachments and previews from remote accounts, purging the disk cache. This is fantastic and you couldn’t…
