Using R2 storage (Cloudflare) with the S3 upload options

Hi HedgeDoc team,

(My version of HedgeDoc is: 1.9.7)

I am using S3-Storage for image uploads and it works fine.

But would you consider integrating Cloudflare's R2 storage in the future? It's cheaper, has a bigger free tier (10 GB, no egress costs), and is easier to set up than the AWS IAM console.

R2 storage is compatible with the S3 API. It is possible to configure HedgeDoc so that it uploads to R2 (via the custom_endpoint), but one more configurable path would be needed, because the custom endpoint on R2 is not the public URL for retrieving the objects.
I am using this configuration:

  - CMD_IMAGE_UPLOAD_TYPE=s3
  - CMD_S3_ENDPOINT=123abc.r2.cloudflarestorage.com
  - CMD_S3_BUCKET=pad
  - CMD_S3_ACCESS_KEY_ID=abcde
  - CMD_S3_SECRET_ACCESS_KEY=12345
  - CMD_S3_REGION=auto
  - CMD_S3_PUBLIC_FILES=true
  - CMD_S3_FOLDER=hedgedoc-upload

It uploads the images, but then tries to retrieve them via https://123abc.r2.cloudflarestorage.com/pad/hedgedoc-upload/image.jpg.
Cloudflare, however, only makes them publicly available via https://pub-456xyz.r2.dev/hedgedoc-upload/image.jpg, or via a custom URL like https://files.example.com/hedgedoc-upload/image.jpg.

It would need one more configuration option (similar to s3-bucket-use in the Mastodon configuration):

- CMD_S3_ALIAS_HOST=files.example.com
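
To sketch the intended effect, here is a minimal TypeScript outline of the URL construction this option would enable. CMD_S3_ALIAS_HOST is the proposed, not-yet-existing option, publicImageUrl is a made-up helper name, and HedgeDoc 1's actual upload code differs:

  // Sketch: pick the public host for the generated image link.
  // aliasHost is the proposed CMD_S3_ALIAS_HOST; endpoint, bucket and
  // folder are the placeholder values from the configuration above.
  function publicImageUrl(key: string): string {
    const aliasHost = process.env.CMD_S3_ALIAS_HOST; // e.g. files.example.com
    if (aliasHost) {
      // R2: the bucket name is not part of the public URL path
      return `https://${aliasHost}/${key}`;
    }
    // current 1.x behaviour: endpoint + bucket + key
    return `https://${process.env.CMD_S3_ENDPOINT}/${process.env.CMD_S3_BUCKET}/${key}`;
  }

  // publicImageUrl('hedgedoc-upload/image.jpg')
  // -> https://files.example.com/hedgedoc-upload/image.jpg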

Thank you for HedgeDoc!
richard


Bumping this up, as that would be amazing 🙂

HedgeDoc 2 will use standard S3 getPresignedObjectUrl calls to obtain a URL directly from the server (PR was created yesterday). This should work better than the 1.x approach of concatenating paths together.
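
For illustration, a minimal sketch of that approach, assuming the minio JS client (where the call is named presignedGetObject); HedgeDoc 2's actual code may differ, and the credentials and hostnames are the placeholders from this thread:

  import { Client } from 'minio';

  const s3 = new Client({
    endPoint: '123abc.r2.cloudflarestorage.com',
    useSSL: true,
    accessKey: 'abcde',
    secretKey: '12345',
    region: 'auto',
  });

  // Ask the server to sign a GET URL instead of concatenating
  // endpoint + bucket + folder + filename by hand.
  s3.presignedGetObject('pad', 'hedgedoc-upload/image.jpg', 24 * 60 * 60)
    .then((url) => console.log(url)); // signed URL on the R2 endpoint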

As HedgeDoc 1 is maintenance-only, we're not going to add this functionality there. If you're forced to use R2, you could try to set up some kind of reverse proxy that corrects the paths.
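
For example, something like this untested nginx sketch could rewrite the private endpoint host in HedgeDoc's rendered pages to the public bucket URL (hostnames are the examples from this thread; it needs the standard ngx_http_sub_module):

  location / {
      proxy_pass http://hedgedoc:3000;
      # sub_filter only works on uncompressed upstream responses
      proxy_set_header Accept-Encoding "";
      sub_filter '123abc.r2.cloudflarestorage.com/pad' 'pub-456xyz.r2.dev';
      sub_filter_once off;
  }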
