Not exactly. You should set this variable to the same arguments that you would pass to the `minio gateway` command. For instance, for Azure, you would set it to `azure`. `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` should then contain your `azurestorageaccountname` and `azurestorageaccountkey`, respectively.
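For example, a sketch of what the relevant entries in your tutor `config.yml` might look like (the account name and key below are placeholders, not real credentials):

```yaml
# Fragment of config.yml under $(tutor config printroot) -- values are placeholders
MINIO_GATEWAY: "azure"
MINIO_ACCESS_KEY: "myazurestorageaccount"      # your azurestorageaccountname
MINIO_SECRET_KEY: "myazurestorageaccountkey"   # your azurestorageaccountkey
```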
Not very intuitive, I know… If it works for you please report your findings here and I’ll update the README.
I set these via `tutor config save`:

```
MINIO_ACCESS_KEY=storageaccountname
MINIO_SECRET_KEY=accesskeylong
MINIO_GATEWAY=azure
```

leaving the base Open edX settings for the `OPENEDX_AWS` keys. When I run `tutor config save` and a local quickstart, it fails at:
```
Plugin minio: running pre-init for service minio
docker-compose -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.yml -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.prod.yml --project-name tutor_local -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.jobs.yml run --rm minio-job sh -e -c mc config host add minio http://minio:9000 openedx BuXLVVNGBO6a0ThtsQFHpAzS --api s3v4
mc mb --ignore-existing minio/openedx minio/openedxuploads minio/openedxvideos
# Make common file upload bucket public (e.g: for forum image upload)
mc policy set public minio/openedx
Creating tutor_local_minio-job_run ... done
mc: Configuration written to /root/.mc/config.json. Please update your access credentials.
mc: Successfully created /root/.mc/share.
mc: Initialized share uploads /root/.mc/share/uploads.json file.
mc: Initialized share downloads /root/.mc/share/downloads.json file.
Added minio successfully.
mc: Unable to make bucket minio/openedx. Put "http://minio:9000/openedx/": dial tcp: lookup minio on 127.0.0.11:53: no such host
```
Okay, that worked, but now setting public access fails:
```
Creating tutor_local_minio-job_run ... done
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `minio` successfully.
Bucket created successfully `minio/openedx`.
Bucket created successfully `minio/openedxuploads`.
Bucket created successfully `minio/openedxvideos`.
mc: <ERROR> Unable to set policy `public` for `minio/openedx`. A header you provided implies functionality that is not implemented.
Error: Command failed with status 1: docker-compose -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.yml -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.prod.yml --project-name tutor_local -f /home/AzureUser/.local/share/tutor/env/local/docker-compose.jobs.yml run --rm minio-job sh -e -c mc config host add minio http://minio:9000 tutorstore XXXXXXXXX --api s3v4
mc mb --ignore-existing minio/openedx minio/openedxuploads minio/openedxvideos
# Make common file upload bucket public (e.g: for forum image upload)
mc policy set public minio/openedx
```
At this point I don’t know enough about Azure object storage to propose a solution. You need to investigate existing bucket policies to make the bucket publicly readable.
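As a starting point for that investigation, you could inspect the bucket's anonymous-access policy from inside the job container with the `mc policy` subcommands. This is only a sketch, assuming the same `mc` version as in the logs above; `minio/openedx` is the alias/bucket from your output:

```shell
# Show the anonymous-access policy currently applied to the bucket
mc policy get minio/openedx

# List which prefixes are publicly readable/writable
mc policy list minio/openedx

# If "public" is rejected by the gateway, "download" (read-only public
# access) may be enough for serving uploaded files
mc policy set download minio/openedx
```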
Also, depending on your SCORM package provider, you might run into issues where the SCORM package expects to be served from the same domain as the LMS/CMS, and this will fail with your MinIO configuration. Some people have found fixes, but they are a little “advanced”.