Tutor course export and import error on k8s deployment

We set up Open edX on our Kubernetes cluster using the Tutor k8s deployment. Exporting an existing course completes successfully, but when we try to download the exported course we get "The Studio servers encountered an error", and the CMS container logs show the error below:

Traceback (most recent call last):
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/views/decorators/http.py", line 40, in inner
return func(request, *args, **kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/openedx/edx-platform/common/djangoapps/util/views.py", line 43, in inner
response = view_func(request, *args, **kwargs)
File "./cms/djangoapps/contentstore/views/import_export.py", line 441, in export_output_handler
tarball = course_import_export_storage.open(artifact.file.name)
File "/openedx/venv/lib/python3.8/site-packages/django/core/files/storage.py", line 38, in open
return self._open(name, mode)
File "/openedx/venv/lib/python3.8/site-packages/django/core/files/storage.py", line 231, in _open
return File(open(self.path(name), mode))
FileNotFoundError: [Errno 2] No such file or directory: '/openedx/media/user_tasks/2021/06/27/course.6o3rdzzg.tar.gz'
[pid: 6|app: 0|req: 10/138] 100.96.7.235 () {56 vars in 2167 bytes} [Sun Jun 27 13:45:12 2021] GET /export_output/course-v1:iin+C5899+2021 => generated 8246 bytes in 81 msecs (HTTP/1.0 500) 7 headers in 505 bytes (1 switches on core 0) 

We had the same issue when importing an exported course.
We do not have this problem in the local deployment.
Tutor version: 12.0.1
The minio plugin isn't active and OPENEDX_AWS_SECRET_ACCESS_KEY does not contain any value.

I think it's because of shared volumes: in the local deployment there is a "/openedx/media" volume that shares data between the CMS and the CMS worker, but there is no equivalent volume in the k8s deployment.
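One way to check this hypothesis is to compare the media directory as seen from the two pods. This is only a sketch; the namespace and deployment names below assume Tutor's defaults and may differ on your cluster:

```shell
# List the export artifacts as seen by the CMS worker (which writes them)...
kubectl exec -n openedx deploy/cms-worker -- ls /openedx/media/user_tasks/

# ...and as seen by the CMS web pod (which serves the download).
# Without a shared volume, the file only exists in the worker pod.
kubectl exec -n openedx deploy/cms -- ls /openedx/media/user_tasks/
```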

How can we fix this in k8s deployment?

You must use the minio plugin in order to have shared media storage between the CMS and the asynchronous workers. There are no shared media volumes because most Kubernetes providers don't have volumes with ReadWriteMany access.
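For reference, enabling the plugin follows Tutor's standard plugin flow. A sketch with the v12-era CLI (exact commands may vary across Tutor versions):

```shell
# Enable the MinIO plugin and regenerate the environment
tutor plugins enable minio
tutor config save

# Re-apply the Kubernetes deployments with the new settings
tutor k8s start
```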

Thanks @regis, this resolved my issue.

Hello @regis
How about using AWS EFS volumes, which support the ReadWriteMany access mode?

Not everyone uses AWS. I think that making use of MinIO is a good practice for everyone, whether they are running on AWS or not. It just makes Open edX much easier to scale.

Hi all,
I am working with tutor k8s on EKS. I was able to make MinIO work, but I would really like to use S3 as the file storage. Using MinIO results in all files being stored in the node's block storage, which increases costs and complicates the scaling strategy. Besides, "MinIO does not recommend running MinIO in AWS if AWS is the only anticipated instance. Rather, MinIO in AWS should be reserved for scenarios where the organization seeks consistency across multiple environments."

Anyway, I don't understand why S3Boto3Storage works with MinIO but not with S3, given that in k8s the CMS and the CMS worker communicate with MinIO and there is no shared folder at all.

I see two options to use S3 in this scenario:

  1. Use the MinIO gateway feature. I see that it's still experimental in the plugin. @regis, can you please let us know its status and give some example configuration to test?
  2. Use EBS or EFS to share the volume among the pods, and then use the traditional S3 config for other file storage. I found an article about EBS vs EFS in k8s. It looks like EFS is less performant, but EBS requires all pods to run on the same node, which may not always be the case.

Any ideas?

This feature should actually be 100% functional. Please run tutor config save --set MINIO_GATEWAY=s3. If you tell us that it does not work, we'll fix it. If it does, we'll remove the experimental notice :slight_smile:

You're bound to encounter other, more difficult issues if you go down this path. I would strongly recommend sticking with object storage and stateless containers in Kubernetes.

Hi!
The MinIO gateway worked well, with some adjustments. I didn't test it in production, so I wouldn't remove the experimental tag yet.

But the default settings don't point the profile image backend to S3, so I had to create a plugin to add it:

patches:
  openedx-common-settings: |
    PROFILE_IMAGE_BACKEND = {
      "class": "storages.backends.s3boto3.S3Boto3Storage",
      "options": {
        "base_url": "/media/profile-images/",
        "headers": {
          "Cache-Control": "max-age-31536000"
        },
        "location": "openedx/media/profile-images"
      }
    }

Can you add this setting into the MinIO plugin?

I also had to touch the production.py file and comment out the following lines:

# Fix media files paths
PROFILE_IMAGE_BACKEND["options"]["location"] = os.path.join(
    MEDIA_ROOT, "profile-images/"
)

Could you please do something like what you did for the VIDEO_* settings instead, e.g.:

PROFILE_IMAGE_BACKEND["options"]["location"] = PROFILE_IMAGE_BACKEND["options"]["location"].lstrip("/")
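For what it's worth, here is a minimal standalone illustration of the difference (the paths are illustrative, not the exact platform values):

```python
import os

# Illustrative value; in production.py this comes from Django settings.
MEDIA_ROOT = "/openedx/media/"

# os.path.join with an absolute MEDIA_ROOT yields an absolute filesystem
# path, which S3Boto3Storage would then treat as the object key prefix.
location = os.path.join(MEDIA_ROOT, "profile-images/")
print(location)              # /openedx/media/profile-images/

# Stripping the leading slash keeps the location usable as an S3 key
# prefix instead of a path into the container's filesystem.
print(location.lstrip("/"))  # openedx/media/profile-images/
```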

Very good catch @andres, thanks a lot for the suggestion. Please review the following PR: https://github.com/overhangio/tutor-minio/pull/10

Great!

I took the chance to go over common.py and check for every possible file storage that could potentially be left local, and found the following settings:

XBLOCK_FS_STORAGE_BUCKET: I couldn't find any other mention of this setting anywhere on GitHub. I don't know whether it's used or not.

STATICFILES_STORAGE = 'openedx.core.storage.ProductionStorage': I'm a bit lost here, but I've found the 'openedx.core.storage.ProductionS3Storage' class nearby. Maybe we can use it instead? (We would need to fill STATICFILES_STORAGE_KWARGS too; I don't know with what.)

The idea is that there shouldn't be local storage (that needs to be persisted) in ANY container, so that we can autoscale pods without gaps in the user experience.

I learned from @BbrSofiane about the tutor-s3 plugin. If everything is in AWS, I would definitely prefer it over MinIO. Is there any other drawback?

To clarify, I am using both in production at the moment. tutor-s3 configures some parameters that the MinIO plugin doesn't, and they work together fine.

Also, my tutor-s3 is a fork, because I had to make some changes to get it working on Lilac.

Same here. I decided to just ignore this setting for now.

We should only set STATICFILES_STORAGE if we want to push static assets elsewhere when we run ./manage.py lms collectstatic (this command runs during the image build step).

Yes, exactly. I think we should be good now.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.