We set up Open edX with the Tutor k8s deployment on our Kubernetes cluster. Exporting an existing course completes successfully, but when we try to download the exported course we get "The Studio servers encountered an error", and the CMS container logs show the error below:
Traceback (most recent call last):
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/openedx/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/views/decorators/http.py", line 40, in inner
return func(request, *args, **kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/openedx/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/openedx/edx-platform/common/djangoapps/util/views.py", line 43, in inner
response = view_func(request, *args, **kwargs)
File "./cms/djangoapps/contentstore/views/import_export.py", line 441, in export_output_handler
tarball = course_import_export_storage.open(artifact.file.name)
File "/openedx/venv/lib/python3.8/site-packages/django/core/files/storage.py", line 38, in open
return self._open(name, mode)
File "/openedx/venv/lib/python3.8/site-packages/django/core/files/storage.py", line 231, in _open
return File(open(self.path(name), mode))
FileNotFoundError: [Errno 2] No such file or directory: '/openedx/media/user_tasks/2021/06/27/course.6o3rdzzg.tar.gz'
[pid: 6|app: 0|req: 10/138] 100.96.7.235 () {56 vars in 2167 bytes} [Sun Jun 27 13:45:12 2021] GET /export_output/course-v1:iin+C5899+2021 => generated 8246 bytes in 81 msecs (HTTP/1.0 500) 7 headers in 505 bytes (1 switches on core 0)
We had the same issue when we imported an exported course.
We do not have this problem in the local deployment.
Tutor version: 12.0.1
The minio plugin isn't active, and OPENEDX_AWS_SECRET_ACCESS_KEY does not contain any value.
I think it's because of shared volumes. There is a "/openedx/media" volume in the local deployment that shares data between the CMS and the CMS worker, but we don't have the same volume in the k8s deployment.
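The hypothesis above can be sketched with plain Python (no Django needed): two temporary directories stand in for the two pods' separate media roots. The directory prefixes and file name here are illustrative only.

```python
import os
import tempfile

# Hypothetical sketch: without a shared /openedx/media volume, the CMS web
# pod and the CMS worker pod each have their OWN media directory.
worker_media = tempfile.mkdtemp(prefix="cms-worker-media-")
web_media = tempfile.mkdtemp(prefix="cms-web-media-")

# The asynchronous export task writes the tarball under ITS media root...
with open(os.path.join(worker_media, "course.tar.gz"), "wb") as f:
    f.write(b"fake tarball bytes")

# ...but the web pod resolves the same relative name under ITS media root,
# so the open() call in export_output_handler raises FileNotFoundError,
# matching the traceback above.
try:
    open(os.path.join(web_media, "course.tar.gz"), "rb")
except FileNotFoundError:
    print("web pod cannot see the worker's file")
```

With a shared volume (or shared object storage), both roots would be the same path and the second `open()` would succeed.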
You must use the minio plugin in order to have shared media storage between the CMS and the asynchronous workers. There are no shared media volumes because most Kubernetes providers don't have volumes with ReadWriteMany access.
Not everyone uses AWS. I think that making use of MinIO is a good practice for everyone, whether they are running on AWS or not. It just makes Open edX much easier to scale.
Anyway, I still don't understand why S3Boto3Storage works with MinIO but does not work with S3, if in k8s the CMS and the CMS worker communicate with MinIO and there is no shared folder at all.
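As I understand it, the reason no shared folder is needed is that both pods load the same storage settings and talk to the same object-store endpoint over HTTP. A hedged sketch of the relevant django-storages settings follows; the endpoint, credentials, and bucket name are placeholders, not the MinIO plugin's literal output.

```python
# Sketch of the relevant CMS settings; values are placeholders. Both the
# web pod and the worker pod load these same settings, so every read and
# write goes through the same bucket instead of a local filesystem.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
COURSE_IMPORT_EXPORT_STORAGE = DEFAULT_FILE_STORAGE  # used by import/export tasks
AWS_S3_ENDPOINT_URL = "http://minio:9000"  # hypothetical in-cluster service name
AWS_ACCESS_KEY_ID = "openedx"              # placeholder credentials
AWS_SECRET_ACCESS_KEY = "changeme"
AWS_STORAGE_BUCKET_NAME = "openedx"
```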
Use EBS or EFS to share the volume among the pods, and then use the traditional S3 config for other file storage. I found an article about EBS vs. EFS in k8s: EFS is less performant, but EBS requires all pods to run on the same node, which may not always be the case.
This feature should actually be 100% functional. Please run tutor config save --set MINIO_GATEWAY=s3. If you tell us that it does not work, we'll fix it. If it does, we'll remove the experimental notice.
You're bound to encounter other, more difficult issues if you go down this path. I would strongly recommend sticking with object storage and stateless containers in Kubernetes.
I took the chance to go over common.py and check every file storage setting that could potentially be left local, and found the following:
XBLOCK_FS_STORAGE_BUCKET: I couldn't find any other mention of this setting anywhere on GitHub. I don't know whether it's used or not.
The idea is that there shouldn't be local storage (that needs to be persisted) in ANY container, so that we can autoscale pods without gaps in the user experience.
I learned from @BbrSofiane about the existence of the tutor-s3 plugin. If everything is in AWS, I would definitely prefer this instead of MinIO. Is there any other drawback?
To clarify, I am using both in production at the moment. tutor-s3 configures some parameters that the MinIO plugin doesn't, and they work together fine.
Also, this tutor-s3 is a fork, because I had to make some changes to make it work with Lilac.
Same here. I decided to just ignore this setting for now.
We should only set STATICFILES_STORAGE if we want to push static assets elsewhere when we run ./manage.py lms collectstatic (this command runs during the image-building step).