Kubernetes/minikube deployment failed

Hello Everyone,
I'm trying to deploy Tutor to my local Kubernetes/Minikube machine, which uses the VirtualBox driver with 7.2 GB RAM and 2 CPUs. But I always get an error at the "Initialising MySQL" step.
I stopped minikube and cleaned the cache, stopped Tutor and deleted all data from "tutor/data/mysql", then started them again, but I get the same connection error.
These are the logs from the cms-worker pod:

File "/openedx/venv/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (1045, "Access denied for user 'openedx'@'' (using password: YES)")
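A quick way to narrow down an error like this is to try the same login from inside the MySQL pod itself, which bypasses cluster networking. This is only a sketch: the `openedx` namespace and `mysql` deployment name are assumptions based on a default Tutor k8s install, and `tutor config printvalue` is used to fetch the configured password.

```
# Assumed names for a default Tutor k8s deployment; adjust to your setup.
# Log in from inside the MySQL pod, bypassing the service cluster IP:
kubectl exec -it -n openedx deploy/mysql -- \
    mysql -u openedx -p"$(tutor config printvalue OPENEDX_MYSQL_PASSWORD)" openedx

# If this succeeds but the worker pods still get "Access denied",
# the problem is likely networking/DNS between pods, not the credentials.
```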

Best regards,

Did you run tutor k8s quickstart? Was it successful?

Thanks for the quick response. I followed the documentation (https://docs.tutor.overhang.io/k8s.html#).
After tutor k8s quickstart, not all services started.

Thanks for that :slight_smile:

Which service did not start?

In this case it was lms_worker, but yesterday it was cms_worker. I think it's because the MySQL connection failed…

Can confirm this issue, running tutor k8s quickstart (with all default settings, the minio plugin installed, 6 GB memory for minikube (4 GB didn't work), the minikube ingress addon enabled, minikube v1.7.2, Kubernetes v1.17.2).

It seems to be related to this issue: https://github.com/kubernetes/minikube/issues/1568 which causes pods in minikube to be unable to reach “themselves” through the service cluster IP.

This could have been mitigated if the MySQL migration were either run directly against localhost (which would work if you exec inside the MySQL server container) or run from a separate pod.
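To check whether you are actually hitting this hairpin problem, you can compare a connection through the service name against one to localhost from inside the MySQL pod itself. A sketch, assuming the default `openedx` namespace and `mysql` service/deployment names from Tutor's k8s manifests, and that the container has `MYSQL_ROOT_PASSWORD` in its environment (standard for the mysql image):

```
# Assumed names for a default Tutor k8s deployment; adjust to your setup.
# Through the service cluster IP (this is what fails with the hairpin bug):
kubectl exec -it -n openedx deploy/mysql -- bash -c \
    'mysql -h mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1"'

# Directly against localhost (should always work if MySQL is up):
kubectl exec -it -n openedx deploy/mysql -- bash -c \
    'mysql -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1"'
```

If the first command hangs or is refused while the second succeeds, the pod cannot reach itself through its own service IP, which matches the minikube issue linked above.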

This workaround from the thread worked for me:

minikube ssh
sudo ip link set docker0 promisc on

This got me one step further. However, I’m still unable to successfully deploy to minikube, as one of the migrations failed:

Running migrations:
  Applying certificates.0003_data__default_modes...Traceback (most recent call last):
  File "./manage.py", line 123, in
    execute_from_command_line([sys.argv[0]] + django_args)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 204, in handle
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 115, in migrate
    state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards
    state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 244, in apply_migration
    state = migration.apply(state, schema_editor)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/migrations/migration.py", line 126, in apply
    operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/migrations/operations/special.py", line 193, in database_forwards
    self.code(from_state.apps, schema_editor)
  File "/openedx/edx-platform/lms/djangoapps/certificates/migrations/0003_data__default_modes.py", line 24, in forwards
    File(open(settings.PROJECT_ROOT / 'static' / 'images' / 'default-badges' / file_name))
  File "/openedx/venv/local/lib/python2.7/site-packages/django/db/models/fields/files.py", line 94, in save
    self.name = self.storage.save(name, content, max_length=self.field.max_length)
  File "/openedx/venv/local/lib/python2.7/site-packages/django/core/files/storage.py", line 54, in save
    return self._save(name, content)
  File "/openedx/venv/local/lib/python2.7/site-packages/storages/backends/s3boto.py", line 409, in _save
    key = self.bucket.get_key(encoded_name)
  File "/openedx/venv/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 193, in get_key
    key, resp = self._get_key_internal(key_name, headers, query_args_l)
  File "/openedx/venv/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 200, in _get_key_internal
  File "/openedx/venv/local/lib/python2.7/site-packages/boto/s3/connection.py", line 665, in make_request
  File "/openedx/venv/local/lib/python2.7/site-packages/boto/connection.py", line 1071, in make_request
  File "/openedx/venv/local/lib/python2.7/site-packages/boto/connection.py", line 1030, in _mexe
    raise ex
socket.gaierror: [Errno -2] Name or service not known
command terminated with exit code 1

I think the error might somehow be related to minio/S3 support. I tried disabling the minio plugin, and I then got a different error on the same migration:

boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden

It's not clear from the documentation whether you can install Tutor on minikube without setting up either S3 or minio, and if so, how you would do that.

The documentation mentions that you need to set up minio.LMS_HOST, but if you are running minikube locally and only editing /etc/hosts, those host entries cannot be resolved inside minikube.

What worked for me was:

  1. Enable minio plugin: tutor plugins enable minio
  2. Enable minikube ingress controller: minikube addons enable ingress
  3. Expose minikube ingress controller as a service: kubectl expose deployment nginx-ingress-controller --port=80 --target-port=80 -n kube-system
  4. Add a CoreDNS rewrite rule: kubectl edit configmap -n kube-system coredns, adding rewrite name minio.www.myopenedx.com nginx-ingress-controller.kube-system.svc.cluster.local as explained here: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/ (remember to reload CoreDNS as described, or for minikube just delete the coredns pods in the kube-system namespace so they get restarted)
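For reference, after step 4 the Corefile in the coredns configmap ends up looking roughly like this. This is a sketch of a typical minikube Corefile, not your exact one; the hostname follows the minio.www.myopenedx.com example above, and the rewrite line must come before the kubernetes block so it is applied first:

```
.:53 {
    errors
    health
    rewrite name minio.www.myopenedx.com nginx-ingress-controller.kube-system.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

To restart CoreDNS on minikube you can delete its pods by label (they are recreated automatically): kubectl delete pod -n kube-system -l k8s-app=kube-dns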

With these changes in place, I was able to successfully complete the migrations and have a working tutor setup in minikube.

Hi Barek, and thanks for the solution :slight_smile:
Now everything is working… Thanks…