I have been working on deploying Open edX on Aliyun Kubernetes Service using Tutor this week. Though we are novices with Kubernetes, my colleague and I eventually made it work. Thanks for your great work @regisb ! But a few questions remain, and I would really appreciate some suggestions.
What is the appropriate way for tutor to support deploying Open edX on a cloud service?
I have added a new command, tutor aliyun , just like tutor k8s . The main difference between them is that tutor aliyun uses different templates ( volumes.yml , to be more specific). Since the code of the two commands is almost identical, I am not sure whether I should keep it as a separate command for now, or make it a subcommand of tutor k8s , like tutor k8s aliyun .
Do you have any ideas on how to make tutor k8s support deploying different Open edX instances in different namespaces?
I have tried it by adding NAMESPACE to config.yml and {{ NAMESPACE }} to the namespace.yml template. What do you think?
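To make the idea concrete, here is a minimal sketch of what I mean. The value openedx-staging is just an example, and the template below is a simplified illustration rather than the real Tutor namespace.yml :

```yaml
# config.yml — illustrative excerpt; the real file contains many other settings
NAMESPACE: openedx-staging
```

```yaml
# namespace.yml template — simplified sketch of the Jinja2 substitution
apiVersion: v1
kind: Namespace
metadata:
  name: {{ NAMESPACE }}
```

With a different NAMESPACE value per config.yml , each rendered manifest would target its own Kubernetes namespace, so several Open edX instances could coexist in one cluster.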
Hi @WeJie! This issue is related to #126. To answer your question, I need to detail my plans for k8s integration with Tutor a little.
One of the goals of the k8s command is to make it compatible with all cloud providers: Aliyun, AWS, GCP, IBM, etc. From my understanding, the only major difference is how these providers handle volumes. So I would like to avoid having a separate tutor aliyun/aws/gcp/ibm command for each provider; instead, the type and configuration of volumes will be configurable parameters. I plan on switching from a pure kubectl approach to helm -based commands (http://helm.sh/). Provider-specific properties will be defined by Helm configuration values. Every platform will be heavily configurable. One of the consequences is that the k8s install is probably not going to be a 1-click install for most people. Kubernetes deployment is for advanced platforms, so users don’t necessarily expect to have a working platform in just one click, and they are probably ready to run a few (simple) commands to configure the platform.
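To illustrate how provider-specific volume handling could live in Helm values rather than in separate commands, here is a hypothetical values.yml fragment. The storage keys and class names below are assumptions for illustration, not an actual Tutor or Helm chart schema:

```yaml
# values.yml (hypothetical sketch) — only the storage section differs per provider
storage:
  # On AWS EKS you might point at an EBS-backed class, e.g.:
  storageClassName: gp2
  # On Aliyun ACK the equivalent could be a cloud-disk class instead:
  # storageClassName: alicloud-disk-ssd
  size: 10Gi
```

A chart template would then render PersistentVolumeClaims from these values, so switching providers becomes a matter of passing a different values file ( helm install -f values-aliyun.yml ... ) instead of maintaining a separate command per provider.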
I came across this topic as I’m currently addressing this issue.
We want to run Open edX in a Kubernetes environment, and we are leveraging the Tutor tooling to build the images. To run them in a Kubernetes cluster, we are looking at Helm charts ourselves, as we don’t think the current Kubernetes support is CI-friendly or a good fit for production.
The choice of Helm charts is mostly because that’s the Cloud Native way of doing it, and we want our deployments/releases to be driven by environment-specific config only (helm values.yml ).
Before we go and re-invent the wheel: has there been any progress on this front since this topic was opened? I found some pointers to WIP branches in other topics and GitHub issues, but they all seem to be gone by now.