How will Tutor support deploying Open edX on cloud services?

I have been working on deploying Open edX on Aliyun Kubernetes Service with Tutor this week. Although my colleague and I are Kubernetes novices, we eventually made it work. Thanks for your great work @regisb! A few questions remain, though, and I would really appreciate some suggestions.

  1. What is the appropriate way for Tutor to support deploying Open edX on a cloud service?
    I have added a new command, tutor aliyun, modeled on tutor k8s. The main difference between them is that tutor aliyun uses different templates (volumes.yml, to be specific). Since the code of the two commands is almost identical, I am not sure whether I should keep it as a separate command for now or make it a subcommand of tutor k8s, such as tutor k8s aliyun.

  2. Do you have any ideas on how tutor k8s could deploy different Open edX instances into different namespaces?
    I have tried this by adding a NAMESPACE value to config.yml and a {{ NAMESPACE }} placeholder to the namespace.yml template. What do you think?
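The approach described above can be sketched roughly as follows; this is a minimal illustration assuming Tutor's Jinja-style template rendering, and the example NAMESPACE value is made up:

```yaml
# namespace.yml template (sketch) -- assumes config.yml contains a line such as
# NAMESPACE: openedx-staging, which Tutor's Jinja templating substitutes below.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ NAMESPACE }}
```

For this to fully isolate instances, every other rendered manifest would also need `namespace: {{ NAMESPACE }}` in its metadata section.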

Moved from [Issue #206](https://github.com/regisb/tutor/issues/206)

Reply from Regisb:

Hi @WeJie! This issue is related to #126. To answer your question, I need to explain in a little more detail what my plans are for k8s integration with Tutor.

One of the goals of the k8s command is to make it compatible with all cloud providers: Aliyun, AWS, GCP, IBM, etc. From my understanding, the only major difference between them is how they handle volumes. So I would like to avoid having a separate tutor aliyun/aws/gcp/ibm command for each provider; instead, the type and configuration of volumes will be configurable parameters. I plan on switching from a pure kubectl approach to Helm-based commands (http://helm.sh/). Provider-specific properties will be defined by Helm configuration values. Every platform will be heavily configurable. One consequence is that the k8s install is probably not going to be a 1-click install for most people. Kubernetes deployment is for advanced platforms, so users don't necessarily expect a working platform in just one click, and they are probably ready to run a few (simple) commands to configure it.
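To illustrate how provider differences could be isolated in Helm configuration values, here is a hypothetical values file sketch; the key names and storage classes are assumptions for illustration, not an actual Tutor or chart schema:

```yaml
# Hypothetical Helm values.yml for an Open edX chart (illustrative schema only).
# The provider-specific part is confined to the storage section: on Aliyun you
# might use an Alibaba Cloud disk storage class, on AWS "gp2", on GCP
# "standard", etc., while everything else stays identical across providers.
namespace: openedx
storage:
  storageClass: alicloud-disk-ssd   # swap this value per cloud provider
  mysql:
    size: 10Gi
  mongodb:
    size: 10Gi
```

Switching providers would then be a matter of passing a different values file, rather than maintaining a separate command per cloud.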

As for namespaces: I initially relied on namespaces to separate the Open edX platform from the rest of the k8s cluster. But when I made that decision, I was very much unaware of what the best practices are for k8s deployment. I recently learned that having multiple namespaces is not necessarily a good idea (https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#when-to-use-multiple-namespaces). Instead, we should rely on good object labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/.
So in the future tutor k8s implementation (which I’m currently working on), the namespace will be configurable, too, but by default all platforms will probably run in the same namespace.
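For reference, the recommended-labels convention linked above would let several platforms share one namespace while remaining distinguishable. A sketch, where the instance name is illustrative:

```yaml
# Labels following the Kubernetes recommended-labels convention.
# "openedx-staging" is an illustrative instance name; one set of labels like
# this would go on every object belonging to that platform.
metadata:
  labels:
    app.kubernetes.io/name: openedx
    app.kubernetes.io/instance: openedx-staging
    app.kubernetes.io/component: lms
    app.kubernetes.io/managed-by: tutor
```

Objects for one platform can then be selected with a label selector, e.g. `kubectl get pods -l app.kubernetes.io/instance=openedx-staging`.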

I came across this topic as I’m currently addressing this issue.
We want to run Open edX in a Kubernetes environment, and we are leveraging the Tutor tooling to build the images. To run them in a Kubernetes cluster, we are looking at Helm charts ourselves, as we don't think the current Kubernetes support is CI-friendly or a good fit for production.

The choice of Helm charts is mostly because that's the Cloud Native way of doing it, and we want our deployments/releases to be driven by environment-specific config only (helm values.yml).

Before we go and re-invent the wheel, has there been any progress on this front since this topic? I found some pointers to WIP branches in other topics and GitHub issues, but all seem to be gone by now.