K8s Deployment to Private Docker Registry


We are looking to set up an Open edX Kubernetes deployment using a private Docker registry. For our use case we will not have access to a public Docker registry. We understand that you can point to a custom registry using DOCKER_REGISTRY, but how can we provide login credentials for the private registry? I have looked through the k8s files, but I cannot find a good spot to patch or inject the Docker login credentials. Ideally we would inject a secret into the Kubernetes manifest files from our Kubernetes deployment, where the secrets are already stored.


Allow me to chime in here as I’m currently dealing with the same issue myself.

Pulling an image from a private registry is documented in the upstream Kubernetes docs. To the best of my knowledge, however, Tutor currently supports neither creating nor referencing a secret for image registry credentials.

You can, as a stop-gap for an existing Tutor deployment,

  • inject your registry credentials into a Kubernetes secret, as described in the Kubernetes docs,
  • then set DOCKER_REGISTRY in your config.yml (or else, set some of the various *_DOCKER_IMAGE variables),
  • run tutor config save,
  • manually edit $TUTOR_ROOT/env/k8s/deployments.yml to add imagePullSecrets references to the Deployments whose containers’ image references point to your private registry,
  • and then run tutor k8s start. Tutor should then replace the affected pods with new ones that use the images from your private registry.
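
To spell the stop-gap out, here is a sketch with hypothetical names (a secret called regcred, an openedx namespace, and registry.example.com standing in for your registry):

```shell
# One-time: store the registry credentials as a Kubernetes secret
kubectl create secret docker-registry regcred \
  --namespace openedx \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
```

Then, in $TUTOR_ROOT/env/k8s/deployments.yml, each affected Deployment’s pod template would gain a reference like:

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred  # must match the secret created above
```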

How to add such support to Tutor “properly” is probably a matter of debate. You could argue that the onus would be on the user to first create the registry credentials with kubectl create secret, then add the secret reference as a configuration variable to config.yml. Then, tutor k8s could just pick up that secret reference if set, and add imagePullSecrets to the spec of the pods that need it in deployments.yml. But that probably violates the “1-click install” approach. So Tutor would instead probably have to both create the secret, and plug it into pods.

This is a little trickier than it sounds though; in particular, the question is whether Tutor wants to (a) make this an either-or proposition, or whether (b) it wants to support both public registries (which require no authentication to pull images) and private ones (which do require credentials) in one deployment.

In the (a) case, the user would simply have to mirror all the images Tutor uses in their own registry, in which case Kubernetes could pull all images from the same registry, using the same set of credentials. In the (b) case, Tutor would have to allow for practically every image being hosted in a different registry, with a different set of credentials, and that might quickly get unwieldy in the configuration.

I wonder how @regis leans on this one. :slight_smile:


Thanks for the response, @fghaas. For now we will try to incorporate the sealed secret configuration the way you describe, by modifying the k8s files locally.

However, I am interested in more configurability within the k8s files in general. The sealed secret is one thing, but adding resources, much like this ticket describes, would be ideal for our deployment. When deploying to the cluster, we currently need to set the resource allocation manually via the Kubernetes interface on our third-party hosted Kubernetes cluster; ideally we could configure this as well. Given that I am new to the Tutor project, I am not sure what the best way to implement this would be, but maybe something similar to the way custom theming is implemented? Then again, that may not be the best way of thinking about it, given that CSS is fundamentally driven by the cascading workflow, which supports overriding out of the box, and I am not sure k8s supports this without adding something like Helm.

Without adding another technology to the stack, the other option I see would be to add more patching throughout the k8s files. Again, given my lack of experience with the Tutor source code, I am not sure whether this is a good approach. If it were, ideally we could contribute this code upstream and see it incorporated in future releases, as opposed to forking the code and dealing with upgrade issues in the future.

Right. I think that eventually this should go into Tutor though, possibly based on the following logic, applied during tutor k8s quickstart:

  1. Check whether DOCKER_REGISTRY or any of the *_DOCKER_IMAGE variables are set.
  2. If any are, look into ~/.docker/config.json and see whether there is an auth entry that matches the hostname in those variables.
  3. If there is, create the Kubernetes secret(s) (after prompting the user to confirm), and add the imagePullSecrets bits.
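
Step 2 could be sketched like this. This is just an illustration of the lookup logic, assuming the standard Docker CLI config layout; the function name is made up, and note that credential helpers keep no inline "auth" value, so this only finds credentials stored directly in the file:

```python
import base64
import json
from pathlib import Path


def find_registry_auth(registry, config_path="~/.docker/config.json"):
    """Return (username, password) for `registry` from a Docker CLI config
    file, or None if no matching inline auth entry exists."""
    path = Path(config_path).expanduser()
    if not path.exists():
        return None
    config = json.loads(path.read_text())
    entry = config.get("auths", {}).get(registry)
    if not entry or "auth" not in entry:
        # Credential helpers store no inline "auth" value in the file.
        return None
    # The "auth" value is base64("username:password").
    username, _, password = base64.b64decode(entry["auth"]).decode().partition(":")
    return username, password
```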

And I think this sort of thing would have to live in Tutor proper rather than in a plugin, because this functionality would need to be used by other plugins as well. And it would be odd to have to re-implement it in all of those (for example, tutor-mfe would definitely need to be able to pull custom images, quite possibly from a private registry).

Yes, but that’s beyond the scope of what the subject of this thread refers to. So, in the interest of making the Kubernetes configurability discussion easier for people to find, may I suggest you create a separate thread for that?

Hi @adambies! @fghaas is right that in general you should add your custom changes as tutor plugins. The tutor templates include a certain number of {{ patch(...) }} statements that allow you to add arbitrary content there.

That being said, plugins are probably not what you need here :sweat_smile: As a Tutor maintainer, I could certainly add dozens of patch statements to the k8s manifests, but that’s not really sustainable. @fghaas’ solution of checking the credentials in ~/.docker/config.json does not work either, because nothing guarantees that the user currently running the tutor CLI has access to the registry credentials.

There are alternative solutions, though. First of all, you could manually log in on all nodes: Images | Kubernetes

The second solution is to add the imagePullSecrets to a service account: Configure Service Accounts for Pods | Kubernetes
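
For the record, the service account route is essentially a one-liner once the secret exists. Assuming a secret named regcred in an openedx namespace (both names are placeholders for whatever your deployment uses):

```shell
# Pods using the "default" service account will then pull with these credentials
kubectl patch serviceaccount default \
  --namespace openedx \
  --patch '{"imagePullSecrets": [{"name": "regcred"}]}'
```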

Let us know which solution you end up with – and if possible write a tutorial :slight_smile:

I really dislike the idea of logging into all nodes manually, but I’ve just tried the approach with adding imagePullSecrets to the default service account and that appears to work like a charm, and strikes me as nice and clean. Thanks for the suggestion!


@regis, I’ve been thinking about the issue of private Open edX images and how to manage and deploy them with Tutor, and I have an idea here.

Come Maple, pretty much everyone will need to build custom images for almost every Open edX instance they run, because instances effectively won’t be able to run without the MFEs, and the MFE image needs a rebuild for every theme change. Also, many users run comprehensive themes that differ per platform they manage, which likewise requires a custom openedx image for each platform. (Not to mention that XBlock requirements, or even local Open edX patches, may differ per platform.)

So, we should have a way to easily manage private images, both for the local build use case and for deployment to Kubernetes. And for the Kubernetes use case, it should be equally applicable no matter if people self-host their Kubernetes, or run fully managed on AWS EKS, or Digital Ocean, or OVH, or whatever.

We also already assume that every Open edX environment has access to a storage platform that exposes the S3 API: either to AWS S3 itself, or via Minio.

So, how about this idea:

  • Tutor would optionally deploy its own registry when running in local mode, meaning it spins up a local registry service.
  • Tutor would also optionally deploy the registry in k8s mode, by adding a Deployment consisting of a single Pod that runs a container from the same registry image, and a Service exposing that registry.
  • In either of these cases, Tutor would configure the registry to use the S3 storage backend. That means that the registry service is completely stateless, and all of its persistent data lives in S3.
  • In the tutor k8s use case, the registry service endpoint could get plugged into Caddy as an additional backend, meaning it gets HTTPS termination and Let’s Encrypt certificates for free, and doesn’t consume an additional external IP. However, if we wanted to support TLS in a completely standalone fashion (meaning, the registry is able to run even if it’s the only Pod or Service left in the cluster), we could also use the registry service’s built-in Let’s Encrypt support.
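
To illustrate the S3 bullet: the stock registry image can be pointed at S3 with a few lines of configuration. A sketch, where the bucket name, credentials, and MinIO endpoint are all placeholders:

```yaml
# config.yml for the registry:2 container, keeping all state in S3
version: 0.1
storage:
  s3:
    region: us-east-1
    regionendpoint: http://minio:9000  # only needed for MinIO, not AWS S3
    bucket: tutor-registry
    accesskey: <access-key>
    secretkey: <secret-key>
http:
  addr: :5000
```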

This way, depending on how organizations organize their development workflow,

  • if they don’t use automated/reproducible image builds out of their CI, developers can build images locally and push them directly to the registry, as long as they have the S3 credentials for the appropriate bucket.
  • if the image build workflow is CI-driven, the organization sets the S3 credentials as a Secret in their CI infrastructure, and then pushes the built images to a local registry running in their CI environment, which also populates the S3 bucket (as an alternative to using a container image registry that may be specific to a CI).

Either way, the Tutor-managed containers, whether they’re running locally or in Kubernetes, then have the ability to pull from this private registry, which exists once per Tutor platform.

This way the private image management approach is unified, and it doesn’t matter

(a) where Tutor users run their Kubernetes, or if they run Kubernetes at all (the same approach might be useful for building images that are then used to deploy Open edX with, say, Nomad),
(b) what CI Tutor users run, or if they run with a CI at all.

Also, this whole functionality should be straightforward to make entirely optional; in other words, it would lend itself to being a Tutor plugin. And (even when that plugin is enabled) any upstream public images that the Tutor user does want to use unmodified would of course bypass the private registry altogether, as they do now.

What do you think of this? Do consider that this may be an exceptionally silly idea, because I may be overlooking something completely obvious that’s a deal-breaker for this kind of approach. So please poke holes into these thoughts. Thanks!


I second adding the imagePullSecrets to the service account approach. I modified the default service account just like @fghaas and it works great!


Hey Florian, I think it’s great that you are investigating this topic. I have been thinking along very similar lines. Yes, we should offer users the possibility of deploying a private registry, and yes we should almost certainly offer them the possibility of hosting on MinIO/S3.

The one part that is missing, as far as I am concerned, is the image building component. Where should it run? With tutor local we can assume that users will run tutor images build commands on the same server. What about k8s users? Ideally, they should build their images on Kubernetes itself. This is what I am doing as part of the Tutor CI, but it’s very clunky, because it’s difficult to build Docker images from within Docker.

On the other hand, having a dedicated image building worker is probably not an absolute pre-requisite for all tutor k8s users, so we should be able to make do without it. Users should be able to run tutor images build all locally, then tutor images push all, which will push to the k8s registry.
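
In command terms, that build-locally-then-push workflow would be roughly the following (the registry hostname is a placeholder, and if I recall correctly Tutor expects the DOCKER_REGISTRY value to carry a trailing slash):

```shell
tutor config save --set DOCKER_REGISTRY=registry.example.com/
tutor images build all   # build locally
tutor images push all    # push to the k8s registry
tutor k8s start          # redeploy using the pushed images
```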

So I think we should take this opportunity to build this tutor-registry plugin! Would you like to create and maintain it? The rules for 3rd-party-maintained plugins to be officially supported are here: Maintainers meeting. I like to encourage 3rd-party developers to maintain their own plugins because it’s (1) more publicity for you and (2) less work for me. But I would totally understand if you told me that you’d rather not commit to plugin maintenance, in which case it would be developed under the overhangio umbrella.

I’ve been looking around and experimenting a bit and I now think you probably don’t want to reinvent in Tutor what Trow is already doing.

Trow gives you a self-hosted registry that even does image validation from within Kubernetes. So it solves the problem of doing image management in a provider-agnostic way, needs no changes to Tutor to work, and can easily be replaced by a provider-hosted registry if that’s what people prefer.

What do you think?

Yes, absolutely, we should be going with an off-the-shelf Docker registry solution. I don’t have any experience with Trow, so maybe it’s the right component.

Are you saying that running a private Docker registry is easy enough that we don’t need a plugin for it? I guess I could agree with that. At the very minimum, we need a nice tutorial explaining how to set up Tutor (tutor config save --set DOCKER_REGISTRY=...) and how to build and push images (tutor images build all && tutor images push all) on Kubernetes. Would you like to write such a tutorial?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.