This set of changes ensures that every k8s resource created by the Helm chart is named using the `fullname` macro, which incorporates the name of the Helm release.
It's a big change, and is probably much easier to read commit-by-commit, since some of the individual changes are spread across files that are far apart.
The main observable effect is that all the resources that were hard-coded to `om-XXX` or `open-match-XXX` are now `RELEASE-open-match-XXX`, although if `RELEASE` contains "open-match" then they will be `RELEASE-XXX`.
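For context, this naming comes from the standard Helm "fullname" helper pattern; a minimal sketch of that pattern is below (the helper name `openmatch.fullname` is illustrative, not copied from the chart), showing where the "release name already contains the chart name" behavior comes from:

```
{{/* Sketch of the standard `helm create` fullname helper; the chart's actual helper may differ. */}}
{{- define "openmatch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{/* e.g. a release named "open-match" yields open-match-XXX, not open-match-open-match-XXX */}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
```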
In a special case (or rather, removing a special case), e01c77b838fdd86fae4ae8983b030fae552973f4 causes the open-match-scale subchart's resources to be `RELEASE-open-match-scale-XXX`, which means a release named "open-match" gets, e.g., Deployments named `open-match-backend` and `open-match-open-match-scale-backend`. It wasn't clear to me whether this is desirable; if not, that commit can be removed, producing `open-match-scale-backend` instead, but cementing that the subcharts are really just part of the main chart.
Similarly, redis no longer has a `fullnameOverride` set, so its resources will generally be of the form `RELEASE-redis-XXX`, e.g., a StatefulSet named `open-match-redis-master`.
Note: Redis PVC names change with this. Since it's a StatefulSet, the old PVCs are left behind. It's still possible to set `redis.fullnameOverride` to "om-redis" when upgrading, to reuse any existing PVCs in a namespace. If your release is named `om`, this will be the default name anyway.
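For example, a minimal values override for such an upgrade might look like this (a sketch; it assumes the existing PVCs were created under the old `om-redis` name):

```yaml
# Sketch: pin the Redis subchart back to its old name so the
# StatefulSet keeps matching PVCs created before the upgrade.
redis:
  fullnameOverride: "om-redis"
```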
As noted in the values.yaml, the `hostName` and `serviceAccount` values now default to empty so that names are generated, and they can be overridden where desired.
To regain the short names, you can use the YAML file inside the Details below with `--values`. The only thing that will differ is the ConfigMap created by open-match-scale for Grafana dashboards, because it was already named `open-match-scale-dashboard`. If you do this, you can only have one such release in a namespace, and you cannot then have a release named 'om' or 'open-match' installed at the same time, as some resources will conflict.
<details>
<summary>Values to reinstate the previous resource names</summary>

```yaml
# Reinstate all the resource names as if I hadn't done anything
fullnameOverride: "om"

open-match-scale:
  fullnameOverride: "om"
  configs:
    default:
      configName: "om-configmap-default"
    override:
      configName: "om-configmap-override"
  scaleBackend:
    hostName: "om-scale-backend"
  scaleFrontend:
    hostName: "om-scale-frontend"

open-match-customize:
  fullnameOverride: "om"

redis:
  fullnameOverride: "om-redis"
  serviceAccount:
    name: "open-match-redis-service"

global:
  kubernetes:
    serviceAccount: "open-match-unprivileged-service"

configs:
  default:
    configName: "om-configmap-default"
  override:
    configName: "om-configmap-override"
```

</details>
Tested to support multiple installs using the following commands from `install/helm/open-match`:

```sh
helm dep build
helm install open-match . --values om-install-all.values.yaml --set-string global.image.tag=1.0.0
helm install another-open-match . --values om-install-all.values.yaml --set-string global.image.tag=1.0.0
```

with TLS keys generated per TLS Encryption, where `om-install-all.values.yaml` is set to install as many of the templated resources as possible, as below:
```yaml
open-match-core:
  enabled: true
  swaggerui:
    enabled: true
  redis:
    enabled: true
open-match-customize:
  enabled: true
  function:
    enabled: true
  evaluator:
    enabled: true
open-match-scale:
  enabled: true
open-match-telemetry:
  enabled: true
open-match-override: true
usingHelmTemplate: false
ci: false
global:
  tls:
    enabled: true
  telemetry:
    grafana:
      enabled: true
    jaeger:
      enabled: true
    prometheus:
      enabled: true
```
I repeated the test with `--set ci=true` appended to both install commands, to confirm that the testing resources do not conflict.
Fixes: #1235