Helm doesn't reboot pod when WordPress is updated
I'm not 100% sure if this is the problem, but it seems like it.
Symptom:
- Tin updated WP to 5.8.2 from the Admin interface
- Cluster was rebooted
- 5.8.1 was installed again (actually so far this is what I'd expect)
Somewhere in the meantime the chart got updated from 0.4.2 to 0.4.3 on the cluster. This should have updated WP to 5.8.2 again, but didn't. All values seem to be set correctly. Deleting (i.e. restarting) the pod doesn't lead to a downgrade anymore, so the updated values have propagated correctly.
This leads me to believe that when the Helm chart was updated from 0.4.2 to 0.4.3 it did not replace the WordPress pod with one that installs 5.8.2.
Steps to research/reproduce:
- Override `wordpress.site.version` to `5.8.1` in `values-local.yaml`
- Install WP chart
- Remove the overridden value from `values-local.yaml`
- Update WP chart
- Observe whether the wordpress pod gets removed (which triggers the `init` pod to run again and update WP)
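The override in the first step could look like this in `values-local.yaml` (a sketch — the nesting is assumed from the dotted path `wordpress.site.version`):

```yaml
# values-local.yaml — pin WordPress to the old version for the repro
wordpress:
  site:
    version: "5.8.1"
```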
If this fails, I assume the culprit is line 30 of `templates/statefulset.yaml`:

```yaml
checksum/config: {{ printf "%s%s" (include (print $.Template.BasePath "/ansible-vars.yaml") .) (include (print $.Template.BasePath "/secrets.yaml") .) | sha256sum }}
```
- I'm not sure if this works for Secrets (we store the WP version in a secret). If it doesn't, let's try to put the WP version in the CM instead
- I'm not sure if this works at all. I believe we can add this annotation several times (I believe the system works with `checksum/X`, where X can have any value, but I'm not sure)
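On the `checksum/X` question: the annotation key is arbitrary as far as Kubernetes is concerned — the checksum trick works because any change to the pod template's annotations makes the controller roll the pods. The pattern documented in the Helm tips ("Automatically Roll Deployments") uses one annotation per rendered template, which would also make it obvious whether the Secret's checksum is being picked up. A sketch, assuming the same template paths as above:

```yaml
# templates/statefulset.yaml (sketch) — one checksum annotation per template.
# The key suffix after "checksum/" is arbitrary; a change in either rendered
# file alters the pod template hash and triggers a rollout.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/ansible-vars.yaml") . | sha256sum }}
        checksum/secrets: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
```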