Increase liveness and readiness probe limits for helm-operator
I already did this in !388 (merged) and it seems to make the helm-operator startup a bit less flaky. I hope it fixes the same problem in this pipeline too:
```
root@master:~# kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS             RESTARTS   AGE
kube-system   coredns-854c77959c-vv7fc          1/1     Running            0          11d
kube-system   metrics-server-86cbb8457f-q8sdm   1/1     Running            0          11d
oas           flux-memcached-5dbc947678-thhkm   1/1     Running            0          89m
oas           flux-6ccf67466b-9h55l             1/1     Running            0          89m
oas           local-flux-5d8cfdc7c6-5g2dq       1/1     Running            0          88m
oas           helm-operator-5f9cc8c4ff-nl95p    0/1     CrashLoopBackOff   31         89m
```
```
root@master:~# kubectl describe pod -n oas helm-operator-5f9cc8c4ff-nl95p
...
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  89m                    default-scheduler  Successfully assigned oas/helm-operator-5f9cc8c4ff-nl95p to master
  Normal   Pulling    89m                    kubelet            Pulling image "docker.io/fluxcd/helm-operator:1.2.0"
  Normal   Pulled     88m                    kubelet            Successfully pulled image "docker.io/fluxcd/helm-operator:1.2.0" in 8.715501395s
  Warning  Unhealthy  87m (x6 over 88m)      kubelet            Liveness probe failed: Get "http://10.42.1.2:3030/healthz": dial tcp 10.42.1.2:3030: connect: connection refused
  Normal   Killing    87m (x2 over 88m)      kubelet            Container flux-helm-operator failed liveness probe, will be restarted
  Normal   Created    87m (x3 over 88m)      kubelet            Created container flux-helm-operator
  Normal   Started    87m (x3 over 88m)      kubelet            Started container flux-helm-operator
  Normal   Pulled     48m (x16 over 88m)     kubelet            Container image "docker.io/fluxcd/helm-operator:1.2.0" already present on machine
  Warning  BackOff    9m (x302 over 85m)     kubelet            Back-off restarting failed container
  Warning  Unhealthy  3m57s (x111 over 88m)  kubelet            Readiness probe failed: Get "http://10.42.1.2:3030/healthz": dial tcp 10.42.1.2:3030: connect: connection refused
```
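The events show both probes hitting `http://10.42.1.2:3030/healthz` with `connection refused` while the operator is still starting, so kubelet kills the container before it ever becomes ready. A change along these lines should relax the limits; this is only a sketch of what such an override could look like in the helm-operator chart values — the key names follow the common Kubernetes probe fields, and the numbers are illustrative, not the actual values from !388:

```yaml
# Hypothetical probe overrides for the helm-operator deployment.
# Higher initialDelaySeconds and failureThreshold give the operator
# more time to bind :3030 before kubelet restarts it.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3030
  initialDelaySeconds: 30   # wait before the first check (assumed value)
  timeoutSeconds: 5
  failureThreshold: 10      # tolerate more failed checks before a kill
readinessProbe:
  httpGet:
    path: /healthz
    port: 3030
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 10
```

With the stock settings, a slow start trips the liveness probe, the container is killed, and the next start is even slower because the pod is already in back-off, which is the loop visible in the events above.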