SSO jobs fail once then succeed

I have this on my cluster:

root@oas:~# kubectl -n oas get pods
NAME                                                     READY   STATUS      RESTARTS   AGE
single-sign-on-consent-56847756f6-vz28g                  1/1     Running     0          130m
single-sign-on-create-admin-user-8gtzq                   0/1     Completed   0          127m
single-sign-on-create-admin-user-x8fvv                   0/1     Error       0          130m
single-sign-on-create-oauth2-client-9xb4d                0/1     Error       0          130m
single-sign-on-create-oauth2-client-wr26j                0/1     Completed   0          127m
single-sign-on-hydra-6f99b48997-6klcj                    1/1     Running     0          130m
single-sign-on-hydra-maester-6669cd84d4-ls88n            1/1     Running     0          130m
single-sign-on-login-7fc5f5b997-xjrcq                    1/1     Running     0          130m
single-sign-on-userbackend-6564469d8d-6gdfv              2/2     Running     0          130m
single-sign-on-userpanel-9b48bcb4b-jt2zf                 1/1     Running     0          130m
Total alerts: 4
[
  {
    "labels": {
      "alertname": "KubePodNotReady",
      "namespace": "oas",
      "pod": "single-sign-on-create-oauth2-client-9xb4d",
      "severity": "critical"
    },
    "annotations": {
      "message": "Pod oas/single-sign-on-create-oauth2-client-9xb4d has been in a non-ready state for longer than 15 minutes.",
      "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready"
    },
    "state": "firing",
    "activeAt": "2019-12-20T19:55:54.357099851Z",
    "value": "1e+00"
  },
  {
    "labels": {
      "alertname": "KubePodNotReady",
      "namespace": "oas",
      "pod": "single-sign-on-create-admin-user-x8fvv",
      "severity": "critical"
    },
    "annotations": {
      "message": "Pod oas/single-sign-on-create-admin-user-x8fvv has been in a non-ready state for longer than 15 minutes.",
      "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready"
    },
    "state": "firing",
    "activeAt": "2019-12-20T19:55:54.357099851Z",
    "value": "1e+00"
  },
  {
    "labels": {
      "alertname": "KubeJobFailed",
      "endpoint": "http",
      "instance": "10.42.0.24:8080",
      "job": "kube-state-metrics",
      "job_name": "single-sign-on-create-oauth2-client",
      "namespace": "oas",
      "pod": "monitoring-kube-state-metrics-6959ffbdd6-ggwhm",
      "service": "monitoring-kube-state-metrics",
      "severity": "warning"
    },
    "annotations": {
      "message": "Job oas/single-sign-on-create-oauth2-client failed to complete.",
      "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubejobfailed"
    },
    "state": "firing",
    "activeAt": "2019-12-20T19:55:54.357099851Z",
    "value": "1e+00"
  },
  {
    "labels": {
      "alertname": "KubeJobFailed",
      "endpoint": "http",
      "instance": "10.42.0.24:8080",
      "job": "kube-state-metrics",
      "job_name": "single-sign-on-create-admin-user",
      "namespace": "oas",
      "pod": "monitoring-kube-state-metrics-6959ffbdd6-ggwhm",
      "service": "monitoring-kube-state-metrics",
      "severity": "warning"
    },
    "annotations": {
      "message": "Job oas/single-sign-on-create-admin-user failed to complete.",
      "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubejobfailed"
    },
    "state": "firing",
    "activeAt": "2019-12-20T19:55:54.357099851Z",
    "value": "1e+00"
  }
]

It would be good to investigate why these jobs fail on their first attempt. The Job controller retries them and the second pods complete about three minutes later, but the failed first-attempt pods keep the KubePodNotReady and KubeJobFailed alerts firing.
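A starting point, assuming the failed pods from the listing above are still present, would be to compare the logs of the failed first attempts against the successful retries, and to check the Job specs and events:

```shell
# Logs of the failed first-attempt pods (names from the kubectl output above)
kubectl -n oas logs single-sign-on-create-oauth2-client-9xb4d
kubectl -n oas logs single-sign-on-create-admin-user-x8fvv

# Logs of the retry pods that completed, for comparison
kubectl -n oas logs single-sign-on-create-oauth2-client-wr26j
kubectl -n oas logs single-sign-on-create-admin-user-8gtzq

# Job specs (backoffLimit, restartPolicy) and recorded events
kubectl -n oas describe job single-sign-on-create-oauth2-client
kubectl -n oas describe job single-sign-on-create-admin-user
```

If the first-attempt logs show connection errors against hydra, that would suggest the jobs simply start before the hydra service is ready, which is only a guess here until the logs confirm it.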
