# Stackspin issues

Issue feed from https://open.greenhost.net/stackspin/stackspin/-/issues (feed updated 2024-02-01T10:18:45Z).

## [#1652: Renew session when showing bulk useradd form](https://open.greenhost.net/stackspin/stackspin/-/issues/1652)

Janek, updated 2024-02-01.

I just now clicked on "Add new users" in the Dashboard, diligently filled in the data and set permissions, only to be greeted with a 301 upon confirmation, meaning I had to log back in and do it all over again. I guess either save the form state or verify the session upon showing the form, with the latter seeming more feasible.

## [#1647: Website page load is delayed by fetching unreachable script](https://open.greenhost.net/stackspin/stackspin/-/issues/1647)

Remon Huijts, updated 2024-01-09.

When I visit the public Stackspin website, the homepage tries to fetch a script from https://analytics.greenhost.net/js/plausible.outbound-links.js?ver=1.3.0 but fails to connect. This blocks page loading for a few seconds.

## [#1615: Logout from Wekan leads to 404](https://open.greenhost.net/stackspin/stackspin/-/issues/1615)

Arie Peterson, updated 2023-09-07.

Logging out from Wekan leads to `https://sso.$DOMAIN/`, which gives a 404. The Wekan session is ended, though.

## [#1528: Set right Zulip role for admins](https://open.greenhost.net/stackspin/stackspin/-/issues/1528)

Arie Peterson, updated 2023-12-21. Milestone: 2.12.

Currently, Stackspin admins do not get any special role in Zulip, but only have "member" permissions. They should probably be something like "organization admin" or "owner". I'm not sure whether we can do this through SSO or SCIM, or whether we have to use a more manual Zulip CLI action for that.

## [#1251: Error when rerunning `install-app.sh` for wordpress](https://open.greenhost.net/stackspin/stackspin/-/issues/1251)

Janek, updated 2022-05-04.

```sh
❯ ./install/install-app.sh wordpress
Secret stackspin-wordpress-variables in namespace flux-system is already in a good state, doing nothing.
Storing secret stackspin-{{ app }}-oauth-variables in namespace flux-system in cluster.
Secret not created because of exception Error from server (Conflict): {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"stackspin-wordpress-oauth-variables\" already exists","reason":"AlreadyExists","details":{"name":"stackspin-wordpress-oauth-variables","kind":"secrets"},"code":409}
File /root/stackspin/install/templates/stackspin-wordpress-basic-auth.yaml.jinja does not exist, no action needed
✚ generating Kustomization
► applying Kustomization
✔ Kustomization updated
◎ waiting for Kustomization reconciliation
✔ Kustomization add-wordpress is ready
✔ applied revision v0.8/b136ab279e04471d63bc0117e52573c3960ed5af
```

Milestone: Future.

## [#1098: CI droplet cannot get deleted](https://open.greenhost.net/stackspin/stackspin/-/issues/1098)

Varac, updated 2023-01-19.

Job [#149095](https://open.greenhost.net/stackspin/stackspin/-/jobs/149095) failed for fc9828a04d6b5a2e0f631733d5c77f33a969bea0:
```
echo "Deleting old machine"
Deleting old machine
python3 -c "import greenhost_cloud;
greenhost_cloud.terminate_droplets_by_name(\"^${VPS_HOSTNAME}$\")"
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "/src/greenhost-cloud/greenhost_cloud/cosmos.py", line 325, in terminate_droplets_by_name
    delete_droplet(droplet['id'])
  File "/src/greenhost-cloud/greenhost_cloud/cosmos.py", line 152, in delete_droplet
    response = request_api('droplets/{0}'.format(droplet_id), 'DELETE')
  File "/src/greenhost-cloud/greenhost_cloud/cosmos.py", line 56, in request_api
    raise requests.HTTPError('WARNING: Got response code ',
requests.exceptions.HTTPError: [Errno WARNING: Got response code ] 500: '"The VPS was not deleted, please shutdown the VPS before removal"'
```

Milestone: Future.

## [#619: Display output of k3s installation script](https://open.greenhost.net/stackspin/stackspin/-/issues/619)

Varac, updated 2021-03-02.

[Pipeline 4402](https://open.greenhost.net/openappstack/openappstack/pipelines/4402) failed
because of ` 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.`:
```
root@extra-helm-values:~# kubectl -n oas get events
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Warning FailedScheduling pod/helm-operator-86c5869dbc-47vdp 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
22m Normal ScalingReplicaSet deployment/flux-memcached Scaled up replica set flux-memcached-869757cb88 to 1
22m Normal ScalingReplicaSet deployment/flux Scaled up replica set flux-647f949c78 to 1
22m Normal SuccessfulCreate replicaset/flux-647f949c78 Created pod: flux-647f949c78-whd67
<unknown> Warning FailedScheduling pod/flux-647f949c78-whd67 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
22m Normal SuccessfulCreate replicaset/flux-memcached-869757cb88 Created pod: flux-memcached-869757cb88-r2gg8
<unknown> Warning FailedScheduling pod/flux-memcached-869757cb88-r2gg8 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
<unknown> Warning FailedScheduling pod/flux-647f949c78-whd67 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
<unknown> Warning FailedScheduling pod/flux-memcached-869757cb88-r2gg8 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
21m Normal ScalingReplicaSet deployment/local-flux Scaled up replica set local-flux-54cb9dcc4c to 1
21m Normal SuccessfulCreate replicaset/local-flux-54cb9dcc4c Created pod: local-flux-54cb9dcc4c-bv5z2
<unknown> Warning FailedScheduling pod/local-flux-54cb9dcc4c-bv5z2 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
<unknown> Warning FailedScheduling pod/local-flux-54cb9dcc4c-bv5z2 0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
105s Warning ProvisioningFailed persistentvolumeclaim/prometheus-server storageclass.storage.k8s.io "local-path" not found
105s Warning ProvisioningFailed persistentvolumeclaim/grafana storageclass.storage.k8s.io "local-path" not found
105s Warning ProvisioningFailed persistentvolumeclaim/alertmanager storageclass.storage.k8s.io "local-path" not found
```
But there's no output from the k3s installation script that would allow retrospective debugging:
```
TASK [setup-kubernetes : Run k3s installation script] **************************
Friday 19 June 2020 10:43:08 +0000 (0:00:03.194) 0:00:58.497 ***********
changed: [extra-helm-values]
```

Milestone: Backlog.
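To make such failures debuggable retrospectively, the task could capture the installer's output and print it. A minimal sketch under stated assumptions: the task name mirrors the log above, but the script path and the registered variable name are illustrative placeholders, not the actual Stackspin playbook:

```yaml
# Sketch only: /tmp/k3s-install.sh and k3s_install are hypothetical names.
- name: Run k3s installation script
  ansible.builtin.command: /tmp/k3s-install.sh   # placeholder path
  register: k3s_install                          # capture stdout, stderr, rc

- name: Display output of k3s installation script
  ansible.builtin.debug:
    var: k3s_install.stdout_lines

- name: Display errors of k3s installation script
  ansible.builtin.debug:
    var: k3s_install.stderr_lines
```

With `register`, the script's stdout and stderr survive in the Ansible (and thus CI) log even when the task reports only `changed`, which would allow the retrospective debugging asked for here.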