Huge Docker log files fill up cluster space
from the chat:
Master CI pipeline is failing. All pods are pending, with this message:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s (x5094 over 9h) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
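For reference, a quick way to check which taint is actually blocking scheduling (the node name below is just an example); with the root disk at 95% it is most likely the disk-pressure taint the kubelet sets:

```sh
# Print every node together with its taints; a full root disk typically
# shows up as node.kubernetes.io/disk-pressure:NoSchedule.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Or inspect a single node in detail (node name is an example):
kubectl describe node master | grep -i -A2 taints
```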


I think it's because the root disk is almost full: /dev/xvda1 30G 29G 1,6G 95% /
root@master:/var/lib/OpenAppStack/local-storage# du -sh .
3,1G .
So it's not the local-storage volumes.
`docker system prune` only cleared 5 MB, so that's not the problem either.
There's a 12G 5e415309f9e1cd963e8aa535426cc7bdbb536214acd82a92611eb3b5cc697f23-json.log, which is the log file of one of the Docker containers.
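For reference, a sketch of how to find such files and map them back to a container name (assuming Docker's default data dir /var/lib/docker):

```sh
# List the biggest per-container log files written by the json-file driver.
du -ah /var/lib/docker/containers/*/*-json.log | sort -rh | head

# Map each running container to the log file it writes to.
docker ps -q | xargs docker inspect --format '{{.Name}} {{.LogPath}}'
```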
At first I thought it was the kubelet log, but it turned out to be the log files of these containers:
3c0c9b0897f9 rancher/hyperkube:v1.15.5-rancher1 "/opt/rke-tools/entr…" 2 days ago Up 10 hours kube-controller-manager
5e415309f9e1 rancher/hyperkube:v1.15.5-rancher1 "/opt/rke-tools/entr…" 2 days ago Up 2 days kube-apiserver
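As a stopgap until rotation is in place, the big log can be truncated in place to free the space immediately (a sketch, assuming the default Docker data dir):

```sh
# Truncate the live log instead of deleting it: Docker keeps the file
# open, so an rm would not free the space until the container restarts.
truncate -s 0 /var/lib/docker/containers/5e415309f9e1*/*-json.log
```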
I think we need to somehow set up RKE/k8s to rotate these log files.
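Since RKE runs the control-plane components as plain Docker containers with the json-file log driver, one option (a sketch, not tested on this cluster) is to enable Docker's built-in rotation via /etc/docker/daemon.json; the size and file-count limits below are just example values, and any existing daemon.json should be merged rather than overwritten:

```sh
# Enable rotation for the json-file log driver on each node.
cat >/etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF

# Restart Docker to apply. The new log-opts only affect containers created
# afterwards, so the RKE-managed ones have to be recreated (e.g. by
# rerunning `rke up`) before rotation actually kicks in.
systemctl restart docker
```

Alternatively, a logrotate rule on /var/lib/docker/containers/*/*-json.log with copytruncate should also work, since Docker keeps the log files open.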