Karbon always reserves 400MB for the kubelet and other node resources. However, this is not always enough, and the kubelet can run out of memory when the system is under load.
When the kubelet runs out of memory, pods get stranded in the “Terminating” state and we have to reboot the node. What I expected was that the kubelet would evict pods that exceeded their memory, and that the kubelet itself would never run out of memory.
Typically 1-1.4GB is reserved on nodes with 16GB of memory in other managed Kubernetes offerings (EKS/AKS/GKE/...).
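The effect of the reservation is visible in the gap between a node's Capacity and Allocatable (Allocatable = Capacity - kube-reserved - system-reserved - eviction threshold). A quick way to compare on any cluster:

```sh
# Compare Capacity vs. Allocatable on a node; the difference is what the
# kubelet has reserved for itself, system daemons and eviction headroom.
kubectl describe node <node-name> | grep -E -A 7 '^(Capacity|Allocatable):'
```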
See also
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
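For reference, the settings that document describes live in the kubelet configuration. A minimal sketch of what we would like to be able to set (the file path and values are illustrative assumptions, not Karbon's actual defaults):

```yaml
# /var/lib/kubelet/config.yaml (path assumed; Karbon may keep it elsewhere)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Memory/CPU reserved for Kubernetes daemons (kubelet, container runtime)
kubeReserved:
  cpu: "100m"
  memory: "1Gi"
# Memory/CPU reserved for OS daemons (systemd, sshd, ...)
systemReserved:
  cpu: "100m"
  memory: "500Mi"
# Start evicting pods before the node itself runs out of memory
evictionHard:
  memory.available: "200Mi"
```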
For now we have added extra nodes and hope that no node runs out of memory. However, that is not how Kubernetes is designed to work.
Is there any workaround for adjusting the fixed 400MB reservation?
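Has anyone tried editing the kubelet configuration on the nodes directly? Something along these lines, with the big caveat that the paths and service name are assumptions for a typical systemd-managed kubelet, this is presumably unsupported, and Karbon may revert the change on upgrade or node redeploy:

```sh
# On each worker node (paths/service name assumed; unsupported, may be reverted):
sudo vi /var/lib/kubelet/config.yaml    # raise kubeReserved/systemReserved memory
sudo systemctl restart kubelet          # restart the kubelet to apply the change
kubectl describe node <node-name> | grep -A 7 '^Allocatable:'   # verify the result
```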