Hi Jose,

Yes, that's fine - I just need to figure out how to raise the support ticket, as I never had the pleasure of using it in the past.

Yes, it seems something is off with that particular worker node (10.20.25.73) and the pods running there; the failures surface when communicating via the kubelet, not strictly from the Calico nodes:

```
igor.stankovic@rgs-pa-bastion-1:~$ kubectl -n kube-system logs -f kube-proxy-ds-whbpl
Error from server: Get "https://10.20.25.73:10250/containerLogs/kube-system/kube-proxy-ds-whbpl/kube-proxy?follow=true": dial tcp 10.20.25.73:10250: i/o timeout
igor.stankovic@rgs-pa-bastion-1:~$
```

We tried restarting the kubelet and Docker, then did a full recycle of the VM node, but it's still the same. It would be interesting to hear from support.
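Worth noting that an "i/o timeout" (as opposed to "connection refused") usually means traffic is being dropped in transit rather than the kubelet being down. A minimal, cluster-independent sketch of that distinction using plain Python sockets (the function name and classification strings are illustrative, not from any Nutanix tooling):

```python
import socket

def probe_kubelet(host: str, port: int = 10250, timeout: float = 3.0) -> str:
    """Attempt a TCP connection to the kubelet port and classify the result."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"  # port reachable: kubelet is listening
    except socket.timeout:
        return "timeout"  # matches the 'i/o timeout' above: packets silently dropped
    except OSError:
        return "refused/unreachable"  # fast failure: host answered, nothing listening

# A closed local port fails fast with a refusal, not a timeout, which is
# how you can tell a firewall/routing drop apart from a down service.
print(probe_kubelet("127.0.0.1", 1))
```

Running the same probe from the bastion against 10.20.25.73:10250 and seeing "timeout" would point at the network path rather than the kubelet process itself.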
Hi,

Yes, bandwidth is just fine ... did some basic testing and all the K8s-based VMs initialised just fine. It's just weird that this particular pod can't initialise the Calico network, hence the Karbon deployment fails. The Karbon cluster is not removed automatically, though, so there is a chance to look around.

For the pod calico-node-fjwjp:

```
kube-system   calico-node-fjwjp   0/1   CrashLoopBackOff   327   19h
```

It's constantly restarting, as one would expect, since the readiness state is never reached.

```
Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Warning  Unhealthy  12m (x2224 over 19h)    kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory
  Warning  BackOff    2m46s (x3945 over 19h)  kubelet  Back-off restarting failed container
```

Full output from pod describe ...
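The readiness-probe error says calico-node never wrote /var/lib/calico/nodename, which it normally creates during startup. A rough sketch of a check one could run on the affected node (the path is taken from the error above; the hint text is my assumption, not anything Calico emits):

```python
from pathlib import Path

# Path comes straight from the readiness-probe error above.
NODENAME = Path("/var/lib/calico/nodename")

def diagnose(path: Path = NODENAME) -> str:
    """Report whether calico-node managed to write its nodename file."""
    if not path.exists():
        # calico-node writes this file during startup, so its absence
        # suggests the container keeps dying before initialisation finishes.
        return "missing - check the calico-node startup/init container logs"
    return f"nodename = {path.read_text().strip()}"

print(diagnose())  # on the broken node this reports the file as missing
```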
Yes, 1.21+ - sorry, it was a long day. I will mark this as resolved now. Thanks!!
Forgot to post, the pods status:

```
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-7f66766f7f-nd8sx   1/1     Running            1          74m
kube-system   calico-node-2ctb4                          1/1     Running            0          74m
kube-system   calico-node-7fx7n                          1/1     Running            0          74m
kube-system   calico-node-bvct7                          1/1     Running            1          74m
kube-system   calico-node-fjwjp                          0/1     CrashLoopBackOff   23         74m
kube-system   calico-node-xth2k                          1/1     Running            0          74m
kube-system   calico-typha-6bfd55df7-ptc7d               1/1     R
```
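For a listing like this, a small helper to pull out just the non-ready pods can save some scanning. A sketch that parses the `kubectl get pods -A` column layout shown above (SAMPLE and the function name are illustrative):

```python
SAMPLE = """\
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-node-2ctb4                          1/1     Running            0          74m
kube-system   calico-node-fjwjp                          0/1     CrashLoopBackOff   23         74m
"""

def unhealthy(listing: str):
    """Return (namespace, name, status) for pods that are not fully ready and Running."""
    rows = []
    for line in listing.splitlines()[1:]:      # skip the header row
        ns, name, ready, status, *_ = line.split()
        up, total = ready.split("/")
        if up != total or status != "Running":
            rows.append((ns, name, status))
    return rows

print(unhealthy(SAMPLE))  # -> [('kube-system', 'calico-node-fjwjp', 'CrashLoopBackOff')]
```

Here it immediately isolates calico-node-fjwjp as the only pod stuck in CrashLoopBackOff.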
Hi Jose! Many thanks for reaching out.

I found the culprit here. In essence, we are running centralised Prism control, and it is linked via site-to-site firewall-based VPN tunnels to the other Nutanix platforms, so that management is central. However, I had to completely NAT-exempt the private CIDRs between the interfaces for the encrypted S2S channel, and in particular for the inter-routing between the Nutanix interfaces. The returning network packets did not carry their original source IP addresses, so there was a breakdown; this is now fixed.

Curiously, are you planning to support Kubernetes 2.1+ anytime soon for Karbon?

So, the Karbon deployment progressed almost to the end, and now I have a different problem with Calico, so it failed again:

```
2021-10-16T11:51:32.407Z calico.go:552: [ERROR] [k8s_cluster=RGS-PA-K8-STAGING] Failed to verify calico addon
2021-10-16T11:51:32.407Z k8s_deploy.go:1478: [ERROR] [k8s_cluster=RGS-PA-K8-STAGING] Failed to deploy calico/flannel: Failed to deploy calico: Failed to verify calico: [ Operati
```
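As a toy model of why the missing NAT exemption broke things (illustrative only, not Nutanix- or firewall-specific): a stateful client only accepts replies whose source address matches the destination it originally dialled, so a reply whose source was rewritten by NAT on the return path looks like a silent drop and eventually times out:

```python
def reply_accepted(dialled_dst: str, reply_src: str) -> bool:
    """A connection-tracking check in miniature: a reply only matches the
    flow if its source is the address the client originally dialled."""
    return dialled_dst == reply_src

# With NAT still applied on the S2S tunnel, the reply comes back rewritten
# (192.0.2.1 is a documentation address standing in for the NAT'd source),
# so the client drops it and the call times out:
assert not reply_accepted("10.20.25.73", "192.0.2.1")

# With the private CIDRs NAT-exempted, the original source survives the
# tunnel and the reply is accepted:
assert reply_accepted("10.20.25.73", "10.20.25.73")
```

That asymmetry is consistent with the earlier symptom: an "i/o timeout" on the kubelet port rather than an outright refusal.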