Nutanix Kubernetes Engine
Kubernetes Management Made Simple
We have set up NKE with Rancher as the container management and orchestration layer. We have also enabled Azure Arc-enabled Kubernetes and established communication between Azure and the on-prem Nutanix environment. We have strict security requirements and need Microsoft Defender for Containers to scan the clusters. Defender has flagged several recommendations on the Nutanix Kubernetes clusters as medium to high severity. Is there any documentation on which recommendations can be suppressed due to the nature of NKE, versus actual recommendations that need action? We have a "clean install" (apart from Rancher and Arc), and the recommendations relate to Nutanix pods.
Hello, I have an issue with calico pods:

Warning  Unhealthy  9m53s (x3 over 10m)  kubelet, karbon   Liveness probe failed: calico/node is not ready: bird/confd is not live: Service bird is not running. Output << down: /etc/service/enabled/bird: 0s, normally up, want up >>
Warning  Unhealthy  7m3s (x12 over 10m)  kubelet, karbon-  Liveness probe failed: calico/node is not ready: bird/confd is not live: Service bird is not running. Output << down: /etc/service/enabled/bird: 1s, normally up, want up >>
Warning  Unhealthy  2m3s (x50 over 10m)  kubelet, karbon-  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused

I tried different workarounds: increasing the scan time from 2 to 10 and then 20, and changing "IP_AUTODETECTION_METHOD=interface" from eth.* to eth0/eth1/ens.*. But no luck. Would you have any suggestions to resolve the issue? Thanks.
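A frequent cause of these BIRD liveness failures is calico detecting the wrong node IP, so pinning the autodetection method is a reasonable next step. As a sketch only (the DaemonSet/container names assume a stock calico-node install in kube-system, and the `can-reach` target is a placeholder; verify your actual manifest with `kubectl -n kube-system get ds` first):

```yaml
# Fragment to merge into the calico-node DaemonSet in kube-system,
# e.g. via `kubectl -n kube-system patch ds calico-node --patch-file <file>`.
# Names and the can-reach target are assumptions for illustration.
spec:
  template:
    spec:
      containers:
        - name: calico-node
          env:
            # "can-reach=<ip>" or "cidr=<node-cidr>" are often more robust
            # than interface regexes when NIC names differ across nodes.
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.0.1"
```

After the patch, the calico-node pods restart; checking the pod logs for the detected IP confirms whether the right interface was chosen.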
Hello, I wanted to test Advanced Kubernetes Management, but unfortunately I didn't understand how it works and I deleted the cluster on which I had activated it. Now I can't create a new cluster because I get an "API unreachable" error (which is expected, since that cluster was deleted). Is it possible to reinstall it properly on a new cluster, or to disable it completely? Thanks
Hi guys, I'm currently working on Ansible to deploy K8s clusters through Karbon and I hit a fatal issue. Ansible is core 2.12.9, PC is 2022.1, the nutanix.ncp collection has been tested in versions 1.6.0 and 1.7.0, and there is absolutely no traffic denied between Ansible and PC (to sum up, everything is open). I use a playbook based on the example provided in ansible-doc (I also tried the example itself, and I got the same errors in both cases). [EDIT: I can't add code in this post; I get a "Something gone wrong" banner every time.] Here are the logs from when the playbook fails (after about 5 minutes running; from PC, the ETCD deployment gets stuck at 8%): [EDIT: same banner, can't paste them.] Moreover, K8s cluster deployment through the PC GUI works smoothly. Does anyone have an idea how to fix the "failed to deploy ntnx dvp" error? Thanks a lot :) Gael
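Since the post's banner issue prevented pasting the playbook, here is a minimal sketch of what such a task typically looks like, assuming the nutanix.ncp collection's Karbon module; the module and parameter names follow `ansible-doc nutanix.ncp.ntnx_karbon_clusters` and may differ slightly between collection versions, and all values (IPs, subnet, versions, CIDRs) are placeholders:

```yaml
# Hedged sketch: deploy a Karbon/NKE cluster with nutanix.ncp.
# Verify parameter names against `ansible-doc` for your collection version.
- name: Deploy a Karbon cluster
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create k8s cluster
      nutanix.ncp.ntnx_karbon_clusters:
        nutanix_host: "{{ pc_ip }}"          # Prism Central address
        nutanix_username: "{{ pc_user }}"
        nutanix_password: "{{ pc_pass }}"
        validate_certs: false
        name: dev-cluster
        k8s_version: "1.19.8-0"              # placeholder version
        host_os: "ntnx-1.0"                  # Karbon node OS image
        node_subnet:
          name: "vm-network"                 # placeholder subnet
        cni:
          network_provider: Calico
          service_ipv4_cidr: "172.19.0.0/16"
          pod_ipv4_cidr: "172.20.0.0/16"
```

Comparing a working GUI deployment's spec (via the Karbon API) against what the module sends can help isolate whether the "failed to deploy ntnx dvp" error comes from the storage-class/volume-plugin parameters.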
Hi, I'm currently working on RBAC rights for Prism Central and also for entities (VMs, Apps, ...). From my understanding and from the documentation, you have to be a User Admin on PC to get full rights on a K8s cluster. Given that, how does Karbon set K8s cluster rights when it deploys a new cluster? (For instance, if you are a Viewer on PC you can't connect to the K8s cluster; if you're a User Admin, you can do whatever you want.) Best regards, Gael
“Kubernetes deployments are inherently dynamic and challenging to manage at scale,” said Thomas Cornely, SVP, Product Management, Nutanix. “Running Kubernetes container platforms cost-effectively at large scale requires developer-ready infrastructure that seamlessly adapts to changing requirements. Our expertise in simplifying infrastructure management while optimizing resources, both on-premises and in the public cloud, is now being applied to help enterprises adopt Kubernetes more quickly. The Nutanix Cloud Platform now supports a broad choice of Kubernetes container platforms, provides integrated data services for modern applications, and enables developers to provision Infrastructure as Code.” According to Gartner, by 2027, 25% of all enterprise applications will run in containers, up from fewer than 10% in 2021. This is a significant challenge for many, given that most Kubernetes solutions are not built to support enterprise scale, and even fewer can do so in a manner that is cost-effective.
Hello, I have a Kubernetes cluster on Nutanix with 1 master node and 2 worker nodes (test environment). All nodes are on Kubernetes version 1.20.x. I want to upgrade all nodes to 1.21.x without downtime for the applications/services running on them. I have seen the Nutanix guide, which describes an easy procedure to upgrade the Kubernetes version, but it says applications may have downtime. Any ideas or input on upgrading from 1.20.x to 1.21.x without taking the apps down?
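Since node upgrades drain one node at a time, apps generally survive the upgrade only if they run multiple replicas and a PodDisruptionBudget keeps a minimum number available during each drain. A minimal sketch, assuming a hypothetical Deployment labeled `app: my-app` with at least 2 replicas:

```yaml
# Sketch: keep at least one "my-app" pod (hypothetical name) running
# while each node is drained and upgraded in turn.
# On Kubernetes 1.20 the PDB API is policy/v1beta1; it becomes
# policy/v1 from 1.21 onward.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```

Single-replica workloads (and a single master node, as in this test environment) cannot avoid a brief interruption regardless of the PDB, which is likely why the guide warns about downtime.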
I would like to know if support for Windows nodes has made any progress toward being prioritized for Karbon. We run a product called OutSystems and would like to move that workload to Nutanix Karbon, but OutSystems containers only work on Windows nodes. I asked this question a year ago and was told it was on the backlog. Could someone from the Karbon product team provide an update on the backlog item to support Windows nodes in a Karbon-based Kubernetes cluster?
When I run the command "kubectl api-resources", I get a list of all available resource types. This list includes the CronJob type in the API group batch. However, when I try to deploy a CronJob I get the message: no matches for kind "CronJob" in version "batch/v1".

apiVersion: batch/v1
kind: CronJob
metadata:
  name: devel-sched
  namespace: devel

Does anybody know how to fix this?
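The likely cause is the API version: CronJob only graduated to batch/v1 in Kubernetes 1.21, so on older clusters it is served as batch/v1beta1 even though `kubectl api-resources` lists it under the batch group. A sketch of the corrected manifest, with a minimal placeholder job spec added so it is complete:

```yaml
# On clusters older than v1.21, use batch/v1beta1 for CronJob.
# schedule/container below are placeholders for illustration.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: devel-sched
  namespace: devel
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox
              command: ["date"]
```

Running `kubectl api-resources -o wide | grep -i cronjob` (or `kubectl explain cronjob`) shows which group/version the cluster actually serves.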
Hello folks! I just signed up for a SaaS trial of Nutanix Karbon. After creating the cluster and downloading the kubeconfig, I see it gives me an IP of 172.31.x.x:443. Running kubectl get nodes with that kubeconfig times out because it can't reach the cluster IP. What should I do in order to expose the cluster IP and try installing/running our applications on Karbon?
I have Community Edition and can only see the "ntnx-0.5" image, due to which I am stuck at Kubernetes v1.16 (Karbon 2.0.2). My objective is to upgrade Kubernetes to 1.17 or 1.18. LCM does not display any software to upgrade. I have uploaded one more image, "1.0", under Images in PC, which I understand is the first step before a K8s upgrade. However, I cannot get the ntnx-1.0 image to appear under the Karbon downloads. Could someone please advise on this? Thanks
The Karbon airgap setup document on the Nutanix portal describes the procedure here: https://portal.nutanix.com/page/documents/details?targetId=Karbon-v2_2:kar-karbon-airgap-deploy-t.html. We have a web server (192.168.100.99) and Prism Central (192.168.100.45), and the Karbon files were downloaded following that method. When entering the command in the Prism Central CLI it completes successfully, but when checking in the Prism Central web UI the tasks show as failed. Could you please suggest another approach as soon as possible?
Karbon always reserves 400 MB for the kubelet and other node resources. However, this is not always enough, and the kubelet can run out of memory when the system is under load. When the kubelet runs out of memory, we get pods stranded in the "Terminating" state and have to reboot the node. What I expected was that the kubelet would terminate the pods that ran out of memory, and that the kubelet itself would not run out of memory. Typically 1-1.4 GB is reserved on nodes with 16 GB of memory in other Kubernetes clusters (EKS/AKS/GKE/...). See also https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/. Currently we have added extra nodes and hope we will not hit out-of-memory on any node; however, that is not how Kubernetes is designed to work. Is there any workaround for adjusting the fixed 400 MB reservation?
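For reference, these are the upstream KubeletConfiguration knobs from the linked Kubernetes page; whether Karbon exposes a supported way to set them on its nodes is exactly the open question in this post, and the values below are illustrative only:

```yaml
# Upstream kubelet reservation settings (kubelet config file).
# Values are placeholders sized for a 16 GB node, not a Karbon default.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "250m"
  memory: "1Gi"        # memory set aside for kubelet/container runtime
systemReserved:
  memory: "500Mi"      # memory set aside for OS daemons
evictionHard:
  memory.available: "200Mi"  # evict pods before the node itself starves
```

With reservations like these, node allocatable shrinks accordingly and the kubelet evicts pods under memory pressure instead of running out of memory itself.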
Having deployed a new Kubernetes cluster with Karbon and downloaded the kubeconfig, kubectl operations result in an error:

Unable to connect to the server: x509: certificate is valid for cluster.local, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, cluster.local, not xxxxx

What is the recommended way to add a SAN to the Karbon Kubernetes API certificate?
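While waiting for a supported way to re-issue the certificate with an extra SAN, one client-side workaround is to tell kubectl which SAN to verify against while still connecting to the external address: kubeconfig supports a per-cluster `tls-server-name` field (requires a reasonably recent kubectl). A sketch, with a placeholder endpoint:

```yaml
# kubeconfig fragment: verify the server cert against the SAN
# "kubernetes" while connecting to the cluster's external address.
# The server IP and cluster name below are placeholders.
clusters:
  - name: my-karbon-cluster
    cluster:
      server: https://203.0.113.10:443   # replace with your endpoint
      certificate-authority-data: <leave as downloaded>
      tls-server-name: kubernetes
```

This only changes how the client validates the certificate; it does not add a SAN to the certificate itself.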
Hi there, it seems that I have a failure near the very end of deployment. Here is the output from karbon_core.out:

2021-10-17T11:57:31.052Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:33.093Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:35.135Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:36.806Z calico.go:552: [ERROR] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] Failed to verify calico addon
2021-10-17T11:57:36.806Z k8s_deploy.go:1478: [ERROR] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] Failed to deploy calico/flannel: Failed to deploy calico: Failed to verify calico
Hi, I have performed tests of the plugin that integrates Volume Groups on a Nutanix cluster with Docker containers (https://next.nutanix.com/karbon-kubernetes-orchestration-30/nutanix-dvp-docker-volume-plug-in-25371 and https://next.nutanix.com/karbon-kubernetes-orchestration-30/docker-nutanix-container-volume-plug-in-18726). While reading the Docker Volume Plugin documentation at https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2037-Docker-Containers-on-AHV:BP-2037-Docker-Containers-on-AHV, I could not tell whether the plugin works with Docker containers in swarm mode (a Docker swarm cluster). I performed some tests with Ubuntu 20.04, forming a Docker swarm cluster with 3 nodes, all VMs with Nutanix Guest Tools installed. All the tests used .yml files for the creation of stacks and services; as for the volumes, tests were done both with the volume-creation command and in the .yml files. I noticed that the volumes are created normally
Is it possible to use the airgap deployment to store additional K8s components like Flux, an ingress controller, etc.? I'm using Terraform to deploy Karbon, so I should be able to access the airgap registry from those deployment modules. How do I manage the airgap registry; is it a standard private container registry? And what about Helm charts: can the airgap registry store those as well?
Hello friends, from what I have read in the Karbon documentation, as well as from YouTube videos, Karbon on AHV does not let me manage aspects of the containers themselves: the applications, load balancing, reverse proxying, container images, etc. Is that right? Related to this, if Karbon does in fact lack such features, is it possible to use Karbon in conjunction with tools that do have them, like Rancher? Has anyone had this experience?