Recently active
Karbon always allocates 400 MB for the kubelet and other node resources. However, this is not always enough, and the kubelet can run out of memory when the system is under load. When the kubelet runs out of memory we get pods stranded in the "Terminating" state and have to reboot the node. What I expected was that the kubelet would terminate the pods that ran out of memory, and that the kubelet itself would not run out of memory. Other managed Kubernetes offerings (EKS/AKS/GKE/...) typically reserve 1-1.4 GB on nodes with 16 GB of memory. See also https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ For now we have added extra nodes and hope that no node runs out of memory, but that is not how Kubernetes is meant to be operated. Is there any workaround for adjusting the fixed 400 MB reservation?
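Not an answer to adjusting the reservation itself, but a quick way to see how much each node is actually setting aside (capacity minus allocatable) is to query the node objects with the downloaded kubeconfig. A minimal sketch using the official Kubernetes Python client, assuming the Karbon kubeconfig is the active context:

# Minimal sketch: compare each node's memory capacity with its allocatable
# memory to see how much is held back for the kubelet, system daemons and
# eviction thresholds. Assumes "pip install kubernetes" and that the downloaded
# Karbon kubeconfig is the active kubectl context.
from kubernetes import client, config


def parse_mem(quantity: str) -> int:
    """Convert a Kubernetes memory quantity (e.g. '16393256Ki') to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "K": 1000, "M": 1000**2, "G": 1000**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)


config.load_kube_config()
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    cap = parse_mem(node.status.capacity["memory"])
    alloc = parse_mem(node.status.allocatable["memory"])
    print(f"{node.metadata.name}: ~{(cap - alloc) / 1024**2:.0f} MB reserved")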
Hi, we are looking for a solution to restore VMs running on Nutanix AHV to a vSphere (SAN-based) environment. I've heard that HYCU can back up non-Nutanix ESXi, but it is not clear whether I can restore VMs running on AHV to non-Nutanix ESXi and vice versa. If HYCU is not the answer, is there any other vendor that can achieve this goal: Nutanix AHV backup/DR to non-Nutanix ESXi? We need to run the VMs on non-Nutanix ESXi in case of a Nutanix/primary DC failure.
Nutanix’s support for bare metal is getting close. Please take a look at some of the foundational components for networking in Azure. NCM (Nutanix Clusters on Azure) utilizes Flow Networking to create an overlay network in Azure, easing administration for Nutanix administrators and reducing networking constraints across cloud vendors. Flow Networking is used to abstract the Azure native network by creating overlay virtual networks. On the one hand this abstracts the underlying network in Azure; at the same time, it allows the network substrate (and its associated features and functionality) to be consistent with the customer’s on-premises Nutanix deployments. You will be able to create new virtual networks (called Virtual Private Clouds, or VPCs) within Nutanix, create subnets in any address range, including those from the RFC 1918 (private) address space, and define DHCP, NAT, routing, and security policy right from the familiar Prism Central interface. Flow Networking can mask or reduce Cl…
What is the status of Credential Guard on Nutanix VMs? We are running AOS 5.20. I have created a new VM with UEFI, Secure Boot, and Credential Guard enabled, but I can’t get it to work. Credential Guard is enabled via GPO, but it still will not run. When I look at Device Security, it says “Standard hardware security not supported”, and no compatible TPM is shown in tpm.msc. The OS I’m testing on is Microsoft Windows Server 2019.
I need someone to light my bulb regarding a data protection scenario I have in mind. We are currently migrating from NetApp/UCS ESX to Nutanix ESX and we use Commvault for data protection. In addition to Commvault, we use NetApp volume snapshots for DR and fast recovery; these do not replicate to the remote site, however. In Nutanix, I have configured containers for the Prod and Dev/QA datastores. The Prod datastore is protected by Commvault IntelliSnap and keeps two days' worth of snapshots. The Dev/QA datastore is not protected by Commvault due to a licensing issue, so we would like to use Nutanix data protection to get similar protection to the Prod datastore (2 days of snapshots). My question is: is writing a script to automatically add VMs to a protection domain the only way to go, or is there another suggestion?
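If scripting does turn out to be the way to go, the Prism Element v2.0 REST API can be driven from a short scheduled job. A rough sketch, assuming the v2.0 protection_domains/{name}/protect_vms endpoint (verify the exact payload against the v2.0 API reference for your AOS version); the address, credentials and name prefix below are placeholders:

# Rough sketch: add VMs whose names match a prefix to an existing protection
# domain through the Prism Element v2.0 REST API. Endpoint and payload are
# based on the v2.0 reference (protection_domains/{name}/protect_vms) and
# should be verified before use.
import requests

PRISM = "https://prism-element.example.com:9440"   # placeholder address
PD_NAME = "PD-DEV-QA"                              # placeholder protection domain
AUTH = ("admin", "secret")                         # use a service account in practice

session = requests.Session()
session.auth = AUTH
session.verify = False   # only if the cluster still uses the self-signed certificate

# List VMs and pick the ones that should be protected (here: name prefix "dev-").
vms = session.get(f"{PRISM}/PrismGateway/services/rest/v2.0/vms/").json()["entities"]
to_protect = [vm["name"] for vm in vms if vm["name"].startswith("dev-")]

# Add them to the protection domain by name.
resp = session.post(
    f"{PRISM}/PrismGateway/services/rest/v2.0/protection_domains/{PD_NAME}/protect_vms",
    json={"names": to_protect},
)
resp.raise_for_status()
print(f"Requested protection for {len(to_protect)} VMs in {PD_NAME}")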
Can Credential Guard be enabled via GPO for Server 2016 VMs running on AHV? Or is this something that only applies to servers running on a Hyper-V host?
We have a UTM with 6 interfaces that go to separate subnets. Each subnet has a ToR switch, and the UTM handles the routing. We are installing a new 4-node Nutanix block and virtualizing the physical servers. Assuming we have 6 physical servers and each server is on a separate subnet, what would our network diagram look like once we include the Nutanix block? Would we need another switch with the 6 subnets VLANned, i.e. one connection from the block, and then, based on the traffic destination, send the traffic out through the VLAN port to the ToR switch in the destination subnet?
I’ve been going through Google searches to find out whether Cascade Lake and Ice Lake nodes can coexist within the same cluster, unfortunately with no luck, so I hope this forum can help me answer this question. My concern is that I was made to understand that the Cascade Lake and Ice Lake CPU architectures are different, which may pose performance problems for VMs even if the nodes can coexist within the cluster. So the question is: can NX-G7 and NX-G8, which run on different CPU architectures, coexist within the same cluster, and will it cause any performance issues if this is possible? It would help if you could point me to a KB article, or any article for that matter, that explains this. Thank you in advance for the help.
Having deployed a new Kubernetes cluster with Karbon and downloaded the kubeconfig, kubectl operations result in an error: Unable to connect to the server: x509: certificate is valid for cluster.local, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, cluster.local, not xxxxx. What is the recommended way to add a SAN to the Karbon Kubernetes API certificate?
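Before changing anything it can help to confirm which SANs the API server certificate actually presents, since the server address in the kubeconfig must match one of them. A small diagnostic sketch in Python (the API address and port are placeholders taken from the kubeconfig; requires the cryptography package):

# Diagnostic sketch: print the Subject Alternative Names presented by the
# Karbon Kubernetes API server. API_HOST/API_PORT are placeholders copied
# from the "server:" line of the downloaded kubeconfig.
import ssl
from cryptography import x509

API_HOST = "10.0.0.10"   # placeholder API server address
API_PORT = 443

pem = ssl.get_server_certificate((API_HOST, API_PORT))
cert = x509.load_pem_x509_certificate(pem.encode())
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value

print("DNS SANs:", san.get_values_for_type(x509.DNSName))
print("IP SANs: ", [str(ip) for ip in san.get_values_for_type(x509.IPAddress)])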
Hello Nutanix community! I have a question about the ability to access the AHV/CVM console from https://demo.nutanix.com/. Is the experience limited to the Prism GUI, or can we also access the cluster console? Thanks
Hi there, it seems that I have a failure near the very end of deployment. Here is the output from karbon_core.out:
2021-10-17T11:57:31.052Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:33.093Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:35.135Z kube_prometheus.go:1016: [DEBUG] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] expecting 5 nodes to be running calico-node daemon pod in kube-system namespace. Currently running: 4
2021-10-17T11:57:36.806Z calico.go:552: [ERROR] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] Failed to verify calico addon
2021-10-17T11:57:36.806Z k8s_deploy.go:1478: [ERROR] [k8s_cluster=RGS-PA-K8-CLUSTER-STAGING] Failed to deploy calico/flannel: Failed to deploy calico: Failed to verify calico
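The log suggests one of the five nodes never got a running calico-node pod. If the API server is reachable with the cluster's kubeconfig, a quick way to see which node is missing it, and what state the pod is in, is to list the daemonset's pods. A small sketch with the Kubernetes Python client, assuming the standard k8s-app=calico-node label used by the Calico manifests:

# Diagnostic sketch: list calico-node pods in kube-system and show which node
# each one runs on and its phase. Assumes "pip install kubernetes" and that the
# cluster's kubeconfig is the active context.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("kube-system", label_selector="k8s-app=calico-node")
for pod in pods.items:
    print(pod.spec.node_name, pod.metadata.name, pod.status.phase)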
Hi, I have performed tests of the plugin for integration between Volume Groups in a Nutanix cluster and Docker containers (https://next.nutanix.com/karbon-kubernetes-orchestration-30/nutanix-dvp-docker-volume-plug-in-25371 and https://next.nutanix.com/karbon-kubernetes-orchestration-30/docker-nutanix-container-volume-plug-in-18726). While reading the Docker Volume Plugin documentation at https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2037-Docker-Containers-on-AHV:BP-2037-Docker-Containers-on-AHV, I did not see whether the plugin works with Docker containers in swarm mode (a Docker Swarm cluster). I performed some tests with Ubuntu 20.04, forming a Docker Swarm cluster with 3 nodes, all VMs with Nutanix Guest Tools installed. All tests used .yml files to create stacks and services; the volumes were created both with the volume-create command and in the .yml files. I noticed that the volumes are created normally
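For repeatable testing it can be handy to drive the same volume operations from the Docker SDK for Python instead of stack .yml files. A minimal sketch, assuming the plugin from the posts linked above is installed on every swarm node and registers under the driver name nutanix; the size option name is illustrative only:

# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumptions: the Nutanix volume plugin is installed on every swarm node and
# registers as driver "nutanix"; the "sizeMb" option name is illustrative --
# check the plugin's documentation for the driver_opts it actually accepts.
import docker

client = docker.from_env()

# Create a volume backed by a Nutanix Volume Group.
vol = client.volumes.create(
    name="pgdata",
    driver="nutanix",
    driver_opts={"sizeMb": "10240"},   # hypothetical option name
)
print("created:", vol.name, "driver:", vol.attrs.get("Driver"))

# Attach it to a test container (single node here; swarm services would
# reference the same named volume from their stack .yml).
container = client.containers.run(
    "postgres:13",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print("container:", container.short_id)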
Is it possible to have replication with a 30-minute RPO? NearSync supports a 1 to 15 minute RPO and Async supports >= 60 minutes. Is there any option by which we can get a 30-minute RPO?
Hi Nutanix, some Nutanix nodes are installed at a customer site and we don’t have their specifications. I tried to find them on Google but had no luck. Could you please help me?
Which enterprise antivirus solution do you use on your virtualized Windows servers/clients? Any recommendations? We are currently looking into BitDefender and Huntress Labs.
Can we change the CVM password to something else, and will it affect the Nutanix services if we do so? Thanks in advance :)
I have a problem recovering a VM in Rubrik. I have made several attempts to recover the VM. As additional info:
1. The VM is on Nutanix; it previously used the SATA storage bus and has been converted to the SCSI storage bus.
2. Nutanix has been upgraded to AOS version 5.20.1.1 LTS, NCC version 4.2.0.2, Foundation version 5.0.4-ed564a88.
3. Rubrik has been upgraded to version 5.3.3-19316.
I want to back up Nutanix AHV VMs, but I don’t know how to get the list of changed data blocks. In this article (Nutanix 5.0 Features Overview (Beyond Marketing) – Part 2), I see the following: Nutanix CBT utilizes the new REST 3.0 API, which can be used to query the changed metadata regions given any two snapshots of a virtual disk or virtual machine. The approach is valuable for taking incremental and differential backups, and even useful when taking full backups, because the API identifies regions that are sparse (zeroed), thereby saving on read operations. I read the API documentation (Prism v2.0 and Prism v3) carefully, but I can’t find a suitable API. Can anyone tell me which API to use and how to use it? Thank you.
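I cannot point at the exact resource name either, but the general request pattern against the v3 API would look like the skeleton below. The endpoint path and payload field names are placeholders to show the shape of the call, not the documented API; they need to be confirmed against the REST 3.0 reference the article mentions:

# Skeleton only: shows the general pattern of calling the Prism v3 REST API to
# compare two virtual disk snapshots. The endpoint path and payload field names
# are PLACEHOLDERS, not the documented API -- confirm the actual changed-regions
# resource and schema in the REST 3.0 reference before using this.
import requests

PC = "https://prism-central.example.com:9440"   # placeholder address
AUTH = ("admin", "secret")

payload = {
    # Hypothetical field names: references to the two snapshots to diff.
    "reference_snapshot_uuid": "aaaaaaaa-0000-0000-0000-000000000000",
    "snapshot_uuid": "bbbbbbbb-0000-0000-0000-000000000000",
    "offset": 0,
    "length": 64 * 1024 * 1024,
}

resp = requests.post(
    f"{PC}/api/nutanix/v3/<changed-regions-endpoint>",   # placeholder path
    json=payload,
    auth=AUTH,
    verify=False,   # only if the cluster uses the default self-signed certificate
)
resp.raise_for_status()
for region in resp.json().get("changed_regions", []):
    print(region)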
Hi, I just deployed 2 clusters in separate data centers. Right now I have no link between them besides the Internet (no VPN). Can I use destination NAT to the public IP of each data center as the target of the remote site configuration? Example:
Data Center A: 123.234.123.234:2020 > NAT destination to > 250.250.123.123:2020
Data Center B: 234.123.234.123:2020 > NAT destination to > 200.200.123.123:2020
Can we configure it like that so that the remote site configuration works?
Hello, when I use the OpenStack integration and try to use VNC via OpenStack, I get a problem with noVNC like the one below. Command line to enable VNC on OpenStack: /usr/bin/prism_vnc_proxy --bind_address=0.0.0.0 --bind_port=6080 --prism_hostname=[My-IP] --prism_username=[My-Username] --prism_password=[My-Password] --docroot=/usr/share/nutanix_openstack/vnc/static &
/var/log/prism_vnc_proxy.out:
INFO:nutanix_openstack.vnc.wsgi_prism_websocket_proxy:Authenticating with Prism at [My_Cluster_IP]
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: Insecu