Nutanix Community Podcast: The AI-Ready Platform: Nutanix Simplifies Enterprise AI
Hello All, I am new to Nutanix and AHV; however, I have a large amount of experience with Hyper-V and SCVMM. One of the things I am trying to do is automate the creation of VMs. My end goal is to take a base "image" drive I have uploaded to Images, containing an install of my Windows OS, deploy a new VM with that image, have a static IP assigned to the new VM automatically from a pool of IPs, join my domain, and run a few PowerShell scripts in the VM. I would have done all of this with an IP pool and a template in SCVMM, and as much as I have read about templates in AHV, I am not seeing anything like it. Am I missing something here? Do we have a resource on how to configure something like this?
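One way to approximate the SCVMM template workflow is to script VM creation against the AHV CLI (acli) on a CVM and handle the in-guest customization (static IP if you prefer it set inside the OS, domain join, post-install scripts) through Sysprep/unattend or a first-boot script. A minimal sketch, assuming an uploaded image named win2019-base, a network named prod-vlan10, and an IP picked from your own pool (all names and addresses are placeholders):
# run from a CVM; ip= only works if the AHV network has IPAM (managed network) enabled
acli vm.create app-server-01 num_vcpus=4 memory=8G
acli vm.disk_create app-server-01 clone_from_image="win2019-base"
acli vm.nic_create app-server-01 network="prod-vlan10" ip=10.0.10.51
acli vm.on app-server-01
The domain join and PowerShell steps would then run inside the guest, for example via an unattend.xml baked into the image; a wrapper script (or Calm) can tie the whole sequence together.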
I am trying to install the Nutanix virt-who v1.1 RPM on RHEL 6.10 to talk to Red Hat Satellite v6.3. I have installed the virt-who RPM with its dependencies except for systemd (which does not exist for RHEL 6), installed Python 2.7 from the Software Collections RPM, set "scl enable python27 bash" in /etc/rc.d/rc.local, and changed the virt-who script to point to that instead of /usr/bin/python, but it fails to run. I have determined that the failure is due to the imports in the Python script not finding the virt-who libraries. To replicate that, I can run "pip -v list" and see that virt-who is not listed. Has anyone gotten past this? Thanks, Mark
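If the RPM dropped the virt-who modules into the system Python 2.6 site-packages, the SCL Python 2.7 interpreter will not see them on its own path. One thing to try, as a sketch only (the module path below is a guess, verify it with rpm -ql on your system), is to expose that directory to the SCL interpreter via PYTHONPATH:
# find where the RPM actually put the virt-who Python modules
rpm -ql virt-who | grep -i site-packages | head
# run virt-who with the SCL Python 2.7 and that path prepended
scl enable python27 -- env PYTHONPATH=/usr/lib/python2.6/site-packages /usr/bin/virt-who --debug --one-shot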
Does AHV support MSCS? I am aware of the two-node Windows cluster with a witness VM mentioned here: https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v58:wc-cluster-two-node-c.html But I just want to confirm whether it supports MSCS with shared storage, including a quorum disk.
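For reference, the usual way to present a shared disk to clustered Windows VMs on AHV is a volume group attached to both nodes (either directly or via the in-guest iSCSI initiator). A rough acli sketch with placeholder names only; marking the VG as shared may be required for multi-VM attachment, so verify the exact flags with acli help for your AOS version:
# run from a CVM
acli vg.create mscs-shared
acli vg.disk_create mscs-shared container=default create_size=100G
acli vg.attach_to_vm mscs-shared sql-node-01
acli vg.attach_to_vm mscs-shared sql-node-02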
Reported Issue: Nutanix has identified an issue where Windows VMs hosted on an AHV cluster may crash and reboot unexpectedly with the following bug checks: CRITICAL_STRUCTURE_CORRUPTION (0x109), ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY (0xFC). Solution: If you are experiencing the above issue, Nutanix recommends upgrading to AHV version 20190916.231 or later. These versions contain a fix for this issue. For more details, refer to KB 9351. Nutanix Field Advisory | AHV Upgrade
As you already know, Nutanix provides block storage services in your clusters, called volume groups. This provides SAN-like capabilities by leveraging the existing hyper-converged infrastructure. Additionally, as of AOS 4.7 volume groups support SCSI UNMAP commands natively, without having to turn on any additional setting, in order to reclaim space from deleted blocks; more on that in the Nutanix Bible in the "Volumes (Block Services)" section. However, there are certain caveats when using volume groups as backup targets, so you must architect your backup solution wisely, as you may have to consider losing some benefits in order to gain others. In this example, I will focus on Veeam's Fast Clone feature, which you can read more about in Veeam's documentation here. However, this could apply to any backup solution that leverages underlying features of the file system to provide storage savings. In essence, this solution leverages the block cloning feature of ReFS in Windows whi
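For context, a common pattern is to expose the volume group to the backup repository server over iSCSI (Nutanix Volumes) rather than as a directly attached disk. A rough sketch with placeholder names and an example initiator IQN; check the acli help on your AOS version for the exact parameters:
# run from a CVM
acli vg.create veeam-repo01
acli vg.disk_create veeam-repo01 container=default create_size=2T
acli vg.attach_external veeam-repo01 iqn.1991-05.com.microsoft:veeam-repo01
The Windows repository server would then connect with its iSCSI initiator to the cluster's data services IP and format the disk with ReFS (64K cluster size) so Fast Clone can be used.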
Something quick for the end of the week. Did you know you can test a copy of your VM with no impact to the production instance? Provided your replication is configured to a remote site, here is how you can do this:
1. Open Prism Element for the remote site.
2. Navigate to the Protection Domain (PD) dropdown.
3. Find the PD containing the VM that you would like to restore.
4. Restore the VM with a new name using the Out-of-Place Restore option (and NOT an In-Place Restore).
Important: Understand the networking setup and the testing scenario before executing the task. Consider turning off the NICs on the VM or assigning them to a VLAN that is not routed across to the VM's original site or externally. Depending on the application, it is possible that asymmetric data forwarding or conflicting announcements from the application on both VMs (the original and the recovered temporary copy) may cause data loss. Data Protection and Recovery with Prism Element: Restoration of Protected Entities. In case
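If you want to script that isolation step, one option (a sketch only; the VM name, MAC, and network name are placeholders) is to drop the restored copy's NIC and re-add it on an isolated, non-routed network before powering it on:
# run from a CVM on the remote site; note the NIC MAC address in the vm.get output
acli vm.get myvm-restored
acli vm.nic_delete myvm-restored 50:6b:8d:aa:bb:cc
acli vm.nic_create myvm-restored network="isolated-test-vlan"
acli vm.on myvm-restored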
Learn how Nutanix Clusters can be hibernated to save costs and let you easily continue where you left off with our very own Dwayne Lessner. Share your comments below and let me know what other videos you would like to see. Give this video a thumbs up if it helped you.
Learn how to quickly get to work with native cloud networking with Nutanix Clusters on AWS with our very own Dwayne Lessner. Share your thoughts in the comments and let me know how you are using Nutanix Clusters.
Learn how to quickly deploy Nutanix Clusters on AWS from our very own Dwayne Lessner. Share your thoughts in the comments and let me know how Clusters is working for you.
The kubeconfig is a configuration file for running kubectl commands against the deployed Kubernetes cluster. To deploy applications on your cluster using kubectl, download the Kubernetes cluster configuration file (kubeconfig) to your host. The kubeconfig token expires after 24 hours; this is a security feature by design in Karbon. A fresh and valid kubeconfig can be obtained using the Karbon UI or using karbonctl. Ensure that you have downloaded kubectl to the machine from which you manage the cluster. Karbonctl has undergone some improvements: you no longer need to pass the password in cleartext, nor do you need to pass credentials for every karbonctl command. Also, there is no need to pass the UUID of the Kubernetes cluster; a simple login to the Prism Central CLI and you can retrieve the kubeconfig. We recommend you upgrade to Karbon 2.0 if you intend to use karbonctl. For the process of obtaining the kubeconfig via the Karbon UI, refer to the Nutanix Karbon Guide: Downloading the Kubeconfig. To perform th
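As a quick illustration of the karbonctl route (a sketch; the Prism Central address and cluster name below are placeholders):
# run where karbonctl is installed, e.g. on the Prism Central VM
./karbonctl login --pc-ip 10.0.0.10 --pc-username admin   # prompts for the password
./karbonctl cluster kubeconfig --cluster-name my-k8s-cluster > ~/.kube/config
kubectl get nodes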
Environment: Windows Server 2019 version 1809 (Core and with the graphical user interface). At the moment I am working for a customer on migrations to Windows Server 2019. Since the customer migrated the virtual environment from VMware vSphere 6.x to Nutanix AHV version 20170830.412, I have the following problem. Before the installation I could work with the Windows Admin Center on a Windows Server 2019 without any restrictions. After the installation the Windows Admin Center can be opened, but several error messages appear:
Error messages:
- Performance class not found: CPU performance class not found on this computer.
- Remote Exception: The property 'Product' cannot be found on this object. Verify that the property exists. Type: Error
- Retrieve user settings and extensions: RemoteException: The property 'Product' cannot be found on this object. Verify that the property exists.
This means that no more data is displayed in the overview and the Tools bar of the Admin Center is on
Hi, my company has an internal CA certificate chain, and I need to install the internal ca.crt on Karbon. Is this possible? Use case: installing a pod using kubectl from an internal registry is not possible, because the nodes do not trust the root CA authority installed on the registry server. Anibal
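Not an official procedure, just a sketch of the usual workaround: copy the CA onto each worker node and let the container runtime trust it for that registry. The registry hostname is a placeholder, and be aware that changes made directly on Karbon nodes can be lost when nodes are redeployed or upgraded, so check the Karbon guide for the supported private-registry flow:
# on each Karbon worker node (SSH access, e.g. via karbonctl cluster ssh)
sudo mkdir -p /etc/docker/certs.d/registry.example.local
sudo cp ca.crt /etc/docker/certs.d/registry.example.local/ca.crt
sudo systemctl restart docker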
Hi everyone! Can you map a test network in a recovery plan to both the production network and the test network at the recovery site? I can't find anything in the documentation about this. Our customer has VLAN 2 at the recovery location (isolated), and this is the network that matches the current VLAN 1 in production. Since it's isolated, they also want to use it for testing. Thanks!
I upgraded my ESXi hosts using 1-click upgrade to VMware-VMvisor-Installer-6.7.0.update03-15160138.x86_64-DellEMC_Customized-A04 with no issues. Trying to apply patch ESXi670-202004002.zip using 1-click upgrade, I get the following error: "Upgrade bundle is not compatible with current VIBs installed in hypervisor. [DependencyError] VIB QLC_bootbank_qedi_2.10.19.0-1OEM.670.0.0.8169922 requires qedentv_ver = X.11.15.0, but the requirement cannot be satisfied within the ImageProfile." I am able to successfully patch using esxcli and Update Manager. I updated the image profile to DellEMC-ESXi-6.7U3-15160138-A04 on the hosts, thinking that might resolve the issue, but it did not. Does anyone have any suggestions to get 1-click upgrade to function with 6.7 U3?
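If the blocking VIB is the QLogic driver and that driver is not actually in use on the host, one workaround (verify your hardware dependencies first; the VIB name below is taken from the error message) is to remove it with esxcli and then retry the 1-click upgrade:
# list the qed* driver VIBs currently installed on the host
esxcli software vib list | grep -i qed
# remove the conflicting VIB (put the host in maintenance mode first; reboot afterwards)
esxcli software vib remove -n qedi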
Hi all, we have Karbon clusters running in our Nutanix environment, and I noticed the timezone in the Karbon cluster is not the same as that of the other VMs on the Nutanix cluster. The timezone set for the Nutanix cluster is SAST (UTC+2), while the Karbon VMs are on UTC (UTC+0). The UTC+0 timezone is only on the Karbon nodes; it seems they don't use the Nutanix cluster's timezone, and our logs are 2 hours behind on the Karbon clusters (all 3 clusters). Is there a way to change the timezone for Karbon?
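If you just need the node OS clocks to show local time, you could set the timezone on each node with timedatectl. This is only a sketch: changes made directly on Karbon nodes may be reverted by upgrades or node redeployments, and many Kubernetes components assume UTC, so handling the offset at the log-collection layer is often the safer route. The timezone name below is an example:
# on each Karbon node
sudo timedatectl set-timezone Africa/Johannesburg
timedatectl status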
Hi, is it possible to manually adjust the file /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml and apply this update to the Kubernetes cluster? I tried adjusting it and then ran: sudo systemctl daemon-reload && sudo systemctl restart kubelet-master. But when I describe the kube-apiserver pod, I see that the adjustments are not applied. Anibal
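For what it's worth, kubelet watches the static pod manifest directory, so edits to kube-apiserver.yaml are normally picked up without restarting kubelet; the thing to check is whether something (for example Karbon's own management) is rewriting the file behind you. A quick check, as a sketch with placeholder names:
# confirm the manifest on disk still contains your change after a minute or two
sudo grep -n "your-new-flag" /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml
# then check whether the mirror pod picked it up (pod name is a placeholder)
kubectl -n kube-system describe pod kube-apiserver-master-node-0 | grep "your-new-flag"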
Hi All, I am trying to migrate one Windows 10 PC to Nutanix Acropolis. I ran a full backup with Veeam Endpoint and tried the bare-metal recovery into Nutanix, and that did not work. I have used this method for almost all of our servers (though they were already on Nutanix) to restore into our test environment (running Nutanix CE) without any issues. But this time I am going from a physical device to virtual, and it did not work. Is there any other method to do this? I see a lot of topics here mentioning vCenter Converter, but I have never used it. Are there any particular steps I need to be careful about when using that tool? Thank you,
Hello folks, I see that usually after a SATADOM replacement or Hyper-V reinstallation we get errors in live migration related to processor-specific features. You can look at KB-6617, "Live Migration of VMs fails on Windows Server 2012 R2 or 2016 with processor compatibility error". We have a couple more KBs and documents on Hyper-V live migration you can check based on your error:
KB-3639 Shared Nothing Live Migration in Hyper-V Might Fail if it Takes More Than 10 Hours to Complete
KB-2247 Hyper-V: Live Migration, Configuration and Best Practices
KB-4674 Hyper-V: Improving Live Migration Performance on Nutanix
Best Practice Guides:
https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2088-Hyper-V-Windows-Server-2016-Networking%3ABP-2088-Hyper-V-Windows-Server-2016-Networking
https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2021_Hyper-V_Windows_Server_2012_R2_Networking%3ABP-2021_Hyper-V_Windows_Server_2012_R2_Networking
Please sha
There are 2 Nutanix clusters in different regions, and I'm curious about a replication question. On the first Nutanix cluster I currently have 19.77 TiB free of 50.7 TiB (physical) and 9.74 TiB free of 25.35 TiB (logical), and there is enough free space in the second site; I checked this through Nutanix. Now, if I replicate 2 servers (both 500 GB) to the other region, with 1 local snapshot and 1 remote snapshot selected in the configuration, how much Nutanix storage will that consume? For example, if the Nutanix cluster has 9.7 TiB of free space and I replicate 10 servers of 500 GB each, how much space will I lose on Nutanix storage? Is there a way to calculate this? I want to learn this. I have 9.7 TiB of free space, but I want to replicate 10-15 servers. Will storage space be a problem, and how do I calculate it?
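As a rough back-of-the-envelope: logical usage is what the VMs actually have written, and physical usage is roughly double that with RF2, before compression, deduplication, and change-rate effects. A sketch of the math for your example (the numbers are illustrative and assume the disks are nearly full):
# 10 VMs x 500 GB each, RF2, no storage savings assumed
vm_count=10; vm_size_gb=500
logical_gb=$((vm_count * vm_size_gb))   # 5000 GB (~4.9 TiB) logical
physical_gb=$((logical_gb * 2))         # 10000 GB (~9.8 TiB) physical with RF2
echo "~${logical_gb} GB logical, ~${physical_gb} GB physical"
So replicating 10 x 500 GB fully used disks into 9.7 TiB of free physical space would be very tight; in practice only the data actually written replicates, and compression usually reduces the footprint, but you should size against the worst case and account for the snapshot retention you configure.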
Hi everybody, we are just at the beginning of the process of migrating our environment from VMware to AHV. One of the scenarios that we need to consider for one of our application sets involves cloning an entire drive from a build server to a reporting server. Essentially, right now I have a server that takes data and compiles it into a finished product on an intermittent basis. When the server completes its build, it calls another script that clones the drive to a reporting server. Our reporting server has duplicate drives, 1 online and 1 offline; the script overwrites the offline drive, then swaps which drive is online so that our clients are now reporting on the newly refreshed data. I am looking to replicate this concept and am hoping for a few pointers. In our existing environment I am making use of some of the storage commands for the disk clone. I am hoping to understand how to replicate the disk clone for an AHV-hosted Windows server. Would anybody have any sugge
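One building block that may map to your current storage-command workflow: acli can clone an existing virtual disk onto another VM, so a wrapper script could clone the build server's data disk onto the reporting server and then swap the disks inside the guest. This is only a sketch with placeholder VM names and a placeholder UUID; verify the exact parameter names with acli help on your AOS version:
# find the vmdisk UUID of the source data disk in the vm.get output
acli vm.get build-server
# clone that vmdisk onto the reporting server as a new disk
acli vm.disk_create report-server clone_from_vmdisk=<vmdisk-uuid> bus=scsi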