Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,136 Topics
- 3,091 Replies
Server 2019 Hyper-V with Nutanix?
Please advise: is Windows Server 2019 Hyper-V supported on Nutanix? I was unable to find an article on the Nutanix Portal covering Server 2019. I am planning to migrate from a Server 2012 Hyper-V failover cluster to a Server 2019 Hyper-V failover cluster. Which AOS version supports Server 2019?
Is there a script or tool for clearing down the home directory?
Hi, I often get the error: "Warning: Disk space usage for /home on Controller VM..." Does anyone have a definitive list of what needs to be deleted beyond the installation files, or a script that deletes the unneeded files? Surely this is something Nutanix should be building into the system; I never have problems with VMware running out of installation space! Thanks, Eric
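Not an official Nutanix cleanup script, but a quick way to see what is actually consuming /home on a CVM before deleting anything. The directories listed are the usual suspects (old software downloads/uploads, logs, Foundation ISOs); consult the Nutanix KB on CVM /home usage before removing files:

```shell
# Overall usage of the /home partition on this CVM:
df -h /home

# Largest directories under /home/nutanix, biggest last:
du -sh /home/nutanix/* 2>/dev/null | sort -h

# Common cleanup candidates (inspect contents before deleting anything):
du -sh /home/nutanix/software_downloads /home/nutanix/software_uploads \
       /home/nutanix/data/logs /home/nutanix/foundation/isos 2>/dev/null
```

This only measures; what is safe to delete depends on the AOS version, so the KB remains the authority.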
Windows 11 Installation config
Has anyone been successful installing Windows 11 on Nutanix AHV? When creating the VM I selected UEFI and Secure Boot, but what I can't work out is how to enable TPM. When the VM boots there is no way to get into the BIOS to enable it. So how do you do that? Thanks
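On AHV the virtual TPM is not enabled from the guest BIOS but per-VM from a CVM via aCLI, and only on AOS/AHV releases that include vTPM support. A sketch, assuming such a release; the VM name is a placeholder and the exact subcommand and minimum version should be verified against the AHV Administration Guide:

```shell
# Hypothetical sketch -- run from any CVM, with the VM powered off.
# Requires an AOS/AHV release with vTPM support.
acli vm.update_vtpm Win11-VM virtual_tpm=true

# Verify the setting before powering the VM back on:
acli vm.get Win11-VM
```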
Nutanix AHV Plug-In for Citrix Director
VDI (Virtual Desktop Infrastructure) was one of the earliest applications of hyperconverged systems: the closer the storage is to the CPU and memory, the better the performance. The plug-in provides a means of getting more information in Citrix Director about Citrix-generated desktops running on a Nutanix cluster. Citrix also has its own plug-in that can be used; this conversation is about the Nutanix-provided plug-in and how to install it. The Nutanix AHV Plug-in for Citrix Director creates customized reports for the following VM-level performance statistics:
- VM IOPS
- VM I/O Bandwidth
- VM Average I/O Latency
The plug-in gets its data directly from the Nutanix AHV hosts to generate performance statistics for the virtual machines. This link has step-by-step installation instructions: https://portal.nutanix.com/#/page/docs/details?targetId=AHV-Plugin-Citrix-Director-Installation-Guide-v1110:AHV-Plugin-Citrix-Director-Installation-Guide-v1110
Upgrade AHV on AWS Cloud instance
Hi, we use an AWS cloud instance as a remote data protection 'site'. The production cluster has been successfully updated to AOS 220.127.116.11 (LTS), but this needed an AHV update to keep everything compatible. Is it possible/desirable to upgrade AHV on the AWS cloud instance? IIRC it's not supported to do OS upgrades (Microsoft OSes, anyway) on cloud infrastructure; the usual approach is a migration to a new server. Is it possible to put a new AWS remote Nutanix instance in place and connect the backup data? Any advice on how to keep the whole Nutanix environment up to date is appreciated.
5.18 2020.9.16 cannot open web console
Hi, I just installed a nested CE 5.18 2020.9.16 (single node) on VMware ESXi 6.7, and the install completed OK. The host IP is pingable, but the CVM IP is unreachable from an outside PC, so I cannot open the web console. When I log in to the host, I can ping the CVM IP. How do I fix this? Thank you so much.
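A common cause for exactly this symptom in nested setups is the ESXi vSwitch security policy: the CVM's traffic leaves through a nested hypervisor's MAC, so the outer vSwitch must allow promiscuous mode and forged transmits. A sketch, assuming a standard vSwitch named vSwitch0 (adjust the name, and the port-group equivalents, for your host):

```shell
# On the ESXi host hosting the nested CE node (vSwitch name is a placeholder):
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true

# Confirm the resulting policy:
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```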
Nutanix Deployment Fails with Minimum requirement error
Has anyone seen this error? As far as I can tell, AOS version 5.17 is compatible with HPE DX nodes.

INFO Copied /home/nutanix/foundation/templates/crystal_plat_reference.json to phoenix at ip X.X.X.X
ERROR Command '/usr/bin/python /phoenix/minimum_reqs.py hyp_type=kvm nos_version=5.17.1' returned error code 1
stdout:
Loading /phoenix/features.json
stderr:
INFO /root/phoenix/hardware_pre_checks/hp_proliant/updates/hp_platform_reference.json override is not present
WARNING Firmware configuration not available for system type HPE DX380-12 G10
INFO HPDX cabling validation completed successfully
Traceback (most recent call last):
  File "/phoenix/minimum_reqs.py", line 653, in <module>
    main()
  File "/phoenix/minimum_reqs.py", line 649, in main
    check_minimum_requirements(param_list, use_layout, boot_disk_controller)
  File "/phoenix/minimum_reqs.py", line 584, in check_minimum_requirements
    process_test_results(errors, warnings)
  File "/phoenix/minimum_reqs.py", line 603, in process_test_results
AOS upgrade path 18.104.22.168 to 5.9.2/latest (22.214.171.124)
We are running AOS version 126.96.36.199 and planning to upgrade. When I checked the upgrade path, the maximum version I can upgrade to is 5.6.2. If I want to upgrade to the latest version (188.8.131.52), do I have to follow the path 184.108.40.206 >> 5.6.2 >> 5.9.2 >> 220.127.116.11 (basically a three-step procedure), or can I only upgrade to 5.6.2? Please share your views.
A bizarre message - CVM was renamed after installation
Hi all, after upgrading the NCC version I get the message "CVM was renamed after installation" - I haven't changed it! The recommendation is bizarre: rename the CVM back to its original name. But the CVM name is not configurable by the end user. So if it is not configurable, what changed it, and how do I get it changed back? Or is it just having an identity crisis? Is there a fix?
Keyboard Layout - Prism Console
We are using Nutanix (AOS 18.104.22.168) with the AHV hypervisor. When connecting to a VM (W2K12, W2K8, German) via the Prism VNC console, there is no way to change the keyboard layout. Our computers run Windows 7 (German) with the German QWERTZ keyboard layout. No matter which browser we use (IE11, Chrome, Firefox), we cannot type special characters like \ or /. We also tried the American keys for / and \ without success. Is there a possibility to fix this problem, or is it a bug? Looking forward to your comments. F.Hil
Can external legacy storage be accessed via Nutanix AHV or ESXi?
There are legitimate cases where access to external storage, such as massive tape libraries or massive disk archives, is needed by some, but not all, VMs running in a Nutanix cluster. These VMs would ideally run under the AHV hypervisor, but ESXi is an acceptable alternative. Such external storage systems would likely support 10GbE iSCSI (and faster) and/or 8 Gbit Fibre Channel or faster. I understand that supporting such external access would make the VM "special" and would restrict VM mobility to the specific nodes configured for similar external access. The mobility issues are a later discussion, if external storage access is possible at all. If the external storage were a NAS device running NFS and/or SMB, I believe it would be straightforward: just define the external IP address in the VM and access the data from the VM. The same technique could be used for external iSCSI devices over Ethernet iSCSI. In many cases, if the external storage needed very high performance
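For the in-guest iSCSI case described above, the standard open-iscsi tooling inside a Linux VM applies unchanged, since the hypervisor only sees ordinary Ethernet traffic. A minimal sketch; the portal address and IQN are placeholders:

```shell
# Discover targets exposed by the external array (placeholder portal address):
sudo iscsiadm -m discovery -t sendtargets -p 192.168.100.50:3260

# Log in to a discovered target (placeholder IQN):
sudo iscsiadm -m node -T iqn.2001-05.com.example:tape-archive \
    -p 192.168.100.50:3260 --login

# The LUN then appears to the guest as a normal block device:
lsblk
```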
shrink VM disk
Hi Team, under Prism we've created a few Windows VMs; one of them is our file server. Its current configuration is a single 1 TB IDE disk (60 GB OS, 900 GB shared folders). It is probably better to have one 60 GB virtual disk for the OS and a second virtual disk of 1 TB or more for storage. The question is how to shrink the current IDE disk from 1 TB to 60 GB, and whether the IDE disk should be changed to SCSI - what does Nutanix recommend? Instructions for shrinking the disk and converting IDE to SCSI would be appreciated. Thanks, Dariusz
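A sketch of the usual workaround, since AHV has no in-place virtual disk shrink: add a new SCSI disk, move the data inside the guest, then remove the old oversized disk. VM, container name, and disk address below are placeholders; verify the exact aCLI syntax against the command reference for your AOS version:

```shell
# From a CVM: add a new SCSI data disk to the VM (names are placeholders):
acli vm.disk_create FileServer create_size=1T container=default-container bus=scsi

# After copying the shared folders inside Windows and repointing the shares,
# detach the old oversized IDE disk (disk address is a placeholder):
acli vm.disk_delete FileServer disk_addr=ide.1
```

For the OS disk itself, SCSI generally requires the VirtIO drivers to be installed in the guest before the boot disk's bus is changed.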
NX-8235-G7 Switch Port Configuration
Good morning: we are in the process of deploying new NX-8235-G7 nodes connected to a pair of Cisco Nexus switches. Everything seems to be working well, except that the switch occasionally receives RX pause frames from the servers. While this may not necessarily be a problem, I am doing my due diligence. The switch ports the nodes are connected to are configured as follows:
- Connected as active/standby NICs
- Connected via 1 meter 10GbE DAC cables
- 802.1Q trunk
- Speed/duplex auto-negotiation
- Flow control auto-negotiation
The nodes themselves use the default factory network configuration, apart from what was necessary to support ESXi and our network environment. Any suggestions or validation of how the switch ports should be configured? Thanks!
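For comparison, a typical NX-OS port template for a Nutanix-facing trunk looks roughly like this. The interface and VLAN list are placeholders, and whether to disable flow control (rather than auto-negotiate it) is exactly the judgment call behind the RX pause question, so treat this as a starting point, not a recommendation:

```
interface Ethernet1/1
  description Nutanix NX-8235-G7 node - 10GbE uplink
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  flowcontrol receive off
  no shutdown
```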
Application consistent VM snapshots in Windows Client VMs not possible
Hello everybody, I noticed that application-consistent VM snapshots of, e.g., Windows 10 Pro VMs - with NGT enabled, installed in the VM, and verified with 'ncli ngt list' - do not work, and result in the following alerts: "VSS snapshot is not supported for the VM 'TEST-VM', because VSS software is not installed." and "VSS is enabled but VSS software or pre_freeze/post_thaw scripts are not installed on the guest VM(s) TEST-VM protected by TEST-VM." A Windows Server 2016 VM in the respective Protection Domain generates application-consistent Nutanix VM snapshots just fine; with Windows 10 VMs it won't work. Verified in 2 different Nutanix clusters, both running AOS 5.15.4 LTS. Nowhere could I find a hint that Windows 10 VMs are not supported. Regards, Didi7
Error in configuring deduplication in container
Hi all, I have already posted this problem in another forum section, but maybe it should not be there, so I am reposting it here in the hope of an appropriate and fast answer. To summarize: I cannot enable deduplication on HDD. I have also tried running the command below, and the result seems a little inadequate, given that I have configured the CVM RAM up to 32 GB:

nutanix@NTNX-13SM31320001-B-CVM:10.9.120.115:~$ allssh 'grep "for on-disk deduplication feature" ~/data/logs/stargate*'
Executing grep "for on-disk deduplication feature" ~/data/logs/stargate* on the cluster
================== 10.9.120.114 =================
FIPS mode initialized
Nutanix Controller VM
================== 10.9.120.115 =================
FIPS mode initialized
Nutanix Controller VM
================== 10.9.120.116 =================
FIPS mode initialized
Nutanix Controller VM
/home/nutan
Not able to discover the new nodes to expand the cluster
I am not able to discover a new node from an existing cluster. The newly installed node is on a different switch, but in the same VLAN as the existing cluster (which is on the old switch), yet it cannot be discovered from the existing cluster. Could anyone give me a clue as to why these new nodes are not discoverable? I want to expand the existing cluster with them.
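One thing worth checking: node discovery relies on IPv6 link-local multicast between the nodes, so the path through the new switch must pass that traffic within the VLAN. A quick sketch run from a CVM on the existing cluster (the interface name is a placeholder):

```shell
# Ping the all-nodes IPv6 link-local multicast group on eth0; the new
# nodes' link-local addresses should show up among the replies:
ping6 -c 3 -I eth0 ff02::1

# Confirm IPv6 link-local addressing is active on the CVM interface:
ip -6 addr show dev eth0
```

If the new nodes never answer here, the problem is multicast/IPv6 handling between the two switches rather than the cluster itself.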
LDAP login without the domain name
We have a very long domain name, and with the current config we use the UPN (firstname.lastname@example.org) to log in. It is becoming very irritating to enter the domain name every time I log in. Can we configure a default domain for all users, applied when the user doesn't specify a domain - assuming, of course, there is no local user with the same name?
CVM is not working
Hello all, I managed to configure a 3-node cluster with AHV, and everything was working at first, but after 2 days one CVM went down and can't communicate with the others. I have tried to fix it but couldn't. Both the AHV host IPs reply to ping, but the CVM doesn't. I ran NCC and got the log, but it only says there is a duplicate IP; when I check, there is no duplicate IP in the configuration. I have attached the NCC report in case you want to see it. Any help will be greatly appreciated. Thank you in advance.
Shutting down a cluster
The instructions for shutting down a cluster say to use "sudo shutdown -P now" on the CVMs to shut each one down, but the command only works on the first CVM. It errors out on the rest of the CVMs because they can't reach the first CVM that was shut down. Am I missing something?
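For reference, the commonly documented sequence (verify it against the official cluster shutdown procedure for your AOS version) stops cluster services first and then uses Nutanix's cvm_shutdown wrapper on each CVM, which avoids the failures seen when later CVMs try to reach an already-down peer:

```shell
# 1. Shut down or power off all user VMs first, then stop cluster services
#    (run once, from any CVM):
cluster stop

# 2. On EACH CVM, use the Nutanix wrapper so services quiesce cleanly:
cvm_shutdown -P now

# 3. Finally, power off the hypervisor hosts.
```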
SCOM Management Pack Monitoring data disappeared
Hello, I recently added the Nutanix SCOM Management Pack version 22.214.171.124. From the start it worked fine, as I handed over the cluster information via Nutanix Cluster Discovery. But now, a few weeks later, SCOM no longer displays any performance data for the clusters, and I cannot find out exactly what caused this. I have different clusters, AHV as well as ESXi, with multiple OS versions running. Currently SCOM displays information for only one cluster; for the other clusters, no performance data is shown. All the other clusters were discovered correctly (and in the same way) but won't show their data anymore. My situation looks like this - the dashboard shows one cluster's information, while for all other clusters (and their nodes) the dashboard stays empty. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/c1d55b84-0d1c-4404-b139-9b48e7e361fd.jpg[/img] What could have happened that the data is not reaching SCOM or
Layer2 Issue - AOS 5.5.8 - AHV
I have a problem with ARP requests on the bridge for guest traffic; the drawing below shows the current architecture. If a request arrives from the outside through the firewall, and the firewall initiates communication with a VM whose NIC is attached to the BR1-UP bond of the BR1 bridge, the ARP request for resolving the VM's address stops at the BR1 bridge and does not reach the VM; in this case the firewall and VM ARP tables remain unpopulated and communication stops. On the other hand, if communication starts from the VM toward the firewall (for example with a ping), the ARP request is processed by the firewall and the ARP tables of both firewall and VM are correctly populated with the respective MAC addresses. The firewall IP and VM IP are in the same broadcast domain; no routing is involved. I checked with Wireshark on the Windows VM, and with tcpdump and ovs-appctl fdb/show on the Nutanix host, and when communication starts from the firewall the ARP request goes up to the physical card of the BR1
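Some standard OVS diagnostics that may help narrow down where the ARP broadcast dies; the bridge name follows the post and the commands are run on the AHV host (as root):

```shell
# Watch ARP frames on the bridge while the firewall initiates traffic,
# to see whether the broadcast ever enters br1:
tcpdump -i br1 -n -e arp

# Check which MACs the bridge has learned, and on which ports --
# a missing or stale entry for the VM's MAC points at the bridge:
ovs-appctl fdb/show br1

# Confirm the bond membership, port wiring, and VLAN tags on br1:
ovs-vsctl show
```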