Replies posted by Tapati
Hello @hyoung ho Foundation fails with "Node position must be specified" when Discovery OS has node_position = None. This affects new nodes that were shipped with DiscoveryOS based on Foundation 4.5.2, and it is only relevant on single-node (1U1N) configurations. Currently listed nodes include: NX-3170-G6, NX-1175S-G6, NX-1175S-G7, NX-8170-G7. This is a known issue where node_position being set to None breaks the Foundation workflow. As new nodes ship with later versions of DiscoveryOS, based on Foundation 4.5.3, this issue will no longer be visible. Please refer to the document below: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CvX8CAK
G6 and G7 platforms: BMC 7.08 changes the default IPMI password so that every node ships from the factory with a unique password. The new default IPMI credentials are username = ADMIN and password = node-serial-number. Please refer to the document below: https://portal.nutanix.com/page/documents/details?targetId=Rele
@Eric-The_Viking Whenever we upgrade AOS in any cluster, we must upgrade to a compatible version of AHV as well. AOS 5.15 is compatible with AHV-20170830.395. Could you please run the NCC check below from any working CVM and share the output with us?
ncc health_checks hypervisor_checks ahv_version_check
Hello @LeonardThng SNMP trap messages may not be exactly the same as the alert messages that are shown in Prism; SNMP trap messages follow the format defined in the MIB file: https://portal.nutanix.com/page/documents/details/?targetId=Web-Console-Guide-Prism-v5_17%3Aman-nutanix-mib-r.html Also note that SNMP is not intended for alerts; it is for monitoring stats and status. Thanks
Hi @Anibal Ulisses Could you please let me know what modification you would like to apply to the Kubernetes cluster? Modifying the /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml file is not supported; these files are normally overwritten during a Kubernetes or host upgrade.
Hello @Nagarjunb You can have a look at the links below, which have some good content that might help you: https://developer.nutanix.com/ https://www.nutanix.dev/reference/prism_element/v2/api/clusters/get-clusters-id-getcluster/ Please let me know how it goes. Thanks
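As a starting point for the Prism Element v2 API linked above, here is a minimal sketch in Python that builds an authenticated GET request for the clusters endpoint. The host name and credentials are placeholders, and the port (9440) and path follow the documented v2.0 base URL; treat this as an illustration, not an official client.

```python
import base64
import urllib.request

# Hypothetical values -- replace with your Prism Element address and credentials.
PRISM_HOST = "prism.example.com"
USERNAME = "admin"
PASSWORD = "secret"

def build_clusters_request(host: str, user: str, password: str) -> urllib.request.Request:
    """Build a GET request for the Prism Element v2.0 clusters endpoint.

    Prism uses HTTP Basic authentication, so we encode user:password
    and attach it as an Authorization header.
    """
    url = f"https://{host}:9440/api/nutanix/v2.0/clusters"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_clusters_request(PRISM_HOST, USERNAME, PASSWORD)
print(req.full_url)
```

Sending the request (for example with urllib.request.urlopen) returns JSON describing the cluster, which you can parse with the standard json module.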
Hi @Tabrezg We currently do not have a native P2V tool; however, VMs can be converted using VMware Converter and then imported into AHV. There is also an open-source tool, virt-p2v, that can be used but has limited capabilities. For full-featured migration capabilities, you can use a recommended third-party application from Sureline Systems.
Hi @Tabrezg You have to create an image of the physical Linux server and create a new VM from that image in the Nutanix environment. We do have a Nutanix tool called Move to migrate VMs from another hypervisor to AHV. You can refer to the documents below: Migrating Linux VMs to AHV Linux on AHV
@DirkRasche
Solution 1: In early AOS releases there is no retry logic for the function that fetches datastores, so if the RPC call is lost or there is no reply for any reason, the high-level operation that triggered it (such as a snapshot, migration, or takeover) will fail too. Instead of an immediate error on timeout, a subsequent retry should be attempted, which handles temporary failures. If the cluster is running an older AOS version, I suggest you upgrade AOS first.
Solution 2: You can try to re-install NGT on the affected VMs: https://portal.nutanix.com/page/documents/kbs/details/?targetId=kA032000000TVEnCAO
Solution 3:
++ Check whether the VSS snapshot works correctly when the VM uses the same network as the CVMs.
++ If that is the case, change the VM's network to one different from the CVM network and adjust the firewall as required.
Solution 4: If your environment is Hyper-V, then please check the steps below:
a) Please check
You can refer to the documents below:
https://portal.nutanix.com/page/documents/details/?targetId=Web_Console_Guide-Prism_v4_7%3Awc_system_snmp_wc_t.html
KB-1333 "Configuring and Troubleshooting SNMP monitoring" for commands and examples. That article also references two more KB articles as well as SNMP-related PowerShell commands:
KB-2448 "How to import MIB in Zenoss"
KB-2028 "Integrating Nutanix with SolarWinds"
https://next.nutanix.com/how-it-works-22/monitoring-nutanix-with-nagios-4500
The default configuration when adding multiple uplinks to a vSwitch is "Route based on originating port ID", which means the different VMs are distributed across the uplinks based on their virtual port ID. This configuration allows you to connect the uplinks to different switches without any special configuration on the physical switches, since only one uplink per virtual machine (MAC) is active at any one time. If you have multiple active uplinks and one of them fails, ESXi will immediately fail the VM over to another active uplink, so it effectively load balances between the two uplinks.
Say you have 2 x 10G uplinks in active/backup -> effective bandwidth is 10G.
Say you have 2 x 10G uplinks in active/active with LACP configured on the switch side -> effective bandwidth is 20G.
With "Route based on originating port ID" you can use active/active without any extra configuration on the physical switch side. You can also ref
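The bandwidth arithmetic above can be sketched as a small Python helper. This is only an illustration of the per-VM effective bandwidth under the two teaming modes described (with port-ID teaming a single VM is pinned to one uplink; with LACP the links aggregate); the function name and signature are my own, not part of any vSphere API.

```python
def effective_bandwidth_gbps(uplink_gbps: float, count: int, lacp: bool = False) -> float:
    """Per-VM effective bandwidth for a vSwitch uplink team.

    Without LACP ("Route based on originating port ID"), each VM's MAC is
    pinned to a single active uplink, so one VM never exceeds one link's
    capacity even though the team as a whole can use all uplinks.
    With LACP configured on both ESXi and the physical switch, the links
    aggregate and the full team bandwidth is available.
    """
    return uplink_gbps * count if lacp else uplink_gbps

# Two 10G uplinks, as in the examples above:
print(effective_bandwidth_gbps(10, 2))             # port-ID teaming -> 10
print(effective_bandwidth_gbps(10, 2, lacp=True))  # active/active with LACP -> 20
```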