Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Hello, I am trying to add a new node to my existing Dell Nutanix cluster. We use Dell XC servers, but for certain reasons I don't want the new node to be Dell, so the 4th node would be Nutanix-branded hardware. Would that work? What about hardware compatibility, especially the processors?
By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is connected. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from trunk mode to access mode, in which case the virtual NIC reverts to sending and receiving traffic only over its own VLAN. A trunked NIC can only be added via aCLI; it is not possible to distinguish between access and trunked NIC modes in the Prism UI. You can create a new NIC for the VM to operate in the required mode, or you can change the mode of an existing VM NIC. For the command list and sequence, please refer to the AHV Administration Guide: Configuring a Virtual NIC to Operate in Access or Trunk Mode
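As a sketch of the aCLI workflow described above (the VM name "myvm", network name "vlan0", MAC address, and VLAN IDs are placeholders; check the AHV Administration Guide for the exact syntax on your AOS version):

```shell
# Create a new virtual NIC in trunk mode, trunking VLANs 10 and 20
# in addition to the NIC's own VLAN.
acli vm.nic_create myvm network=vlan0 vlan_mode=kTrunked trunked_networks=10,20

# Convert an existing NIC (identified by its MAC address) back to access mode.
acli vm.nic_update myvm 50:6b:8d:xx:xx:xx update_vlan_trunk_info=true vlan_mode=kAccess
```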
Hello ... can anyone provide feedback on getting Network Visualization working with Cisco Nexus 9K switches? The Nexus series is on the supported switch list, but I've had a ticket open with Support for weeks and we can't get it working. [url=https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:wc-network-visualizer-supported-switches-r.html]https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:wc-network-visualizer-supported-switches-r.html[/url] LLDP is enabled globally on the 9K, and I can populate the 9K switch info into Prism just fine. The issue is I can't get any port data. Nutanix Support says the problem is that by default the "port-id-subtype" is set to "locally-assigned", and it needs to be changed to "interface-name". The 9Ks are in ACI mode, not NX-OS, and no one seems to know how or where that change is made. Has anyone gotten this setup to work? Thanks!
The Nutanix Support portal includes a compatibility matrix available from the Compatibility Matrix link. You can filter and display compatibility by Nutanix NX model, AOS release, hypervisor, and feature (platform/cluster intermixing). Nutanix recommends that you consult the matrix before installing or upgrading software on your cluster. Do you want to know if you can mix different types of CPUs, memory, disks, or hypervisors in your Nutanix environment? Start with this document, which provides the answers you might be looking for, such as:
- Hardware restrictions: mixing different Nutanix CPU families
- Storage restrictions: mixing all-SSD and hybrid SSD/HDD; mixing NVMe and SSD/HDD; encryption restrictions
- DIMM restrictions: mixing DIMM types, capacities, manufacturers, and speeds
- Hypervisor restrictions
You can get all the answers in this document: Product Mixing Restrictions
Hi all, I have something odd in my Network view in Prism. I have 3 switches (one of them is a two-switch stack), but the view shows 5 switches, 2 of them classified as None. The None entries have the IP addresses of one of the single recognized switches and of the stack, both of which are already identified. The other odd thing is that the Nones have ports that I would expect to see in the identified switches. Can anyone tell me how to correct this? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/51660407-9d82-4b54-92a0-efb302086840.png[/img]
You have the option of adding a Witness to a Metro Availability configuration (see Data Protection Guidelines (Metro Availability)). A "Witness" is a special VM that monitors the health of the Metro Availability configuration. The Witness resides in a separate failure domain to provide an outside view that can distinguish a site failure from a network interruption between the Metro Availability sites. The goal of the Witness is to automate failovers in case of site failures or inter-site network failures. The main functions of a Witness include: making a failover decision in the event of a site or inter-site network failure; avoiding a split-brain condition where the same storage container is active on both sites due to (for example) a WAN failure; and handling situations where a single storage or network domain fails. Metro Availability Failure Process (no Witness): In the event of either a primary site failure (the site where the Metro storage container is currently active) or a failure of the link between the two sites, failover must be handled manually.
While at the time of posting this article neither the process referred to nor the Windows 2003 OS itself is supported within an AHV environment, Nutanix understands that there are situations where customers might find themselves unable to move away from a certain OS version. To help customers with the process of migrating Windows 2003 servers from ESXi to AHV, we share this post by Artur Krzywdzinski, where he explains the process in detail. We would like to thank Artur for sharing the solution. Share your own ideas and processes that worked (and did not) with the community; help someone, encourage cooperation! Please note that Nutanix Xtract, referred to in the post, is currently known as Nutanix Move. vmwaremine.com: Migrate Windows 2003 to Nutanix AHV by Artur Krzywdzinski
Hi all! I'm currently working on a Nutanix template for Zabbix that I will share with the community once it's finished. I have a small problem with some SNMP answers Zabbix receives from Prism. I'm querying OID 22.214.171.124.4.1.412126.96.36.199.1. This OID is cstControllerVMStatus.1; it should return Up or Down depending on whether the CVM with index 1 is up or down. When I run it in Zabbix, the answer is "55 70 00". Is "55 70 00" a value that means 'Up'? In that case, what is the value for 'Down'? Regards,
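The reply looks like the raw bytes of an ASCII string rather than an enum: 0x55 is 'U', 0x70 is 'p', and 0x00 is a trailing NUL, so the value literally is "Up". A quick way to check on any Linux box:

```shell
# Decode the hex bytes returned by the SNMP query.
# xxd -r -p reverses a plain hex dump (whitespace is ignored),
# and tr strips the trailing NUL byte.
echo "55 70 00" | xxd -r -p | tr -d '\0'
# → Up
```

By the same logic, "Down" would arrive as the hex bytes of the ASCII string "Down" (44 6f 77 6e); setting the item's value type in Zabbix to text/character should make it display as a string directly.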
Hi guys, I have some questions. First, can we enable Flow on the ESXi hypervisor on Nutanix? I had the same question about Calm on ESXi, and per a Nutanix KB the answer is that Calm can be enabled on ESXi. But how about Flow? Can we enable/use Flow on top of the ESXi hypervisor? And just curious: does anyone have a comparison table of Nutanix features running on AHV, ESXi, and Hyper-V?
Hi, I had a problem with the NTP server configured on the Nutanix cluster: I unintentionally changed the date to 09012020 instead of 01092020. The point is that after solving that problem, the cluster now reports critical alerts dated September 1, 2020. The NTP date is already corrected, but the web console still shows some critical alerts. Those alerts are not shown with ncli, but are shown with the Alert tool. I have tried to acknowledge and resolve them, but they are NOT eliminated from alert_list. Does anyone know what I can do so that the web console no longer shows them?
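One avenue worth trying is resolving the stale alerts from the ncli alert namespace and letting the web console re-sync. The exact subcommands and parameter names below are from memory and may differ by AOS version, so treat this as a sketch and confirm with the built-in help first:

```shell
# Show what the alert namespace supports on your AOS version.
ncli alert help

# List current alerts to collect the IDs of the stale entries.
ncli alert ls

# Resolve a specific alert by ID (ID value is a placeholder).
ncli alert resolve ids=<alert-id>
```

If the entries persist in the web console after being resolved, that points at a stale alert index rather than live alerts, which is usually a case for Nutanix Support.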
Hello - does anyone know if there is an ipmitool command to set the interface to 100 Mb full duplex? Is this option even available to change? We are having some IPMI issues where connectivity drops every so often. We are working with the networking team to check their side, and we've done various things to test/isolate the issue. On the network switch side, the ports have been configured down from 1 GbE to 100 Mb. One suggestion that came up from networking is to set the IPMI interface itself to 100 Mb full duplex. Thank you. RV
I'm trying to set up a new bridge (br1). After SSHing to the Nutanix AHV host, I continue with "ssh firstname.lastname@example.org" to get to the CVM, and it asks me for a password. Is there a way to change to, or continue with, a different login name at that point? The password I enter is being denied because the CVM expects a different username.
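For reference, the CVM login user is nutanix, not the user you logged into the AHV host with, and from an AHV host the local CVM is always reachable over the internal interface. A minimal sketch:

```shell
# From the AHV host shell: connect to the local CVM on the internal
# address as the "nutanix" user, then enter the nutanix user's password.
ssh nutanix@192.168.5.254
```

If you were using a different username in the ssh command, that alone would explain the password being rejected, since ssh authenticates against the account name you supply.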
Hi, I'm implementing Nutanix AHV at a customer site in a bunker environment. There is no internet access, and the infrastructure will have minimal to no LAN communication outside of the cluster infrastructure. The customer expects to use IPs and short names based on hosts files instead of DNS resolution. Unfortunately, Foundation provides no way to set up the cluster without DNS. I guess NTP will be the same, but I haven't gotten there yet. Any workaround? Or are NTP and DNS mandatory for the cluster to work?
I have this same error on 3 nodes. I have checked networking, time, and connectivity, and it all seems OK. Running NCC health checks on different services gives me a range of things to check, and I'm not sure where to start. The hosts are Hyper-V.
Let's say, for example, that we have multiple VLANs in our environment to logically segment the traffic. We need to configure the network interface accordingly, as we might need a VM to be VLAN-aware. Before looking at the configuration method, let us understand the difference between trunk and access modes. An access port sends and receives untagged frames (i.e., all frames are in the same VLAN). A trunk port supports tagged frames and thus allows carrying multiple VLANs. Do we have a method to configure trunk mode on a NIC? The following article describes the steps to safely change the NIC mode of a VM to trunk mode: How to change NIC mode (Access, Trunked)
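The change described in the article boils down to identifying the NIC and updating its mode via aCLI. A hedged sketch, where "myvm", the MAC address, and the VLAN IDs are placeholders (verify the exact parameters against the linked article for your AOS version):

```shell
# List the VM's NICs to find the MAC address of the one to change.
acli vm.nic_get myvm

# Switch that NIC to trunk mode, trunking VLANs 100 and 200
# in addition to its own access VLAN.
acli vm.nic_update myvm 50:6b:8d:xx:xx:xx update_vlan_trunk_info=true \
  vlan_mode=kTrunked trunked_networks=100,200
```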
Hello - in Prism on an 8-node cluster, we are getting an alert under Health showing that 3 of the 8 CVMs are at 100% memory usage, and have been for about a week now, as shown in the graph in Prism. However, those CVMs seem fine in vCenter: at a high level, guest memory % for the CVMs is around 40% or less. Thoughts? Thanks.
Hi folks, I'm in the middle of a deployment and have a question. We have a pair of Nexus 3548s acting as 10 G edge switches for our Nutanix clusters, trunked up to a pair of Cisco 4500s. We have 4 nodes in the cluster with a total of 10 NICs, all 10 Gb, assigned to the VDS in vSphere, and on the VDS we set the MTU to 9000. Is there a way to separate the CVM traffic from the management traffic? I would like to keep the CVM traffic from going up to the 4500s; however, we needed to create an SVI on the 4500s so we could manage them. I'm getting jumbo frame errors on both the Nexus and the 4500s, and when I set the MTU on the VDS back to 1500 the errors went away. My guess is the CVM traffic is what's causing the issues, and I would prefer not to mess with the MTU settings on the 4500s. What is best practice here? What are others doing? Thanks
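One way to confirm whether jumbo frames actually pass end-to-end before settling on an MTU is vmkping from an ESXi shell with the don't-fragment bit set. The payload size 8972 is 9000 minus the 28 bytes of IP and ICMP headers; the target address below is a placeholder for a CVM or vmkernel peer:

```shell
# From an ESXi host shell: send a don't-fragment ping with an
# 8972-byte payload. If this fails while a default-size ping works,
# some hop in the path is not passing 9000-byte frames.
vmkping -d -s 8972 192.168.1.50
```

If any switch in the path (here, the 4500s) is left at MTU 1500, jumbo frames crossing it will be dropped, which matches the errors disappearing when the VDS was set back to 1500.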
Hi Team, I executed the NCC health checks run_all on two clusters and got the following message (at the end of the run_all script):

Detailed information for sar_stats_threshold_check:
ERR : Execution terminated by exception IndexError('list index out of range',):
Traceback (most recent call last):
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/ncc_utils/plugin_utils.py", line 128, in handle_exceptions
    result = fn()
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/base_plugin.py", line 740, in
    result = putils.handle_exceptions(lambda : check(*check_args), cls.canvas)
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/health_checks/sar_checks.py", line 358, in check_threshol
Hi all, I have an upcoming appliance installation, and it might be necessary to downgrade the current NOS version. Is there an out-of-the-box approach or method available that could be used to downgrade an existing cluster? Or maybe KB articles that could help me with this? Thank you and best regards, Andreas