Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Let's say, for example, we have multiple VLANs in our environment to logically segment the traffic. We need to configure the network interface accordingly, as we might need a VM to be VLAN aware. Before looking at how to configure this, let us understand the difference between trunk and access modes. An access port sends and receives untagged frames (i.e. all frames belong to the same VLAN). A trunk port carries tagged frames and can therefore switch multiple VLANs. Do we have a method to configure trunk mode on a NIC? The following article describes the steps to safely change the NIC mode of a VM to trunk mode: How to change NIC mode (Access, Trunked)
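For reference, a minimal sketch of what that change typically looks like from the acropolis CLI on a CVM, assuming the vm.nic_update syntax described in the referenced KB; the VM name, MAC address and VLAN IDs below are placeholders:
[code]
# Find the MAC address of the vNIC to modify (VM name is an example)
acli vm.get MyVM

# With the VM powered off, switch the vNIC to trunked mode and allow the listed VLANs
# (vlan_mode / trunked_networks / update_vlan_trunk_info are assumed per the referenced KB)
acli vm.nic_update MyVM 50:6b:8d:aa:bb:cc vlan_mode=kTrunked trunked_networks=10,20,30 update_vlan_trunk_info=true
[/code]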
According to the documentation, backplane traffic segmentation is [u]not[/u] supported in a configuration for "ESXi clusters in which the CVM is connected to a VMware distributed virtual switch". What I'd like to understand is whether this is due to technical reasons, or whether it is just a configuration limitation of the CVM installer/Prism management. So far I couldn't find a comprehensive guide/document about setting up a Nutanix cluster with vSphere distributed switches, and nodes with 2x10 Gbps each. It doesn't seem to make sense to me to set this up with standard switches (i.e. no Network I/O Control) where all traffic (VM, storage, vMotion, ...) uses the same physical uplinks.
About this task: To track and record networking statistics for a cluster, the cluster requires information about the first-hop network switches and the switch ports being used. Switch port discovery is supported with switches that are RFC 2674 compliant. Switch port discovery involves obtaining statistics from the Q-BRIDGE-MIB on the switch and then identifying the MAC address that corresponds to the host. Such discovery is currently best-effort, so it is possible that, at times, an interface might not be discovered. Before you begin: Before you configure network switch information in the Prism web console, configure the corresponding SNMP settings on the first-hop network switch. Procedure: To configure one or more network switches for statistics collection, do the following: Click the gear icon in the main menu and then select Network Switch in the Settings page. Note: Network switch configuration is supported only for AHV clusters. The Network Switch Configuration dialog box appears.
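As a quick pre-check before adding a switch in Prism, you can confirm that it actually answers Q-BRIDGE-MIB (RFC 2674) queries; a sketch from any Linux host with net-snmp installed, where the community string and switch address are placeholders:
[code]
# Walk dot1qTpFdbTable (the MAC address table defined in Q-BRIDGE-MIB, OID 1.3.6.1.2.1.17.7.1.2.2)
snmpwalk -v2c -c public 10.0.0.1 1.3.6.1.2.1.17.7.1.2.2
[/code]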
Hi, I'm implementing Nutanix AHV at a customer site in a bunker environment. There is no internet access, and the infrastructure will have minimal to no LAN communication outside of the cluster infrastructure. The customer expects to use IPs and short names based on hosts files instead of DNS resolution. Unfortunately, in Foundation there is no way to set up the cluster with no DNS. And I guess NTP will be the same, but I didn't get that far. Any workaround? Or are NTP and DNS mandatory for the cluster to work?
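One workaround that is sometimes used (worth confirming with Nutanix support for your AOS version) is to give Foundation placeholder values and then adjust them from a CVM once the cluster is up; a sketch with placeholder addresses, assuming the standard ncli cluster commands:
[code]
# Review and replace the placeholder DNS entry after deployment
ncli cluster get-name-servers
ncli cluster remove-from-name-servers servers="8.8.8.8"

# Point NTP at a reachable local time source instead of an internet pool
ncli cluster add-to-ntp-servers servers="10.0.0.5"
[/code]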
Hi, I had a problem with the NTP server configured on the Nutanix cluster: I unintentionally changed the date to 09012020 instead of 01092020. The point is that after solving that problem, the cluster now reports critical alerts dated September 1, 2020. The NTP date has already been corrected, but the web console still shows some critical alerts; those alerts are not shown with ncli, but are shown with the alert tool. I have tried to acknowledge and resolve them, but they are NOT eliminated from alert_list. Does anyone know what I can do so that the web console no longer shows them?
Hi all, A network security audit on a customer infrastructure reported a vulnerability on the Cerebro HTTP page (port 2020), which is open over plain HTTP on every CVM and without any security prompt. Some sensitive information is visible: - AOS version: el7.3-release-euphrates-5.10.7-stable-... - VM names - Protection Domain names - Witness IP address - ... Is there a way to secure this component?
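For anyone who wants to reproduce the audit finding, the exposure is easy to confirm from any machine that can reach the CVM network; the CVM IP below is a placeholder:
[code]
# The Cerebro page answers plain HTTP on port 2020 of each CVM, with no authentication prompt
curl -s http://10.0.0.30:2020 | head -n 20
[/code]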
Hello everybody, I noticed that application-consistent VM snapshots of e.g. Windows 10 Pro VMs, with NGT enabled and installed in the VM and verified with ‘ncli ngt list’, don’t work and result in the following alerts… VSS snapshot is not supported for the VM 'TEST-VM', because VSS software is not installed. and … VSS is enabled but VSS software or pre_freeze/post_thaw scripts are not installed on the guest VM(s) TEST-VM protected by TEST-VM. Using a Windows Server 2016 VM with the respective Protection Domain generates application-consistent Nutanix VM snapshots. With Windows 10 VMs it won’t work. Verified in 2 different Nutanix clusters. The clusters are running AOS 5.15.4 LTS. Nowhere could I find a hint that Windows 10 VMs are not supported. Regards, Didi7
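For completeness, this is the kind of check referred to above, run from a CVM; a sketch only, with the VM name as a placeholder:
[code]
# Show the NGT / VSS status recorded for the Windows 10 VM
ncli ngt list | grep -A 8 "TEST-VM"
[/code]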
Hi All, I have something odd in my Network view in Prism: I have 3 switches (one of which is a 2-switch stack), but the view shows 5 switches, 2 of them classified as None. The None entries have the IP addresses of one of the recognised single switches and of the stack, which is already identified. The other odd thing is that the None entries have ports that I would expect to see in the identified switches. Can anyone tell me how to correct this? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/51660407-9d82-4b54-92a0-efb302086840.png[/img]
I have a problem with ARP requests on the bridge for guest traffic. The drawing below shows the current architecture. If a request arrives from the outside, passing through the firewall, and the firewall starts to communicate with a VM whose NIC is connected to the BR1-UP bond of the BR1 bridge, the ARP request for the resolution of the VM address stops at the BR1 bridge and does not reach the VM; in this case the firewall and VM ARP tables remain unpopulated and the communication stops. On the other hand, if the communication starts from the VM towards the firewall (for example with a ping), the ARP request is processed by the firewall and the ARP tables of the firewall and the VM are correctly populated with the respective MAC addresses. The firewall IP and the VM IP are in the same broadcast domain, no routing. I checked with Wireshark on the Windows VM, and with tcpdump and ovs-appctl fdb/show on the Nutanix host; when the communication starts from the firewall, the ARP request only gets as far as the physical card of the BR1
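For anyone hitting the same behaviour, a sketch of the checks mentioned above as run on the AHV host; the uplink, tap and bridge names are placeholders for this setup (use ovs-vsctl show to find the real ones):
[code]
# Capture ARP on a br1 uplink NIC and on the VM's tap interface to see where the request stops
tcpdump -i eth2 -nn -e arp
tcpdump -i tap0 -nn -e arp

# Dump the OVS MAC learning table of the guest bridge
ovs-appctl fdb/show br1
[/code]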
Hello guys, I have a question regarding scheduled auto-reboot of Nutanix Hyper-V nodes. I understand there is a CAU pre-update PowerShell script to auto-reboot the nodes one by one and ensure the CVM is up and running before executing the reboot action on the next node. Is there a similar script we can use to schedule an automatic reboot of the Hyper-V nodes one by one, without the risk of all the nodes/CVMs or the cluster being down at the same time? Thanks. Regards, Tze Siong
I’m working on building a SQL cluster on Nutanix using shared storage. I’m having issues with some of the steps in the ‘Creating a Windows Guest VM Failover Cluster’ walkthrough. (https://portal.nutanix.com/page/documents/details/?targetId=Advanced-Admin-AOS-v510:vmm-failover-cluster-create-t.html)
I have (not real values, used as an example):
SQL01A - 10.100.1.20
SQL01B - 10.100.1.21
Cluster Virtual IP - 10.100.1.10
iSCSI Data Services IP - Not set (blank)
SQL01VGroup - Target IQN Prefix here
Status/Completed:
I’ve already created a volume group with a few disks for testing. I have NOT attached it to the VMs yet.
I’ve enabled MPIO on each of the 2 servers.
I’ve enabled iSCSI Devices in the Multipaths tab.
Where I’m stuck is the next portion on the Microsoft iSCSI Initiator/Target Portal IP. The guide is not very clear on what I should be doing. From the guide: From the Server Manager, add and enable the Multipath I/O feature in Tools > MPIO. Add support for iSCSI devices by ch
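On the volume group side, a sketch of how the two initiators are usually whitelisted from acli, assuming the vg.attach_external syntax; the IQNs below are placeholders matching the example servers:
[code]
# Allow each SQL node's iSCSI initiator IQN to access the volume group
acli vg.attach_external SQL01VGroup iqn.1991-05.com.microsoft:sql01a.example.com
acli vg.attach_external SQL01VGroup iqn.1991-05.com.microsoft:sql01b.example.com
[/code]
Note that external iSCSI attachment generally expects the cluster’s iSCSI Data Services IP to be configured, which appears to still be blank in the setup above; the guide’s Target Portal IP step points the Windows iSCSI Initiator at that address.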
In some scenarios, you may need to move a disk from the IDE to the SCSI bus or vice versa. Sample scenarios include but are not limited to: the VM does not boot due to a missing SCSI driver (in such cases, the disk can be converted to IDE to install the missing drivers and then moved back to SCSI); a wrong disk type was used during VM creation; application requirements dictate a particular disk type; you have recently migrated VMs to AHV and noticed that some of the disks appear as IDE. After following the disk conversion process from IDE to SCSI (as described in step 2 of the uvm_ide_disk_check) the following doubts arise: Question: Once the disk gets converted, does it immediately redirect all I/O to the SCSI drive and leave the IDE disk unused? Answer: Once the conversion is completed (which is actually a cloning process from the original IDE disk), the new SCSI disk needs to be attached to the SCSI bus and then the old IDE disk can be removed. Question: What ar
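Roughly, the clone-and-reattach flow looks like the sketch below when done from acli (the VM name, vmdisk UUID and disk address are placeholders, and the VM should be powered off first):
[code]
# Find the vmdisk UUID of the existing IDE disk
acli vm.get MyVM include_vmdisk_paths=1

# Clone the IDE disk's contents onto a new disk attached to the SCSI bus
acli vm.disk_create MyVM bus=scsi clone_from_vmdisk=<vmdisk_uuid>

# Once the VM boots cleanly from the SCSI disk, remove the old IDE disk
acli vm.disk_delete MyVM disk_addr=ide.0
[/code]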
vCenter 7.0U1c and vSphere Clustering Service VMs on local or shared storage. After test-upgrading vCenter from 6.5U3 to 7.0U1c (which worked just fine with 0 issues), every VMware cluster with DRS and/or HA enabled gets 3 new vCLS VMs (very tiny VMs with 2 GB storage / 0.13 GB RAM). When NCC is run it registers 3 WARNs that there are VMs running on the SATADOMs. I would assume that, even with what (little) they do (today), this would wear out SATADOMs or M.2s? Would this be a concern for G4/G5s (with SATADOMs) and not G6/G7s (M.2s)? While vCLS can be moved to shared storage, wouldn’t they be moved back to local storage during an NX cluster stop? (Like to point out this was done in a LAB environment, not in Production… no bits nor bytes were harmed during the upgrade process.)
Dear all, I already configured a 3-node Nutanix block (NX-1000 series) more than a year ago, and now our customer has requested a new node (NX-3000 series) to be added to the existing cluster. Could you please let me know the best way to add this node without impacting the running nodes? Regards,
When trying to use allssh "manage_ovs ...." I get the above error. As I understand it, Zeus and Zookeeper are close friends (as one would expect). Zookeeper stores the cluster configuration information, while Zeus is used by the other cluster components to access it? Somewhere I read that when you manually add a new node to a cluster, Zookeeper on that new node is not active? Any information would be appreciated (even rabbit holes). thx.
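For context, this is the kind of invocation in question, run from any CVM; show_uplinks and show_interfaces are read-only, so they are a safe way to test whether the command can reach Zeus/Zookeeper on every node:
[code]
# Query the uplink/bond and interface configuration of every AHV host in the cluster
allssh "manage_ovs show_uplinks"
allssh "manage_ovs show_interfaces"
[/code]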
We have an NX-3050 and we frequently have to rebuild Linux VMs due to their ext4 filesystems corrupting and going into read-only mode. Our research has pointed us to articles where the Linux kernel has issues with SSDs. Has anyone else experienced this and, if so, how did you solve it? Edit: We created a container that bypassed the SSDs and we have not yet seen the issue there, but we would love to re-engage the SSDs on our servers. The Linux version/distro is Ubuntu 12.04.3 LTS. One of the articles we found relating to this is: [url=http://askubuntu.com/questions/262717/ubuntu-12-04-ssd-root-frequent-random-read-only-file-system]http://askubuntu.com/questions/262717/ubuntu-12-04-ssd-root-frequent-random-read-only-file-system[/url] I hope this helps if any of you have experienced this issue.
We have encountered a problem: after connecting to a node's IPMI remote management address, the VLAN setting was changed from disabled to enabled by mistake, and now the IPMI remote management address can no longer be reached. Running IPMI commands locally from the ESXi host prompts "invalid command". Is there another way to re-enable remote management of the IPMI address?
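If the host can still reach its own BMC over the internal interface, the standard ipmitool syntax for inspecting and clearing the VLAN setting is sketched below; on Nutanix ESXi hosts the bundled binary is often at /ipmitool, and the LAN channel number 1 is an assumption that may differ per platform:
[code]
# Show the BMC's current LAN settings, including the 802.1q VLAN ID
/ipmitool lan print 1

# Turn VLAN tagging off again (or set a specific ID with "vlan id <id>")
/ipmitool lan set 1 vlan id off
[/code]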