Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,181 Topics
- 3,238 Replies
I’m working on building a SQL cluster on Nutanix using shared storage, and I’m having issues with some of the steps in the ‘Creating a Windows Guest VM Failover Cluster’ walkthrough (https://portal.nutanix.com/page/documents/details/?targetId=Advanced-Admin-AOS-v510:vmm-failover-cluster-create-t.html).
Environment (not real values, used for example):
SQL01A - 10.100.1.20
SQL01B - 10.100.1.21
Cluster Virtual IP - 10.100.1.10
iSCSI Data Services IP - not set (blank)
SQL01VGroup - Target IQN prefix here
Completed so far:
- Created a volume group with a few disks for testing (I have NOT attached it to the VMs yet)
- Enabled MPIO on each of the two servers
- Enabled iSCSI devices in the Multipaths tab
Where I’m stuck is the next portion, the Microsoft iSCSI Initiator / Target Portal IP configuration. The guide is not very clear on what I should be doing. From the guide: “From the Server Manager, add and enable the Multipath I/O feature in Tools > MPIO. Add support for iSCSI devices by ch…”
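For what it’s worth, the pieces that usually need to be in place before the iSCSI Initiator step look roughly like the sketch below. This is a rough outline, not the official procedure: the Data Services IP and initiator IQN are placeholders, the acli form applies to AHV clusters (on ESXi the initiator IQN is added to the volume group’s client whitelist in Prism instead), and exact parameters can differ by AOS version.
[code]
# On any CVM: set the cluster-wide iSCSI Data Services IP (placeholder address);
# this is the address the Windows Target Portal should point at
ncli cluster edit-params external-data-services-ip-address=10.100.1.11

# On an AHV cluster: allow the guest's initiator IQN to see the volume group
# (IQN below is hypothetical; copy it from the guest's iSCSI Initiator > Configuration tab)
acli vg.attach_external SQL01VGroup iqn.1991-05.com.microsoft:sql01a.example.local

# Inside each Windows guest (PowerShell): point the initiator at the Data Services IP
New-IscsiTargetPortal -TargetPortalAddress 10.100.1.11
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
[/code]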
Hello Ma’am and Sir, has anybody tried installing or imaging a single node using Phoenix? I need some help or a procedure for this, as I can’t access the links given on this page; they only show “Not Found or Forbidden”: Appendix: Imaging A Node (Phoenix) | Nutanix Community… specifically “Installing the Controller VM and Hypervisor by using Phoenix” and “Imaging Bare Metal Nodes”. I created a bootable Phoenix USB and I am now at the root@phoenix prompt, but I assume it requires some commands to run and I have no idea what they are; there are no search results on the internet about this.
Hi, we have a running Nutanix cluster on Dells and are looking to move the CVMs to a new subnet. The new subnet has ESXi management for each host, but on a different vmk. Is it possible to point the CVMs to the new vmk to get around the ‘CVM subnet is currently different than the Hypervisor subnet.’ issue? Cheers, Thomas
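As a starting point, it can help to compare the host management vmk addresses with the CVM external addresses before the move; a minimal sketch using standard ESXi and CVM commands (interface names below are the usual defaults and may differ in your environment):
[code]
# On each ESXi host: list vmkernel interfaces and their subnets to see which vmk
# carries management traffic
esxcli network ip interface ipv4 get

# On each CVM: confirm the external (eth0) address that is being compared against
# the host's management vmk
ip addr show eth0
[/code]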
Hi, after upgrading to AOS 126.96.36.199 from 5.1.3 yesterday, I am getting alerts saying "Pulse cannot connect to REST server endpoint. Connection Status: Unknown, Pulse Enabled: Unknown, Error Message: Unable to determine Pulse connectivity status to REST endpoint." When configuring Pulse, it shows me connection method SMTP. Am I missing something? The firewall is open for insights.nutanix.com (no proxy involved):
[code]
14:15:48 Packet filter rule #106 TCP 10.46.17.xxx:34966→188.8.131.52:443 [SYN] len=52 ttl=62 tos=0x00 srcmac=my:hp:ro:ut:er:00 dstmac=00:1a:8c:f0:36:e4
14:15:49 Packet filter rule #106 TCP 10.46.17.yyy:51438→184.108.40.206:443 [SYN] len=52 ttl=62 tos=0x00 srcmac=my:hp:ro:ut:er:00 dstmac=00:1a:8c:f0:36:e4
[/code]
Best regards, Steffen
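One way to rule out basic reachability from the CVM side, assuming no proxy is involved, is to test the endpoint named in the alert directly with tools already present on the CVM and then re-run the health checks; a minimal sketch:
[code]
# From any CVM: verify TCP/TLS connectivity to the Pulse/Insights endpoint
curl -v https://insights.nutanix.com:443 --max-time 10

# Re-run NCC to see whether the connectivity-related checks still report a problem
ncc health_checks run_all
[/code]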
Right now we’re about to push ESXi 5.5 U3b so we can move up to AOS 220.127.116.11. Since Nutanix doesn't support pushing out patches, we planned to do that with VUM, which is fine; at least it saves time over bringing down each node manually to upgrade to U3b. On top of patching, we're also installing the NFS VAAI VIB. In our lab we do it via the command line, but has anyone found a zip or package that VUM can push out, so we can just package our baseline with the updates and the VIB file? We can't find a zip file or anything else we can leverage thus far. Since we have two data centers with a good number of nodes in the cluster at each, we'd love to save time if at all possible, so any advice is greatly appreciated. Thanks!
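For comparison, the command-line route looks roughly like the sketch below; the path and bundle filename are placeholders for wherever the Nutanix VAAI offline bundle is staged (VUM generally wants the same offline-bundle .zip imported as a patch rather than the bare .vib):
[code]
# On each ESXi host: install the VAAI plugin from an offline bundle zip
# (filename/path below are hypothetical placeholders)
esxcli software vib install -d /vmfs/volumes/datastore1/nfs-vaai-plugin-offline-bundle.zip

# Confirm the VIB once the host is back out of maintenance mode
esxcli software vib list | grep -i vaai
[/code]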
I have figured out that the kernel version is 4.19.100. I have also figured out that this version does not work with my NUC11 and the i225-V NIC. I would like to replace that kernel with the 5.15 kernel, as 5.15 works with the i225-V NIC. Does anyone have a procedure for replacing the kernel in the ISO? Any help would be greatly appreciated. Thanks, Steve
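In case it helps as a starting point, the generic approach to remastering a Linux installer ISO looks like the sketch below. This assumes the installer boots via isolinux and that you have a 5.15 vmlinuz and a matching initrd containing the modules the installer expects, which may not hold for the Community Edition ISO; paths and filenames are placeholders.
[code]
# Extract the ISO contents
xorriso -osirrox on -indev ce-installer.iso -extract / ./iso_root

# Replace the kernel and initrd with 5.15 builds (hypothetical paths; the initrd
# must carry drivers built for the new kernel, e.g. the igc module for i225-V)
cp /path/to/vmlinuz-5.15 ./iso_root/boot/vmlinuz
cp /path/to/initrd-5.15.img ./iso_root/boot/initrd.img

# Rebuild a bootable ISO (assumes isolinux; adjust paths to match the extracted tree)
genisoimage -o ce-installer-5.15.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -V "CE_CUSTOM" ./iso_root
[/code]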
Hello everybody, hope you’re all doing well! I want to install Nutanix using Foundation. I have 3 Lenovo nodes and one Cisco 2960X switch with 24 x 1 Gig RJ45 ports and 4 x 1 Gig SFP ports. Unlike Supermicro nodes, Lenovo nodes do not have shared IPMI ports, so for each Lenovo node I have to connect one 10 Gig port and one 1 GbE IPMI port to the same Cisco 2960X switch (the 10 Gig ports of the nodes connected to the 1 Gig SFP ports of the switch, and the 1 Gig IPMI ports to the 1 Gig RJ45 ports). Please take a look at the picture below. Since the 10 Gig ports of the nodes are connected to 1 Gig ports of the switch, I wonder whether this architecture will allow me to install Nutanix and create the cluster without any issues? Thanks in advance.
Hi all, havoc struck this morning when I tried to move my AHV cluster from the old switch stack to the new switch stack - the hosts, I think, went into panic mode and started to restart various VMs. The background: I have 2 x Dell X4012 core switches and 2 x Dell N3024 ToR switches. Currently, because we are moving the VMware environment to Nutanix, only one X4012 is connected, to NIC 1 on each of the 3 hosts. From that X4012 there is a cross-connect into the old VMware core blade switches (2 x Dell 8024), which are connected to the old ToR N3048 switch stack; those are then connected to the router for VPLS and DMZ. To move the connectivity, I thought I could add the second NICs of the hosts to the second X4012, then disconnect the cross-connect and the first NICs, thus using the newer switches, albeit on the opposite NICs. I planned to reconfigure and update the first X4012, add it back into the stack with a LAG to the N3024 stack, and drop the cross-connect. I moved the NIC connections and lost all
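For anyone hitting something similar, it can help to confirm how the AHV bond is configured and which uplink is currently active before pulling cables; a minimal sketch using the standard AHV/OVS tooling (the bridge/bond names below are the common defaults, e.g. br0/br0-up, and may differ):
[code]
# From the CVM: show which physical NICs back each bridge and their link state
manage_ovs show_uplinks
manage_ovs show_interfaces

# On the AHV host: inspect the bond mode (active-backup vs balance-slb/tcp)
# and which member is active
ovs-appctl bond/show br0-up
[/code]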
I attempted an upgrade of NCC, but it is stuck. Is there a way to restart the upgrade?
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/d9c1c7da-1011-4da7-98c3-5a0d014baade.png[/img]
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/279e1192-82b4-4c34-b612-2c9269707901.png[/img]
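While waiting on support, a couple of low-risk things to look at from a CVM; the installer filename below is a hypothetical placeholder for whatever NCC package was downloaded from the portal, and the manual-install route should only be used if the one-click upgrade really never completes:
[code]
# Confirm which NCC version is actually active on each CVM
allssh "ncc --version"

# If the one-click upgrade stays stuck, NCC can also be installed manually by copying
# the downloaded installer to /home/nutanix on a CVM and running it (placeholder name)
./nutanix-ncc-<version>-installer.sh
[/code]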
Just implementing a new cluster: I've run Foundation with NOS 18.104.22.168 and ESXi 6. The ESXi hosts have not yet been connected to vCenter. One of the post-imaging steps is to set the correct time zone for the cluster, and it says that each CVM needs to be shut down in series. Is this a simple case of connecting to each ESXi host and shutting down the CVM, powering it back on, connecting to the CVM via http://cvmip to check that it is up, then moving on to the next ESXi host and repeating? Thanks, Chris
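Roughly that, yes; the sketch below is the usual sequence, assuming a healthy cluster that can tolerate one CVM down at a time (the timezone value is just an example, and commands may vary slightly by NOS/AOS version):
[code]
# From any CVM: set the cluster time zone
ncli cluster set-timezone timezone=Europe/London

# Then, one node at a time:
#   1. Gracefully shut down the CVM (run this ON the CVM itself, not the host)
cvm_shutdown -P now
#   2. Power the CVM back on from the ESXi host, wait for it to rejoin, then verify
#      all services are UP before moving to the next node
cluster status
[/code]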
Hi folks, have you ever thought about this question? I read some guides and am confused now. 1) Node memory count: it seems every node type has a different supported memory configuration. For NX-1064-G4 it shows support for only 16 DIMMs. What happens if I install some other number, e.g. 1, 2, 3, 4, 5, 6 or any other count? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1450i89B4BCCDB3F2197B.png[/img] 2) DIMM slot locations: it also says there is a fixed slot order for each DIMM-count scenario. What happens if I install DIMMs not according to the guide? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1451iA5BA4F2ED2C9A97B.png[/img] 3) Memory type, LRDIMM or RDIMM: it also says there are two DIMM types. How do I confirm which type is supported by a specific node model? P.S. In my environment I have an NX-1065-G4 with the default 64 GB (4 x 16 GB installed in slots 1A, 1B, 1E, 1F), and I have now added two additional 16 GB DIMMs into slots 1G and 1H. It has worked fine so far, but my me
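To see what is actually installed (size, type, and populated slot), the generic SMBIOS tooling on the host can help; a minimal sketch, assuming an AHV/Linux host where dmidecode is available (on ESXi the rough equivalent is smbiosDump). Registered vs load-reduced shows up under the per-DIMM "Type Detail" field:
[code]
# On the host: list each DIMM slot with its locator, size, type and speed
sudo dmidecode -t memory | egrep "Locator|Size|Type|Speed"
[/code]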
Hi, we have an NX-3060-G4 with 3 VMware hosts that is currently connected to our Cisco core switch cluster using 10 Gb twinax cables. Long story short, I need to remove these and connect the Nutanix block to a single 1 Gb Cisco switch (WS-C3850-12S-S). My question is: can I connect a 1 Gb SFP link between the Nutanix nodes and the new core switch using the 10 Gb ports on the nodes? If so, will the hosts detect that the connected link can only run at 1 Gb? Thanks in advance.
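If you do cable it that way, the negotiated speed is easy to confirm from each host once the link comes up (standard ESXi command; vmnic numbering varies by host):
[code]
# On each ESXi host: list NICs with link state and negotiated speed
esxcli network nic list
[/code]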
I've inherited a Nutanix setup and noticed there is 1 storage pool and 4 storage containers (see screenshot). I don't know what the default setup looked like, so I don't know which are required. I'd like to have 1 container. Currently attached to the Nutanix host: VDI_POC, nutanix-peristent-pool. Not attached: NutanixManagementShare, SelfServiceContainer. All of my VMs are on VDI_POC. Can I somehow delete the other 3 and combine them into VDI_POC without losing data or incurring downtime?
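For orientation, the containers and how much data each one holds can be listed from a CVM before deciding anything; note that NutanixManagementShare and SelfServiceContainer are created by AOS features (Files / Self-Service) and are generally left alone. The container name below is just the one from the post, and a container can only be removed once it is empty:
[code]
# From any CVM: list containers and their usage
ncli ctr ls

# Removal of an already-empty container (example name from the post)
ncli ctr remove name=nutanix-peristent-pool
[/code]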
We have a 4-host Dell bundle. Per host:
[list]
[*]2 x Xeon CPU E5-2620 v3 @ 2.40GHz
[*]256 GB RAM
[*]4.4 TB of storage, 400 GB of which is SSD
[/list]
We got this error after running the NCC checks:
Node 10.0.0.58: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.55: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.57: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.56: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Refer to KB 1513 for details on cvm_memory_check
Detailed information for ldap_config_check:
Node 10.0.0.56: INFO: No ldap config is specified.
Refer to KB 2997 for details on ldap_config_check
Detailed information for cvm_numa_configuration_check:
Node 10.0.0.58: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.55: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.57: INFO: Number of vCPUs (8) on the CVM is more than the
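The FAILs just mean the CVMs are running with roughly 16 GB instead of the recommended 24 GB or more. A quick way to confirm from the cluster side, before raising each CVM's memory allocation in vSphere one node at a time (with the cluster healthy in between):
[code]
# From any CVM: show current memory on each CVM
allssh "free -g"

# After increasing CVM memory to 24 GB or more on each node, re-run the checks
ncc health_checks run_all
[/code]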
Hi, I am able to create a folder in the Nutanix SMB (container) share using the NFS whitelist, but I am not able to see an option to set permissions on that folder. I then came to learn that we cannot set per-folder permissions like that on the Nutanix container share; is that true? Currently we share a volume from one VM as the file share. Thanks, Ritchie
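For reference, the filesystem whitelist only controls which client addresses may mount the container storage; it does not provide per-folder ACLs, which is why exposing the data through a guest VM file share (as you are doing) is the usual approach. A minimal sketch of how the whitelist itself is managed, assuming ncli on a CVM (the subnet below is an example):
[code]
# From any CVM: allow a client subnet to mount the container over NFS/SMB
ncli cluster add-to-nfs-whitelist ip-subnet-masks="10.10.10.0/255.255.255.0"

# Review the current whitelist
ncli cluster get-nfs-whitelist
[/code]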