Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Calm is integrated into Prism Central and does not require you to deploy any additional VMs. To start using Calm, you only need to enable it from Prism Central. Ensure that you have met the prerequisites before enabling Calm. If the Prism web console is not registered with Prism Central, and the application blueprints reference subnets, images, or VMs on the Prism web console, Calm functionality is impacted.

Procedure:
1. Log on to Prism Central with your local ADMIN account. For detailed information on how to install and log in to Prism Central, see the Prism Central Guide.
2. From the Prism Central UI, click Services -> Calm.
3. Click Enable. Note that the Enable option appears only if you have logged on to Prism Central with the local ADMIN account.

When Calm is enabled, an extra 8 GiB of RAM for a large Prism Central (or 4 GiB for a small Prism Central) is allocated automatically to maintain the performance of the infrastructure. To access Calm, click Services -> Calm from the entities menu.
I am new and watching all the videos. I am trying to set up Nutanix like Proxmox CE with:
- 3 x Dell R710 hosts (Nutanix installed on an internal USB drive)
- 1 x Dell R720 FreeNAS 20 TB storage connected with direct fiber

With Proxmox, Fibre Channel setup was a nightmare, so can I use Fibre Channel here with Nutanix? My goal is that if one of the hosts dies, the VMs are ready / transferred to another host, like in VMware (pricey license). Let me know if I can achieve the same with Nutanix, and direct me towards hardware (Fibre Channel or 10 Gb iSCSI), whichever is easier and works well. There was a reply, but I did not have a chance to respond. Can I use Nutanix for storage, or do I have to have third-party software like FreeNAS?
Hi there. I believe it was 5.18+ where the reverse proxy (Apache) is no longer used for cluster communication, but was replaced by something else. I can’t quite recall what it was called or where its logs can be found. Previously, I could check the httpd logs in the usual location for any HTTP errors/faults, but I am not sure where to find these now.
Hi there. A security check on my Nutanix clusters (8 nodes) revealed that the IPMI port on every node is vulnerable because the VNC protocol is used to access them through port 5900.

Issue: "...Virtual Network Computing (VNC) provides remote users with access to the system it is installed on. If this service is compromised, the user can gain complete control of the system...."

Remediation: "...Remove or disable this service..."

What are my options? Is it possible to disable these ports without affecting the performance of the Nutanix cluster? Thanks in advance.
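For anyone in a similar spot: the VNC/iKVM listener is a BMC feature, not something the AOS cluster itself depends on, so disabling it should not affect cluster performance. A minimal sketch of how to verify exposure from a management host (the IPMI IP below is a placeholder; the exact menu for disabling the port varies by BMC firmware):

```shell
# Check from a management host whether the VNC (iKVM) port is exposed
# on an IPMI interface. 10.0.0.21 is a placeholder IPMI address.
nmap -p 5900 10.0.0.21

# After disabling the port in the BMC web UI (on Supermicro-based NX
# nodes this is typically under a port/service configuration page),
# re-run the scan and confirm the port reports "closed" or "filtered".
nmap -p 5900 10.0.0.21
```

If the scanner still flags the port, restricting the IPMI network to a dedicated management VLAN is the usual complementary remediation.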
I am configuring LACP on our two clusters. On cluster 2 it worked fine, but the same configuration on cluster 1 does not work. To be sure it is not a mistake on the external switch, I put cluster 2 on the same switch ports that cluster 1 had been connected to, and it kept working. The problem is with one node (our cluster has 3 nodes): it shows the link as up, but when I try to modify it, either by terminal command or in the Prism Central settings, the hypervisor on that node goes down, the operation fails, and resilience changes to critical. I need support to identify whether there is a problem with the interface.
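Not a definitive fix, but a sketch of the checks usually suggested before reattempting the bond change on the failing node (bridge/bond names below assume the defaults `br0`/`br0-up`; run the `manage_ovs` commands from the node's CVM with the host in maintenance mode, one node at a time):

```shell
# From the CVM: show the current uplink/bond layout for this node.
manage_ovs show_uplinks

# From the AHV host: check bond state and LACP negotiation with the switch.
# A partner system ID of all zeros usually means LACP never negotiated.
ovs-appctl bond/show br0-up
ovs-appctl lacp/show br0-up

# From the CVM: the documented way to switch the bond to LACP
# (balance-tcp); lacp_fallback keeps the link usable if negotiation fails.
manage_ovs --bridge_name br0 --bond_name br0-up \
  --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true \
  update_uplinks
```

If `lacp/show` reports no partner on that one node, the problem is usually the switch-side port-channel configuration on those specific ports rather than the node itself.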
Technical question on an upgrade: a customer has an existing 1365-G6 cluster. We are offering an additional 1-node upgrade of 1165-G7 or 1165-G8. I wanted to confirm:
1. Can we add this one G7/G8 node into the G6 slot? (Please provide the relevant KB article.)
2. If the above isn't possible, can G6 nodes still be ordered as an upgrade?
3. If we order an upgrade of one 1165-G7 or G8 node, will you be providing the block chassis with just the one node, and are there any special instructions to be mentioned?
Foundation Central

Foundation Central Overview

Foundation Central can create clusters from factory-imaged nodes, and reimage existing nodes that are already registered with Foundation Central, remotely from Prism Central. This provides benefits such as creating clusters on remote sites such as ROBO without arranging for a deployment personnel visit.

Limitations

Foundation Central has the following limitations:
- Foundation Central supports creation of a one-node cluster without imaging only on factory-shipped nodes with AOS and any hypervisor installed.
- Foundation Central supports creation of a one-node cluster with imaging only on nodes that are shipped from the factory with DiscoveryOS installed. Note: If you want to create a one- or two-node cluster, ensure that the nodes you select support the creation of a one- or two-node cluster.
- Imaging logs of the remote node are not available on Prism Central. These logs are only accessible within the nodes.
- It does …
Hello, I am new to OVS and AHV configurations in general and have a networking question. I have been reading through the documentation on networking and am still not clear on a question or two.

Situation: I am mapping out steps to create a DMZ zone for my server environment. My firewall will be configured with a DMZ port and VLAN tagging; a public IP will NAT to the server in the DMZ zone, and ACLs will control what traffic can get through. This will be routed through VLAN-tagged ports via a layer-3 switch to my Nutanix cluster. The server mentioned above will be on my Nutanix cluster and will need to communicate with 1-4 other Nutanix VMs in the same DMZ/VLAN.

Questions: If I create a "network" in Prism with VLAN tagging and apply it to all the DMZ servers in question, will this keep my DMZ traffic separated from all trusted traffic? If not, do I need to configure a new bridge to keep this network traffic separate from my other networks? My concern is that all traffic is being routed to the same trunk …
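For what it's worth, a VLAN-tagged network in Prism gives you layer-2 separation on the same bridge, so a separate bridge is generally only needed if you want physically separate uplinks for the DMZ. A minimal sketch of creating and attaching such a network with acli (the network name "DMZ", VLAN 30, and VM name are placeholder values):

```shell
# Create a tagged network for the DMZ; traffic on it is isolated at
# layer 2 from other VLANs even though it shares the same bridge/uplinks.
acli net.create DMZ vlan=30

# Attach a DMZ server's NIC to that network (VM name is a placeholder).
acli vm.nic_create dmz-web01 network=DMZ
```

With this layout the only path between the DMZ VLAN and trusted VLANs is through your layer-3 switch/firewall, which is where the ACLs do their work.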
Hi all,
https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v5_19:ahv-cluster-nw-vs-about-ahv-c.html
Following the link above: after AHV was upgraded to 5.19 or later, the br0 of OVS was replaced with vs0. Can br1 or br2 coexist, or do they need to be manually migrated? Does anyone have experience with this? Thank you.
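From my reading of that guide (please verify against your AOS version): br0 is converted to vs0 automatically during the upgrade, while additional bridges such as br1/br2 keep working but need to be migrated to virtual switches manually. A sketch of the relevant commands, run from a CVM (the bridge and vs names are examples):

```shell
# Inspect the current bridge/bond layout on this node.
manage_ovs show_uplinks

# Per the AHV admin guide, additional bridges are migrated manually,
# e.g. converting br1 into a new virtual switch vs1:
acli net.migrate_br_to_virtual_switch br1 vs_name=vs1
```

Migration can also be done from the Prism Central network configuration UI if you prefer not to use acli.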
Hi, I followed this guide: Installing Nutanix Community Edition (CE) on vSphere 7 - Derek Seaman's IT Blog, to test an install of AHV/Prism CE. The AHV host installs fine, but the CVM cannot get to the outside world or ping IPs. I can SSH to the CVM from the host on the correct IP and run the one-node cluster create command, but it just fails with `Failed to reach a node where Genesis is up`. If I run `genesis start` it gets past that, but then just hangs after starting up some services during the create. I tried not checking the one-node-cluster option in the installer and doing it manually afterwards, but I get the same results. Happy to provide any info. The gateway etc. look fine in the `/etc/sysconfig/network-scripts/ifcfg-eth0` file.
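A few checks that are commonly suggested when Genesis won't come up on CE (all run from the CVM; the gateway IP is a placeholder, and on nested ESXi installs the usual culprit is promiscuous mode / forged transmits being disabled on the port group):

```shell
# Is Genesis actually running, and what do its logs say?
genesis status
tail -n 50 ~/data/logs/genesis.out

# Basic reachability from the CVM: gateway first, then outside world.
ping -c 3 10.0.0.1
ping -c 3 8.8.8.8

# Once Genesis is stable, confirm what services the cluster reports.
cluster status
```

If the CVM can reach the host but not the gateway, recheck the vSphere port group security settings before digging further into the CVM config.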
I have been trying to chase down and eliminate this error for a while now. The CVM leader only wants to use itself as the time source. When I run `allssh ntpq -pn`, it shows:

     remote           refid      st t  when poll reach   delay   offset  jitter
    ==============================================================================
     x.x.x.x         y.y.y.y      2 u   603 1024  377    0.337  55090.5 419.722
     x.x.x.x         184.108.40.206  3 u   200 1024    4    0.358  75064.8   0.000
    *127.127.1.0     .LOCL.      10 l   587 1024  377    0.000    0.000   0.000

(The asterisk indicates that it is using itself as a time source; is that correct?) I can run ntpdate successfully against the configured NTP servers, and the CVM can connect to them. (I can even run ntpdate against an external time server that is not configured, for that matter.) How do I get the CVM leader to use the configured NTP servers as the source (and not itself)?
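Yes, the asterisk marks the peer ntpd has selected as its sync source, so `*127.127.1.0` (`.LOCL.`) confirms the CVM is on its own local clock. A small sketch of that interpretation on a sample (hypothetical) `ntpq -pn` line, plus the relevant cluster-side check in a comment:

```shell
# Hypothetical sample line from `ntpq -pn`; the leading '*' marks the
# currently selected sync source.
line='*127.127.1.0    .LOCL.    10 l  587 1024  377  0.000  0.000  0.000'

case "$line" in
  \*127.127.1.0*) echo "syncing to local clock" ;;
  \**)            echo "syncing to an external peer" ;;
esac
# On a real cluster, `ncc health_checks network_checks check_ntp` runs
# this kind of diagnosis for you. The very large offsets on the external
# servers above (55090.5 ms, 75064.8 ms) are the likely reason ntpd
# rejects them: peers that far off are discarded as falsetickers.
```

In other words, the usual cause is not connectivity (which your ntpdate tests prove) but the offset being too large for ntpd to accept; the NCC check above should point at the same thing.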
We are running an image from HPE Aruba for a Mobility Master that crashes daily. The error is an IRQ interrupt:

    User: [14730.377720] irq 27: nobody cared (try booting with the "irqpoll" option)
    [14730.378230] handlers:
    [14730.378461] [<ffffffff813be5d0>] vring_interrupt
    [14730.378717] Disabling IRQ #27

I’ve been directed by the community to a KVM fix for this: changing the machine type to Q35. I can see in the acli documentation that there are settings for machine type, but I can’t seem to find a list of acceptable values to use there. I was hoping to create a VM through acli with this already set, or perhaps update an existing VM (is this bad?). Perhaps it would look something like this: acli vm.update machine_type="pc-q35"
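From what I've seen in community posts and Nutanix KBs, the value cited for acli is `q35` rather than `pc-q35`, and updating an existing VM is fine as long as it is powered off first (the change takes effect at next boot). A hedged sketch, with "aruba-mm" as a placeholder VM name:

```shell
# Power off the VM first; machine type changes apply at the next boot.
acli vm.off aruba-mm

# Community posts and Nutanix KBs cite "q35" (not "pc-q35") as the value:
acli vm.update aruba-mm machine_type=q35

acli vm.on aruba-mm
```

Worth noting that switching machine types changes the emulated chipset the guest sees, so the appliance may re-detect its devices on first boot.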
Hi all, we’ve got a cluster on which we recently did not renew support/maintenance, as we are in the process of decommissioning it; however, we still have some workloads that are lagging behind being decommissioned. I noticed today that the license expiry is approaching (November 2021). What will happen once the license expires?
Cheers, Jason
This seems like the place to post this, so please tell me if I need to move it. I am trying to set up a lab with a Nutanix G3 3060 cluster running 5.15, using AHV, which I don’t have a lot of experience with. I think I am having a networking problem. For the lab I’m only using 1 Gb connections.

On each link I have a native (untagged) VLAN which the CVMs use. I also have several other tagged (trunked) VLANs that I want to use for other purposes in the lab. The CVMs are fine and the cluster comes up no problem. It has some complaints, like NTP and the like, but is otherwise good. Under networking I added the other VLANs and their VLAN IDs. I then created an OPNsense VM and gave it 2 NICs: one on the management VLAN (the same untagged one the CVMs use), and a second NIC on one of the tagged VLANs. I have verified that both NICs are set to connected. I even tried adding a third. The management NIC (untagged) comes up and I can access the VM, but the other one won’t. OPNsense is reporting …
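When an untagged NIC works but a tagged one doesn't, the usual suspects are the tag assigned to the Prism network versus the tags actually trunked on the switch port. A quick sketch of how one might check both sides (interface name and VLAN ID are placeholders):

```shell
# From a CVM: list the networks and confirm the VLAN ID Prism assigned.
acli net.list

# From the AHV host: watch the physical uplink for tagged frames on the
# VLAN in question. If nothing shows up here, the switch is not trunking
# that VLAN to this port.
tcpdump -i eth0 -nn -e vlan 20
```

If tagged frames for the VLAN never appear at the host, the fix is on the switch trunk configuration rather than in Prism or OPNsense.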
Hi team, I will be carrying out upgrades tonight at 8 pm GMT and want to plan beforehand; can I please get urgent assistance? I understand that I can jump from AOS 5.15.6 LTS to 220.127.116.11 directly, but before doing this I want to confirm the points below. The order I can see is to upgrade Foundation first: we currently have foundation-4.6.2-6e9b7fc8; can we jump directly to Foundation 5.0.4, or do we have to install older base versions one by one to reach 5.0.4? NCC second: we currently have 4.1.0; can we jump to 18.104.22.168, or do we have to install 22.214.171.124 first?
Hi everybody, at the moment I have a 1-node AHV cluster, but in a few months this cluster will be expanded, possibly with 1 or 2 more nodes. Can I expand to a 2-node cluster? If that is possible, is a Witness VM necessary? If I get the opportunity, I am thinking of buying two more nodes; I believe I would not have issues expanding from a 1-node to a 3-node cluster. Am I right? What do you recommend in both scenarios?
Hi, I would like to know the minimum number of Witness VMs that I need for a specific scenario with several sites and a 2-node cluster on each site. In the following article I see this: "You can register multiple (different) Metro cluster pairs to a single Witness VM. One Witness VM can support up to 100 instances, that is, the combined total number of protection domains in the registered Metro cluster pairs and the number of registered two-node clusters. (A single Witness VM can support both metro availability and two-node clusters.)" https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v5_18:sto-metro-availability-witness-c.html

However, in the following post I see this: "You can register multiple (different) Metro cluster pairs to a single Witness VM. One Witness VM can support up to 50 Witnessed Metro protection domains distributed among its registered Metro cluster pairs." https://next.nutanix.com/installation-configuration-23/metro-availability-witness
I have deployed a three-node Nutanix cluster (version 5.18) on VMware ESXi. I am trying to deploy Prism Central on the AOS cluster, but while uploading the Prism Central JSON and tar files to the Nutanix cluster, I get the error ‘unable to upload file’ at around 6% and the upload fails. Kindly help me sort out this issue.
Hello friends, how are you? I am currently trying to Foundation a 3-node Nutanix environment, but after the Foundation process begins it halts at the “waiting for the installer to boot” stage with a “fatal” error warning. I don’t know what is causing the error; can anyone tell me something? I have attached screenshots of the errors and the process.