Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,166 Topics
- 3,190 Replies
We just purchased Nutanix (2 appliances, 3 nodes each), and we are planning to run VMware Enterprise Plus on them. I have a few questions and would be happy to get answers: 1. If we use VMware, will all Nutanix functions still apply? (For example, if one hard disk fails, will the data be transferred to another disk?) 2. If I'm using 3 nodes and utilizing all of the CPU, memory, and disk capacity on all three, how will the VMs from a failed node move to the other nodes when those nodes are already at full capacity? 3. If one node fails, will the VMware VMs on that node be moved automatically to another node? 4. In general, is there any difference between using Acropolis (AHV) and VMware? Will we get the same Nutanix functions?
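On question 2, the sizing constraint can be sketched with simple arithmetic (this is a back-of-envelope illustration, not from the post): for the surviving nodes to absorb a failed node's VMs, each node must keep failover headroom free, so per-node utilization should stay at or below (nodes - 1) / nodes of each resource.

```python
# Back-of-envelope N+1 sizing sketch (illustrative, not Nutanix tooling):
# to survive one node failure, keep each node's utilization at or below
# (nodes - 1) / nodes so the survivors can absorb the failed node's VMs.
def max_safe_utilization(nodes: int) -> float:
    """Fraction of each resource a node may use while preserving N+1 headroom."""
    return (nodes - 1) / nodes

print(round(max_safe_utilization(3), 2))  # prints 0.67 -> keep each node <= ~67% used
```

With all three nodes fully utilized, there is simply nowhere for the failed node's VMs to restart, regardless of hypervisor.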
Hi, trying to get a definitive answer to what I think should be a simple question. Can I map an iSCSI drive to a virtual server by attaching an external iSCSI storage array? The server has one drive on the Nutanix cluster and a second drive mapped to the external array (NTFS partitions on the MD iSCSI array, no VMs running on the array); I am just mapping this as a drive. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/4831f6da-f541-4d2b-a407-0eabbb136259.jpg[/img] I would love a formal response from Nutanix on whether this causes issues within a cluster, or whether any issues would be limited to the specific VMs the mappings are assigned to.
Hello there, I am running AHV 5.5.5. After trying to upgrade the BIOS and BMC through LCM, one of my nodes failed and is no longer booting up; it keeps booting into Phoenix. I tried disabling maintenance mode for the node with no luck, and I tried to force a boot into the host via "python reboot_to_host.py", but the script doesn't seem to exist. Any help?
Hi, new to the forum. We recently had our Nutanix servers delivered and are planning to do the install in the coming weeks. We haven't decided which hypervisor to go with yet; it's either going to be Hyper-V or AHV. I just wanted to hear about the experiences people have had. We currently use Hyper-V, so we are comfortable with that, but all the nice features seem to be heading to AHV/Prism. I would be grateful for your thoughts and comments. Thank you
I have a problem with the CVM and Distributed Switches on vCenter. When I migrate the VM network from a standard switch to a distributed switch, I get the following error: [code]Detailed information for cvm_startup_dependency_check:
Node x.x.x.x: FAIL: .dvsData directory is not persistent yet
Refer to KB 2050 (http://portal.nutanix.com/kb/2050) for details on cvm_startup_dependency_check
################################################################################
PLUGIN RESULTS
################################################################################
/health_checks/hypervisor_checks/cvm_startup_dependency_check [ FAIL ][/code] I can see that when I reboot the ESXi host and start the CVM, the CVM loses its network config and I have to assign the network adapter to it again. I don't know how to configure the distributed switch in vCenter to make it persistent.
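As an aside (not part of the post): when NCC output arrives flattened into one long line like the above, a small filter for the result lines can help spot which checks failed. A minimal sketch, assuming the standard "[ FAIL ]" marker with the check path as the first column:

```python
def failed_checks(ncc_output: str) -> list:
    """Return the check paths from NCC plugin-result lines marked [ FAIL ]."""
    fails = []
    for line in ncc_output.splitlines():
        if "[ FAIL ]" in line:
            # The check path is the first whitespace-separated column.
            fails.append(line.split()[0])
    return fails

sample = "/health_checks/hypervisor_checks/cvm_startup_dependency_check [ FAIL ]"
print(failed_checks(sample))  # prints ['/health_checks/hypervisor_checks/cvm_startup_dependency_check']
```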
I'm looking for some advice. My company recently purchased a used 3460-G4. We're a professional services firm and we want to have it for our lab so we can stand up stuff like ERA, CALM, etc. and bang around on it. You know, lab stuff! We have been trying to Foundation (220.127.116.11 & 18.104.22.168) the block, but I am running into a problem with Foundation failing when trying to mount the Phoenix image on the nodes. Here's how things currently stand:
- BIOS has been upgraded to the latest recommended version on the Nutanix Support site (G4G5T6.0).
- BMC firmware has been upgraded to the latest recommended version on the Nutanix Support site (3.64).
- Each node has 2x SSDs, which the system appears to recognize (I ran an ESXi installer on one of the nodes to test; the installer saw the drives).
- Each node has 64GB of RAM, confirmed to be compatible according to SuperMicro's site.
- The motherboard is the SuperMicro X10DRT-P.
- IPMI has been set on each node and I can log into the IPMI management page.
At first I was wo
Hi everyone, I have an issue with a newly purchased NX-1175S-G6 that needs to be deployed in an EU environment. The system doesn't allow me to raise a case on the support portal. The installation just can't seem to get through the Foundation stage. The latest place it got stuck is shown in the screenshot, and that is still just a small part. Before this, when I tried to enter a VLAN during setup, it just couldn't get through it. Can anyone point me in the right direction? Thanks
Hello, I have a cluster that has been up and running for almost a year now without any particular issues. I hadn't logged into Prism for a long time; when I did so recently, I saw an alert that the cluster is not able to check NTP. I have it set to North American NTP servers. I think the real problem may be with Zeus, however: when I SSH into one of the CVMs and run almost any command (but specifically 'allssh email@example.com date') I get: [code]error: Zeus configuration cache is not created; try again later[/code] It is the same with any other command I try to run. Your insight is greatly appreciated.
Hi, I am confused about the output of the AHV network command "manage_ovs show_interfaces". In the result, I can check the link status and speed of the NICs, but what is the meaning of [b][u]mode[/u][/b]? I attached a screenshot showing the output from a node that has both 10Gb and 25Gb NIC cards, yet the mode column displays 10000 for all of them. What does mode mean? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/ceda42e0-451d-4f7e-a384-3fc8c9645c90.png[/img]
Here’s my problem: nodes A and B are synchronizing to the wrong NTP host, while node C is synchronizing to the right one. This is the behavior I see on the CVMs. When I run “hostssh ntpq -pn”, the hypervisors (AHV) report the correct NTP server. How do I bring nodes A and B back in line with what they should be? I tried manually correcting their ntp.conf files with the correct IP and restarted the ntpd service; nothing changed, and eventually the ntp.conf files reverted to the wrong settings. Not sure how to wrestle this one to the ground.
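A side note (not from the post): in ntpq -pn output, the peer a node has actually selected for synchronization is marked with a leading "*" tally code. A minimal sketch of pulling that marker out, which makes it easy to compare which server each node really follows (the sample output below is invented for illustration):

```python
# Sketch: find the selected ('*') peer in `ntpq -pn` output.
def selected_peer(ntpq_output):
    """Return the remote address of the selected ('*') peer, or None."""
    for line in ntpq_output.splitlines():
        line = line.strip()
        if line.startswith("*"):
            # First column is the remote peer address, minus the tally marker.
            return line.split()[0].lstrip("*")
    return None  # no peer selected yet

sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 10.0.0.5        .INIT.          16 u    -   64    0    0.000    0.000   0.000
*10.0.0.7        192.168.127.12     2 u   35   64  377    0.412   -0.033   0.102
"""
print(selected_peer(sample))  # prints 10.0.0.7
```

Running this per node would show nodes A and B selecting a different peer than node C, confirming the drift described above.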
Good morning all, I have a bit of an NGT issue. Prism Central alerts me that some VMs need an NGT upgrade, but when I go to upgrade them I see the message “NGT can only be upgraded on 0/1 VMs which have the available upgrade of NGT”. The Review and Confirm buttons are greyed out so I cannot continue. What does the message mean, and what do I do to get it to work? Many thanks, Eric
Hi guys, I have some questions. First, can we enable Flow on the ESXi hypervisor on Nutanix? I had the same question about Calm on ESXi and was told, per a Nutanix KB, that Calm can be enabled on ESXi, but what about Flow? Can we enable/use Flow on top of the ESXi hypervisor? And just curious: does anyone have a comparison table of Nutanix features running on AHV, ESXi, and Hyper-V?
Hi all, I'm trying to install a 4-node Nutanix cluster but I ran into this issue. Do you have any idea what the problem is? The error is the same on all 4 nodes: [code]20190902 16:16:19 INFO Setting cdrom as boot device for next boot
20190902 16:16:35 INFO Next boot device is set to optical
20190902 16:16:35 INFO Power status is off
20190902 16:16:35 INFO Powering up node
20190902 16:16:58 INFO Exiting SMCIPMITool
20190902 16:16:58 ERROR Exception in ) @bcb0>
Traceback (most recent call last):
  File "foundation\decorators.py", line 77, in wrap_method
  File "foundation\imaging_step_init_ipmi.py", line 309, in run
  File "foundation\imaging_step_init_ipmi.py", line 156, in boot_phoenix
  File "site-packages\bmc_utils\boot_media.py", line 47, in wrapped
  File "site-packages\bmc_utils\remote_boot_rmh.py", line 193, in boot
  File "site-packages\bmc_utils\remote_boot_rmh.py", line 240, in boot_from_iso
  File "site-packages\pyghmi\ipmi\command.py", line 314, in set_power
IpmiException: timeout 20190[/code]
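An editorial aside (not from the post): the traceback ends in a pyghmi IpmiException timeout during set_power, which is sometimes just a slow or busy BMC. One generic way to rule out a transient hiccup is a retry-with-backoff wrapper around the flaky call; the names below are illustrative stand-ins, not Foundation's actual API:

```python
import time

def retry(fn, attempts=3, delay=2.0, exceptions=(Exception,)):
    """Call fn(), retrying up to `attempts` times on the given exceptions,
    doubling the delay between tries. Re-raises the last error."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2

# Illustrative stand-in for a flaky BMC power-on that fails twice, then succeeds.
calls = {"n": 0}
def flaky_power_on():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("IPMI timeout")
    return "on"

print(retry(flaky_power_on, attempts=3, delay=0.01))  # prints on
```

If the timeout persists across retries on all four nodes, the usual suspects are BMC firmware level, a BMC that needs a cold reset, or network path issues between the Foundation VM and the IPMI interfaces.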
Hi guys, I can't seem to perform an inventory in LCM any more. [b]It fails on the check[/b]: [i]Check 'test_upgrade_in_progress' failed with 'Failure reason: Another Upgrade operation is in progress. Please wait for that operation to complete before starting an LCM operation.'[/i] However, after running [b]progress_monitor_cli --fetchall[/b], nothing shows as in progress. [b]host list[/b] - all 3 hosts show [i]false (life_cycle_management)[/i], which is as expected. ~/data/logs$ [b]upgrade_status[/b] [code]2019-09-04 14:47:32 INFO zookeeper_session.py:131 upgrade_status is attempting to connect to Zookeeper
2019-09-04 14:47:32 INFO upgrade_status:38 Target release version: el7.3-release-euphrates-5.10.6-stable-294f5f671ba8982a0199e18b756e8ef3a453af9a
2019-09-04 14:47:32 INFO upgrade_status:43 Cluster upgrade method is set to: automatic rolling upgrade
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.x.x is up to date
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.[/code]
Hi guys, a non-technical question about a customization option that I can't find anywhere. I have two Prism Central instances used for synchronous replication. To identify them at first sight I have customized the title and colors of the login page. It was a bit disappointing not to find the customized name on the browser tab, or at least in the menu bar title. Even after editing the cluster parameters to give the PC cluster a name (only one VM for now), the name stays "Prism Central" on the tab and "Prism" on the menu bar. Considering I'm working with synchronous replication and all the objects involved have the same names (protection policies, recovery plans, categories), I sometimes find myself working on the wrong Prism Central. It would be easier if the PC name were always visible. Any ideas? Thanks in advance, il_gianK
So, on a dev cluster with Prism Central I created a proof-of-concept Nutanix Objects store. I did some experimentation (spotted a few problems with the deployment) and then decided I wanted to remove the store. First I tried to delete the bucket and was told "You need to delete all versions first": grr! I went and deleted all the objects, but this wasn't enough, as versioning was enabled. So I turned the version lifespan down to 1 day and waited 24 hours. Great, now I could delete the bucket; however, I can't find any option to delete the store itself. So I'm stuck with the (in my mind, greedy) VMs the deployment created, and I can't even go behind its back and do anything with the VMs, as they're "special", just like the hidden Calm project used to deploy the store. All in all, not very impressed by the first experience...