Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,136 Topics
- 3,091 Replies
Acropolis Starter Edition cannot use Volume Groups?
Hi there, I am new to Nutanix and I have a question about Acropolis licensing. We have 3 Nutanix nodes with Acropolis Starter Edition installed. Yesterday, my Prism console reported a License Feature Violation. Looking at https://www.nutanix.com/products/software-options , the only thing I saw that could be wrong in my configuration was a Volume Group created for a test. Just to be sure: can we create volume groups in Acropolis Starter Edition, or is this option only available in the Pro and Ultimate editions? The alert doesn't give me more information about which feature is violating the license. Thanks in advance for the help! BR, Thelmo
Design NTNX site
Hello, We are in the process of moving our VMware site to Nutanix. With 3 locations I would like some design tips for setting up the NTNX sites/cluster(s). I am very new to Nutanix. Situation: 3 physical sites, 2 sites with 4 NTNX hosts and 1 site with 3 NTNX hosts. Every site has a dedicated fiber connection. The two "4-host" sites are for production; the 3-host site will be OTAP and/or extra hosts for production. How do I design the NTNX cluster(s) with minimal risk of losing data and the most redundancy? If someone needs more info please let me know. grtz
Installing on a HP DL380 G7
I need some help. I have an HP DL380 G7 that I am attempting to install the Nutanix Community Edition on, and I have verified the hardware. I have a 240GB SSD in slot one and four 500GB HDDs in the next 4 slots. I have put the Community Edition on a flash drive to install from. I can boot to the USB and tell it to install, but it then fails with the message that it cannot find any drives. I am new to your program and need a little help.
Ports and Protocols Reference Chart
Ports and Protocols

The Ports and Protocols Reference allows you to determine port requirements for multiple Nutanix products and services in a single pane. This document is divided into several sections based on the required ports for each product or service. The Ports and Protocols Reference covers detailed port information (like protocol, service description, source, destination, and associated service) for the following products and services.

Note: The port information in this document is based on the latest product release. Updates to this document are made with major releases of products.

- 1-click Upgrade
- AHV
- AOS
- Calm
- Collector Portal
- Collector Tool
- Disaster Recovery - Leap
- Disaster Recovery - Metro Availability (ESXi and Hyper-V)
- Disaster Recovery - Metro Availability with Leap (AHV)
- Disaster Recovery - Metro Availability with Leap (AHV) - CCLM
- Disaster Recovery - Protection Domain
- Era
- File Analytics
- Files
- Files Manager
- Karbon Platform Service
Do we need to backup anything like configs/settings to a USB drive in case of a serious outage?
I took the admin course two weeks ago and have been tasked with writing an operations manual. One of the sections relates to backup & recovery so my question is "Do we need to backup anything like configs/settings to a USB drive in case of a serious outage?". If not, how does one recover from the most serious of outages? Being new to Nutanix, I'm not even sure what a catastrophic outage would entail so if someone could articulate same and provide a high level recovery process, I would be most appreciative.
Install Splunk on Nutanix
Hi Everyone, I've planned to deploy Splunk on Nutanix at my company, but I'm a newbie with Nutanix, so I have some questions: 1. Nutanix supports hypervisors such as ESXi, Hyper-V, KVM... In my case, I will use ESXi and Red Hat to install Splunk. That means I would install Splunk on Red Hat, which runs on ESXi, is that right? Can I install Red Hat directly on the Nutanix server (1 node has 4 servers?)? 2. About the cluster: which controls the cluster of VM hosts, VMware vCenter or Nutanix Prism Central? 3. Does Nutanix have an OS called the Controller VM? Is it installed as a VMware machine, or is it a hypervisor? Thanks for all replies helping me clear up this stuff.
Physical Relocation of Nutanix clusters
So you have decided to relocate your Nutanix cluster to a different data center. Here are a few things to consider and a brief overview of the steps to follow for a seamless transition.

Caution: This information is only provided to serve as a guide to plan your move. Please engage Nutanix Support if you have any concerns or questions about this process.

Before you decide to move: Consider the possibility of incorporating the existing IP address schema into the new infrastructure by reconfiguring the router and switches instead of the Nutanix nodes and CVMs. If that is not possible, proceed with this guide.

Before you unplug everything: Refer to these guides for the procedure.
- Doc 1 (CVMs): Changing the Controller VM IP Addresses
- Doc 2 (AHV hosts): Changing the IP Address of an Acropolis Host
- Doc 3 (IPMI): Changing an IPMI IP Address

A few things to note:
1. Since the cluster is being relocated and the new network will not be able to communicate with the old network, you will need to run through so
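After the move, and before powering workloads back on, it helps to verify cluster health from any CVM. A minimal sketch, using standard AOS CLI tools; verify command availability against your AOS version:

```shell
# Post-relocation sanity checks, run from any CVM (sketch; confirm against
# the documentation for your AOS version before relying on it).
cluster status                 # all services should show "UP" on every CVM
ncli cluster get-params        # confirm cluster-wide settings survived the move
ncc health_checks run_all      # full NCC health check before resuming workloads
```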
Zeus configuration cache is not created; try again later
When trying to use allssh "manage_ovs ...." I get the above error. As I understand it, Zeus and Zookeeper are close friends (as one would expect): Zookeeper stores the cluster configuration information, while Zeus is used by other cluster components to access it? Somewhere I read that when you manually add a new node to a cluster, Zookeeper on that new node is not active? Any information would be appreciated (even rabbit holes). thx.
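A hedged first step when Zeus complains like this is to confirm Zookeeper is actually answering on each CVM before retrying manage_ovs. The port (9876) is the typical Nutanix Zookeeper default; treat it as an assumption to verify for your AOS release:

```shell
# Quick Zookeeper/Zeus triage from a CVM (sketch; assumes the default
# Nutanix Zookeeper port 9876, which may differ on your release).
cluster status | grep -i zookeeper   # which CVMs hold Zookeeper roles
echo ruok | nc 127.0.0.1 9876        # a healthy ZK server replies "imok"
genesis status                       # genesis must be up for Zeus clients to work
```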
We have a scenario in which we are running a VMware vSphere environment on existing servers and planning to deploy a DR site. Is it possible to deploy a Nutanix AHV environment at the DR site and replicate from the existing ESXi environment at the production site to the Nutanix DR site somehow? We are currently using Veeam Backup & Replication for VMware for backups. Is such a requirement feasible?
Error in remove host from cluster
Hello Nutanix, My demo cluster currently has a problem with removing a host from the cluster. The host that I wanted to remove has an SSD showing the status "Marked for removal but not detachable", and it has stayed in that status for a long time with nothing changing, even though the host itself showed up as successfully removed. So after that I tried reinstalling this host, but that status is still showing and I cannot do anything with the newly reinstalled host, including expanding the cluster with it. By the way, I noted that before reinstalling this host, it was running as a cluster member.
Would I be able to create storage containers in my cluster for the purpose of "pinning" VMs only to hosts that are assigned to the containers - restricting VMs to the hosts that belong to the container? The need is to control VM migration during Software Upgrades due to multicasting issues with the hosts.
AOS update with resilience issue
Hello, We have a cluster with 4 nodes and I will perform the update from 22.214.171.124 to 126.96.36.199. A few minutes ago, I saw that my cluster has a disk space problem and the resilience board is red with the "not resilient" information. In this case, will I have a problem when performing the AOS upgrade? I don't have time to resolve the space issue now… Please send me any information.
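Before attempting the upgrade, it is worth confirming fault tolerance from the CLI rather than relying on the dashboard alone. A minimal sketch, assuming standard ncli/NCC commands; verify against your AOS version:

```shell
# Check resiliency and free space from a CVM before an AOS upgrade (sketch).
ncli cluster get-domain-fault-tolerance-status type=node   # should report >= 1
allssh df -h                 # free space on every CVM
ncc health_checks run_all    # pre-upgrade health check
```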
Power off NX Block
Hello everybody, is there any advice on how to shut down the whole NX block? I got my first NX1465, ran Foundation, and built the cluster with AHV. After the first configuration steps I want to power the whole block off to bring the NX into our datacenter. I didn't find a "how to". Any ideas/hints? Greetings, Maikel
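The usual order of operations for a full block power-off is: guest VMs first, then cluster services, then each CVM, then the hosts. A hedged sketch, assuming an AHV cluster and standard CLI tools; confirm against the official shutdown procedure for your AOS version:

```shell
# Full NX block shutdown sketch (AHV). Run from one CVM after shutting
# down or powering off all guest VMs in Prism.
cluster stop                 # stops cluster services on all CVMs
# Then, on EACH CVM:
cvm_shutdown -P now          # gracefully powers off the Controller VM
# Finally, on EACH AHV host:
shutdown -h now              # powers off the hypervisor host
```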
How do you remove an Objects store?
So, on a dev cluster with Prism Central I created a proof-of-concept Nutanix Objects store. I did some experimentation (spotted a few problems with the deployment) and then decided I wanted to remove the store. First off I tried to delete the bucket, and was told "You need to delete all versions first": grr! Went and deleted all objects, but this wasn't enough, as versioning was enabled. So I turned the version lifespan down to 1 day and waited 24 hours. Great, now I could delete the bucket; however, I can't find any option to delete the store. So I'm stuck with the (in my mind, greedy) VMs the deployment created, as I can't even go behind its back and do anything with the VMs, as they're "special", just like the hidden Calm project which is used to deploy the store. All in all, not very impressed by first experiences...
genesis service fail
I have this error on the Prism console: genesis is down on Controller VM 10.1.82.21. I ran the command ncc health_checks system_checks cluster_status_check --cvm_list=10.1.82.21 and this is the error log:

2016-11-10 00:16:00 INFO salt_helper.py:52 Verifying CVM salt states
2016-11-10 00:16:00 ERROR command.py:156 Failed to execute sudo /bin/ls /home/saltstates: [Errno 12] Cannot allocate memory
2016-11-10 00:16:00 CRITICAL decorators.py:46 Traceback (most recent call last):
File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/util/misc/decorators.py", line 40, in wrapper
File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/cluster/genesis/node_manager.py", line 2474, in sync_configuration_thr
File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/cluster/genesis/node_
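The "[Errno 12] Cannot allocate memory" line suggests the CVM itself is out of memory rather than genesis being broken. A hedged triage sketch on the affected CVM; commands are standard Linux plus the genesis CLI, verify for your AOS version:

```shell
# Triage on the affected CVM (sketch).
free -m                      # check whether the CVM is actually out of memory
top -b -n 1 -o %MEM | head   # biggest memory consumers
genesis status               # which services genesis thinks are running
genesis restart              # restart genesis once memory pressure is resolved
```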
Disks conversion process IDE to SCSI - FAQ
In some scenarios, you may need to move a disk from the IDE to the SCSI bus or vice versa. Sample scenarios include but are not limited to:
- The VM does not boot due to a missing SCSI driver. In such cases, the disk can be converted to IDE to install the missing drivers and then moved back to SCSI.
- The wrong disk type was used during VM creation.
- Application requirements dictate a particular disk type.
- You have recently migrated VMs to AHV and noticed that some of the disks appear as IDE.

After following the disk conversion process from IDE to SCSI (as described in step 2 of the uvm_ide_disk_check), the following questions arise:

Question: Once the disk is converted, does it immediately redirect all I/O to the SCSI drive and leave the IDE disk unused?
Answer: Once the conversion is completed (which is actually a cloning process from the original IDE disk), the new SCSI disk needs to be attached to the SCSI bus and then the old IDE disk can be removed.

Question: What ar
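On AHV, the cloning step described above can be sketched with acli. The VM name and disk address below are placeholders; verify the exact acli syntax for your AOS release:

```shell
# IDE -> SCSI conversion sketch on AHV (run from a CVM; "myvm" is a
# placeholder VM name, and <vmdisk_uuid> must come from the vm.get output).
acli vm.get myvm include_vmdisk_paths=1        # find the vmdisk UUID of the IDE disk
acli vm.disk_create myvm bus=scsi clone_from_vmdisk=<vmdisk_uuid>
# After confirming the VM boots from the SCSI disk:
acli vm.disk_delete myvm disk_addr=ide.0       # remove the old IDE disk
```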
ESXi - Storage Containers/Datastores and vMotion Operation Limits
We are planning a major migration from existing 3-tier ESXi over to Nutanix/ESXi (AHV to come later). Due to time constraints on our move, we are trying to optimize how many VMs can migrate at one time. VMware has a few well-known limits on these operations:
- vMotion operations per host (10Gb/s network): 8
- vMotion operations per datastore: 128
- Storage vMotion operations per host: 2
- Storage vMotion operations per datastore: 8

If I take a "one storage container to rule them all" approach, I will be limited to 8 migrations at once, as long as I spread my migrations across 4 hosts. My target will be a 14-node cluster, so I would like to push up to 28 operations, if at all possible. Would it be considered a best practice to carve out 4 storage containers, knowing I'm still in the same pool, and fan out the migrations to get around these limits? My goal is to saturate my 10Gbit inter-datacenter link. Yes, I like to push the limits (break things?). Would the multi-data
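The container math in the question can be sanity-checked with a few lines of shell arithmetic. The limits below are the commonly cited VMware defaults quoted in the post, not values verified against any particular ESXi release:

```shell
# Back-of-envelope: how many datastores are needed to sustain the desired
# concurrent Storage vMotion count (limits taken from the post above).
hosts=14
per_host_limit=2          # Storage vMotions per host
per_datastore_limit=8     # Storage vMotions per datastore

target_ops=$(( hosts * per_host_limit ))
datastores_needed=$(( (target_ops + per_datastore_limit - 1) / per_datastore_limit ))
echo "target concurrent ops: ${target_ops}"       # 28
echo "datastores needed: ${datastores_needed}"    # 4
```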
Host and CVM IP address is ping unreachable
Hi Everyone, Can someone help me fix my problem? First of all, I added 2 new nodes to my existing 3-node cluster, so I now have 5 nodes, and each node pings successfully on its AHV, IPMI, and CVM IP addresses. But then someone unplugged both 10Gb data cables from new node A and I got "Data Resiliency Critical"; after I plugged the cables back in it went back to "Data Resiliency OK", but the Host and CVM IP addresses for that node are still unreachable by ping. Please help me fix the Host and CVM IP ping-unreachable issue while my Data Resiliency is now back to OK.
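One hedged place to look after a re-plug like this is the OVS bond state on the affected AHV host; if a link came back at the wrong speed or stayed down, the host/CVM IPs can remain unreachable even though resiliency recovered. These are standard AHV/OVS tools, but verify them for your AOS version:

```shell
# Uplink triage on the affected AHV host (sketch; bond/bridge names such
# as br0/br0-up are common defaults and may differ on your cluster).
ovs-appctl bond/show            # per-member link state of the bond
ovs-vsctl show                  # bridge/port layout
ip link show                    # kernel view of the 10Gb NICs
# And from any CVM:
allssh "manage_ovs show_uplinks"
```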