Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Hi, my CVM is reporting low disk space and I'm trying to clean up. I followed KB-1540 and noticed the logs directory needs to be cleaned up; it's consuming around 8.1 GB. The rm command in the article doesn't seem to apply to the logs folder. Any suggestions on how to safely free up space under logs? Can I WinSCP in and delete the log files? Thanks, CT
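Before deleting anything, it helps to see what is actually using the space. A minimal sketch, assuming the CVM log directory is `~/data/logs` as referenced in KB-1540 (the path may differ by AOS version):

```shell
# Sketch only: list the largest items under the CVM log directory,
# biggest first, before deciding what is safe to remove.
# ~/data/logs is an assumption based on KB-1540.
du -sh ~/data/logs/* 2>/dev/null | sort -rh | head -20
```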
I have a problem with the CVM and Distributed Switches on vCenter. When I migrate the VM network from a standard switch to a distributed switch, I get the following error:

Detailed information for cvm_startup_dependency_check:
Node x.x.x.x: FAIL: .dvsData directory is not persistent yet
Refer to KB 2050 (http://portal.nutanix.com/kb/2050) for details on cvm_startup_dependency_check
PLUGIN RESULTS
/health_checks/hypervisor_checks/cvm_startup_dependency_check [ FAIL ]

I can see that when I reboot the ESXi host and start the CVM, the CVM loses its network configuration, and I have to reassign the network adapter on the CVM. I don't know how to configure the distributed switch in vCenter to make it persistent.
Good morning: We are in the process of deploying new NX-8235-G7 nodes that are connected to a pair of Cisco Nexus switches. Everything seems to be working well, with the exception of the switches occasionally receiving RX pause frames from the servers. While this may not necessarily be a problem, I am doing my due diligence. The switch ports the nodes are connected to are configured as follows:
- Connected as active/standby NICs
- Connected via 1-meter 10GbE DAC cables
- 802.1Q trunk
- Speed/duplex auto-negotiation
- Flow control auto-negotiation
The nodes themselves use the default network configuration that comes from the factory, with the exception of anything necessary to support ESXi and our network environment. Any suggestions or validation of how the switch ports should be configured? Thanks!
We have a very long domain name, and with the current configuration we use the UPN (firstname.lastname@example.org) to log in. It is becoming very irritating to enter the domain name every time I log in. Can we configure a default domain for all users to fall back on when the user doesn't specify a domain (and, of course, when there is no local user with the same name)?
UPGRADING SERVER FIRMWARE
Nutanix recommends that you use the Service Pack for ProLiant® (SPP) ISO file for applying firmware updates. Perform this procedure on every host in the cluster, one host at a time.
About this task
To upgrade the firmware on a server, do the following:
Procedure
1. If the server is part of a Nutanix cluster, place the server in maintenance mode. Information about placing a server in maintenance mode is available in the host management section of the Acropolis Command-Line Interface (aCLI) documentation. See the Command Reference for the supported AOS version.
2. Turn on the server to boot to the SPP ISO.
3. Connect to the iLO by using the iLO IP address.
4. Log on to the iLO user interface by using the administrator credentials. The default administrator user name is Administrator on all HPE® ProLiant® servers. Passwords for the iLO administrator differ from one server to another, and are available on the service tag on the server.
5. Attach the SPP ISO to the server by usi
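The maintenance-mode step above can be sketched as follows for an AHV host. The command names below come from aCLI's host-management commands, but the exact syntax should be verified against the Command Reference for your AOS version:

```shell
# Sketch, run from a CVM. <host-IP> is a placeholder for the host's address.
acli host.enter_maintenance_mode <host-IP>   # live-migrates VMs off the host
# ... boot to the SPP ISO and apply firmware via iLO as described above ...
acli host.exit_maintenance_mode <host-IP>    # return the host to service
```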
We are creating a Nutanix cluster and staging/patching it before moving it to the production site. We don't have time during staging to stand up the whole infrastructure on the new cluster (vCenter, AD, etc.), so we are adding the new hardware to an existing vSphere/vCenter environment. We will then need to move the cluster to the new site, build the new AD/vCenter servers, and use the new hosts to create a new Nutanix cluster. I'm trying to discern the best way to migrate the cluster from one vCenter/AD to a new vCenter/AD that will be built fresh.
Hi guys, I have some questions. First, can we enable Flow on the ESXi hypervisor on Nutanix? I had the same question about Calm on ESXi and got the answer that, per a Nutanix KB, Calm can be enabled on ESXi. But how about Flow? Can we enable/use Flow on top of the ESXi hypervisor? And just curious: does anyone have a comparison table of Nutanix features when running on AHV, ESXi, and Hyper-V?
Hello - does anyone know if there is an ipmitool command to set the interface to 100 Mb/s full duplex? Is the option to change it even available? We are having IPMI issues where connectivity drops every so often. We are working with networking to check their side, and we've done various things to test and isolate the issue. On the network switch side, the ports have been reconfigured from 1 GbE down to 100 Mb/s. One suggestion that came up from networking is to set the IPMI interface itself to 100 Mb/s full duplex. Thank you. RV
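For reference, a sketch of inspecting the current BMC LAN settings with ipmitool (channel number 1 is an assumption; the IPMI LAN channel varies by platform):

```shell
# Sketch: print the BMC's LAN configuration for channel 1.
# Note that standard ipmitool has no generic speed/duplex setter; whether
# one exists at all depends on the vendor's OEM extensions.
ipmitool lan print 1
```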
I have three protection domains, all containing VMs. All three have schedules set, and I can see that local snapshots are being made. I have restored machines with no issues. All seems well, but I am getting "Backup Schedule" health warnings on all three PDs.
Causes of failure:
- "No backup schedule exists for protection domain protecting some entities."
- "Backup schedule exists for protection domain not protecting any entity."
Neither of the above is true. I can't seem to get this one to resolve. Is this a bug, or is there something here I am missing?
Hi, is it possible to assign an additional IP to a Nutanix cluster? The scenario is as follows:
1. A class A network (10.x.x.x) is used for CVM and hypervisor communication, set up during configuration.
2. The cluster virtual IP is also from the 10.x.x.x range.
3. An additional IP range (172.x.x.x) was added to the CVMs and hypervisors.
4. Is it possible to add an additional cluster virtual IP from the 172.x.x.x range?
Thanks
Hi team, I executed the NCC health checks (run_all) on two clusters and got the following message at the end of the run_all script:

Detailed information for sar_stats_threshold_check:
ERR : Execution terminated by exception IndexError('list index out of range',):
Traceback (most recent call last):
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/ncc_utils/plugin_utils.py", line 128, in handle_exceptions
    result = fn()
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/base_plugin.py", line 740, in
    result = putils.handle_exceptions(lambda : check(*check_args), cls.canvas)
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/health_checks/sar_checks.py", line 358, in check_threshol
Warning: Use of Phoenix to re-image or reinstall AOS with the Action titled "Install CVM" on a node that is already part of a cluster is not supported by Nutanix and can lead to data loss. Use of Phoenix to repair the AOS software on a node with the Action titled "RepairCVM" is to be done only with the direct assistance of Nutanix Support. Use of Phoenix to recover a node after a hypervisor boot disk failure is not necessary in most cases. Please refer to the Hardware Replacement documentation for your platform model and AOS version to see how this recovery is automated through Prism.
What is Phoenix?
Phoenix is an ISO-based installer that you can use to perform the following installation tasks on bare-metal hardware, one node at a time:
- Configuring hypervisor settings, virtual switches, and so on after you install a hypervisor on a replacement host boot disk. This option does not require you to include AOS and hypervisor installers in the Phoenix ISO image.
- Installing the Controller V
Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft SQL Server deployments on the Nutanix enterprise cloud. Historically, it has been a challenge to virtualize SQL Server because of the high cost of traditional virtualization stacks and the impact that a SAN-based architecture can have on performance. Businesses and their IT departments have constantly fought to balance cost, operational simplicity, and consistent, predictable performance.
Nutanix removes many of these challenges and makes virtualizing a business-critical application such as SQL Server much easier. The Nutanix distributed storage fabric is a software-defined solution that provides all the features one typically expects in an enterprise SAN, without a SAN's physical limitations and bottlenecks. SQL Server particularly benefits from the following storage features:
- Localized I/O and the use of flash for index and key database files to lower operation latency.
- A highly dist
I’m trying to set up a new bridge (br1). After SSHing to the Nutanix AHV host, I continue with "ssh email@example.com" to get to the CVM, and it asks me for a password. Is there a way to change to, or continue with, a different logon name at that point? The password I enter is being denied, presumably because the CVM expects a different username.
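On supplying a different logon name, the generic OpenSSH forms can be sketched as follows (the CVM username and address below are placeholders, not taken from the post):

```shell
# Two equivalent ways to override the login name instead of relying on
# the default inferred from the current user:
ssh -l <cvm-username> <cvm-ip>
ssh <cvm-username>@<cvm-ip>
```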
Hi all, I have an upcoming appliance installation, and it might be necessary to downgrade the current NOS version. Is there an out-of-the-box approach or method available to downgrade an existing cluster? Or maybe KB articles that could help me with this issue? Thank you and best regards, Andreas
Hi community, we want to expand the cluster by adding an additional node. The server model is HPE ProLiant DX360; currently 4 nodes are running, and we want to add one node of the same hardware model. The hardware is factory AOS imaged, and AHV is the hypervisor on the current cluster. We want to know what physical network connectivity is required to add the node and image AHV. On the new node, IPMI is connected and the two 10G ports are connected, but we are unable to discover the node from Prism Element. Is it also required to connect the node's 1G port to the switch for imaging?
I have this same error on 3 nodes. I have checked networking, time, and connectivity, and it all seems OK. Running NCC health checks on different services gives me a range of things to check, so I'm not sure where to start. The hosts are Hyper-V.
Hi folks, I'm in the middle of a deployment and have a question. We have a pair of Nexus 3548s acting as 10 GbE edge switches for our Nutanix clusters, trunked up to a pair of Cisco 4500s. We have 4 nodes in the cluster with a total of 10 NICs, all 10 GbE and assigned to the VDS in vSphere. On the VDS we set the MTU to 9000. Is there a way to separate the CVM traffic from the management traffic? I would like to keep the CVM traffic from going up to the 4500s; however, we needed to create an SVI on the 4500s so we could manage them. I'm getting jumbo-frame errors on both the Nexus and the 4500s, and when I set the MTU on the VDS back to 1500 the errors went away. My guess is that the CVM traffic is what's causing the issues, and I would prefer not to mess with the MTU settings on the 4500s. What is best practice here? What are others doing? Thanks
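A quick way to verify end-to-end jumbo-frame support along a given path is vmkping from an ESXi shell (x.x.x.x below is a placeholder for a peer host's vmkernel IP on the same VLAN):

```shell
# Send a don't-fragment ICMP with an 8972-byte payload:
# 9000-byte MTU minus 20 bytes IP header minus 8 bytes ICMP header.
# If this fails while a default-size vmkping succeeds, some hop along the
# path is not passing jumbo frames.
vmkping -d -s 8972 x.x.x.x
```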
What is the recommended CPU performance setting for AHV? I'm curious whether AHV has the ability to interact with Intel's throttling, or if it's better to just have it run at full speed all the time. The CPU power options are:
- Performance Per Watt (DAPC)
- Performance Per Watt (OS)
- Performance
Acropolis Hypervisor 201602173; Dell XC730xd nodes with 2x Xeon E5-2630 v3 @ 2.40 GHz.
The Dell Active Power Control (DAPC) mode allows the BIOS to manage the processor power states in order to achieve maximized performance/watt at all utilization levels and workload types while still meeting performance requirements. In the OS (Demand Based Power Management, DBPM) mode, the operating system controls the processor's power management. In the Maximum Performance mode, the processor runs at the highest frequency all the time.
- Performance Per Watt Optimized (DAPC): This mode allows the BIOS to manage the processor power states in order to achieve performance/watt maximized at all utiliz
Hi, I followed this guide: Installing Nutanix Community Edition (CE) on vSphere 7 - Derek Seaman's IT Blog, to test an install of AHV/Prism CE. The AHV host installs fine, but the CVM cannot reach the outside world or ping IPs. I can SSH to the CVM from the host on the correct IP and run the one-node cluster create command, but it fails with `Failed to reach a node where Genesis is up`. If I run `genesis start` it gets past that, but then just hangs after starting up some services during the create. I tried not checking the one_node_cluster option in the installer and doing it manually afterwards, but I get the same results. Happy to provide any info. The gateway etc. look fine in the `/etc/sysconfig/network-scripts/ifcfg-eth0` file.