Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
What is Cloud Connect? Building upon the native DR/replication capabilities of DSF, the Cloud Connect feature enables you to back up and restore copies of virtual machines and files between an on-premises cluster and a Nutanix Controller VM running in the Amazon Web Services (AWS) or Microsoft Azure cloud. The following figure shows a logical representation of a "remote site" used for Cloud Connect.
Cost and management: Amazon and Azure customers are charged only for the capacity actually used, not for the full provisioned capacity. Once configured through the web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured.
About AWS & Azure storage: Amazon S3 is used to store data (extents) and Amazon Elastic Block Store (EBS) is used to store metadata. When the AWS remote feature replicates snapshot data to AWS, the Nutanix Controller VM on AWS creates a bucket on S3 storage. The buck…
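Once configured, a Cloud Connect site can be inspected from the CVM command line like any other remote site. A minimal sketch, assuming ncli access from any CVM; the output fields vary by AOS version:
[code]
# List all configured remote sites; a Cloud Connect site appears here
# alongside regular physical remote sites (output format varies by AOS version)
ncli remote-site ls
[/code]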
Has anyone else had an issue with installing the latest CE image on ESXi 7? It seems to shut down after the hypervisor is created and begins the startup. After a reboot, attempting to run it in regular mode gives me this message: If I go into safe mode I get the login prompt with the host name configured: I performed a "virsh list --all" in safe mode and got the following: When I attempt to start the VM in safe mode, the host machine shuts down: Any ideas?
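For anyone debugging a similar nested-CE setup, a hedged sketch of the first checks I would run from the CE host shell in safe mode; these use standard Linux/libvirt tooling, nothing CE-specific:
[code]
# Confirm ESXi is actually exposing hardware virtualization to the CE VM
# ("Expose hardware assisted virtualization to the guest OS" in vSphere)
grep -cE 'vmx|svm' /proc/cpuinfo    # 0 means nested virt is not exposed

# Inspect the CVM domain state and any errors logged during the previous boot
virsh list --all
journalctl -b -1 -p err | tail -n 50
[/code]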
AHV VM High Availability (HA) is a feature built to ensure VM availability in the event of a host or block outage. In the event of a host failure, the VMs previously running on that host are restarted on other healthy nodes throughout the cluster. The Acropolis Master is responsible for restarting the VM(s) on the healthy host(s). But we already know that, right? First, let us be reminded that there are three types of AHV High Availability configuration within an AHV cluster (see the sketch after this list):
- Best effort
- Reserved segments
- Reserved host (only available via acli and not recommended in AOS 5.0 and newer)
To read about each type, as well as to find configuration steps, log locations and common issues with their explanations, please read KB-4636 AHV | VM High Availability (HA). For a refresher on AHV VM HA: Acropolis Virtual Machine High Availability.
Resources:
- Nutanix University: Tech TopX: VM High Availability in AHV
- Prism Web Console Guide - Virtual Machine Management - VM High Availability in Acropolis
- Tech…
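As a companion to the list above, a hedged sketch of checking and changing the HA reservation mode from a CVM. The exact acli sub-command names and enum values vary by AOS release, so treat these as illustrative and verify against acli tab completion and KB-4636 before running anything:
[code]
# Show the cluster's current HA configuration (sub-command name may vary by AOS version)
acli ha.get

# Switch from the default best-effort mode to reserved segments (illustrative;
# verify the exact parameter and value for your release first)
acli ha.update reservation_type=kAcropolisHAReserveSegments
[/code]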
Hi all, I have used Nutanix CE on a standalone server before, for testing and for backup recovery tests. Today I was trying to install the new CE version ce-2018.05.01-stable; the previous version I used was ce-2017.05.22-stable. The hardware is the same: 1x 240 GB SSD, 2x 500 GB HDD, 3x 1 TB HDD, 128 GB RAM, booting from USB. The 3x 1 TB drives are in RAID 0 (storage), the 2x 500 GB drives are in RAID 0 (ISOs), and the 1x 240 GB drive is in RAID 0 (OS). (I tried this with RAID and without.) This time, when the installer checks the disks it says "MinimumRequirementsError: A set of disks matching the minimum requirements was not found". Are there any changes to the minimum hardware requirements in the new version of CE? Please help me with this. Thank you, Krishna
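One thing worth checking from the installer shell: CE's disk detection generally expects drives to be presented individually rather than as hardware RAID volumes, so the RAID 0 sets may be why the minimum-requirements scan fails. A small sketch using standard Linux tooling, nothing CE-specific:
[code]
# List the disks exactly as the CE installer sees them; a hardware RAID 0 set
# shows up as one opaque volume, not as the individual SSD/HDDs CE looks for
lsblk -d -o NAME,SIZE,ROTA,MODEL
# ROTA=0 marks the SSD; CE needs a sufficiently large SSD visible here
[/code]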
Two weeks ago I upgraded our two clusters to AOS 5.15. After that, both clusters began to receive constant warnings about root partition space usage being high (exceeding 80%) on almost all CVMs. Below is the result of 'df -h' on one of the CVMs:
[code]
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         18G     0   18G   0% /dev
tmpfs           512M     0  512M   0% /dev/shm
tmpfs            18G  1.2M   18G   1% /run
tmpfs            18G     0   18G   0% /sys/fs/cgroup
/dev/md1        9.8G  7.4G  1.9G  80% /
/dev/loop0      240M  2.3M  221M   2% /tmp
/dev/md2         40G   23G   17G  57% /home
tmpfs           3.6G     0  3.6G   0% /run/user/1000
/dev/sdg1       5.5T  1.4T  4.0T  26% /home/nutanix/data/stargate-storage/disks/17Q0A01WFB9D
/dev/sdh1 …
[/code]
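To see what is actually filling /dev/md1 before deleting anything, a quick sketch using standard tools on the CVM (old logs and upgrade leftovers are the usual suspects, but verify rather than assume):
[code]
# Largest directories on the root filesystem only (-x stays on one mount)
sudo du -xh / 2>/dev/null | sort -rh | head -20
[/code]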
Hello, I keep getting the following message when I run a health check:
[code]
Running : health_checks system_checks default_password_check
[==================================================] 100%
/health_checks/system_checks/default_password_check          [ INFO ]
-------------------------------------------------------------------------------+
Detailed information for default_password_check:
Node 18.104.22.168
INFO: One or more hosts are using the default password
Refer to KB 6153 (http://portal.nutanix.com/kb/6153) for details on default_password_check
or Recheck with: ncc health_checks system_checks default_password_check
+-----------------------+
| State         | Count |
+-----------------------+
| Info          | 1     |
| Total Plugins | 1     |
+-----------------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log
[/code]
When I check the node with the IP listed, I cannot find any user account with the default password set. The admin account is not default, and neither is the nutanix user. I canno…
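One possibility worth ruling out (hedged, based on what KB 6153 covers): the check also tests the hypervisor host accounts, not just the Prism admin and CVM nutanix users. A sketch of verifying and changing the AHV host root password; the host IP is a placeholder:
[code]
# From a CVM, log in to the hypervisor host itself (placeholder IP) and
# change root's password if it is still the factory default
ssh root@<ahv_host_ip>
passwd root

# Then re-run the check to confirm it clears
ncc health_checks system_checks default_password_check
[/code]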
Hello Nutanix, I am currently deploying Acropolis File Services (AFS). Everything was going well until a "Can't contact LDAP Server" error occurred. I checked the configuration on the NVMs and found that although Prism shows them assigned two IPs, they are actually running with only one IP (eth0 only) when checked via the ifconfig command in the console. Let me make it clearer via the "topology" below: CVMs connect to the NVMs via Subnet A IPs; clients, AD and DNS servers should connect to the NVMs via Subnet B IPs. Recently, those NVMs have been running with only one Subnet A IP, even though in Prism I can see both subnets already assigned?! My block is running AOS 4.7.2. I have also searched around and seen that the internal and external networks could be the same, as described in this video: [url=https://www.youtube.com/watch?v=Q-z11wVhBxA]https://www.youtube.com/watch?v=Q-z11wVhBxA[/url] Thank you in advance.
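For comparison, a minimal sketch of what I would check on each file server VM. This assumes the external (client-facing) interface comes up as eth1 on the NVMs, which may differ by AFS release:
[code]
# On each file server VM: both interfaces should be UP with the addresses Prism shows
ip addr show eth0    # internal / storage network (Subnet A)
ip addr show eth1    # external / client network (Subnet B); eth1 is an assumption
[/code]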
Hi everybody, we're doing a slight restructure in our DC and thus need to change the default gateway used by a Nutanix cluster (3-node cluster). I'm aware of the following guide: https://portal.nutanix.com/page/documents/details?targetId=Advanced_Admin-Acr_v4_6:ip__ip_reconfig_web_c.html But isn't it at all possible to change something as simple as just the default gateway without such an invasive procedure requiring downtime? The part of the article on how to do it manually is... non-existent: https://portal.nutanix.com/page/documents/details?targetId=Advanced_Admin-Acr_v4_6:ip__cvm_ip_address_reconfigure_t.html
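For reference, a hedged sketch of what the manual CVM-side change looks like. The CVMs are CentOS-based, so the gateway lives in the usual ifcfg file; note that the official procedure still expects the cluster to be stopped first, so this is illustrative rather than a supported shortcut:
[code]
# On each CVM (after stopping the cluster, per the IP reconfiguration guide):
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0   # change the GATEWAY= line
sudo service network restart

# The hypervisor hosts and IPMI interfaces carry their own gateway settings
# and need the same change separately
[/code]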
We are fairly new to Nutanix. We have two separate clusters in separate datacenters. Our goal is to be able to move workloads between clusters based on needs and resources. We are using VMware 5.5 and everything seems to work as planned, with the exception of moving from cluster to cluster. We have cluster A and cluster B. We are using Metro Availability to sync the data from cluster A to cluster B, and that seems to work correctly. When I attempt to perform a cold migration from cluster A to cluster B I get the following error: "Relocate virtual machine ArtemisClone: File /vmfs/volumes/ec936530-956aa-9ac/ArtemisClone/ArtemisClone.-vmdk was not found". Cold migration from cluster B to cluster A works fine. The whitelists are identical on both clusters. VMware isn't really much help, as their reply is to just browse the datastore and add it to inventory on a server in cluster B.
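One hedged first step: confirm from an ESXi shell on cluster B whether the disk files the relocate is looking for actually exist on the datastore. The paths below reuse the ones from the error message; the descriptor file name is assumed:
[code]
# From an ESXi host in cluster B: does the clone's folder contain both the
# small .vmdk descriptor and the large -flat.vmdk data file?
ls -lh /vmfs/volumes/ec936530-956aa-9ac/ArtemisClone/

# The descriptor is plain text; a missing or mispointed one often explains
# "file was not found" during relocation (descriptor name assumed here)
cat /vmfs/volumes/ec936530-956aa-9ac/ArtemisClone/ArtemisClone.vmdk
[/code]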
The Simple Network Management Protocol (SNMP) enables administrators to monitor network-attached devices for conditions that warrant administrative attention. In the Nutanix SNMP implementation, information about entities in the cluster is collected and made available through the Nutanix MIB (NUTANIX-MIB.txt). The Nutanix enterprise tree is located at .1.3.6.1.4.1.41263. The Nutanix MIB is divided into the following sections:
- Cluster information: status information about the cluster as a whole.
- Software version information: version information about the software packages that comprise the Controller VM.
- Service status information: information about the status of essential services on each Controller VM.
- Hypervisor information: information about each hypervisor instance.
- Virtual machine information: information about hosted virtual machines.
- Disk information: status information about the disks in the cluster.
- Controller VM resource information: indicates how much CPU and memory capacity is…
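To see the MIB in practice, a sketch of walking the Nutanix enterprise subtree with standard net-snmp tooling; the SNMPv3 user, passphrases and cluster address are placeholders for whatever is configured under Prism's SNMP settings:
[code]
# Walk everything under the Nutanix enterprise OID (placeholders in <>)
snmpwalk -v3 -l authPriv -u <snmp_user> \
  -a SHA -A '<auth_pass>' -x AES -X '<priv_pass>' \
  <cluster_vip> .1.3.6.1.4.1.41263
[/code]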
Greetings to all. After an update of AOS from 5.8.2 to 5.10.5, the following alerts began to be received: "IPMI sensor mismatch" and "IPMI sensor data unavailable on host". [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/5d37868e-63a7-43bc-afea-cfdf39440885.jpg[/img] This is in a cluster with 3 Lenovo HX hosts, and all hosts had the same configuration when the cluster was created with Foundation. I have already checked network access to the BMC port: all the IPs respond, and the usernames and passwords are correct. [url=https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-D247EC2C-92C5-4B9B-9305-39099F30D3B5.html]https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-D247EC2C-92C5-4B9B-9305-39099F30D3B5.html[/url] This cluster uses the ESXi 6.5 hypervisor with vCenter 6.5, and the IPMI information has already been added in ESXi. The 3 CVMs respond to ping between themselves and the BMC ports.
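To compare what the BMC itself reports against what the alert claims, a hedged sketch using ipmitool from any machine with network access to the BMC; the address and credentials are placeholders:
[code]
# Query the Lenovo BMC's sensor readings directly over the LAN interface
ipmitool -I lanplus -H <bmc_ip> -U <user> -P '<password>' sensor list
[/code]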
I'm running a 3-node cluster, based on version 5.10 (2019-02-11). There are some alerts for each node: [list] [*]Detected Incompatible AHV Version [*]Detected older AHV Version [/list]I've checked the Settings > Upgrade menu; there is nothing available to upgrade for AHV or AOS.
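A hedged way to see the mismatch the alert is complaining about: compare the AHV build each host is actually running against what the cluster reports. The first command is standard ncli; the release file path is the conventional one on AHV hosts:
[code]
# From a CVM: list hosts with their hypervisor versions
ncli host ls | grep -i hypervisor

# On an AHV host itself: the installed AHV release string
cat /etc/nutanix-release
[/code]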
I'm looking for some advice. My company recently purchased a used 3460-G4. We're a professional services firm and we want to have it for our lab so we can stand up stuff like ERA, CALM, etc. and bang around on it. You know, lab stuff! We have been trying to Foundation the block (we have tried two Foundation versions), but I am running into a problem with Foundation failing when trying to mount the Phoenix image on the nodes. Here's how things currently stand:
- BIOS has been upgraded to the latest recommended version on the Nutanix Support site (G4G5T6.0).
- BMC firmware has been upgraded to the latest recommended version on the Nutanix Support site (3.64).
- Each node has 2x SSDs, which this system appears to recognize (I ran an ESXi installer on one of the nodes to test; the installer saw the drives).
- Each node has 64 GB of RAM, confirmed compatible according to SuperMicro's site.
- The motherboard is the SuperMicro X10DRT-P.
- IPMI has been set on each node and I can log into the IPMI management page.
At first I was wo…
Team, I would like to install an SSL certificate for my cluster. I have 3 nodes in it, and I am wondering how many certificates I should request: a single certificate for the entire cluster (the CSR would be generated using the leader, but what happens if the leader changes), or three certificates (one for each CVM)? Many thanks in advance. Regards, Thibaut
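Since Prism applies an uploaded certificate cluster-wide, the common pattern is a single certificate whose Subject Alternative Names cover the cluster FQDN/virtual IP (and, if you connect to them directly, the CVM addresses), which sidesteps the leader question. A hedged sketch with OpenSSL 1.1.1+; every name and IP below is a placeholder:
[code]
# One CSR for the whole cluster: CN = cluster FQDN, SANs = VIP + CVM IPs (placeholders)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout cluster.key -out cluster.csr \
  -subj "/CN=cluster.example.com" \
  -addext "subjectAltName=DNS:cluster.example.com,IP:10.0.0.10,IP:10.0.0.11,IP:10.0.0.12,IP:10.0.0.13"
[/code]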
We would really like to use the distributed switch in our VMware cluster. There seems to be a lack of documentation on how to properly implement it with Nutanix. For example, I seem to remember that the Nutanix Controller VMs should not be moved from the switch they were installed on, and that they have to be able to talk to vSphere and/or the ESXi hosts. But if I want to put vSphere and the ESXi hosts onto the dvSwitch, how do I do this? I might not be wording this question correctly, but one of the goals would be to have traffic separated by VLAN and protected by QoS where necessary. Is there documentation on how to do this correctly in a Nutanix environment? Has anyone tried to do this?
We are using Nutanix AOS with the AHV hypervisor. When connecting to a VM (W2K12, W2K8, German) via the Prism VNC console, there is no way to change the keyboard layout. Our computers are on Windows 7 (German) with the German QWERTZ keyboard layout. No matter which browser we use (IE11, Chrome, Firefox), we cannot type special characters like \ and /. We also tried the American key positions for / and \ without success. Is there a way to fix this problem, or is it a bug? Looking forward to hearing your comments. F.Hil
Every production infrastructure knows the importance of load balancing network traffic to increase efficiency. Say you have multiple links in your environment and want to use the potential of all of them, or want a backup configuration in case a link fails: load balancing will come to your rescue. Today we will talk about two load-balancing modes:
- Active-backup
- Balance-slb
To learn more about load-balancing configuration and AHV networking in detail, give the AHV Networking Best Practices Guide a read. So how do you decide between active-backup and balance-slb? This comparison might help you.
Advantages of the active-backup bond mode: active-backup is the default bond mode. One interface in the bond carries traffic, and the other interfaces are used only when the active link fails. Active-backup is the simplest bond mode and easily allows connections to multiple upstream switches without any additional switch configuration.
Disadva…
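For context, a hedged sketch of how the bond mode is typically inspected and changed on AHV via the CVM's manage_ovs helper. The bridge and bond names below (br0, br0-up) are common defaults rather than guarantees, and exact flags vary by AOS version, so check the AHV Networking Best Practices Guide first:
[code]
# Show the current bridges, bonds and their modes
manage_ovs show_uplinks

# Switch the default bond to balance-slb (assumed defaults: bridge br0, bond br0-up)
manage_ovs --bridge_name br0 --bond_name br0-up \
  --bond_mode balance-slb update_uplinks
[/code]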