Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,125 Topics
- 3,037 Replies
When it comes to migrating your environment to AHV, the recommended approach is Nutanix Move. Much has been said and written about it, and more is coming. Nutanix understands, however, that using Move may not be an option for whatever reason. Freedom of choice is important, after all, right? If you are looking for an alternative way to migrate your VMs' disks into AHV, keep reading. The article referred to in this post covers three scenarios:
- Source virtual disk files can be accessed directly by the Acropolis cluster over NFS (or HTTP). This is generally possible when: the source environment is a Nutanix ESXi cluster; the source environment is a Nutanix Hyper-V cluster; or the virtual disk files are hosted on an NFS server and the Acropolis cluster can be given access to the NFS export (common in non-Nutanix NFS-based ESXi environments).
- Previously exported virtual disk files are hosted by any NFS or HTTP server and the Acropolis cluster has access to them.
- Source vi…
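For the first scenario, a minimal sketch of pulling a source disk straight into the AHV image service with aCLI (the NFS server, path, container, and image names here are hypothetical placeholders):

    # from any CVM: import a flat VMDK from an NFS export as a disk image
    acli image.create win2016-disk0 \
        source_url=nfs://esxi-nfs-server/vmstore/win2016/win2016-flat.vmdk \
        container=default image_type=kDiskImage

Once imported, the image can be cloned to a disk when creating the new VM in Prism.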
Security and encryption are the backbone of an IT infrastructure. Many organisations have made it a compliance requirement to encrypt data and keep that data on-prem. Nutanix provides industry-standard security features, one of them being DATA-AT-REST ENCRYPTION. Nutanix offers the option to secure data at rest using either self-encrypting drives or software-only encryption, with key-based access management (the cluster's native key manager or an external KMS for software-only encryption). Want to know more about D-A-R-E and its features? Go through the following guide: D-A-R-E guide. During compliance audits and routine administrator tasks, we need to verify that D-A-R-E is enabled and working correctly, or we need to configure it. How can we safely accomplish that? There are several methods for verifying whether D-A-R-E is enabled, and you can use them to automate the verification or configuration process. Go through the following KB to learn about the various methods with comm…
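As a hedged sketch of one verification route (the exact ncli subcommand and response fields vary by AOS version, so treat these names as assumptions and confirm against the KB for your release):

    # from any CVM: report whether encryption is enabled cluster-wide
    ncli data-at-rest-encryption get-status

    # REST alternative: fetch the cluster object and inspect its encryption fields
    curl -sk -u admin https://<prism-vip>:9440/PrismGateway/services/rest/v2.0/cluster/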
Hello, is it possible to enable compression on an [b]existing container[/b]? What will happen to the [b]data already written[/b]? We want to enable [b]post-process[/b] compression, so I hope the Curator MapReduce framework will compress [b]all data[/b] concerned. Can somebody please confirm or correct? Thanks in advance! W.
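For reference, enabling post-process compression on an existing container from a CVM looks roughly like this (parameter names can differ between AOS releases, so verify them against the ncli help output first):

    # a non-zero delay (minutes) makes the compression post-process;
    # Curator then compresses existing cold data in later background scans
    ncli ctr edit name=<container-name> enable-compression=true compression-delay=60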
Dear contributors, we have purchased Nutanix appliances and I am working as part of the IT security team. We have enforced the security settings on the CVMs and AHV (STIG, banner, ...) and, as part of our duties, we carry out regular checks to make sure that the security settings have not been altered. For this, we would like to have nominative accounts that can connect through SSH and check the settings of the components. Is this possible? If yes, how can we proceed? Thank you in advance for your assistance.
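For the verification side, the SCMA commands from the Nutanix Security Guide can be run over SSH from a CVM; a minimal sketch (check the Security Guide for your AOS version for the exact set):

    # dump the current hardening/STIG-related settings for CVMs and AHV hosts
    ncli cluster get-cvm-security-config
    ncli cluster get-hypervisor-security-config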
Hi all, an oddity going on here: I have upgraded, but I still get an alert saying the upgrade is available. Info: Bundle el6.nutanix.20170830.395 is available for AHV upgrade. My cluster version: el7.3-release-euphrates-5.15-stable-4fbdd4d9de331230bb468b3549f530e80ab53bb9. Any ideas how I can correct this (if there is a problem) or check what is going on? Thanks, Eric
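Before chasing the alert, it may help to confirm what each host is actually running; a quick check from any CVM (hostssh and host_upgrade_status are standard CVM utilities):

    # show the installed AHV release on every host
    hostssh "cat /etc/nutanix-release"

    # show the state of any pending or completed hypervisor upgrade
    host_upgrade_status

If every host already reports an el7 release, the el6 bundle notice is likely a stale alert rather than a pending upgrade.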
Hi, I am doing some tests with creating and deleting large files on a SLES 12 installation in our AHV environment. I use an ext4 filesystem and have enabled the trim/discard feature for the filesystem and LVM. But when I delete a large file with random data (5 GB), the storage backend of the cluster does not see that the formerly used storage is no longer in use. I tried fstrim to initiate the cleanup, but that doesn't work. If I write zeros to the file/partition/filesystem, then the backend gets the storage back. Is trim/discard supported as a way to tell the storage backend that filesystem space is no longer needed, or does anybody have experience with such a setup? Thank you for your help. Regards, Hans
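For comparison, a typical guest-side discard setup looks like this (mount point and VG/LV names are placeholders; whether the unmap actually reaches the backend depends on the virtual disk advertising discard support):

    # /etc/fstab: mount the filesystem with online discard
    /dev/vg_data/lv_data  /data  ext4  defaults,discard  0 2

    # /etc/lvm/lvm.conf (devices section): pass discards down on lvremove/lvreduce
    issue_discards = 1

    # or batch-trim manually and print how much was trimmed
    fstrim -v /data

Note that issue_discards only covers LV removal/shrink; discards coming from the filesystem are passed through device-mapper automatically on reasonably recent kernels.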
We are trying to set up SCOM Management Pack 2.4 with a SCOM 2012 server. I have created an LDAP (Windows domain) account for Prism and IPMI access; I am able to validate with the Prism account, but not with IPMI. I am using AOS 10.5 with NX-1065 G6. [list=1] [*]I tried creating a local IPMI account and then validating the SCOM Management Pack. It worked fine. [*]I tried to validate with the LDAP account, but I get the error below. For easier administration I am trying to use LDAP. [/list] [b]error validating ipmi account with "x.x.x.x" IP. Continue with next CVM Address?[/b] I have opened a case with Nutanix. I just want to know whether Nutanix supports configuring LDAP auth with SCOM. Any help will be much appreciated.
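As a workaround sketch for the local-account route that already validates (the user ID slot, account name, and channel number below are examples, not fixed values):

    # list existing users on LAN channel 1
    ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <admin-pw> user list 1

    # create a dedicated monitoring user in a free slot (here: 4)
    ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <admin-pw> user set name 4 scom-mon
    ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <admin-pw> user set password 4 <new-pw>
    ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <admin-pw> channel setaccess 1 4 ipmi=on privilege=3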
Hi, a remote site on AWS was configured earlier. The time sync drifted too far, but that has been fixed. The remote site in Data Protection shows the alert "Remote site connectivity not normal." It is an AWS instance; the remote CVM IP address is 10.162.10.115. The host IP address is unknown, the firewall rules are fully open, and replication seems to be occurring. There is a site-to-site VPN specifically for this instance. It looks as though the AWS remote site was never configured with the local cluster as a remote site, which is what most of the webinars and documentation suggest. The IP address will not open a web page over HTTP or HTTPS; the only communication available to the AWS node is SSH. How does the configuration of the remote site get completed, or how do I get rid of this alert? Prism: 5.6.1, NCC version: 3.5.2, LCM version: 1.4.2381, Nutanix 20170830.124 (in cluster, standard AWS instance). Thanks
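To narrow it down, the remote-site definition and the replication ports can be checked from a local CVM; a rough sketch (the Stargate/Cerebro replication ports 2009 and 2020 must be reachable in both directions):

    # how the remote site is defined on this side
    ncli remote-site list

    # basic reachability of the AWS CVM on the replication ports
    nc -zv 10.162.10.115 2009
    nc -zv 10.162.10.115 2020

Running the same `ncli remote-site list` over SSH on the AWS CVM shows whether the local cluster was ever added there as a remote site, which is what this alert usually points at.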
Hi, there is a fun bug with version 188.8.131.52... (or is it me? :cathappy: ) In version 184.108.40.206 it was possible to update the hypervisor via the Prism UI (with version 5.5u2), but in version 220.127.116.11 it's not possible (see below). [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/399i01EE9B994C869CA1.png[/img] Thanks guys! Sig'
Hi there, I need some help with my Dell OptiPlex Micro cluster. I wanted to build a four-node cluster, but every time I try to install Nutanix CE, right at the finish line it says: "Waiting for the Nutanix Controller VM to start up…" followed by: "A problem was encountered. Please review the contents of /home/install/firstboot.out for details, and refer to the documentation or the Nutanix NEXT community for next steps. Press <Enter> to return to the login prompt." Now if I restart the installation and try to repair the CVM, this error appears:

FATAL: An exception was raised:
Traceback (most recent call last):
  File "./phoenix", line 96, in <module>
    main()
  File "./phoenix", line 74, in main
    params = gui.get_params(gui.CEBootDiskConfirm)
  File "./phoenix/gui.py", line 1675, in get_params
    if guitype(args=args).skip_get_params:
  File "./phoenix/gui.py", line 687, in __init__
    self.boot_disk = self.get_ce_boot_disk()
  File "./phoenix/gui.py", line 772, in get_ce_boot_di…
Need details on how to store passwords as a secure string and use them in a script that accesses multiple Prism sites. I have a script that collects storage data from multiple Nutanix sites via Prism, and now I need to remove the plain-text passwords. I have located references, but they only say "NOTE: for security reasons we should store our passwords as a secure string, by declaring these as variables before starting PowerShell." Can anyone provide the steps required to make this work?
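A minimal PowerShell sketch of the pattern those references describe (file path and account name are placeholders; ConvertFrom-SecureString uses DPAPI by default, so the file only decrypts for the same user on the same machine):

    # one-time: prompt for the password and store it encrypted on disk
    Read-Host -Prompt "Prism password" -AsSecureString |
        ConvertFrom-SecureString | Set-Content C:\scripts\prism_cred.txt

    # in the collection script: rebuild a credential object from the file
    $secure = Get-Content C:\scripts\prism_cred.txt | ConvertTo-SecureString
    $cred   = New-Object System.Management.Automation.PSCredential('svc-prism', $secure)

$cred (or $cred.GetNetworkCredential().Password where an API call needs plain text) can then be reused against each Prism site.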
I have a user who is requesting that we change our VMware host power management policy from Balanced to High Performance. Question 1: Do Nutanix hosts support VMware power management features? Question 2: We are migrating to all AHV early next year. Does Nutanix have a corresponding setting for VMware host power management policies?
I have an NX-1450. I'm using the IPMI to mount a new Phoenix ISO. The IPMI CD-ROM image status message shows "There is a disk mounted." However, when I power cycle the node and hit F11 to bring up the boot device menu, I don't see an IPMI Virtual CDROM in the list at all. What could I be doing wrong? Here are the boot devices I see. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/286iA5EB2A6F94B845B9.png[/img]
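One thing worth trying is forcing the next boot to the virtual CD from the BMC side instead of the F11 menu; a hedged sketch with ipmitool, run from any machine that can reach the BMC:

    # request CD/DVD as the boot device for the next boot only
    ipmitool -I lanplus -H <ipmi-ip> -U ADMIN -P <password> chassis bootdev cdrom

    # confirm what the BMC recorded as the boot flags
    ipmitool -I lanplus -H <ipmi-ip> -U ADMIN -P <password> chassis bootparam get 5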
Sometimes it is very tedious to install the Prism Central VM on AHV after your cluster is running, and it can get buggy. Here are some tips to solve problems there:
1. Deploy a network in your new AHV environment first.
2. Deploy the new VM via the Prism Element home page. Often you get an error with SSL, and httpd could not start.
3. Log in via SSH to the IP of your freshly deployed Prism Central server as nutanix / nutanix/4u, then:
   sudo -i
   service httpd status   (mostly not started)
   service httpd start    (ignore the SSL errors in the output)
4. The new Prism Central website should come up after a while at https://ip-of-prismcentral.
5. Login not possible with admin / admin or admin / nutanix/4u? Log in again via SSH as nutanix / nutanix/4u, then:
   sudo -i
   passwd admin   (set your admin password and repeat it)
   reboot
6. Wait a while, log in to Prism Central with admin / your password, and register your cluster in Prism Element.
Hello, if I have 3 Nutanix nodes and I cluster them together, how can I make a separate VPC for a test stack and a separate VPC for a production stack? That way I can build my VMs on the test stack and, once they are good to go, do the same steps on my production stack. I would like to have two isolated stacks for testing and production. Is this possible? Thanks
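Full VPCs are a Flow Virtual Networking feature managed through Prism Central; on a plain AHV cluster, a common lighter-weight pattern is two VLAN-backed networks, sketched here with hypothetical VLAN IDs:

    # from any CVM: one isolated network per stack
    acli net.create Test-Net vlan=101
    acli net.create Prod-Net vlan=102

With the isolation enforced on the physical switch (or with Flow microsegmentation policies), VMs on Test-Net cannot reach Prod-Net.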
I do not see an explanation as to why a single container is the best practice. Does having multiple containers cause more overhead (other than the memory use)? It seems like an obvious choice would be to put your OS VMs, which are virtually identical, in a deduped container to maximize space savings, and then put any user-data vDisks in a compressed container. Just looking for some input on why this is not a good idea.
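For what it's worth, the split you describe is straightforward to express; a sketch with hypothetical names (parameter spellings vary slightly between AOS versions, so check the ncli help output first):

    # dedup-oriented container for near-identical OS disks
    ncli ctr create name=os-vms sp-name=<storage-pool> fingerprint-on-write=on on-disk-dedup=post-process

    # compression-oriented container for user-data vDisks
    ncli ctr create name=user-data sp-name=<storage-pool> enable-compression=true compression-delay=60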
Playing around with an old NX-2000 series server, and it seems I now need to force my 10GbE NICs to default to eth mode, while before they were recognized in eth mode on boot.

New module:
# modinfo mlx4_core
filename: /lib/modules/4.19.0+1/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko
version: 4.0-0

Old module:
# modinfo mlx4_core
filename: /lib/modules/4.4.0+10/updates/mlx4_core.ko
version: 3.4-1.0.0

Booted to a Kubuntu 16.04 live CD to attempt to flash the last supported firmware for the card, but the Mellanox card doesn't seem to be supported by the Mellanox OEM software. MLNX_OFED_LINUX-4.7-18.104.22.168-ubuntu16.04-x86_64.tgz

root@kubuntu:~# mlxfwmanager --query
Querying Mellanox devices firmware ...
Failed to identify the device - Can not create SignatureManager!
Failed to identify the device - Can not create SignatureManager!
Device #1:
----------
Device Type: ConnectX2
Part Number: SUPERMICRO X8DTT-IBX/X8DTT-IBXF/X8DTT-HIBXF/X8DTT-HIBXF+ (Rev2.00 or later)
Description: ConnectX-2 VPI; s…
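If the goal is just to pin the ports to Ethernet under the newer mlx4 driver, the stock module options should still work; a sketch (the PCI address is a placeholder):

    # persistent: force both ports to Ethernet (mlx4: 1 = IB, 2 = Eth)
    echo "options mlx4_core port_type_array=2,2" > /etc/modprobe.d/mlx4.conf

    # or switch a port at runtime through sysfs
    echo eth > /sys/bus/pci/devices/0000:05:00.0/mlx4_port1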
Hi everyone, I'm looking for your advice on ESXi-based Nutanix clusters with hybrid storage. The question is about the placement of ESXi persistent scratch. With traditional architecture, the advice was normally to configure /scratch on a shared datastore, not a local one; the argument was that in case of a host failure, the logs would still be available from other hosts. With Nutanix, I see the options as: 1) leave the default, a vfat partition on SATADOM/M.2/BOSS; 2) a folder on the local VMFS datastore where the CVM lives (same storage device though); 3) a shared datastore, i.e. on DSF. What is the Community (best) practice for ESXi /scratch in a Nutanix environment? Please kindly share your advice and thoughts. Thank you.
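For option 3, repointing /scratch at a folder on a shared datastore is a documented one-liner per host (the folder naming below is just a convention, and a host reboot is required for the change to take effect):

    # on each ESXi host: create a per-host locker folder, then repoint scratch
    mkdir /vmfs/volumes/<shared-datastore>/.locker-<hostname>
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/<shared-datastore>/.locker-<hostname>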
UPGRADING SERVER FIRMWARE
Nutanix recommends that you use the Service Pack for ProLiant® (SPP) ISO file for applying firmware updates. Perform this procedure on every host in the cluster, one host at a time.
About this task
To upgrade the firmware on a server, do the following:
Procedure
1. If the server is part of a Nutanix cluster, place the server in maintenance mode. Information about placing a server in maintenance mode is available in the host management section of the Acropolis Command-Line Interface (aCLI) documentation. See the Command Reference for the supported AOS version.
2. Boot the server to the SPP ISO.
3. Connect to the iLO by using the iLO IP address.
4. Log on to the iLO user interface by using the administrator credentials. The default administrator user name is Administrator on all HPE® ProLiant® servers. Passwords for the iLO administrator differ from one server to another and are available on the service tag on the server.
5. Attach the SPP ISO to the server by usi…
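For step 1 on AHV, a hedged sketch of the flow the aCLI reference describes (the host address is a placeholder; on ESXi clusters use vCenter maintenance mode instead):

    # from a CVM: migrate VMs off the host and mark it for maintenance
    acli host.enter_maintenance_mode <host-ip>

    # then, on that host's CVM, shut the CVM down cleanly before booting to the SPP ISO
    cvm_shutdown -P now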