5 Essential Tips for Maximizing Your Experience at Nutanix .NEXT for Bloggers
Hi,

My company is going through a major organizational change, and we will be driving a lot of VM automation through Ansible. Since we all know that PowerShell support in AHV is woefully lacking, I would like to get more of our automation plugged in through Ansible. Is there an example playbook out there for upgrading/installing NGT on both Windows and Linux?

I know that updating NGT via Prism Central works pretty well for Windows, but once again it fails miserably for Linux (at least OUL). I would like to do it in an automated fashion, though, instead of 60 checkboxes at a time. I need a way to upgrade hundreds of VMs at once and schedule the job.

Any assistance would be appreciated!
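Not a ready-made playbook, but the REST calls that an Ansible `uri` task (or the nutanix.ncp collection) would wrap can be sketched in Python. The per-VM NGT upgrade endpoint below is an assumption; verify the exact path and payload in the Prism Central REST API Explorer before building a playbook around it.

```python
# Sketch only: batch an NGT upgrade action across many VMs via the Prism Central REST API.
# The upgrade endpoint/payload below is an ASSUMPTION -- confirm it in the Prism Central
# REST API Explorer (or wrap the equivalent call with the nutanix.ncp Ansible collection).
import requests
from requests.auth import HTTPBasicAuth

PC = "https://prism-central.example.com:9440"   # hypothetical Prism Central address
AUTH = HTTPBasicAuth("admin", "secret")          # use a dedicated service account in practice

def list_vms():
    """Return VM entities known to Prism Central (v3 list API)."""
    resp = requests.post(f"{PC}/api/nutanix/v3/vms/list",
                         json={"kind": "vm", "length": 500},
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["entities"]

def upgrade_ngt(vm_uuid):
    """Fire the (assumed) per-VM NGT upgrade action."""
    url = f"{PC}/api/nutanix/v3/vms/{vm_uuid}/upgrade_ngt"   # ASSUMED endpoint path
    resp = requests.post(url, json={}, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for vm in list_vms():
        name = vm["status"]["name"]
        if name.startswith("app-"):               # hypothetical selection rule
            print(f"Requesting NGT upgrade for {name}")
            upgrade_ngt(vm["metadata"]["uuid"])
```

Scheduling then becomes an ordinary cron or AWX job that runs this selection nightly rather than ticking checkboxes in Prism.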
Hi,

Our 3rd party backup software does a poor job of cleaning up vdisks on its proxy after it finishes backups. It will often leave the vdisk from a snapshot mounted to the proxy server and never delete it. This is a known issue that has been raised with the vendor (Quest NetVault). Our question: what is the easiest way to see which disks are owned by the proxy VM itself and which disks come from a snapshot image? We want to be sure we don't remove a disk that belongs to the proxy server.

Thanks,
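One way to approach it: record the proxy VM's disk inventory while it is known to be clean, then diff later runs against that baseline so any leftover snapshot mounts stand out. Below is a rough Python sketch; the v2 endpoint, the include_vm_disk_config parameter, and the field names are assumptions to verify in your Prism REST API Explorer.

```python
# Sketch: capture a "known good" disk list for the proxy VM, then diff later runs
# against it so leftover backup-mounted vdisks stand out. Endpoint and field names
# are ASSUMPTIONS -- check them in the Prism Element REST API Explorer.
import json
import requests
from requests.auth import HTTPBasicAuth

PE = "https://prism-element.example.com:9440"        # hypothetical cluster address
AUTH = HTTPBasicAuth("admin", "secret")
PROXY_UUID = "00000000-0000-0000-0000-000000000000"  # UUID of the NetVault proxy VM

def proxy_disks():
    url = f"{PE}/PrismGateway/services/rest/v2.0/vms/{PROXY_UUID}"
    resp = requests.get(url, params={"include_vm_disk_config": "true"},
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    disks = resp.json().get("vm_disk_info", [])
    # Key each disk by its bus address (e.g. scsi.3) so diffs stay stable across runs.
    return {f'{d["disk_address"]["device_bus"]}.{d["disk_address"]["device_index"]}': d
            for d in disks if not d.get("is_cdrom")}

if __name__ == "__main__":
    baseline = set(json.load(open("proxy_baseline.json")))  # saved while the proxy was clean
    current = proxy_disks()
    leftovers = [addr for addr in current if addr not in baseline]
    print("Disks not in the clean baseline (candidates for cleanup):", leftovers)
```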
There was a thread back in 2016 about getting VM activity events logged in Prism (PE or PC). Events such as powering a VM on and off should be displayed in PE or PC so we can easily track down when a reboot occurred. This has been a core VMware vCenter function for ages, but it is glaringly absent from Prism for AHV. My company is in the middle of its AHV conversion right now, and my teammates are asking me how to find these events. I don't really like the idea of giving untrained folks access to the "nutanix" account on the CVM just to look at acropolis.log. In the 2016 thread, someone said enhancements were coming to Prism. Are they still coming?
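No word on the roadmap here, but as an interim workaround, something like the sketch below can pull recent Prism events over REST and filter them for a given VM, so nobody needs the "nutanix" account on a CVM. The /events/ endpoint and the field names used are assumptions; confirm them in the Prism Element REST API Explorer for your AOS version.

```python
# Sketch of an interim workaround: read recent Prism events over REST and filter
# for a VM's name instead of grepping acropolis.log on the CVM.
# The /events/ endpoint and field names are ASSUMPTIONS for your AOS version.
import requests
from requests.auth import HTTPBasicAuth

PE = "https://prism-element.example.com:9440"   # hypothetical cluster address
AUTH = HTTPBasicAuth("viewer", "secret")        # a read-only account should suffice

def vm_events(vm_name, count=500):
    url = f"{PE}/PrismGateway/services/rest/v2.0/events/"
    resp = requests.get(url, params={"count": count}, auth=AUTH, verify=False)
    resp.raise_for_status()
    # Keep only events whose message mentions the VM; power on/off events
    # generally carry the VM name in the message text (assumption).
    return [e for e in resp.json().get("entities", [])
            if vm_name in e.get("message", "")]

if __name__ == "__main__":
    for event in vm_events("sql-prod-01"):       # hypothetical VM name
        print(event.get("created_time_stamp_in_usecs"), event.get("message"))
```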
I hope this is an easy question. Can I increase the vCPU and memory resources on the Move VM/Docker appliance without issues? I know hot-plug is disabled, so it's a reboot-required change. I've found that we peg the CPUs when we queue up migrations, so I want to get more throughput out of it to take advantage of our 25Gb infrastructure. Thanks
My migration team is doing a good job of pushing as many VMs through Move as possible so reboots can be staged after hours. I've noticed that replication slows way down when we have about 15-20 VMs waiting for cutover. I checked the load on the Move VM and increased the number of CPUs. Replication itself goes well (25Gbit network), but now we only have 2 seeding streams running while 19 VMs wait to cut over. I have 4 other VMs in the "seeding data" phase that haven't started their copies yet. Is this a bottleneck in the Move Postgres database? Any ideas on how I can get more VMs copying in parallel? Thanks,
We are planning our AHV migration and would like to migrate about 75% of our VMs (~400) during their negotiated patch outage windows, to minimize how many app teams get to set our schedule for us. Most of these outage windows are around 3 AM. We would like an automated/scripted way to do the cutover so we can schedule a job to run overnight during that window. I've seen some REST calls that look like they could do it. Has anyone actually done this? If so, would you mind sharing a sanitized snippet of your PowerShell/REST code? Thanks in advance!
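Not a battle-tested snippet, but a scheduled cutover job against the Move REST API could look roughly like the Python skeleton below. Every endpoint path, payload shape, and action name here is an assumption; fill them in from the Move API reference for your Move version (the same calls translate directly to PowerShell's Invoke-RestMethod).

```python
# Skeleton for an overnight, scheduled cutover against the Nutanix Move REST API.
# Every path, payload, and action name is an ASSUMPTION -- replace them with the
# real endpoints from the Move API reference for your version before use.
import requests

MOVE = "https://move.example.com"          # hypothetical Move appliance address
USER, PASSWORD = "nutanix", "secret"

def login():
    """Obtain an API token (assumed login endpoint and response shape)."""
    resp = requests.post(f"{MOVE}/move/v2/users/login",                  # ASSUMED path
                         json={"Spec": {"UserName": USER, "Password": PASSWORD}},
                         verify=False)
    resp.raise_for_status()
    return resp.json()["Status"]["Token"]                                # ASSUMED shape

def cutover(token, plan_name):
    """Trigger cutover for a named migration plan (assumed endpoints)."""
    headers = {"Authorization": token}
    plans = requests.get(f"{MOVE}/move/v2/plans", headers=headers, verify=False).json()
    plan = next(p for p in plans["Entities"] if p["Spec"]["Name"] == plan_name)
    resp = requests.post(f"{MOVE}/move/v2/plans/{plan['MetaData']['UUID']}/action",
                         json={"Spec": {"Action": "cutover"}},           # ASSUMED action
                         headers=headers, verify=False)
    resp.raise_for_status()

if __name__ == "__main__":
    # Run from cron / Task Scheduler inside the 3 AM outage window.
    cutover(login(), "wave-07-patch-window")    # hypothetical plan name
```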
I am reimaging my lab cluster after we had some serious problems with a conversion to AHV and a rollback to ESXi. Two of my three nodes reimaged fine once I put a /firstboot directory onto the existing ESXi hosts. My third node is losing track of where the firstboot directory should be. From the foundation log:

20200221 11:40:17 ERROR Command 'scp -i ~/.ssh/id_rsa /tmp/tmp.gkI7z6j7XG/nutanix_provision_network_utils-1.0-py2.7.egg root@192.168.5.1:./vmfs/volumes/4d501d79-cff63899-7951-75210fae7516/Nutanix ./vmfs/volumes/5e4c3359-4b398980-cc34-ac1f6bb9dd8a/Nutanix/firstboot/nutanix_provision_network_utils-1.0-py2.7.egg' returned error code 127
stdout:
stderr: FIPS mode initialized
bash: line 1: ./vmfs/volumes/5e4c3359-4b398980-cc34-ac1f6bb9dd8a/Nutanix/firstboot/nutanix_provision_network_utils-1.0-py2.7.egg: No such file or directory
20200221 11:40:17 ERROR Failed while copying file /tmp/tmp.gkI7z6j7XG/nutanix_provision_network_utils-1.0-py2.7.egg to host with error Command 'scp -i ~/.ssh/id_r
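For what it's worth, the manual fix that worked on the first two nodes (creating the firstboot directory ahead of time) can be scripted before re-running Foundation. A minimal paramiko sketch is below, assuming the datastore path from the log above; the host address and credentials are placeholders.

```python
# Minimal sketch: pre-create the missing firstboot directory on the third ESXi host
# before re-running Foundation, mirroring the manual fix used on the other nodes.
# Host address and credentials are placeholders.
import paramiko

ESXI_HOST = "192.168.5.10"        # hypothetical management address of node 3
DATASTORE_PATH = "/vmfs/volumes/5e4c3359-4b398980-cc34-ac1f6bb9dd8a"  # path from the log

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username="root", password="changeme")

# Create Nutanix/firstboot on the datastore the Foundation log was pointing at.
stdin, stdout, stderr = client.exec_command(
    f"mkdir -p {DATASTORE_PATH}/Nutanix/firstboot && ls -ld {DATASTORE_PATH}/Nutanix/firstboot")
print(stdout.read().decode(), stderr.read().decode())
client.close()
```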
We are preparing to migrate about 350TB of data from our existing 3-tier architecture to a Nutanix hybrid solution. Our storage pool is 480TB in size with about 25% SSD before storage efficiency kicks in. We have post-process compression turned on as well as cache deduplication. We plan to migrate 50-100 VMs per night, and we are afraid we will eventually overrun the SSD tier. Is there a way to run a data tiering job, or otherwise force data to destage down to the cold tier, each day after the night's migrations finish so it doesn't impact future migrations? Thanks in advance
We are planning a major migration from our existing 3-tier ESXi environment over to Nutanix/ESXi (AHV to come later). Due to time constraints, we are trying to optimize how many VMs can migrate at one time. VMware has a few well-known limits on these operations:

- vMotion operations per host (10Gb/s network): 8
- vMotion operations per datastore: 128
- Storage vMotion operations per host: 2
- Storage vMotion operations per datastore: 8

If I take a "one storage container to rule them all" approach, I will be limited to 8 migrations at once, as long as I spread the migrations across at least 4 hosts. My target is a 14-node cluster, so I would like to push up to 28 operations if at all possible. Would it be considered a best practice to carve out 4 storage containers, knowing I'm still in the same storage pool, and fan the migrations out to get around these limits? My goal is to saturate my 10Gbit inter-datacenter link. Yes, I like to push the limits (break things?). Would the multi-data
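For what it's worth, the concurrency math from the limits quoted above works out as in the snippet below; the only assumption is that the per-host and per-datastore caps both apply simultaneously.

```python
# Back-of-the-envelope concurrency check using the VMware limits quoted above.
hosts = 14                      # target cluster size
svmotion_per_host = 2           # Storage vMotion operations per host
svmotion_per_datastore = 8      # Storage vMotion operations per datastore

for containers in (1, 2, 4):
    ceiling = min(hosts * svmotion_per_host, containers * svmotion_per_datastore)
    print(f"{containers} container(s): up to {ceiling} concurrent Storage vMotions")

# 1 container(s): up to 8 concurrent Storage vMotions
# 2 container(s): up to 16 concurrent Storage vMotions
# 4 container(s): up to 28 concurrent Storage vMotions
```

So four containers is the smallest split that lets the per-host limit (2 x 14 = 28) become the binding constraint instead of the per-datastore limit.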