Replies posted by MMSW_DE
Hi,

I’m not a Nutanix master, but moving nodes from one cluster to another in a production environment IMHO is not a task for beginners. Can you provide more info, please? I suppose you added just one node to “Lotte” and increased storage from 10.56 TiB to 14.09 TiB. Is this value expected regarding the capacity of the new node? I don’t see yet how 1.8 GB occupied by a snapshot would affect a storage pool of that size.

How do you think a snapshot would help if something went wrong? Adding nodes to a cluster initializes the added nodes, and data, including snapshots, is eventually redistributed across the added disks.

Stay safe,
Peter
Appliance vendors can be pretty restrictive about which platforms customers are allowed to run their products on. Even if you manage to get a particular appliance to work on AHV, you probably won’t get support for it should issues occur, even if they are completely unrelated to the platform itself.

In our case, appliances by Extreme Networks run fine on AHV (because they include drivers for KVM), but there is only support for Hyper-V, KVM and ESX. Therefore we have to keep ESXi servers as a supported environment for Extreme Networks appliances.
[quote]In the HYCU software interface, what is the “job” called to reach into the Nutanix storage and export the vdisk images?[/quote] Exporting disk images to a network share is an option when you restore a VM with HYCU. It uses its own backups for the export, which makes sense, because it may not be wise to download a live vdisk file from a running VM.
There is a command-line tool called qemu-img that lets you process vdisk images in all sorts of directions.

Even if having a second cluster for proper DR is out of reach for you (as it certainly is for us), you will quickly gain confidence in the stability of your AHV cluster. Set up local snapshots as suggested by Alona to cover the majority of short-term VM issues, and be sure to have your Veeam backups out of range of your assumed disaster. A four-node cluster and the number/size of workloads you can run on it sounds like a lot to chew on for VMware WS (or CE) anyway.
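To give an idea of the direction, converting an exported RAW vdisk into a VMDK is a one-liner (a minimal sketch; the file names are placeholders, and qemu-img must be installed on the machine holding the export):

[code]$ qemu-img convert -p -f raw -O vmdk exported-vdisk.raw exported-vdisk.vmdk[/code]

Running qemu-img info on the file before and after shows the detected format and virtual size, which is a handy sanity check.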
If I understand the scenario correctly, this is about “Where do we run our VMs when our only cluster is on fire?”. I’m quite positive that you can’t leverage any AOS or backup software feature to copy VMs to a CE cluster.

I don’t know the Veeam AHV solution and its features, but with HYCU it’s possible to export vdisk images to e.g. network shares, where they can be converted to vmdk files that can be attached to VMware VMs. Please note that it is only vdisks that can be offloaded this way, not complete VMs including their config; VMs will have to be prepared on the standby system.

Tested by exporting an AHV vdisk file (RAW format) from HYCU to a network share, converting it to VMDK, uploading it to an ESXi 5.5 datastore and attaching it to a VM that was configured similarly to the original Ntx VM. The ESXi VM needed another reboot to take care of the newly discovered devices, but basically it ran fine. Maybe any of this can be scripted for periodic vdisk exports; see the sketch below.
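The conversion and upload steps are easy to automate (a rough sketch only; host names, paths and the share mount are assumptions, and the HYCU export itself still has to be scheduled in HYCU):

[code]#!/bin/sh
# Convert the latest HYCU export (RAW) to VMDK and push it to the standby ESXi host.
SRC=/mnt/hycu-exports/exported-vdisk.raw
DST=exported-vdisk.vmdk
qemu-img convert -p -f raw -O vmdk "$SRC" "$DST"
scp "$DST" root@esxi-standby:/vmfs/volumes/datastore1/standby-vm/[/code]

Run something like this from cron after each scheduled export and the standby VM always has a reasonably fresh disk to attach.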
Hi Charu,

3rd-party snapshots are always “scoped”. You can list scoped snapshots per protection domain with:

[code]CVM$ cerebro_cli query_protection_domain "My-Little-PD" list_snapshot_handles="true;scoped"[/code]

First rule is: don’t mess with those snapshots if you don’t know exactly what you are doing.

Have fun,
Peter
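If you need the protection domain names first, I believe this works as well (double-check on your AOS release):

[code]CVM$ cerebro_cli list_protection_domains[/code]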
Hi,

thank you for this comprehensive information. I was looking for the other way round: I have a given vdisk residing on a container I’d like to delete, and I can see the vdisk path in SFTP:

/SomeSuperfluousContainer/.acropolis/vmdisk/806fdad9-8684-41ea-a49c-fc956193bcff

I’d rather not execute acli vm.get for all VMs on the cluster to see if I can spot this path anywhere, hence my question whether there is a way to determine which VM this vdisk belongs to.

See you,
Peter
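For the record, the brute-force approach can at least be automated (a sketch, not a recommendation; it assumes acli vm.list prints the VM UUID in the last column, so verify the output format on your cluster first):

[code]CVM$ for vm in $(acli vm.list | tail -n +2 | awk '{print $NF}'); do
  acli vm.get "$vm" | grep -q "806fdad9-8684-41ea-a49c-fc956193bcff" && echo "$vm"
done[/code]

This prints the UUID of whichever VM references the vmdisk in its disk list.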
Hi Mutahir,

thank you very much for your comprehensive article. I wonder whether, the other way round, there is an easy way to find out which VM a particular vdisk belongs to, for example that last vdisk that is still on a container scheduled for deletion.

Kind regards,
Peter
As the entire mapping thing is happening at guest OS level, I don't see why this shouldn't work for additional (not OS boot) disks. Bear in mind though that all iSCSI traffic will go through your cluster's shared network uplinks. For an official evaluation, you may want to open a support ticket.
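For a Linux guest this is just the standard iSCSI initiator workflow (a sketch; 10.0.0.50 stands in for your cluster's data services IP):

[code]# discover the targets exposed by the cluster
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260
# log in to the discovered targets; the new disk then shows up as e.g. /dev/sdb
iscsiadm -m node -l[/code]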
I'm sorry, I didn't do the Exchange restore tests myself and I frankly don't know how far my colleague got. We are not yet using Nutanix Files, so I'm afraid there is no experience there either. Generally speaking, Hycu's integration is very good and they are highly motivated to make things work.
Hi,

we ran into the exact same issue in our production environment: restoring a multi-terabyte Exchange VM took many hours, while overall cluster performance (we use entry-level NX-1000 series hardware) was clearly affected by the sustained bulk writes. We simply decided to keep more Fast Restore points (i.e. cluster-based snapshots), reaching further into the past. Storage-wise this workaround should not be a big deal, although there is no way to see how large those snapshots actually are. I'm very confident the folks at Hycu will eventually establish a solution for this.
Hi Jon, thank you very much for your comprehensive post! [quote]If you need 4x CPUs, provision some math that gives you 4. 1x4, 2x2, go nuts.[/quote] Just for the final bit of clarification, as this has come up somewhere in this or a related thread: Could we do 4x1 as well on a 2-socket host? CU, Peter
I think the only Ntx instance to query for guest OS type, release and version is NGT. For a single VM it goes like this:

[code]CVM$ nutanix_guest_tools_cli query_vm_tools_entity "d180a45e-7a45-4542-b17f-e4ac21ac55b7"[/code]

In the output we find:

[code]guest_os_type: kWindows
guest_os_release: "Windows7Professional"
is_windows_server_os: false
guest_os_version: "6.1.7601"[/code]

A query through arithmos_cli gives OS type and release like "windows:64:WindowsServer2008R2Standard" for all VMs in the cluster:

[code]CVM$ arithmos_cli master_get_entities entity_type=vm | grep -A 1 "vm_name\|ngt.guest_os"[/code]
AFAIK all Nutanix-certified hardware vendors rely on SAS for SSDs and HDDs. What type and size of hardware you need depends entirely on the kind of workloads you are going to run on your cluster. Sizing a cluster correctly takes an experienced SE, as mistakes made during this phase of the project can be very expensive to rectify later.
If by "this solution" you mean HYCU, according to the [url=https://www.hycu.com/blog/how-hycu-uses-nutanix-volume-group-apis-for-consistent-data-protection/]HYCU BLog[/url] they are leveraging Nutanix Volume Group APIs in version 3.5, and Nutanix volume groups attached to virtual machines are automatically backed up within the VM backup process. I have not tried it myself, though.