This issue is fixed in AOS 5.1.1 / CMDlets 2.1.1.
PS C:\Windows> $new = New-NTNXObject -Name VMNicSpecDTO
PS C:\Windows> $new
While we have started to include some of the daily VM operations for VMware clusters in Prism, you will still perform many tasks in vCenter. vCenter remains a requirement because it creates and manages ESXi clusters and handles operations such as HA, DRS, and so on.
You can confirm that the XML file is present by checking the /etc/libvirt/qemu directory on the AHV host. If the file is present, you can run "virsh define NTNX-*-CVM.xml" to define the VM. You should then be able to start the VM. You will also want to run "virsh autostart NTNX-<NEW_NAME>-CVM" so that the VM starts with the node on reboot.
If the file is not present, take the following steps (as root on the AHV host) to recreate it:
1) Navigate to the /root directory.
A "NTNX-CVM.xml" file is located here. Note that this is a "vanilla" CVM configuration; the following fields will likely need to be modified to match the previous NTNX CVM configuration:

<name>NTNX-<BLOCK_SERIAL>-<NODE>-CVM</name>            <!-- modify to match the hostname as needed -->
<memory unit="KiB">16777216</memory>                   <!-- modify both memory values if the CVM was changed from the default -->
<currentMemory unit="KiB">16777216</currentMemory>
Note: Additional XML fields may require modification based on your deployment (e.g., network interfaces).
2) Make a copy of NTNX-CVM.xml from the /root directory:
$ cp NTNX-CVM.xml /etc/libvirt/qemu/NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml
3) Edit the copy of the XML file to match the previous CVM instance's configuration.
$ vi /etc/libvirt/qemu/NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml
4) Define the CVM
$ virsh define /etc/libvirt/qemu/NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml
5) Start the CVM.
$ virsh start NTNX-<BLOCK_SERIAL>-<NODE>-CVM
6) Configure autostart in KVM so the CVM boots up with the host
$ virsh autostart NTNX-<BLOCK_SERIAL>-<NODE>-CVM
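As a quick sanity check after step 6, you can confirm the domain state and the autostart flag (the domain name is a placeholder matching the file created in step 2):

```shell
# List all defined domains and confirm the CVM shows as "running":
virsh list --all | grep CVM

# Confirm the autostart flag is enabled for the CVM
# (virsh dominfo prints an "Autostart:" line):
virsh dominfo NTNX-<BLOCK_SERIAL>-<NODE>-CVM | grep -i autostart
```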
You must license all cores on the physical host that will run the 2016 VMs. The minimum is 16 cores (8 x 2) as you described. If the host has more than 16 cores, you can purchase additional licenses in 2- or 4-core packs.
Standard edition allows 2 OSEs (operating system environments) per fully licensed physical server; Datacenter allows unlimited OSEs.
To run more than 2 OSEs with Standard, you need to license the entire host again. For example, to run 4 VMs you'd need to fully license the server twice; to run 6 VMs, you'd need to fully license the server three times.
In your case I would license 2 physical servers at a minimum so you have HA should a server fail. You would not have to license all 3 servers, since you can use host affinity to ensure your VMs run only on the licensed physical servers. Ideally you'd license all three so you wouldn't have to worry, but it's not required.
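The arithmetic above can be sketched as a small shell calculation. The core and VM counts below are hypothetical, not from the original question:

```shell
# Hypothetical host: 24 physical cores, running 6 Standard-edition VMs (OSEs).
cores=24
vms=6

# Every core must be licensed, with a 16-core minimum per full license.
licensed_cores=$(( cores < 16 ? 16 : cores ))

# Each full license grants 2 OSEs; more OSEs mean licensing the host again.
full_licenses=$(( (vms + 1) / 2 ))          # ceil(vms / 2)

total_cores_licensed=$(( full_licenses * licensed_cores ))
echo "$full_licenses full licenses, $total_cores_licensed cores of licensing on this host"
```

For 6 VMs on a 24-core host this yields 3 full licenses, i.e. 72 cores' worth of licensing, matching the "license the server three times" rule above.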
Hope this helps.
This issue occurs when the md5sum of ca.tar does not match across all nodes of the cluster.
You can verify the md5sum with this command:
allssh 'sudo md5sum /home/ngt/ca.tar'
Resolution: SSH to the CVM containing the reference ca.tar file that you want to copy to the other CVMs whose md5sum does not match.
Delete the existing ca.tar on each of those CVMs before copying. Note that you cannot scp directly into /home of a CVM from another CVM, so create a tmp folder on the target and scp to that folder instead.
After that, copy the file locally from /tmp to /home/ngt.
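The resolution above can be sketched as the following command sequence, run from the reference CVM. The target IP 10.x.x.x and the /tmp/ngt staging directory are placeholders for your environment:

```shell
# 1) On the reference CVM, note the good checksum:
md5sum /home/ngt/ca.tar

# 2) On the mismatched CVM, remove the stale copy and create a staging
#    directory (scp directly into /home is not permitted):
ssh nutanix@10.x.x.x 'sudo rm -f /home/ngt/ca.tar; mkdir -p /tmp/ngt'

# 3) Copy the reference file to the staging directory, then move it
#    into place locally on the target CVM:
scp /home/ngt/ca.tar nutanix@10.x.x.x:/tmp/ngt/
ssh nutanix@10.x.x.x 'sudo cp /tmp/ngt/ca.tar /home/ngt/ca.tar'

# 4) Verify all nodes now report the same checksum:
allssh 'sudo md5sum /home/ngt/ca.tar'
```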