From this tech note - http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf - my impression is that Nutanix thin provisions a VM even if its disks are set to "Thick Provisioned Lazy Zeroed".

All Nutanix containers are thin provisioned by default; this is a feature of NDFS. Thin provisioning is a widely accepted technology that has been proven over time by multiple storage vendors, including VMware. Because containers are presented by default as NFS datastores to VMware vSphere hosts, all VMs will also be thin provisioned by default. This results in dramatically improved storage capacity utilization without the traditional performance impact. Thick provisioning at the VMDK level is available if required for limited use cases such as fault tolerance (FT) or highly demanding database and I/O workloads. Thick provisioning can be accomplished by creating Eager Zero Thick VMDKs, which automatically guarantee space reservations within NDFS.
I'm trying to reconcile the maxCapacity and totalImplicitReservedCapacity fields with the data shown in the Prism GUI. So far I've been unable to connect the dots. I'm assuming the amounts returned by the cmdlet are in bytes. If so, I've got 28 TB for maxCapacity per the cmdlet, but the Prism GUI for that container reports 24.48 TB as the max capacity.

Example output from the cmdlet:

PS C:\users\me\Desktop\tools> Get-NTNXContainer -Id 00052bfc-1527-12fd-46e3-246e96026620::13840122
id                            : 00052bfc-1527-12fd-46e3-246e96026620::13840122
containerUuid                 : f72335e2-bf87-4c9c-a7db-64cf3ce5e88a
name                          : D-RTP-A1-NTX-XC630-1-P-01
clusterUuid                   : 00052bfc-1527-12fd-46e3-246e96026620
storagePoolId                 : 00052bfc-1527-12fd-46e3-246e96026620::13
storagePoolUuid               : f0efd6cb-8839-43a0-b30f-c2aad97ce5de
markedForRemoval              : False
maxCapacity                   : 30999844125064
totalExplicitReservedCapacity : 0
totalImplicitReservedCapacity : 14227079168000
advertisedCapacity            :
replicationFactor             : 2
oplogReplicationFactor        : 2
nfsWhitelist                  : {}
nfsWhitelistI
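If the cmdlet values are indeed bytes, a quick conversion (a minimal sketch, using the values from the output above) shows how the two unit conventions compare. Neither binary TiB nor decimal TB lands exactly on Prism's 24.48 TB, so Prism may be reporting a derived figure, but the binary conversion does match the ~28 TB figure mentioned above:

```python
# Convert the raw byte values returned by Get-NTNXContainer into
# binary (TiB) and decimal (TB) units, to check which convention
# Prism might be using. Values are copied from the cmdlet output above.
max_capacity = 30999844125064        # maxCapacity, in bytes
implicit_reserved = 14227079168000   # totalImplicitReservedCapacity, in bytes

TIB = 1024 ** 4  # binary terabyte (tebibyte)
TB = 1000 ** 4   # decimal terabyte

print(f"maxCapacity: {max_capacity / TIB:.2f} TiB / {max_capacity / TB:.2f} TB")
# → maxCapacity: 28.19 TiB / 31.00 TB
print(f"implicit reserved: {implicit_reserved / TIB:.2f} TiB")
# → implicit reserved: 12.94 TiB
```

So 30999844125064 bytes is about 28.19 TiB, consistent with reading the field as bytes and displaying it in binary units.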
I'm fairly certain this is possible, but I'm having a rough time finding it documented. Let's say I have a remote container set up in a Nutanix cluster in LA. Can I create protection domains in Nutanix clusters in Miami and New York and point them both at the remote container in LA - a many-to-one approach to protection domains? Is there any limit on how many you can point at one container? Has anyone found documentation covering this, as well as where the snapshots are stored (I don't see them in the ESXi datastores) and how much space remote snapshots consume? Thanks!
Is anyone aware of a problem (unable to authenticate) when using domain local AD groups in PC 4.6? This worked before we upgraded to 4.6, but now authentication with a domain local AD group seems to fail.