The metadata file is also available there; look in the 4.0.1 folder on release.nutanix.com.
From Partner Central: add the disks to the pool using the "add-all-free-disks" ncli option:

ncli> sp edit name=storage_pool add-all-free-disks=true
    ID                       : 724
    Name                     : storage_pool
    Disk Count               : 24
    Disk Ids                 : 34, 35, 32, 33, 36, 13, 14, 15, 15674, 17, 15675, 16, 18, 15678, 15679, 15676, 23, 15677, 25, 24, 27, 26, 28, 31
    ILM Threshold            : 50%
    Max Capacity             : 17.89 TB (19,672,034,533,360 bytes)
    Used Capacity            : 82.74 GB (88,845,451,264 bytes)
    Free Unreserved Capacity : 11.87 TB (13,052,529,261,552 bytes)

Verify that the disk IDs shown in the output are all present and attached to the correct pool ID (724, from the previous command's output):

nutanix@NTNX-2-CVM:172.X.X.207:~$ ncli disk ls | grep -B 4 -A 6 172.X.X.211 | grep ID
    Disk ID            : 15674
    Storage Pool ID    : 724
    Disk ID            : 15675
    Storage Pool ID    : 724
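As a quick sanity check, the `Disk ID` / `Storage Pool ID` pairs from that `ncli disk ls` output can be scanned with awk. This is only a sketch: `ncli` itself runs on a CVM, so the snippet below works on a saved copy of the grepped output (the sample text is taken from the post).

```shell
# Saved output of: ncli disk ls | grep -B 4 -A 6 172.X.X.211 | grep ID
sample='Disk ID : 15674
Storage Pool ID : 724
Disk ID : 15675
Storage Pool ID : 724'

# Flag any disk whose pool ID is not the expected 724.
result=$(printf '%s\n' "$sample" | awk -F' : ' '
  /^Disk ID/         { disk = $2 }
  /^Storage Pool ID/ {
    if ($2 == 724) printf "disk %s: pool OK\n", disk
    else           printf "disk %s: WRONG pool %s\n", disk, $2
  }')
echo "$result"
```

Running it prints one line per disk, e.g. `disk 15674: pool OK`; any disk that landed in the wrong pool is flagged instead.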
There were two main problems:

1. Our test was incorrect. In Nutanix we can't give all the IOPS to a single VM; we should use multiple VMs, each with its own disk:
One VM: 2000 read / 500 write IOPS, both in the VM and on the host.
Two VMs: 2500 read / 750 write IOPS in each VM, and 5000 / 1500 on the host.
So if we want to test Nutanix storage performance, we should use many VMs.

2. During the tests the Stargate process crashed because of an out-of-memory issue, and all requests were redirected to another CVM. A Nutanix engineer helped us with gflags. There is no fix in the code yet; it is planned for 3.5.5 and 4.0. Now we no longer see drops to 0 read and write.

before: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/170i95783A98C1BDF37A.jpg[/img]
after: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/172iD766A09EA1B4F218.jpg[/img]

How should I test to understand what performance I can get per VM, not for the whole cluster? If we are talking about a service provider with vCloud Director, how can we guarantee IOPS per VM and writ
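The scaling observation above can be sanity-checked with simple arithmetic: two VMs at 2500 read / 750 write IOPS each should add up to the 5000 / 1500 seen on the host. A minimal sketch, using the numbers from the post:

```shell
vms=2
read_per_vm=2500    # IOPS measured inside each VM (from the post)
write_per_vm=750

# Per-VM figures aggregated to the host level.
host_read=$((vms * read_per_vm))
host_write=$((vms * write_per_vm))
echo "expected host totals: ${host_read} read / ${host_write} write IOPS"
```

This matches the observed host numbers, which is why per-VM testing with many VMs is the right way to measure total cluster performance.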
The NCC checks say that everything is OK. We see high performance for 5-10 minutes, then degraded performance. The diagnostics VMs get high performance, but they only run for about 5 minutes... [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/164iE48D442138F1EE6B.jpg[/img]

From the diagnostics VMs:

Waiting for the hot cache to flush ........................ done.
Running test 'Random read IOPS' ...
Begin fio_rand_read: Sat Mar 29 22:02:25 2014
[b]49495 IOPS[/b]
End fio_rand_read: Sat Mar 29 22:04:08 2014
Duration fio_rand_read : 103 secs
*******************************************************************************
Waiting for the hot cache to flush ... done.
Running test 'Random write IOPS' ...
Begin fio_rand_write: Sat Mar 29 22:04:11 2014
[b]26559 IOPS[/b]
End fio_rand_write: Sat Mar 29 22:05:54 2014
Duration fio_rand_write : 103 secs
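To compare a regular guest VM against the diagnostics VMs, a fio job similar to the 'Random read IOPS' test could look roughly like the job file below. This is only a sketch: the actual parameters of the diagnostics VMs' fio jobs are not shown in the output, so the block size, queue depth, file size and target path here are assumptions.

```
; sketch of a random-read job; all parameters are assumed
[fio_rand_read]
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=32
size=8g
runtime=100
time_based
; hypothetical test file on the NFS datastore
filename=/fiotest/testfile
```

Running the same job from several VMs in parallel (see the discussion above about per-VM vs. cluster performance) gives a more realistic picture than a single large job in one VM.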
From the partner site:

[b]Description[/b]
Sometimes a network design requires the CVMs and the ESXi hosts to be on separate networks. Current versions of the Nutanix cluster do not allow this, and the ha.py failover script will not function properly. A workaround is detailed below; however, it still requires assigning addresses within the CVMs' network to the ESXi hosts (in addition to the primary management addresses assigned to the hosts outside the CVM network).

[b]Solution[/b]
Workaround:
[list=1]
[*]Create a storage VMkernel portgroup on each ESXi host.
[*]Assign it an IP address within the CVMs' subnet. This allows the CVM to communicate with the host to collect VM/CPU/memory statistics and to automatically mount NFS datastores via the HyperInt API. Unselect vMotion, management traffic, fault tolerance logging and iSCSI on this portgroup.
[*]Put the ESXi IP address (the new vmk that you created) in the CVM's NFS whitelist:
[/list]
ncli clus
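On the command line, the steps above might look roughly like the sketch below. The vSwitch name, portgroup name, IP address and the exact `ncli` whitelist syntax are assumptions to verify against your environment; these commands run on the ESXi host and a CVM respectively, not on a workstation.

```
# On each ESXi host: create the storage portgroup and a vmk inside the CVM subnet
esxcfg-vswitch -A storage-vmk vSwitch0
esxcfg-vmknic -a -i 10.0.0.51 -n 255.255.255.0 storage-vmk

# On a CVM: add the new vmk address to the NFS whitelist
ncli cluster add-to-nfs-whitelist ip-subnet-masks=10.0.0.51/255.255.255.255
```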
In the Nutanix documentation we can see the following slide. Is it really useful to use inline compression for databases? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/120iA34CCFFEDEC455B3.jpg[/img]
The question should not be "Nutanix SRA vs vSphere Replication"; it should be "VMware SRM vs vSphere Replication". An SRA is not replication software in itself: a Storage Replication Adapter (SRA) is software that tells the array or hypervisor what to do and how to work with the storage, allowing VMware Site Recovery Manager (SRM) to integrate with third-party storage array technology.
1. vSphere Replication is included with the VMware license; SRM is licensed per VM.
2. vSphere Replication works within one vCenter; with SRM you need two vCenters, one per site.
3. vSphere Replication is asynchronous replication with a minimum RPO of 15 minutes; with SRM you could potentially use synchronous replication.
4. With SRM you can create a recovery plan and test it: power on the replicated VMs at the recovery site in an isolated environment to make sure your services are OK.
5. With SRM you can migrate your VMs to the recovery site with one button: DR = the site is dead; Planned Migration = correctly power off the VMs, replicate, and restart on the recovery site.