Connecting Cloud Innovators: Building Community at .NEXT 2024
Hi Tom -- I'd recommend configuring a Nutanix SMB container as an SCVMM unmanaged library share. This will also give you the benefit of using ODX when deploying a template from WAP. Here's a good post on how to configure the library share from a Nutanix container: http://davyneirynck.wordpress.com/2014/04/19/leveraging-odx-to-deploy-a-vm-from-a-template-on-nutanix/ Hope that helps, cheers!
I put together a quick Excel workbook where you can insert values and it will provide the data for expected replication traffic. You can download it here: http://1drv.ms/1odk9A2
Hi -- One way you could break it down is the following:

First replication size (seed replication) = (Total data size - data already existing at target site) * compression %

OR

First replication size (seed replication) = Total data size * (1 - expected common data % at target) * compression %

Where:
[list]
[*]Total data size = size of the VMs being replicated
[*]Data existing at target = data that already exists at the target site, can be deduped, and doesn't need to be sent over the wire
[*]Expected common data at target = % of data expected to already exist at the target and be deduped
[*]Compression % = the remaining size after compression (e.g. 5% compression would mean the data is 95% of its original size)
[/list]
So for example, if you had 100GB of data, 50GB of which already existed at the target, you'd replicate 50GB at a 0% compression rate. Or, if you had 100GB of data and expected 50% commonality between the sites, you'd also replicate 50GB at a 0% compression rate. Any other subsequent re
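If it helps, the sizing logic above can be sketched as a small Python function. This is just a minimal illustration of the arithmetic (the function name and parameters are my own, not part of any Nutanix tooling), assuming the commonality-based variant of the formula:

```python
def seed_replication_gb(total_gb, common_fraction, compression_savings):
    """Estimate the first (seed) replication size in GB.

    total_gb            -- total size of the VMs being replicated
    common_fraction     -- fraction of data expected to already exist at the
                           target site (deduped, not sent over the wire)
    compression_savings -- compression rate; e.g. 0.05 means the data shrinks
                           to 95% of its original size
    """
    remaining_after_compression = 1.0 - compression_savings
    return total_gb * (1.0 - common_fraction) * remaining_after_compression

# 100GB of data, 50% expected commonality, 0% compression -> 50GB replicated
print(seed_replication_gb(100, 0.50, 0.0))  # 50.0
```

Plugging in the other example (50GB already at the target out of 100GB) gives the same 50GB seed, since the two variants are equivalent when commonality is expressed as a fraction of the total.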
Good find! The Nutanix software has the ability and logic for tiers outside of the locally attached devices (SSD/HDD), which can be integrated with its ILM. However, these haven't really been exposed or undergone the extensive QA and validation needed to make them a supported and released feature (except for Cloud Connect). Our Cloud Connect for data archival essentially uses the kCloud tier type, which allows data to be archived on AWS or Azure. In the future there is the possibility of adding existing NAS appliances as a kNAS type. HINT: I have this running in my environment :)
I can't give too much detail since nothing has been released yet. However, I can say they will still operate as two distinct clusters, just "stretched" -> Hint: think remote RF :) Shoot me an email if you want to discuss further.
I'd configure them as 2 separate clusters (1 per site) today, since stretch cluster support isn't yet available, and run DR between them. Then, once stretch cluster support is released (can't quote a date :P), all you'll need to do is "stretch" the Nutanix clusters and merge your vSphere clusters (as well as configure DRS affinity groups, etc.). There are a few reasons for doing this:
[list]
[*]If the inter-site link were to fail, you'd lose quorum for 1 site
[*]Once stretch clustering is released, it's a simple upgrade and a single command to stretch. If it was configured as a single cluster today, you'd essentially have to "remove" one full site from the cluster and then stretch
[*]You negate any write latency issues, as Cassandra and writes would currently need to cross the site link
[/list]
Feel free to reach out with any other questions! Steven Poitras
I always use LoginVSI with the new benchmarking mode in 4.0, which locks all of the variables (based upon the medium workload) for apples-to-apples comparisons. In case you haven't seen them, all of our RAs include LoginVSI test data:
[list]
[*]http://go.nutanix.com/TechGuide-Nutanix-CitrixXenDesktoponHyper-VReferenceArchitecture_Asset.html
[*]http://go.nutanix.com/TechGuideNutanixXenDesktopandvSphereonNutanixReferenceArchitecture_LP.html
[*]http://go.nutanix.com/TechGuideNutanixHorizononNutanixReferenceArchitecture_LP.html
[/list]
I would also note that LoginVSI is good at simulating workload; however, I wouldn't use it for sizing, as a PoC is always best :)
+1 :P Actually, our largest deployment is running on KVM and leveraging some of our custom scripts for VM CRUD. I'm not 100% sure what they're using for management, as I know a lot of this is built into their application. We do have support for OpenStack and some of the storage interfaces; however, I'm not sure what all is being used there. Also, the recent shift by Red Hat on supporting KVM is another key thing that now comes into play on the support side. But as you know, with Nutanix, we're a single point of support and take care of the solution. [i]And, yes, as you mentioned, there is something revolutionary in the works :)[/i]
The main reason has to do with the Linux NFS driver not having the ability to keep enough load on the system to drive the best performance (as compared to ESXi), which is why iSCSI is currently used. The key piece is that all of the iSCSI vDisk creation and attachment is handled automatically by our software and isn't something the admin has to do manually. When a new VM is created on KVM, we'll take care of all of this on the backend. There's some even cooler stuff coming here which will make KVM administration even easier than vSphere/Hyper-V :)