With just the core features you have listed, 24GB per CVM is probably fine, but if you plan to use other services like AFS or self-service, then 32GB would be a good size to start at for a bit of future proofing. There really is no limit to the size of a Nutanix cluster. If you were using the Acropolis hypervisor, you could have hundreds or thousands of nodes in a single cluster. With vSphere as the hypervisor, you're limited to 64 nodes per cluster. In my environment we limit cluster size to 32 nodes because of self-imposed restrictions on rack capacity and how we implement failure domains.
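As a rough way to reason about the CVM sizing trade-off, here's a minimal sketch. The hypervisor overhead figure is an illustrative assumption, not a Nutanix-published number; check the sizing guidance for your AOS version and enabled services.

```python
def usable_node_memory_gb(node_ram_gb, cvm_gb=32, hypervisor_gb=8):
    """Rough estimate of memory left for guest VMs on one node.

    cvm_gb and hypervisor_gb are illustrative assumptions only;
    services like AFS push the CVM figure up.
    """
    return node_ram_gb - cvm_gb - hypervisor_gb

# e.g. a 256 GB node with a 32 GB CVM leaves roughly 216 GB for guests
print(usable_node_memory_gb(256))
```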
As you alluded to, snapshots themselves are not backups. But even if you're replicating the snapshots to a remote system, how would you catalog or search those snapshots for the data you need? File-level recovery would involve restoring a snapshot, mounting the data and pulling the file. Doable? Yes. Practical? Not really. This is where backup software comes in: it maintains a catalog of what data was backed up when, and then provides the means to reach in and get you the exact data you need. Also worth noting: depending on your snapshot retention policies, if the source file is deleted before anyone notices, that deletion is replicated and there might not be a file left to recover. The [url=https://www.veeam.com/blog/how-to-follow-the-3-2-1-backup-rule-with-veeam-backup-replication.html]3-2-1 method[/url] still applies to Nutanix platforms just like any other. Ideally you have three copies of your data, on two different media/devices, with one copy offsite from the source.
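The 3-2-1 rule is simple enough to sanity-check programmatically. Here's an illustrative sketch; the copy/media data model is my own invention, purely for demonstration.

```python
def satisfies_3_2_1(copies):
    """Check a list of backup copies against the 3-2-1 rule.

    Each copy is a dict like {"media": "disk", "offsite": False}.
    Returns True when there are at least three copies, on at least
    two distinct media types, with at least one copy offsite.
    """
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

plan = [
    {"media": "nutanix-snapshot", "offsite": False},  # primary cluster
    {"media": "disk", "offsite": False},              # backup repository
    {"media": "tape", "offsite": True},               # offsite copy
]
print(satisfies_3_2_1(plan))  # True
```

Replicated snapshots alone (one media type, possibly no offsite copy) would fail this check, which is the point of the rule.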
You could deploy Acropolis File Services (AFS) and use that CIFS share as a Veeam backup repository. Is that overkill for your situation, or did you want to back up directly through Veeam to a cluster storage container?
An additional option I didn't see posted yet is Rubrik. It's far more than a basic backup utility and is integrated with both AHV and vSphere. More info on Rubrik and AHV here: [url=https://www.rubrik.com/blog/data-management-nutanix-ahv/]https://www.rubrik.com/blog/data-management-nutanix-ahv/[/url]. When looking at backup/DR software, remember to consider the functionality already built into the Nutanix software. Features like snapshots, remote replication, self-service restore and one-click recovery are all in there: [url=https://www.nutanix.com/solutions/data-protection-disaster-recovery/]https://www.nutanix.com/solutions/data-protection-disaster-recovery/[/url]. Whatever tool you choose, remember to test the recovery plan :-)
As stated above, RHEL on AHV is not certified by Red Hat. RHEL runs perfectly on AHV, and Nutanix offers first-level support. In most cases, this support will be more than sufficient compared to paid RHEL support. I've heard there have been discussions between Red Hat and Nutanix on official qualification, but nothing has been arranged at this time. The best course of action is to tell your Red Hat representatives that you want them to certify AHV. The more pressure customers create by being vocal, the better the chance Red Hat will see the need to satisfy customer demand.
We have many database servers using multiple VMDKs from 1-4 TB each. No issues with that, and it keeps things simple. If you have reasons not to do this, then ABS would be a supported way to present disks to a VM directly.
Nice write-up on the new Nutanix management pack. I put a trial copy in the lab to check it out and was impressed by the default dashboards and how the related-object info was presented. Some dashboards were super busy, but having the KPI data there to reference, both near-term and historical, makes it a handy tool.
Thanks for reeling us back in...working in a supported configuration is also a key consideration :-)
My personal preference is to use a datastore to maintain visibility of storage allocation at the virtualization layer, for monitoring tools, image-based backups and replication. Earlier this year I did some comparisons between in-guest (single vNIC) and datastore-based configurations and found the datastore config was always faster. My testing wasn't extremely scientific, but the results were consistent.
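For anyone curious, my comparisons were along the lines of this deliberately unscientific sequential-write timing sketch (the file path and size are placeholders; run it several times per storage config and compare averages):

```python
import os
import time

def sequential_write_mibps(path, mib=64):
    """Time a sequential write of `mib` MiB and return throughput in MiB/s.

    Crude by design: one file, one pass, fsync at the end. Not a
    substitute for a real benchmark tool, but good enough to compare
    two configurations relative to each other.
    """
    buf = os.urandom(1 << 20)  # 1 MiB of incompressible data
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(mib):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits storage
    elapsed = time.time() - start
    os.remove(path)
    return mib / elapsed
```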
Are you presenting the Volume Group to the whole VMware cluster and creating a datastore out of it, or are you presenting the ABS Volume Group directly to the virtual SQL nodes? To replicate the IO separation you had with RDMs, you'll want the option that creates a datastore on the VMware cluster. This will let you allocate multiple PVSCSI controllers and assign virtual disks to the VMs the way you're used to for DBs, logs, TempDB, etc. Be sure to check VMware KB [url=https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869]2038869[/url] to see if your network configuration requires iSCSI port binding. If you have separate VLANs for iSCSI traffic from other networks, you'll need port binding. For redundancy at the host level, you'll want two vmkernel ports added to each host for iSCSI, and then bind each of those ports to a physical NIC. VMware KB [url=https://kb.vmware.com/selfservice/microsites/search.do?language=en_U
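As a rough self-check of the KB 2038869 guidance (simplifying heavily: port binding applies when all your iSCSI vmkernel ports and the target portal sit in the same subnet/broadcast domain), here's an illustrative sketch — the IPs and prefix length are placeholders, not a substitute for reading the KB:

```python
import ipaddress

def port_binding_applies(vmk_ips, target_ip, prefix=24):
    """Simplified reading of VMware KB 2038869: iSCSI port binding is
    intended for configurations where every initiator vmkernel port is
    in the same subnet as the target portal. Multi-subnet/routed iSCSI
    is the classic case where binding should NOT be used.
    """
    target_net = ipaddress.ip_network(f"{target_ip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(ip) in target_net for ip in vmk_ips)

# Two vmkernel ports on the same iSCSI VLAN as the target portal
print(port_binding_applies(["10.0.0.11", "10.0.0.12"], "10.0.0.50"))  # True
```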
Thanks for sharing this, this was on my to do list and you just saved me a nice chunk of time.
Thanks, this is exactly what I was looking for!
Today I'm in discussions for NOS 4.6 upgrades and planning migrations to vSphere 6