Replies posted by ndolson
What version of NSX are you running where you experienced that? NSX 6.2 doesn't even give you the option to choose a DFW default rule of "deny," though that was an option during setup in previous versions. Obviously, if you deploy NSX with the default rule set to deny, it's going to put every VM on an island, including your CVMs. I'm just interested to hear whether you had NFS issues with a default rule of allow, or possibly another firewall rule impacting them. I personally chose not to put the CVMs on an NSX logical network (and they're exempted from DFW as well) because they're the underlying layer everything rides upon, and I didn't want to run into a scenario where an issue with NSX also caused me a storage problem. I've gone so far as to dedicate separate physical NICs to CVM/storage traffic, with the other physical NICs on a different dvSwitch for NSX/regular VM networking. Might be overkill, but I feel more comfortable that way given some of the bugs I've found in ne
Hi, The guides you found include most of the "recommended practices" that I'm personally aware of. A lot of it is what the particular hypervisor (in your case ESXi) recommends, but there are some things to take into consideration regarding the CVMs and a few other optimizations that are unique to Nutanix. This is not an all-inclusive prescriptive list, but some of the things I do / take into consideration when setting up a Nutanix vSphere cluster are:

vSphere HA settings:
- Enable host monitoring
- Enable admission control
- Disable VM restart priority for all CVMs
- Set the host isolation response of all CVMs to "leave powered on"
- Disable VM monitoring for all CVMs
- Enable datastore heartbeating and choose the Nutanix NFS datastore
- Add "das.ignoreInsufficientHbDatastore=true" in the advanced settings (relates to the previous datastore heartbeating setting, since there is probably only one datastore)

vSphere DRS settings:
- Disable automation on all CVMs
- Leave power management disabled
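The per-CVM overrides above can be captured in one place for scripting. This is a minimal sketch, expressed as plain data; the key names are illustrative shorthand, not actual vSphere API identifiers, and the CVM name check assumes the default Nutanix naming convention:

```python
# Per-CVM override settings from the list above, expressed as plain data
# (key names here are illustrative, not actual vSphere API identifiers).
CVM_OVERRIDES = {
    "ha_restart_priority": "disabled",       # HA should never restart a CVM elsewhere
    "ha_isolation_response": "leave_powered_on",
    "ha_vm_monitoring": "disabled",
    "drs_automation_level": "disabled",      # keep each CVM pinned to its own host
}

# Cluster-wide advanced option; suppresses the single-heartbeat-datastore
# warning, since a Nutanix cluster typically presents one NFS container.
CLUSTER_ADVANCED_OPTIONS = {
    "das.ignoreInsufficientHbDatastore": "true",
}

def overrides_for(vm_name: str) -> dict:
    """Return the override set a config script should apply to this VM.

    Assumes the default CVM naming convention (NTNX-<serial>-<pos>-CVM).
    """
    is_cvm = vm_name.startswith("NTNX-") and vm_name.endswith("-CVM")
    return CVM_OVERRIDES if is_cvm else {}
```

A config script could loop over the cluster's VMs and apply `overrides_for(vm.name)` to each, so regular guest VMs keep the cluster defaults while every CVM gets the full set.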
To add on to what Tim said: we use Nutanix Protection Domains as our primary disaster recovery/business continuity solution, but that's still not what I'd consider a "backup" in the sense that I might want to recover a file from days, weeks, or months ago... which, from what I can gather, really isn't the purpose of Nutanix PDs anyway. We use a third-party backup software solution combined with a deduplicating backup appliance for what I'd call "backups," and we could technically rely on that for DR as well if something went awry with what Nutanix had replicated.

If you're referring to Nutanix Protection Domains, the "backup location" will likely be a remote site paired with your primary site in Prism, but you can do local-only snapshots too. Configure your Protection Domains by RPO level (e.g. hourly, 6 hours, 24 hours) by creating multiple Protection Domains with varying snapshot schedules, then drop the VMs into them as appropriate. I believe you can only have 50 VM's in a
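The RPO-tiered layout above can be sketched as a small placement helper. The Protection Domain names and tiers here are hypothetical examples, not anything Nutanix prescribes:

```python
from datetime import timedelta

# Hypothetical Protection Domains, one per RPO tier; the schedules mirror
# the hourly / 6-hour / 24-hour examples above.
PD_SCHEDULES = {
    "PD-Hourly": timedelta(hours=1),
    "PD-6Hour": timedelta(hours=6),
    "PD-Daily": timedelta(hours=24),
}

def pd_for_rpo(required_rpo: timedelta) -> str:
    """Pick the Protection Domain whose snapshot interval satisfies the RPO.

    Prefers the loosest schedule that still meets the requirement, to keep
    snapshot overhead down.
    """
    candidates = [(pd, iv) for pd, iv in PD_SCHEDULES.items() if iv <= required_rpo]
    if not candidates:
        raise ValueError("no Protection Domain schedule is tight enough for this RPO")
    return max(candidates, key=lambda kv: kv[1])[0]
```

For example, a VM that can tolerate losing up to 8 hours of data lands in the 6-hour PD rather than the hourly one, which avoids taking more snapshots than the RPO actually requires.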
When I had to move one of my clusters, I first shut down all my VMs except the CVMs, connected to a CVM and issued the "cluster stop" command, shut down each CVM, shut down the hypervisor on each physical host, and removed power. I have not done it with AHV so I can't speak to that process specifically, but I imagine it follows the same general steps.
This is just my .02, but generally speaking, the idea behind hybrid is that your active dataset fits within the SSD tier with a little headroom to spare, which makes it the most cost-efficient option, with the caveat that occasionally you may hit some infrequently used data in the cold tier before (or without) it getting moved to SSD. With all flash, on the other hand, you either want/need to guarantee that all IO occurs on SSD, and/or you have lots of money to spend.

My guess is that if you start getting toward a 50/50 or 60/40 SSD/HDD ratio, you're probably getting close to the price of an all-flash config anyway, so you'd just be getting a little extra overall capacity thanks to the HDD tier. Will you run out of CPU and memory before disk performance becomes an issue? I always do. If so, then perhaps more nodes will be required anyway to support the workload from a CPU and memory perspective, and you might not truly need the extra capacity in the HDD tier, or maybe the amo
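The break-even reasoning above is easy to check with your own numbers. A minimal sketch, using placeholder per-TB prices (not real quotes; substitute what your vendor actually charges):

```python
# Illustrative per-TB prices (placeholders, not real quotes) to show how the
# blended cost of a hybrid node converges on all-flash as the SSD share grows.
SSD_PER_TB = 400.0
HDD_PER_TB = 80.0

def blended_cost_per_tb(ssd_tb: float, hdd_tb: float) -> float:
    """Blended storage cost per TB for a given SSD/HDD mix."""
    total_cost = ssd_tb * SSD_PER_TB + hdd_tb * HDD_PER_TB
    return total_cost / (ssd_tb + hdd_tb)

# Compare a 20/80 hybrid, a 60/40 hybrid, and all-flash for 100 TB raw:
for ssd, hdd in [(20.0, 80.0), (60.0, 40.0), (100.0, 0.0)]:
    print(f"{ssd:.0f}/{hdd:.0f} SSD/HDD mix: ${blended_cost_per_tb(ssd, hdd):.0f}/TB")
```

Plug in your actual quotes to see where the ratio crosses the point at which all-flash stops being meaningfully more expensive per TB.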
I've done it a couple of ways before, including the NFS whitelist option that Donnie mentioned. In addition to using a whitelist, I've also done kind of the "reverse," where I present my source storage (iSCSI) to my Nutanix nodes by configuring a software iSCSI adapter and VMkernel adapter, so that each node sees both the source iSCSI VMFS volumes and the Nutanix storage presented as NFS. Then I power off and unregister the VM in the source vCenter, browse to that VM's .vmx file in the new vCenter to add it to inventory, and do a Storage vMotion from the iSCSI VMFS volume to the Nutanix storage. In that case, I had both a maintenance window to tolerate an outage on the VM (power off/deregister/register, etc.; in fact I left it offline during the whole Storage vMotion) and the proper connectivity to the source storage to allow for this sort of migration. If your tolerance for downtime is low, then you might have to do something like you mentioned, where you add your old hosts to the
To expand on what Jon said, and because we just recently "trued up" our licensing... Microsoft [i]always[/i] gets paid if you run Windows guest VMs, regardless of what underlying hypervisor you have. You can either buy a Datacenter license and run Hyper-V on each host with unlimited Windows guest VMs, buy a Datacenter license and run another hypervisor on each host with unlimited Windows guest VMs, or run another hypervisor on each host and buy Windows Standard licenses to cover each guest VM. It's been a while since I ran the numbers, but the consensus for the break-even point between Datacenter and Standard seems to be somewhere between 7 and 10 guest VMs on Server 2012 R2***, depending on the deal you get on your licenses. Most people probably have many more than 10 Windows guest VMs per host, so Datacenter licensing is pretty common.

***Microsoft is changing to core-based licensing for Server 2016, which will probably result in a net increase for
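The break-even math above is straightforward to run yourself. A hedged sketch with placeholder list prices (substitute the actual pricing from your reseller; the one hard assumption is that a Server 2012 R2 Standard license covers two OSEs per host):

```python
import math

# Placeholder list prices; substitute your actual quotes.
STANDARD_PRICE = 880.0
DATACENTER_PRICE = 6200.0
OSES_PER_STANDARD = 2  # a 2012 R2 Standard license covers two OSEs on one host

def standard_cost(vm_count: int) -> float:
    """Total cost of stacking Standard licenses for vm_count guests on one host."""
    return math.ceil(vm_count / OSES_PER_STANDARD) * STANDARD_PRICE

def cheaper_edition(vm_count: int) -> str:
    """Which edition is cheaper for this many Windows guests on one host."""
    return "Datacenter" if DATACENTER_PRICE < standard_cost(vm_count) else "Standard"
```

With these placeholder prices, a host with only a handful of Windows guests stays cheaper on stacked Standard licenses, while a densely packed host tips over to Datacenter; where exactly it tips depends entirely on the deal you get.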