This is a known issue in the iDRAC 4.40.x branch; it was fixed in 4.40.40. Nutanix Support recommends downgrading to v4.20, but you can also upgrade to 4.40.40 or 5.x. There's another, simpler workaround: in iDRAC → Configuration → Virtual Console → Virtual Console plugin type, change from eHTML5 to HTML5. Hope this helps.
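If you'd rather script that workaround than click through the UI, something like the sketch below might work against iDRAC9's Redfish interface. The DellAttributes URI and the VirtualConsole.1.PluginType attribute name are assumptions based on Dell's attribute registry, so please verify them against your firmware first; this is a sketch, not a tested procedure.

```python
# Sketch: flipping the Virtual Console plugin type via iDRAC9's Redfish API
# instead of the web UI. The DellAttributes path and the attribute name
# "VirtualConsole.1.PluginType" are assumptions; verify them against your
# iDRAC firmware's attribute registry before use.
import requests

IDRAC = "https://idrac.example.com"  # hypothetical iDRAC address
AUTH = ("root", "password")          # use proper credential handling in practice

url = f"{IDRAC}/redfish/v1/Managers/iDRAC.Embedded.1/Oem/Dell/DellAttributes/iDRAC.Embedded.1"
payload = {"Attributes": {"VirtualConsole.1.PluginType": "HTML5"}}

resp = requests.patch(url, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print("Plugin type updated:", resp.status_code)
```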
Hi Alona, Yes, I saw that screenshot at your URL, thank you. And thanks for the answer - so the DNS update has to be done manually after the Activation with different IPs. This was probably why our earlier implementation of a replicated Files instance was configured with the Client Network on a stretched VLAN… which still keeps our Networks team unhappy. :) It would be nice if Files had a final step with the DNS update in the Activation workflow. :) Thanks again, Alona!
Hi Alona, Yes, it does help, thank you for the explanations. In the case when the FSVMs are re-IP-ed, the DNS records will need to be updated so that the server name in a UNC path resolves to the new IP address. Given that Files is configured to use the AD DNS, will the DNS records be updated automatically by AOS/Files on FSVM startup, or will the cluster administrator need to edit the DNS records manually? Thank you.
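In case it helps anyone landing here: since the thread above suggests the DNS update is manual, a minimal sketch of scripting it with dnspython could look like the below. The zone, server, and IPs are placeholders, and an AD zone set to secure-only dynamic updates would additionally need GSS-TSIG authentication.

```python
# Sketch: manually re-pointing the Files server's A records at the new
# client-network IPs after activation, using dnspython. Zone, server and
# IPs are hypothetical placeholders; AD zones that allow secure-only
# dynamic updates will additionally require GSS-TSIG authentication.
import dns.update
import dns.query

ZONE = "corp.example.com"    # hypothetical AD DNS zone
DNS_SERVER = "10.0.0.10"     # hypothetical AD DNS server
FILES_NAME = "files01"       # Files server name used in UNC paths
NEW_IPS = ["10.20.30.11", "10.20.30.12", "10.20.30.13"]  # new FSVM client IPs

update = dns.update.Update(ZONE)
update.delete(FILES_NAME, "A")            # drop the stale A records
for ip in NEW_IPS:
    update.add(FILES_NAME, 300, "A", ip)  # add one A record per FSVM

response = dns.query.tcp(update, DNS_SERVER, timeout=10)
print(response.rcode())  # 0 (NOERROR) means the update was accepted
```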
Hi Alona, Thanks for replying to my question. It is clear how to initially _configure_ the protection, thank you. The question was more about what exactly happens _during_ the failover. From your link to “ACTIVATING A FILE SERVER”, I can see that the portgroup and IP addresses need to be entered for the Client Network, and I suppose the same for the Storage Network. Does it mean that AOS will re-IP the FSVMs’ network interfaces on start-up? And if so, do I gather it right that, in general, Files/FSVMs may be recovered to a cluster/datacentre with completely different networking, as long as it’s routed between the two environments for replication? I.e. VLAN stretching is not a requirement? Thank you.
Thank you very much Alona for your comment on this.
They are proposing to deploy the upcoming Files version 3.8, and they just adhere to that “Nutanix recommendation”. Properly sizing the cluster goes without saying. The comment is around their hesitance to guarantee any performance if we deploy Files in a shared environment, even when it is properly sized. So, in brief, Nutanix themselves don’t have such a recommendation documented anywhere, or otherwise known? Thanks, Alona.
Hi Alona, Thank you for replying to my question. I also recall from various NEXT conferences, other events, and documentation that Nutanix would be perfectly fine with mixed workloads. The reason I asked was that an external party working with us on the design of an additional Files instance in our existing environment advised that: “Please note that Nutanix strongly recommend deploying files on dedicated cluster or server virtualization cluster with minimal CPU / Memory contention, does NOT recommend placing files cluster on busy VDI or database cluster.” The external party was aware that although we’re placing Files onto a cluster with XenApp and VDI, there wouldn’t be memory contention, and the CPU ratio would be around 1:3 or below. I was trying to find any documentation that confirms their statement, but wasn’t able to, so I decided to reach the source of truth, i.e. you, the vendor. :) Can you please advise if Nutanix indeed has such a recommendation, and in what form? Thank you.
Hi Raaji, Thank you for your reply. My question is specifically about the metric of the container Usage from the AOS perspective, the one seen at PRISM → Storage → Table → Storage Container → <container> → Storage Container Usage graph, or Storage Container Details → Used.
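For reference, the same container Used figure should also be retrievable via the Prism v2 REST API; a minimal sketch is below. The endpoint and the storage.usage_bytes stat key are assumptions from memory, so please check them in the REST API explorer on your cluster.

```python
# Sketch: reading the "Storage Container Usage" figure from the Prism v2
# REST API rather than the UI graph. The endpoint and the
# "storage.usage_bytes" stat key are assumptions; verify them against the
# Prism REST API explorer on your cluster.
import requests

PRISM = "https://prism.example.com:9440"  # hypothetical Prism address
AUTH = ("admin", "password")

resp = requests.get(
    f"{PRISM}/PrismGateway/services/rest/v2.0/storage_containers/",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for container in resp.json().get("entities", []):
    used = int(container.get("usage_stats", {}).get("storage.usage_bytes", 0))
    print(f'{container["name"]}: {used / 1024**3:.1f} GiB used')
```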
Hey DarrylO, From what I know, the FSVMs can be freely vMotioned across the cluster (this is what DRS may be doing on your cluster anyway), and they can be treated as UVMs in that sense. Happy to be corrected by experts (as I am going to start patching later this week). As for patching, that has been on the wish-list for years. The thing is, although you can indeed upload a binary into PRISM and push it via One-Click Update, you cannot upload multiple files at once, only one at a time. So it may help in the case when you deploy an ESXi Update (cumulative), or upgrade ESXi across the hosts. But with normal patching, where you can have multiple files, One-Click is of no use. I mean, well, you can push the patches one by one, waiting for rolling reboots after each patch. But from experience, it’s quicker (and, let’s be honest, more convenient) to use VUM and patch hosts one by one, but handle CVM shutdowns and starts manually, and wait for Data Resilience to return to OK in PRISM after each reboot.
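Following up on my own point about waiting for Data Resilience: here is a rough sketch of how one might poll for it via the Prism REST API instead of watching the UI. The v2 endpoint and the response field names are assumptions from memory; verify them in your cluster's REST API explorer before relying on this.

```python
# Sketch: polling Prism between host reboots until data resiliency is OK
# again. The v2 endpoint and the response shape (field names below) are
# assumptions; verify them in the Prism REST API explorer first.
import time
import requests

PRISM = "https://prism.example.com:9440"  # hypothetical Prism address
AUTH = ("admin", "password")

def resilience_ok() -> bool:
    resp = requests.get(
        f"{PRISM}/PrismGateway/services/rest/v2.0/cluster/domain_fault_tolerance_status/",
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()
    # Assumed shape: list of fault domains, each with a map of components
    # carrying a "num_faults_tolerable" counter; OK means every component
    # can still tolerate at least one failure.
    return all(
        comp.get("num_faults_tolerable", 0) >= 1
        for domain in resp.json()
        for comp in domain.get("component_fault_tolerance_status", {}).values()
    )

while not resilience_ok():
    print("Data resilience not OK yet, waiting...")
    time.sleep(60)
print("Data resilience OK, safe to patch the next host.")
```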
@JeremyJ Thank you for the recommendation, Jeremy. Also thanks for the links, although I had of course checked them before asking. While I understand that the out-of-the-box configuration will surely work, there’s a hesitation with each of the options:
1. Leave the default. This leaves the Scratch on the SATADOM/M2/BOSS device, which is not intended for intensive writes (e.g. ESXi logs), and will therefore degrade at a higher pace.
2. Redirect to a shared datastore (i.e. DFS). In the event of a “cluster stop”, or the HCI layer otherwise being unavailable, the Scratch will become unavailable.
Also, for point 1, there’s a sub-option to redirect the Scratch from the default VFAT partition to the local VMFS volume where the CVM is. In VMware’s opinion, this ensures the logs are preserved over reboots and are more easily accessible. Are there pros and cons from your point of view? Thank you.
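For completeness, the sub-option in point 1 can also be scripted; a minimal sketch with pyVmomi is below, assuming a hypothetical host name and datastore path. The ScratchConfig.ConfiguredScratchLocation advanced setting still requires a host reboot to take effect.

```python
# Sketch: redirecting the ESXi scratch location to a directory on the local
# VMFS datastore (the sub-option above) with pyVmomi. Host, credentials and
# the datastore path are hypothetical placeholders; a reboot is still
# required for the new scratch location to take effect.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password",
                  sslContext=ctx)
try:
    # Connecting directly to the host: datacenter -> compute resource -> host
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    opt_mgr = host.configManager.advancedOption
    new_value = vim.option.OptionValue(
        key="ScratchConfig.ConfiguredScratchLocation",
        value="/vmfs/volumes/local-vmfs/.locker-esxi01",  # hypothetical path
    )
    opt_mgr.UpdateOptions(changedValue=[new_value])
    print("Scratch location updated; reboot the host to apply.")
finally:
    Disconnect(si)
```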