Replies posted by Jon
[quote]Waddles wrote: Jon, thanks for the update. The PDF says nothing about Community Edition and does not list specific versions of vulnerable products. Not all clusters will be running on latest LTS/STS. Also if Prism Central (all versions) is vulnerable, does that mean that Prism Element is also vulnerable?[/quote] v1.6 is now posted, which calls out that CE is not impacted, and we've now got specific links to supported versions to clarify that. We've also added a clarification for Prism Element, all based on your feedback. Thanks for the contribution. Cheers, Jon
[quote]Waddles wrote: Jon, thanks for the update. The PDF says nothing about Community Edition and does not list specific versions of vulnerable products. Not all clusters will be running on latest LTS/STS. Also if Prism Central (all versions) is vulnerable, does that mean that Prism Element is also vulnerable?[/quote] @Waddles → You bring up good points, thanks for reaching out. RE not listing specific versions: We do say "All Supported Versions", but you're right, we should be more specific. What we're referring to, specifically, is supported versions as defined by our EOL schedules, here:
PC: https://download.nutanix.com/misc/PC_EOL/PC_EOL.pdf
AOS: https://download.nutanix.com/misc/AOS_EOL/AOS_EOL.pdf
Files: https://download.nutanix.com/misc/FILES_EOL/FILES_EOL.pdf
General End of Life Policies: https://www.nutanix.com/support-services/product-support/support-policies-and-faqs?show=accordion-0
I've asked the team to add a reference to these links in the SA so it's clear for everyone. About CE → Another good point. It
Howdy - Jon from Engineering here - The PDF at that link will be updated at least once per day until we've got this driven completely to ground. Also, you should be getting an email blast from the support portal if you have a user account there. IMHO, the security alerts should be opt-out, meaning you should get them automagically unless you've specifically turned them off, here: https://portal.nutanix.com/page/subscriptions Example: here's a screenshot of the alert that I got when this first went out.
Piling on to this thread, I addressed this relatively recently when it came up on Reddit, here: https://www.reddit.com/r/nutanix/comments/nvviq1/ahv_vcpu_best_practice_2021_when_to_configure/h21uj35?utm_source=share&utm_medium=web2x&context=3
Copying in my list of some key "generalisms" from me - feel free to take these as company-level canon, I'll put this in stone right now:
- Give your apps what they need, nothing more, nothing less
- If that changes over time, bump it up as needed based on the observed utilization and your performance requirements
- If you need to make a VM big, don't be wary of making it big (see rule 1)
- If you can fit things more sanely from a "t-shirt size" perspective that aligns better with a single pNUMA node or smaller, do it
- Do not just make your t-shirt size aligned to a NUMA node for the sake of alignment, see rule 1 :)
- Do not arbitrarily pick VM sizes (like 12 vCores because my pSocket is 12) just because the hardware you have today looks a certain way
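If you want to see where your pNUMA boundaries actually sit before picking t-shirt sizes, something along these lines on the AHV host will show you (a rough sketch; output format varies by distro, and numactl may not be installed everywhere):
[code]
# Total logical CPUs and NUMA layout on the AHV host
lscpu | grep -Ei 'numa|^cpu\(s\)'

# Per-NUMA-node detail (cores and memory), if numactl is present
numactl --hardware
[/code]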
1 - I doubt that anyone actually tried it internally here, so I don't know one way or the other if reverse proxying would work well or not offhand.
2 - No, Prism (Element and Central) is published on 9440 as a static thing. I think someone sufficiently crafty could hack it in, but it may blow up in fantastic ways, since changing the Prism port is not something we support/QA.
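If you just want to sanity-check that Prism is answering on that static port, a quick hedged example (IP is hypothetical; -k because Prism may be using a self-signed cert):
[code]
# Expect an HTTP status code back from the Prism login page on 9440
curl -k -s -o /dev/null -w '%{http_code}\n' https://10.0.0.50:9440/
[/code]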
Jon from Engineering here - piling on this thread at the request of our friends in support, I'd like to drop in some thoughts.
1 - I agree with Sneha - defrag is (in general) only useful on physical systems with rotational media that they directly control, not in a virtualized SAN/HCI environment. I'd also wager high on saying that most in the virtualization and SAN community agree with this; here's a handful of links (with VMware's being the most definitive on the topic):
VMware: https://blogs.vmware.com/vsphere/2011/09/should-i-defrag-my-guest-os.html
Pure: https://blog.purestorage.com/purely-technical/best-practice-disable-disk-fragmentation-scheduled-task/
NetApp: https://community.netapp.com/t5/VMware-Solutions-Discussions/Defrag-or-No-Windows-guest-OS-defrag-w-in-FC-LUN/td-p/53190
To be clear, we align with just about everyone else in the industry on this - for traditional Windows defragmentation, we (in general) do not recommend it, simply because it will largely be a waste of time
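If you want to make sure the built-in Windows scheduled defrag isn't quietly running inside your guests, something like this from an elevated prompt should do it (the task path is from memory and may vary by Windows version, so treat this as a sketch, not gospel):
[code]
rem Check whether the built-in defrag task is enabled
schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag"

rem Disable it
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable
[/code]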
We're using some terms interchangeably, so let me clarify what I'm talking about with a practical example using a hypothetical Linux VM running on AHV. Let's say that AHV host had 40 total cores, which would show up as "40" in the "CPU(s)" line of [b]lscpu[/b] on AHV (or hostssh lscpu within a CVM, or total cores in Prism UI -> Hardware). Within a [b]single virtual machine[/b], the "CPU(s)" count that [b]lscpu[/b] reports in Linux should [b]never[/b] exceed the "CPU(s)" count on a [b]single AHV host[/b]. Let's say you assigned 40 "CPUs" to this hypothetical single Linux VM. If that VM drove its CPUs to 100%, you'd have no more CPU left to run ... anything, as each physical core would be at 100%. This is a universal rule of thumb for any virtualization platform. There are situations where it makes sense to oversubscribe from a single VM, but if you're looking for an 80/20 recommendation, this is it. [b][i]To be clear, for the 90/10 rule, it does not make a difference 2x
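To put that in concrete terms, here's roughly how you'd check that ceiling before sizing a big VM (the hostssh form runs from any CVM and fans out to every AHV host; sketch only):
[code]
# On an AHV host: total logical CPUs (the number a single VM should not exceed)
lscpu | grep '^CPU(s):'

# Or from a CVM, across all hosts in the cluster at once
hostssh "lscpu | grep '^CPU(s):'"
[/code]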
The only time you really, really need to worry about sockets and cores per socket is when you have very big VMs, like SAP HANA, Exchange, SQL, etc. Otherwise, just doing THE RIGHT amount of vCPUs for your workload is almost always the right idea. That, and never making more vCPUs (regardless of how you do it) in a single VM than there are physical cores in any given system.
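On AHV this all boils down to vCPUs and cores per vCPU; a hedged aCLI sketch (VM name is hypothetical, and double-check the parameter names against acli's built-in help before relying on them):
[code]
# 8 vCPUs total, presented as 8 sockets x 1 core (the simple default-style layout)
acli vm.update myvm num_vcpus=8 num_cores_per_vcpu=1

# Same total, presented as 2 sockets x 4 cores, for big NUMA-sensitive apps
acli vm.update myvm num_vcpus=2 num_cores_per_vcpu=4
[/code]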
Stumbled upon this today, sorry that you didn't get a response from anyone [user=73408]UPX[/user] 1) If you're specifically wanting VGLB's performance capabilities (which are top notch - I was part of the team that worked on validating that feature's top-end performance), then yes, 5.6++ is a good fit. If you're worried about supportability, note that all STS releases should get at least two patches (example: 5.6, 5.6.1, 5.6.2) before that branch is moved out to another STS release, like 5.8, 5.8.1, etc. These come roughly every ~4-5 weeks. If you're worried about catching bugs - which, hey, we're a software company, and every software company has bugs, especially in STS/"Current Release" type train models a la us, Microsoft, Citrix, and now VMW - you could settle on maintenance releases within an STS, like 5.6.1, 5.6.2, then wait to go to other STS releases like 5.8.1 or 5.8.2 (which at time of writing is roughly right around the corner). That should keep you pretty safe.
[quote]testworksau wrote: Hi Jon, Do you know whether or not: a) The API exposes cpu_passthrough b) The cpu_passthrough setting will be configurable on the VM configuration page via the Prism UI anytime soon c) The cpu_passthrough setting (if enabled on a VM) will also be applied to a clone of the given VM d) Support for nested virtualization for Hyper-V is any closer towards coming out of the "wild wild west" We are using the APIs extensively in our organization but can't find reference to the cpu_passthrough setting in the API.[/quote] Hey, thanks for reaching out. I'm curious, what's your use case, specifically? Answers as of today: A) No. B) No. C) Good question, I don't recall offhand. Should be an easy test, but I'm on a plane right now and don't have good connectivity back to the lab. D) I can't say without saying something forward-looking in a public forum. We're working on making overall nested support better. Even then, there is still one key patch missing upstream, more below.
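For reference, today that knob lives in aCLI rather than the API or the Prism UI; a hedged sketch (VM name hypothetical, and the VM should be powered off before flipping it):
[code]
# Enable host CPU passthrough on a single VM (power it off first)
acli vm.update myvm cpu_passthrough=true
[/code]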
Hey Kevin, I've moved your post from the CE forums to our production product forums. In general, for Hadoop on Nutanix, I'd recommend checking out these assets, which you can cherry-pick data from: [url=https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2078-Cloudera-with-Nutanix:RA-2078-Cloudera-with-Nutanix]https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2078-Cloudera-with-Nutanix:RA-2078-Cloudera-with-Nutanix[/url] [url=https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2030_Hadoop_with_AHV:RA-2030_Hadoop_with_AHV]https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2030_Hadoop_with_AHV:RA-2030_Hadoop_with_AHV[/url] We don't specifically have a Spark on Nutanix guide out yet; however, those two are rich with content for the type of solution that you might want to roll out. That said, you are correct that HDFS (in general) is designed for non-redundant storage (like bare metal), so it has a lot of the same constructs
[url=http://next.nutanix.com/member/profile?mid=67207]sushilkm[/url] - thanks for reaching out, hope the holidays treated you well. The guidance you're looking for is here: [url=https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-Acr-v55:rel-memory-requirements-r.html]https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-Acr-v55:rel-memory-requirements-r.html[/url] [url=https://portal.nutanix.com/#/page/docs/details?targetId=AHV-Admin-Guide-v55:app-cvm-memory-config-u.html]https://portal.nutanix.com/#/page/docs/details?targetId=AHV-Admin-Guide-v55:app-cvm-memory-config-u.html[/url] Second link is under the AHV docs, but the guidance is the same for ESXi. TLDR - 32GB is a pretty solid recommendation. You can trust [url=http://next.nutanix.com/member/profile?mid=25528]mikegelhar[/url] - he runs one of the largest Nutanix deployments in the world and has seen pretty much everything there is to see :)
That's a good question [url=http://next.nutanix.com/member/profile?mid=71380]rohanfargade[/url]. A lot of the time spent getting Hyper-V 2016 working was what I'd call "structural" work, as we had to both recode an extremely large portion of our Hyper-V support and extend the code to work in both directions (to support rolling upgrades and the like). Quite a large task, and certainly larger than we thought it would be. Now that the structural work is done, supporting those faster release cadences (knock on wood) should be a heck of a lot easier. It's all for the best though - especially with support for DDA, it greatly simplifies the way we do things, so the overall complexity under the hood has been reduced. Anyhow, looking to the future, we're already looking at those incremental "fast train" releases. Working on the pre-release 1803 code right now.
When we say network function VM, in your case we'd be referring to Viavi. It would have to be running on the same host as the system(s) you want to capture traffic from. To be clear, this isn't some special VM we're providing. The chaining feature in AHV allows you to put in either "tap mode" devices, where you get a local mirror, or in-line mode devices, which would be like an IDS/IPS/firewall type setup.
(Technically yes), but no, it would not be supported, and we really wouldn't recommend it. Doing an unsupported change like that would very likely break every time you do any sort of operation on a given VM, like power on/power off, migration, high availability restarts, cloning, etc. This is because it would be a change that our control plane didn't program in, so it would just override it as it went about its business. That's the best case. Worst case, we haven't tested it, so we don't know what unintended side effects there might be. That said - could you expand on what you're hoping to accomplish here? I know what tech you're talking about, but I'm wondering what your specific use case is, so I can take it back to the team here.
Check out the general OVS product-level FAQ here: [url=http://docs.openvswitch.org/en/latest/faq/configuration/]http://docs.openvswitch.org/en/latest/faq/configuration/[/url] TLDR - no, OVS doesn't support ERSPAN, but it does have some other tunneling technologies. Either way, we don't have that particular tunneling technology plumbed into our side, so we can't set up that tunnel automatically, etc.
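For illustration only - this is plain OVS from their FAQ, not something we plumb in or support on AHV - one of those native tunneling options (GRE) looks like this (bridge name and remote IP hypothetical):
[code]
# Create a GRE tunnel port on an OVS bridge
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.10
[/code]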
No worries, everyone's gotta start somewhere. In general, it's not a problem, due to the reasons I mentioned: you've got copious amounts of bandwidth, and live migration events are relatively rare in Nutanix. Stacked together with data locality, where reads are mostly kept off the network, those network adapters will be sitting at lower utilization than you'd expect. We're huge fans of the KISS principle here at Nutanix, as most things "just work", which is quite nice. That said, it's good to know what's what and the reasoning behind what we do, so I'd recommend checking out the AHV networking guide here: [url=https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2071-AHV-Networking:BP-2071-AHV-Networking]https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2071-AHV-Networking:BP-2071-AHV-Networking[/url] That should give you some good background. After you read that, you'll find that you'll likely want to use either balance-slb or balance-tcp for
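For flavor, here's roughly what those bond modes look like at the OVS level (the bond name br0-up is an assumption - check yours first - and you should follow the guide's actual procedure rather than running these cold):
[code]
# Inspect the current uplink bond and its mode
ovs-appctl bond/show br0-up

# Switch the uplink bond to balance-slb (no switch-side config needed)
ovs-vsctl set port br0-up bond_mode=balance-slb

# balance-tcp requires LACP configured on the physical switch side
ovs-vsctl set port br0-up lacp=active bond_mode=balance-tcp
[/code]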
No, we have not enabled traffic shaping in OVS. I certainly know there are valid use cases, and we've been working on a few of them internally already. For most use cases, keep in mind that in Nutanix each node has full network access, such that (for example) a 3-node cluster would have (at minimum) 60 Gbit of bandwidth going into it (assuming 2x 10 Gbit per node). That math, of course, goes up linearly with node count or with an increase in NIC speed (like 25/40/100G interfaces). For folks like service providers this makes more sense, so that they can shape the traffic of specific tenants or applications within a tenant, which is where we've been exploring this use internally. On a related note, we're releasing service chaining with OVS in the very next release as part of the microsegmentation feature, which is quite interesting.
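The back-of-napkin math, as a throwaway shell sketch (assumes 2x 10 Gbit NICs per node, per the example above):
[code]
# Aggregate cluster bandwidth = nodes * NICs-per-node * NIC speed
for nodes in 3 8 16; do
  echo "$nodes nodes: $((nodes * 2 * 10)) Gbit"
done
[/code]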