License-Free Virtualization for Your Enterprise
- 430 Topics
- 1,421 Replies
Understanding NX GPU Solution
Let us consider a scenario where you are running workloads that require extensive video rendering and need GPU acceleration. Nutanix provides a solution in which the hardware is configured with GPU cards that allow video rendering to be passed through to the User Virtual Machine.

Want to read about NVIDIA GRID vGPU on Nutanix? The following guide will help you understand the architecture and requirements: NVIDIA GRID vGPU Guide

So which hypervisors support this implementation?
- AHV
- VMware vSphere
- Microsoft Hyper-V

The following document covers the hypervisor implementation in depth for a vGPU solution: vGPU on Nutanix: Hypervisor Implementation.

Have a GPU in your NX environment but wondering about the configuration, driver requirements, and basic troubleshooting? Go through the following articles to understand some basic troubleshooting steps and commands.
Inter site replication traffic
Can anyone tell me whether inter-site traffic (not intra-site; I am talking about DR replication traffic between clusters) is moved to the backplane traffic port when network segmentation is enabled? The Prism Web Console Guide v5.5 does not say so (it only mentions intra-cluster traffic):

> The traffic entering and leaving a Nutanix cluster can be broadly classified into the following types:
> Backplane traffic: Backplane traffic is intra-cluster traffic that is necessary for the cluster to function, and comprises traffic between CVMs, traffic between CVMs and hosts, storage traffic, and so on. (For nodes that have RDMA-enabled NICs, the CVMs use a separate RDMA LAN for Stargate-to-Stargate communications.)
> Management traffic: Management traffic is administrative traffic, or traffic associated with Prism and SSH connections, remote logging, SNMP, and so on. The current implementation simplifies the definition of management traffic to be any traffic that is not on the backplane network.
3rd Party Hypervisors - New Version Support policy
Nutanix offers compatibility with all major hypervisors (Hyper-V, ESXi, Xen) so they can host virtual machines and leverage the benefits of a truly distributed scale-out storage fabric. Nutanix qualifies all releases of 3rd-party hypervisors in order to ensure smooth operations for customers using them.

From the Nutanix Support FAQs: For major/minor hypervisor releases (e.g., ESXi 6.5, Windows Server 2016 Build 14393), the goal is to qualify within 90 days of the software GA release date. Nutanix may qualify the latest major/minor hypervisor release in conjunction with the latest (or upcoming) AOS release first. Nutanix advises customers to wait until any major/minor release is qualified by Nutanix prior to deployment.

Please visit the following link to read more on timelines and the support policy for 3rd-party hypervisors: https://www.nutanix.com/support-services/product-support/faqs
Managing a Hyper-V cluster
Let’s say you have a new NX hyper-converged architecture and want to use Hyper-V as the hypervisor, but you are unsure about the initial configuration and what is required. Questions probably hovering in your mind right now:
- What are the requirements to create a Nutanix cluster with Hyper-V?
- How can I image the nodes with Hyper-V and install AOS?
- How can I manage my Hyper-V host?
- How can I create a failover cluster and update Hyper-V settings?
- How can I create and manage VMs, especially HA VMs?

Do we provide documentation that can help you answer all these queries? Yes, absolutely. Give the following documentation a read to easily get started with Hyper-V: KB-2235

Want to know about the configuration of Hyper-V on Nutanix in depth? Go through this documentation guide to understand how Hyper-V gets configured in a Nutanix environment: Hyper-V configuration
Hypervisor Upgrade (ESXi) using One Click (Host Maintenance Mode issue)
Apart from the listed prerequisites before starting a hypervisor upgrade (ESXi), I had to disable affinity rules under DRS so the ESXi host could enter maintenance mode and the update could continue. I got stuck because I found this part out the hard way...
1) The genesis.out log showed the target CVM holding the shutdown token and not letting it go.
2) Then from vCenter: manually shut down the CVM -> manually put the related ESXi host in maintenance mode -> exit maintenance mode -> start the CVM.
3) Go to Prism and the upgrade continues... and so does 2048...
- Koji
Virtual Machine Memory and CPU Hot-Plug Configurations
Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory allocation and the number of CPUs on your VMs while the VMs are powered on. You can change the number of vCPUs (sockets) while the VMs are powered on. However, you cannot change the number of cores per socket while the VMs are powered on.

You can change the memory and CPU configuration of your VMs by using the Acropolis CLI (aCLI), Prism Element (see Managing a VM (AHV) in the Prism Web Console Guide), or Prism Central (see Managing a VM (AHV and Self Service) in the Prism Central Guide).

Memory OS Limitations: On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the memory is not online, you cannot use the new memory. Perform the following procedure to bring the memory online.
1. Identify the memory block that is offline.
   - Display the status of all of the memory.
     $ cat /sys/devices/system/memory/memoryXXX/state
   - Display the state of a specific memory block.
     $ gr
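The onlining procedure above can be sketched as a short shell loop. This is a hedged sketch, not taken from the Nutanix docs: the sysfs root is parameterized via `SYSFS_MEM` (an assumption for illustration, so the logic can be exercised against a mock directory), and on a live guest you would run it as root so the writes to the `state` files succeed.

```shell
#!/bin/sh
# Bring any offline hot-plugged memory blocks online.
# SYSFS_MEM defaults to the real sysfs path; overriding it lets you
# test the loop against a mock directory without root privileges.
SYSFS_MEM="${SYSFS_MEM:-/sys/devices/system/memory}"

online_all_memory() {
    for state_file in "$SYSFS_MEM"/memory*/state; do
        [ -f "$state_file" ] || continue
        if [ "$(cat "$state_file")" = "offline" ]; then
            # Writing "online" to the state file asks the kernel to
            # online that memory block.
            echo online > "$state_file" 2>/dev/null || true
        fi
    done
}

online_all_memory
```

After running it, re-checking the `state` files should show every block reporting `online`; blocks that were already online are left untouched.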
Distributed object storage on Nutanix
Hi, we're starting to look at using Spark on our Nutanix cluster. Not in a huge way but to run some ETL processes in parallel. I'm under pressure to install Hadoop, or at least HDFS on the cluster but the entire concept of adding a distributed, resilient "filesystem" (actually I think it's more an object store) on top of the one already provided by Nutanix seems somewhat off. Is there a recommended way of doing this? I know that containers are exported to ESXi via NFS. Would that be usable? Would that be able to leverage stargate to access from anywhere? All I really need is a globally available volume shared between all my nodes.
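One low-friction option, sketched below, is to mount the Nutanix container's NFS export directly inside the Linux VMs, which gives every node the same shared path without layering HDFS on top; the I/O is served by Stargate like any other container access. The container name ("spark-data"), the cluster virtual IP (10.0.0.50), and the mount point are placeholder assumptions, and the client IPs must first be added to the container's filesystem whitelist in Prism.

```shell
# Hypothetical names: "spark-data" container, 10.0.0.50 cluster IP.
# Prerequisite: add the client VM IPs to the container's filesystem
# whitelist in Prism, otherwise the mount is refused.
sudo mkdir -p /mnt/spark-data
sudo mount -t nfs 10.0.0.50:/spark-data /mnt/spark-data

# Every VM that mounts the export sees the same files, so it can act
# as the globally available shared volume for the ETL jobs.
ls /mnt/spark-data
```

Whether this performs well enough for your Spark workload compared to HDFS locality is something to benchmark; for modest parallel ETL it is usually the simpler design.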
NSX-T Support on Nutanix Infrastructure
What is NSX-T? NSX-T gives customers a way to run software-defined networking infrastructure. NSX-T Data Center provides networking, security, and automation for cloud-native applications, bare-metal workloads, multi-hypervisor environments, public clouds, and multiple clouds. NSX-T is designed to address the needs of these emerging application frameworks and architectures with heterogeneous endpoints and technology stacks, allowing IT and development teams to choose the technologies best suited for their particular applications.

Logical overview of vSphere switching components, where NSX-T fits, and how the Nutanix platform interacts with them.

NOTE: NSX-T support is only available from AOS 5.16.1/5.10.9 and above. For more details, please review the KB below: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CsyzCAC
Nutanix Data Center - Riverbed SteelFusion (Regional Offices)
I am considering deploying a hub-spoke architecture with Nutanix Clusters at 1-2 Data Centers with Riverbed SteelFusion Hyper-Converged Appliances at the Regional Offices (To Reduce Server Footprint and Optimize WAN connectivity). Has anyone on the board deployed this solution? If so, please elaborate on your experiences... Appreciatively, Dholling "MacMan"
Space reclamation for Linux servers - RHEL 6.7 & VMware 6.5
Hi all, hope you are doing well. I have a question regarding storage usage and space reclamation in VMware 6.5 with Nutanix clusters. This question is more about VMware than Nutanix, but I thought of checking here.

Just to give an overview: we have Nutanix nodes with VMware 6.x installed, a single storage pool, and multiple containers. I have a RHEL VM migrated from the old environment with 4 disks (10 GB, 20 GB, 500 GB, and 5 GB); all 4 disks are in the same datastore and thin provisioned. No deduplication or compression is enabled on them.

Now when I access the VMware web console and browse the VMDK files in the datastore, I see a different capacity from the actual usage at the server level. When I view the same VM via Prism, I cannot see the 4 disks assigned to it. It shows only 1 disk with some 160 MB used out of 20 GB and nothing else. Could you please help me understand why it is not showing the disks in Prism?
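If the mismatch turns out to be unreclaimed free space left over from the migration, one common approach inside the guest is to trim or zero the free space so the thin-provisioned VMDKs can be shrunk. This is a sketch only: it assumes the virtual disks actually advertise discard support (check with `lsblk -D`), and the zero-fill path is a fallback that temporarily fills the filesystem.

```shell
# Inside the RHEL guest. Option 1: TRIM all mounted filesystems, if
# the disks expose discard support end to end.
sudo fstrim -av

# Option 2 (fallback when discard is unavailable): zero out the free
# space, then delete the zero file. dd exits nonzero when the disk
# fills, which is expected here, hence the "|| true".
sudo dd if=/dev/zero of=/zerofile bs=1M || true
sudo rm -f /zerofile
sync
```

After zeroing, the zeroed blocks still need to be reclaimed at the VMFS layer (for example with a Storage vMotion or an offline hole-punch of the VMDK) before the datastore usage shrinks.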
Incompatibility with Server 2016/Win10 and 2012r2 Hyper-V Cluster (AOS: v4.7.1)
AOS: v4.7.1. Server platform: multiple Hyper-V (2012 R2) clusters. It appears that if you whitelist an IP that belongs to either a Server 2016 or a Windows 10 machine, that machine still cannot access the SMB share published by our Nutanix (v4.7.1) clusters. We found this out after upgrading our SCVMM server to Server 2016. At this point, the VMM server can 'see' the shares but cannot calculate share size (it reports 0 GB) and therefore cannot manage the share(s). Since then we've tested with multiple other 2016 and Windows 10 machines, all with the same result: added to the whitelist but unable to browse the share, even from Windows Explorer. We are looking for confirmation that this is a known issue. If so, can you confirm whether it is resolved by an upgrade to v5.0.2?
SQL Server 2008 / R2 End of Support -- Your plans?
Wanted to ask the community for some feedback on the topic of SQL Server 2008/R2 End of Support. My fellow Nutanix Solutions team members gregwhite and Chris Paap, myself, and others were discussing SQL Server 2008/R2 End of Support and how it will impact IT. Can you help us by sharing your thoughts, actions, and tools around DB/app upgrades, migration, etc. related to SQL Server 2008/R2? You can take the survey here: https://www.surveymonkey.com/r/VJ2NWXS
XenServer to Nutanix AHV VM Migration ERROR [Boot Device not found]
Hi, I am trying to move a Windows Server 2012 R2 VM from XenServer to AHV, with reference to this article: AHV | Migrating a Windows VM from XenServer to AHV. Everything works great except the last step. When I power on the VM in AHV, I get a "boot device not found" error and Windows will not boot. Windows starts with the Windows logo, throws the error, and restarts; it then goes into disk repair, reboots with the same error, and ends up in the troubleshooting menu. Every time I restart the VM in AHV it hits the same inaccessible boot disk error.

I made sure all the Xen components were already uninstalled from the VM and that the Nutanix VirtIO drivers are installed. I tried loading the image as both SCSI and IDE, but still hit the same error. I also tried booting into safe mode, disk repair, etc., but no luck. Please help if someone has a solution.

PS: I am migrating Windows Server 2012 R2 from XenServer to Nutanix AHV.
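For anyone hitting the same wall, attaching the migrated disk on different buses is done from a CVM with aCLI. This is a hedged sketch, not an official answer: the VM name ("win2012-vm") and image name ("win2012-disk") are placeholders, and the exact flag spelling should be verified against your AOS version's aCLI reference. The usual pattern is to boot once on IDE (which needs no VirtIO storage driver), confirm VirtIO is installed inside the guest, then recreate the disk as SCSI for performance.

```shell
# On any CVM. Placeholder names throughout.
# Attach the migrated disk on the IDE bus so Windows can boot without
# the VirtIO SCSI driver being active in the boot path:
acli vm.disk_create win2012-vm clone_from_image=win2012-disk bus=ide

# After confirming the Nutanix VirtIO drivers load inside the guest,
# delete the IDE disk and re-attach the same image as SCSI:
acli vm.disk_create win2012-vm clone_from_image=win2012-disk bus=scsi
```

If the VM boots on IDE but not SCSI, the problem is almost always that the VirtIO SCSI driver was installed but never activated in the boot-critical driver set before the migration.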
Proper licensing of Microsoft VMs running on PRISM/Acropolis Pro
Good evening. We are evaluating some Nutanix offerings from a local partner; however, I have some Microsoft-related questions and the partner is taking too long to respond.
1. Is there a version of Nutanix Acropolis/Prism (4 node) that allows me to run unlimited Windows VMs without needing to purchase an MS Windows Datacenter core license from Microsoft?
2. Is there a Prism/Acropolis Pro version that comes with a built-in, valid Windows Datacenter license?
3. Or do I need (as usual) to buy my Datacenter core licenses to properly license my new Windows VMs?
Thanks