License-Free Virtualization for Your Enterprise
Also see this post: [url=https://next.nutanix.com/t5/Installation/Accepted-hypervisor-ISOs-for-install-through-foundation/m-p/2092/highlight/true#M192]Accepted hypervisor ISOs for install through Foundation[/url]. It should help if you are trying to use a recent ESXi 5.5 U2 update.
Hi, on a new install (NX-6020), during a Storage vMotion the ESXi CPU climbs to more than 80% of total physical CPU and the Storage vMotion takes a very long time. A regular vMotion is fine and very fast. Any ideas? Info: - Cluster version v22.214.171.124 - CVM RAM: 32 GB. Could the CVMs be short on RAM? I thought 32 GB was the best practice. Thanks guys! Sig'
I have running VMs in a consistency group within an async DR protection domain. The snapshots are replicating locally and to a remote site fine. When I manually migrate the protection domain to the remote site, the VMs come online fine at the target site, but they are not automatically powered on; I have to right-click them and power them on manually. Is this normal, or should the migrate feature power them on, assuming they were online at the original location? Regards, Erik
Next Community, Nutanix has developed a Reference Architecture for designing and deploying Avaya Aura on Nutanix. Aura is a complex set of Unified Communications and Customer Experience applications, and a lot of work went into determining the right resources to size Aura on Nutanix. Please use this as a forum to ask any Nutanix and Avaya Aura questions you may have. You can find all of this material in the Nutanix Aura Reference Architecture: [url=http://go.nutanix.com/virtualizing-avaya-aura-reference-architecture.html]http://go.nutanix.com/virtualizing-avaya-aura-reference-architecture.html[/url] I also wrote some blog posts about it here: [url=http://bbbburns.com/blog/2015/02/nutanix-and-the-2015-avaya-technology-forum/]http://bbbburns.com/blog/2015/02/nutanix-and-the-2015-avaya-technology-forum/[/url] [url=http://bbbburns.com/blog/2015/03/avaya-aura-on-nutanix-in-progress/]http://bbbburns.com/blog/2015/03/avaya-aura-on-nutanix-in-progress/[/url]
Next Community, I wanted to share the Nutanix Best Practices Guide for deploying Microsoft Lync on Nutanix. Many thanks to Derek Seaman and Jason Sloan [b]@Jason_D_Sloan[/b] for authoring this document. Best Practices Guide with example deployments here: [url=http://go.nutanix.com/bpg-microsoft-lync.html]http://go.nutanix.com/bpg-microsoft-lync.html[/url] You can also view Derek's personal blog on the subject: [url=http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html]http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html[/url] Please feel free to use this community space to ask questions about virtualizing Microsoft Lync on Nutanix.
It's well known that Nutanix is hypervisor agnostic, supporting ESXi, Hyper-V and KVM, but what most people either don't know, or haven't considered, is that the Nutanix Operating System (NOS) version is not dependent on the hypervisor version. What does this mean? You can run the latest and greatest NOS 4.1.x releases on ESXi 5.0, ESXi 6.0, or anything in between. In fact, you could run older versions of NOS such as 3.x with vSphere 6.0 as well (although I see no reason you would do this). [b]Read more [url=https://tr.im/cHiGI]here[/url][/b] [i][b]This is a repost from the blog [url=http://www.joshodgers.com/]CloudXC[/url] by Josh Odgers[/b][/i]
I saw a tweet today (shown below) that reminded me of 27 August 2012. That was the day VMware published an article demonstrating how VMware vSphere (5.1 at that point) could achieve 1 million IOPS in a single VM. Things have undoubtedly gotten better in vSphere 5.5, and even more so with the recently released vSphere 6.0. Even though the test setup at the time (3 years ago) required two dedicated all-flash arrays for this one VM, it demonstrated clearly that the hypervisor is not a bottleneck to storage. It also clearly demonstrated that even using VMFS and going through multiple layers to the storage and back isn't a bottleneck to high performance. vSphere itself adds so little overhead that it's a great platform for running any workload. This is important, because 3 years on from this test we have all sorts of things running on top of vSphere thanks to the ever increasing capabilities of the platform, even high performance storage controllers and not just the traditional variety of workloads.
If you're interested in running Oracle on the Nutanix platform then you should check out my latest blog article titled Oracle Licensing and Support on Nutanix Virtual Computing Platform - http://longwhiteclouds.com/2014/05/11/oracle-licensing-and-support-on-nutanix-virtual-computing-platform/. We are in the process of publishing a Tech Note on Oracle for Nutanix and also a Best Practice Guide. These should be available on our web site in the next few weeks.
In addition to the video I recorded regarding SQL Server provisioning with VAAI, I've written an article tonight including the scripts I used to clone the VMs, tune the network, and the RunOnce script that ties it all together. I hope this is helpful. [url=http://longwhiteclouds.com/2015/01/27/nutanix-sql-server-db-vaai-clone-performance/]http://longwhiteclouds.com/2015/01/27/nutanix-sql-server-db-vaai-clone-performance/[/url]
I've just published a video demonstrating provisioning of multiple SQL VMs in just minutes using VAAI clones on the Nutanix platform. The SQL DB VMs are 435 GB in size, but thanks to VAAI the new clones don't consume any additional storage capacity initially. Nutanix Provisioning SQL Server VMs with VAAI Clones
We were one of the first guinea pigs to host Exchange 2010 on Nutanix. Our cluster consists of a 3451 and a 6220. We have one Exchange server tied to each of the 3451 nodes, four in total. Each server is configured with four 2 TB databases: two active, two passive. We noticed recently, as we migrated a majority of our users to the new environment, that we are getting alarms that we are hitting a disk space threshold per host. Not sure what we're missing here. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/299i4144063A719D65F2.png[/img]
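A quick back-of-the-envelope calculation may explain the alarms. This is only a sketch: it assumes Nutanix's default replication factor of 2 and fully provisioned database copies, and ignores thin provisioning, compression, and log volumes, so treat the numbers as illustrative rather than a sizing answer.

```python
# Rough capacity math for the Exchange layout described above.
# Assumption: RF2 replication and fully provisioned 2 TB databases.

TB = 1.0
servers = 4                 # one Exchange server per 3451 node
dbs_per_server = 4          # 2 active + 2 passive copies
db_size_tb = 2 * TB
replication_factor = 2      # Nutanix default RF2

logical_per_server = dbs_per_server * db_size_tb            # 8 TB
raw_per_server = logical_per_server * replication_factor    # 16 TB
raw_cluster = raw_per_server * servers                      # 64 TB

print(f"Logical per Exchange server: {logical_per_server} TB")
print(f"Raw (RF2) per server:        {raw_per_server} TB")
print(f"Raw (RF2) across 4 servers:  {raw_cluster} TB")
```

If the per-host usable capacity is anywhere near 16 TB, the replicated footprint alone would trip a disk-space threshold as mailboxes fill the databases.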
Hi, I have a quick question. I read about Cloud Connect and gathered that it is mainly for backing up data. And when I connect to a public cloud such as AWS, a CVM is automatically created on that platform, right? So can I seamlessly offload some of my production workload onto the cloud as well, or is it exclusively for backup and restore? Best, Narayan
Hey there. We have a Windows Azure Pack environment deployed on a Nutanix Hyper-V based infrastructure. The infrastructure is built with high availability in mind: [list] [*]Redundant Virtual Machine Manager configuration [*]Redundant SQL Server configuration (using SQL Server AlwaysOn)[/list]But we are currently facing an issue setting up a redundant VMM Library Share. Microsoft does not allow using a redundant VMM installation as the Library Server. Instead, they recommend setting up a Scale-Out File Server infrastructure as the Library Server and then adding your Library Share to it. However, to my current understanding, it is not possible to set up a Cluster Shared Volume on Nutanix Hyper-V, which is a requirement for a Scale-Out File Server. What are the design recommendations from Nutanix in this matter?
Storage DRS (SDRS) is a feature of vSphere 5.0 onwards which aims to reduce the complexity of managing storage capacity and performance for virtual machines running on traditional shared storage. SDRS helps simplify management of datastores (LUNs / NFS mounts) by analyzing the available datastores' capacity and placing new virtual machines on datastores with the required available capacity. This is called "Initial Placement". SDRS also allows administrators to create rules ([b]"Affinity" & "Anti-Affinity"[/b]) to specify which VMs should be kept together or apart for performance or capacity reasons and, finally, to monitor I/O metrics and relocate VMs to different datastores where contention or capacity constraints exist. As it is recommended to have one large NFS datastore in a Nutanix environment, the issue of initial placement (which SDRS helps resolve) is natively eliminated, as all virtual machines, regardless of performance or capacity requirements, can and should be placed into one datastore.
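The "Initial Placement" behaviour described above can be sketched in a few lines. This is a simplified model, not SDRS's actual algorithm (real SDRS also weighs I/O latency and affinity rules); the datastore names and the greedy most-free-space policy are assumptions for illustration.

```python
def initial_placement(datastores, vm_size_gb):
    """Pick the datastore with the most free space that can fit the VM.

    Simplified sketch of SDRS initial placement: filter out datastores
    that cannot hold the VM, then choose the one with the most headroom.
    """
    candidates = [d for d in datastores if d["free_gb"] >= vm_size_gb]
    if not candidates:
        raise ValueError("No datastore has enough free capacity")
    return max(candidates, key=lambda d: d["free_gb"])["name"]

# Hypothetical datastores; on Nutanix there would typically be just one.
datastores = [
    {"name": "ds-01", "free_gb": 500},
    {"name": "ds-02", "free_gb": 1200},
    {"name": "ds-03", "free_gb": 80},
]
print(initial_placement(datastores, vm_size_gb=300))  # ds-02
```

With a single large Nutanix datastore the candidate list always has one entry, which is why this placement problem disappears.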
Network I/O Control is a feature available since vSphere 4.1 with the vSphere Distributed Switch (VDS) which uses network resource pools to determine the bandwidth that different network traffic types are given. When Network I/O Control is enabled, distributed switch traffic is divided into user-defined network resource pools and/or the following predefined network resource pools: Fault Tolerance traffic, iSCSI traffic, vMotion traffic, management traffic, vSphere Replication (VR) traffic, NFS traffic, and virtual machine traffic. From vSphere 5.0 onwards, you can also create custom network resource pools for virtual machine traffic. You can control the bandwidth each network resource pool is given by setting the physical adapter shares and host limit for each pool. The physical adapter shares assigned to a network resource pool determine what share of the total available bandwidth will be guaranteed to the traffic associated with that pool in the event of network contention.
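As a rough model of how shares translate into guaranteed bandwidth under contention: each pool's guarantee is its shares divided by the total shares on the saturated uplink. The pool names, share values, and 10 GbE link speed below are illustrative assumptions, and host limits are deliberately not modelled.

```python
def guaranteed_bandwidth(share_map, link_gbps):
    """Split a saturated uplink among resource pools in proportion to shares.

    Simplified NIOC model: shares only matter when the link is contended,
    and a pool's host limit (not modelled here) would cap it regardless.
    """
    total_shares = sum(share_map.values())
    return {pool: link_gbps * shares / total_shares
            for pool, shares in share_map.items()}

# Example share values on a hypothetical 10 GbE uplink.
pools = {"vm": 100, "vmotion": 50, "nfs": 50}
bw = guaranteed_bandwidth(pools, link_gbps=10)
print(bw)  # vm gets 5.0 Gbps; vmotion and nfs get 2.5 Gbps each
```

Note the guarantee is a floor, not a ceiling: when the link is idle, any pool may burst above its proportional share.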
The following tables show the recommended configuration of VMware Distributed Resource Scheduler (DRS). [b]DRS (General)[/b] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/258iC0B64EA0AB05913D.jpg[/img] [b]DRS Rules[/b] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/260iC36A3829678786A2.jpg[/img] [b]DRS Distributed Power Management (DPM)[/b] [b][img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/262i30C4C439D67776FB.jpg[/img][/b]