License-Free Virtualization for Your Enterprise
I've had an intermittent problem with some 2012 VMs I migrated: they lock up after a random interval, usually spiking to 100% CPU while all I/O stops. The VM console is unresponsive and the network is unreachable; a reset is the only way to bring them back online. A VM would then run again for minutes, hours, even a day or more, only to do the same thing again at a random time of day or night, so there was no logical pattern to it. I ended up tracking it down to a timezone configuration issue: on reboot, the VMs in question would set their clock to the hypervisor host's time, which was in a different timezone (the default TZ the nodes were shipped with) from the VM's timezone. I had a similar problem in the past, although without the lockups, where a number of hosts on our Citrix XenServer cluster would lose time sync and be out by the TZ difference between host and VM. Because our VMs are on an AD domain, they end up outside the NTP adjustment range because the time differen…
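To get a feel for how large that skew is, here is a minimal sketch (the two zone names are assumptions, not the actual site's zones) comparing the UTC offsets a guest sees when its hardware clock is written in the host's factory timezone instead of its own. A multi-hour jump like this is far beyond the roughly five-minute tolerance that AD/Kerberos and w32time will gradually correct:

```shell
# Illustrative only: host left on a hypothetical factory-default zone,
# guest expecting UTC. The difference in %z offsets is the boot-time skew.
host_off=$(TZ="America/Los_Angeles" date +%z)   # e.g. -0800 or -0700 (DST)
guest_off=$(TZ="UTC" date +%z)                  # +0000
echo "host offset: ${host_off}  guest offset: ${guest_off}"
```

On AHV the fix is to set the guest's hardware-clock timezone explicitly rather than letting it inherit the host default.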
G'day one and all. We're a new Nutanix site and have been migrating machines from our previous hypervisor over to Acropolis/KVM. Our older servers, 2003 and 2008, have been no problem at all; however, all our 2012 R2 servers are having problems where the storage drivers are not being seen. To test, I tried a fresh install onto a VM from the install DVD. The driver load screen comes up and I point it to the correct location on the Nutanix VirtIO drivers ISO image. All drivers are seen, i.e. it shows the correct drivers to load, but when I select one and it goes to load the driver, I get an error stating "No new device drivers were found. Make sure installation media contains the correct drivers." The VM disk is configured as SCSI. Regards, Stephen
Hello friends, I am new here, nice to meet you, and I hope you can help me. I am working with two scenarios at my client. In one we have Nutanix on its Linux KVM-based hypervisor, and in the other we have common x86 servers running VMware. The objective is to compare both solutions and decide which is best (we want Nutanix to be the best, of course!). The application running on both is a JBoss system on 3 virtual machines (application, DB, and one Apache front end). I created a recorded test using JMeter, where the user performs a lot of steps in the web interface (for instance, searching for a specific person, clicking through the menu, etc.). The test runs fine on the x86 server with VMware, but on Nutanix, after 150 users, I always get: Started: 0 Finished: 0. The CPU usage is also higher than in the other scenario, almost double. I added more CPU and RAM to the virtual machine, but it didn't help. I am also running in non-GUI mode, removed the listeners, and added more memory to "MaxNewSize", but nothi…
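For reference, a headless JMeter run with an explicit heap looks roughly like this (file names are hypothetical; the command is echoed rather than executed, since this is only a sketch). Sizing the load generator's JVM matters here, because an undersized JMeter heap can stall threads and mimic a slow system under test:

```shell
# JVM_ARGS is honored by the jmeter launcher script and overrides its default heap.
export JVM_ARGS="-Xms2g -Xmx2g"
# -n = non-GUI, -t = test plan, -l = sample results log.
cmd="jmeter -n -t loadtest.jmx -l results.jtl"
echo "$cmd"   # sketch only; run the command directly on a box with JMeter installed
```

If Started/Finished both sit at 0, it is also worth confirming the ramp-up actually launched threads (check jmeter.log on the generator) before blaming the target platform.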
Greetings community members! I am brand new to the Nutanix platform and have completed my basic initial configuration of my nodes on ESXi 5.5 U2; however, I am having trouble finding (not through lack of Google-fu) a complete guide on what Nutanix recommends as best practices for configuring hosts for ESXi. This raises some questions, e.g.: Should I add anything to my nodes' Advanced Settings? Should I change the power settings Active Policy to High Performance? What is the definitive guide to HA settings with respect to the nodes, VM monitoring, policies therein, etcetera? Not that this is an exhaustive list of questions, but I forgot to ask our Nutanix Pro Services rep before he left... and I could not immediately find what I need, unless I am looking in the wrong place! :$ [i]Edit: Some of what I asked is in the [url=https://portal.nutanix.com/#/page/docs/details?targetId=vSphere_Admin-Acr_v4_5:vSphere_Admin-Acr_v4_5]vSphere Administration Guide for Acropolis[/url], which is very helpful.[/i]
Hi guys, I'm still digging to give correct answers to my manager, who is putting the pros and cons of Nutanix on the table. As I need to make a design without knowing exactly where he wants to go, I'm making assumptions. I understood that they want to put NSX on it, with vRA and vCloud SP. I've found a couple of recommendations and tips on NSX over Nutanix, but nothing that can be used for the design part (more on implementation, which is quite cool!). Has anyone integrated it on a 4.5 cluster? Does Nutanix plan to give us a technical note or any other material on NSX? Cheers,
Hi all, my present client has me working on a design to deploy an XaaS platform with a lot of security built in (NSX is on the menu). In our team, I was declared the Nutanix man. My mission is to determine the best way to isolate storage for different workloads: [list] [*]We'll have an integration zone, a development zone, and a production zone as PaaS. [*]Some nodes will run heavy VDI (K2 graphics cards will achieve this). [*]Then we'll have IaaS with vCloud SP for the sandbox zone.[/list]So we can imagine some sort of multicluster setup with 3 clusters dedicated to 3 zones: a 3D VDI zone, a sandbox IaaS zone, and a PaaS zone. That implies at least 9 nodes, which is a bit too much for a starting platform. [b]My question is[/b]: can I make a single 6-node Nutanix cluster for the storage, with 3 volume pools across all 6 nodes? And beyond that, what would that imply in terms of storage performance, data protection, and data segregation (the security team will challenge me heavily on…
In case you hadn't seen the release notes for the Nutanix Acropolis Base Software 4.5 (previously NOS), Windows Failover Cluster is now GA and can be configured for SQL Server Failover Cluster Instances and other use cases with In-Guest iSCSI and MPIO support. Please check out the release notes and upgrade to 4.5 if you wish to use this feature. [url=https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Acr_v4_5.html]https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Acr_v4_5.html[/url]
Hi, I'm doing testing on Nutanix with AHV. There is a "10G Compliance" warning: the CVM is running at a lower speed. Is there any additional configuration I need to do on Nutanix? Any guide? Below are the screenshots: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/677iD2264B53EBD63CCA.png[/img] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/678iDC5EF686EFC646B0.png[/img]
Hi Community, I would like to deploy Prism Central to manage 2 clusters. So far, nothing complex. The issue is: Prism Central is provided as an OVA to be deployed with ESX, but I'm using KVM/Acropolis. Is there a way to import it? Can you please provide me with some guidance? Thanks a lot.
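One common route (a sketch, not an official procedure; check the Nutanix portal for the supported Prism Central deployment path for AHV) is to extract the OVA, which is just a tar archive, and convert its VMDK disk with qemu-img into a format the Acropolis image service can ingest. Filenames below are hypothetical, and a small stand-in disk is created so the conversion step itself is demonstrable:

```shell
# Skip gracefully if qemu-img is not installed (sketch only).
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not available"; exit 0; }

# Real case: unpack the OVA first, e.g.  tar -xf prism_central.ova
qemu-img create -f vmdk pc-disk1.vmdk 16M        # stand-in for the extracted disk
qemu-img convert -f vmdk -O qcow2 pc-disk1.vmdk pc-disk1.qcow2
qemu-img info pc-disk1.qcow2
```

The resulting qcow2 (or raw) image can then be uploaded through the image service and attached to a new VM.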
Hi All, just wanted to let you know that traditional SQL failover clusters and SQL FCI, or any MSCS / Microsoft failover cluster, are available in tech preview as of NOS 4.1.5. This is because we have introduced SCSI-3 Persistent Reservations in the iSCSI stack. To implement a cluster you need to use the Microsoft iSCSI Initiator and the Nutanix Volume Groups construct as the iSCSI target for in-guest iSCSI access. This will become a full GA feature in an upcoming release of NOS. Please experiment with it and let us know your thoughts.
Hi All, I thought this might be of interest to people who want to use SQL Server with Acropolis Hypervisor, which is fully supported by Microsoft under the SVVP program. Here is a way you can rapidly provision as many SQL Server DBs as you like: [url=http://longwhiteclouds.com/2015/08/07/rapid-provisioning-of-sql-server-test-environments-with-nutanix/]http://longwhiteclouds.com/2015/08/07/rapid-provisioning-of-sql-server-test-environments-with-nutanix/[/url]
I have 3 x 3000 series nodes, each with 256GB of RAM, in 1 cluster:
- 7.2TB total SSD flash (2.4TB x 3 nodes)
- 12TB usable HDD (before de-duplication/compression)
- 2.6GHz / 20 cores, 6 physical CPUs: 60 total cores
- 768GB total memory
We are running pure Acropolis. The monster VM workload is a Progress database that is 7TB in size and has 200GB of RAM allocated to it. We are setting this up as a PoC to migrate off a standalone ESXi host with a Violin all-flash array. Previously, the VM on VMware was only pushing 12,000 IOPS. What should we be doing to optimize performance for this VM? Multiple virtual SCSI adapters? Also, is there a way to see if the working set is exceeding the node cache capability?
Also see this post: [url=https://next.nutanix.com/t5/Installation/Accepted-hypervisor-ISOs-for-install-through-foundation/m-p/2092/highlight/true#M192]Accepted-hypervisor-ISOs-for-install-through-foundation[/url] Should help you if you are trying to use a recent ESXi 5.5 U2 update.
Hi, on a new install (NX-6020), during a Storage vMotion the ESXi CPU increases to over 80% of the total physical CPU, and the Storage vMotion takes a very long time. A regular vMotion is fine, no problem, very fast. Any ideas? Info: - Cluster version v18.104.22.168 - CVM RAM: 32GB. Could it be that there is not enough RAM on the CVMs? Is 32GB the best practice? Thanks guys! Sig'
I have running VMs in a consistency group within an async DR protection domain. The snapshots are replicating fine, both locally and to a remote site. When I manually migrate the protection domain to the remote site, the VMs come online fine at the target site, but they are not automatically powered on; I have to right-click on them and manually power them on. Is this normal, or should the migrate feature power them on, assuming they were online at the original location? Regards, Erik
Next Community, Nutanix has developed a Reference Architecture for designing and deploying Avaya Aura on Nutanix. Aura is a complex set of Unified Communications and Customer Experience applications, and a lot of work went into determining the right resources to size Aura on Nutanix. Please use this as a forum to ask any Nutanix and Avaya Aura questions you may have. You can find all of this material in the Nutanix Aura Reference Architecture: [url=http://go.nutanix.com/virtualizing-avaya-aura-reference-architecture.html]http://go.nutanix.com/virtualizing-avaya-aura-reference-architecture.html[/url] I also wrote some blog posts about it here: [url=http://bbbburns.com/blog/2015/02/nutanix-and-the-2015-avaya-technology-forum/]http://bbbburns.com/blog/2015/02/nutanix-and-the-2015-avaya-technology-forum/[/url] [url=http://bbbburns.com/blog/2015/03/avaya-aura-on-nutanix-in-progress/]http://bbbburns.com/blog/2015/03/avaya-aura-on-nutanix-in-progress/[/url] [url=http://bbbburns.com/bl…
Next Community, I wanted to share the Nutanix Best Practices Guide for deploying Microsoft Lync on Nutanix. Many thanks to Derek Seaman and Jason Sloan [b]@Jason_D_Sloan[/b] for authoring this document. Best Practices Guide with example deployments here: [url=http://go.nutanix.com/bpg-microsoft-lync.html]http://go.nutanix.com/bpg-microsoft-lync.html[/url] You can also view Derek's personal blog on the subject: [url=http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html]http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html[/url] Please feel free to use this community space to ask questions about virtualizing Microsoft Lync on Nutanix.
It's well known that Nutanix is hypervisor agnostic, supporting ESXi, Hyper-V and KVM, but what most people either don't know, or haven't considered, is the fact that the Nutanix Operating System (NOS) version is not dependent on the hypervisor version. What does this mean? You can run the latest and greatest NOS 4.1.x releases on ESXi 5.0, ESXi 6.0, or anything in between. In fact, you could run older versions of NOS such as 3.x with vSphere 6.0 as well (although I see no reason you would do this). [b]Read more [url=https://tr.im/cHiGI]here[/url][/b] [i][b]This is a repost from the blog [url=http://www.joshodgers.com/]CloudXC[/url] by Josh Odgers[/b][/i]
I saw a tweet today (shown below) that reminded me of 27 August 2012. That was the day VMware published an article demonstrating how VMware vSphere (5.1 at that point) could achieve 1 million IOPS in a single VM. Things have undoubtedly gotten better in vSphere 5.5, and even more so with vSphere 6.0, which was recently released. Even though the test setup at the time (3 years ago) required two dedicated all-flash arrays for this one VM, it demonstrated clearly that the hypervisor is not a bottleneck to storage. It also clearly demonstrated that even using VMFS, and going through multiple layers to the storage and back, isn't a bottleneck to high performance. vSphere itself adds so little overhead that it's a great platform for running any workload. This is important, because 3 years on from this test we have all sorts of things running on top of vSphere, given the ever-increasing capabilities of the platform. Even high-performance storage controllers, and not just the varie…