Discussions about multiple hypervisors: Microsoft Hyper-V, VMware ESXi, and Nutanix AHV.
- 323 Topics
- 1,159 Replies
NEXT Community, I wanted to share the Nutanix Best Practices Guide for deploying Microsoft Lync on Nutanix. Many thanks to Derek Seaman and Jason Sloan ([b]@Jason_D_Sloan[/b]) for authoring this document. You can find the Best Practices Guide, with example deployments, here: [url=http://go.nutanix.com/bpg-microsoft-lync.html]http://go.nutanix.com/bpg-microsoft-lync.html[/url] You can also view Derek's personal blog on the subject: [url=http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html]http://www.derekseaman.com/2015/01/sizing-microsoft-lync-server-2013-nutanix.html[/url] Please feel free to use this community space to ask questions about virtualizing Microsoft Lync on Nutanix.
It's well known that Nutanix is hypervisor-agnostic, supporting ESXi, Hyper-V, and KVM, but what most people either don't know, or haven't considered, is the fact that the Nutanix Operating System (NOS) version is not dependent on the hypervisor version. What does this mean? You can run the latest and greatest NOS 4.1.x releases on ESXi 5.0, ESXi 6.0, or anything in between. In fact, you could run older versions of NOS such as 3.x with vSphere 6.0 as well (although I see no reason why you would do this). [b]Read more [url=https://tr.im/cHiGI]here[/url][/b] [i][b]This is a repost from the blog [url=http://www.joshodgers.com/]CloudXC[/url] by Josh Odgers[/b][/i]
I saw a tweet today (shown below) that reminded me of 27 August 2012. That was the day VMware published an article demonstrating how VMware vSphere (5.1 at that point) could achieve 1 million IOPS in a single VM. Things have undoubtedly gotten better in vSphere 5.5, and even more so with vSphere 6.0, which was recently released. Even though the test setup at the time (3 years ago) required two dedicated all-flash arrays for this one VM, it demonstrated clearly that the hypervisor is not a bottleneck to storage. It also clearly demonstrated that even using VMFS, and going through multiple layers to the storage and back, isn't a bottleneck to high performance. vSphere itself adds so little overhead that it's a great platform for running any workload. This is important, because 3 years on from this test we have all sorts of things running on top of vSphere, given the ever-increasing capabilities of the platform. Even high-performance storage controllers, and not just the varie…
If you're interested in running Oracle on the Nutanix platform, then you should check out my latest blog article, titled Oracle Licensing and Support on Nutanix Virtual Computing Platform: http://longwhiteclouds.com/2014/05/11/oracle-licensing-and-support-on-nutanix-virtual-computing-platform/. We are in the process of publishing a Tech Note on Oracle for Nutanix, as well as a Best Practice Guide. These should be available on our website in the next few weeks.
In addition to the video I recorded regarding SQL Server provisioning with VAAI, I've written an article tonight that includes the scripts I used to clone the VMs, tune the network, and the RunOnce script that ties it all together. I hope this is helpful. [url=http://longwhiteclouds.com/2015/01/27/nutanix-sql-server-db-vaai-clone-performance/]http://longwhiteclouds.com/2015/01/27/nutanix-sql-server-db-vaai-clone-performance/[/url]
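The linked article has the actual scripts; purely to give a feel for the approach, here is a minimal sketch using the pyVmomi library (an assumption on my part; the original scripts may well use PowerCLI). The vCenter address, credentials, and VM names are placeholders.
[code]
# Minimal pyVmomi sketch: clone a source VM onto the same Nutanix NFS
# datastore so ESXi can offload the copy to the array via VAAI.
# Host, credentials, and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_vm(name):
    # Walk the inventory for a VM by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

source = find_vm("sql-template")

# Keeping the clone on the source datastore is what lets the VAAI
# full-copy primitive do the work instead of the network.
spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(datastore=source.datastore[0]),
    powerOn=False)

for i in range(1, 5):
    source.Clone(folder=source.parent, name="sql-%02d" % i, spec=spec)
    # A production script would wait on each returned clone task.

Disconnect(si)
[/code]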
I've just published a video demonstrating the provisioning of multiple SQL VMs in just minutes using VAAI clones on the Nutanix platform. The SQL DB VMs are 435 GB in size, but thanks to VAAI the new clones don't use any additional storage capacity initially: Nutanix Provisioning SQL Server VMs with VAAI Clones
We were one of the first guinea pigs to host Exchange 2010 on Nutanix. Our cluster consists of a 3451 and a 6220. We have one Exchange server tied to each of the 3451 nodes, four in total. Each server is configured with four 2 TB databases: two active, two passive. We noticed recently, as we migrated a majority of our users to the new environment, that we are getting alarms about hitting a disk space threshold per host. Not sure what we're missing here. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/299i4144063A719D65F2.png[/img]
Hi, I have a curious question. I read about Cloud Connect and gathered that it is mainly for backing up data. And when I connect to a public cloud such as AWS, a CVM will automatically be created on that platform, right? So can I seamlessly offload some of the production workload onto the cloud as well, or is it exclusively for backup and restore? Best, Narayan
Hey there, We have a Windows Azure Pack environment deployed on a Nutanix Hyper-V based infrastructure. The infrastructure is built with high availability in mind: [list] [*]Redundant Virtual Machine Manager configuration [*]Redundant SQL Server configuration (using SQL Server AlwaysOn)[/list]But we are currently facing an issue setting up a redundant VMM Library Share. Microsoft does not allow using a redundant VMM installation as the Library Server. Instead, they recommend setting up a Scale-Out File Server infrastructure as the Library Server and then adding your Library Share to it. However, to my current understanding, it is not possible to set up a Cluster Shared Volume on Nutanix Hyper-V, which is a requirement for a Scale-Out File Server. What are the design recommendations from Nutanix in this matter?
Storage DRS is a feature of vSphere 5.0 onwards which aims to reduce the complexity of managing storage capacity and performance for virtual machines running on traditional shared storage. SDRS helps simplify management of datastores (LUNs / NFS mounts) by analyzing the available datastores' capacity and placing new virtual machines in datastores with the required available capacity. This is called "Initial Placement". SDRS also allows administrators to create rules ([b]"Affinity" & "Anti-Affinity"[/b]) to specify which VMs should be kept together or apart for performance / capacity reasons, and finally to monitor I/O metrics and relocate VMs to different datastores where contention or capacity constraints exist. As it is recommended to have one large NFS datastore in a Nutanix environment, the issue of initial placement (which SDRS helps resolve) is natively eliminated, as all virtual machines, regardless of performance or capacity requirements, can and should be placed into that one datastore.
Network I/O Control is a feature available since vSphere 4.1 with the Virtual Distributed Switch (VDS) which uses network resource pools to determine the bandwidth that different network traffic types are given. When Network I/O Control is enabled, distributed switch traffic is divided into custom network resource pools and/or the following predefined network resource pools: Fault Tolerance traffic, iSCSI traffic, vMotion traffic, management traffic, vSphere Replication (VR) traffic, NFS traffic, and virtual machine traffic. From vSphere 5.0 onwards, you can also create custom network resource pools for virtual machine traffic. You can control the bandwidth each network resource pool is given by setting the physical adapter shares and host limit for each network resource pool. The physical adapter shares assigned to a network resource pool determine what share of the total available bandwidth will be guaranteed to the traffic associated with that network resource pool in the event of contention.
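To make the shares arithmetic concrete: under contention, each active pool is guaranteed shares / total-active-shares of the physical adapter's bandwidth. A tiny Python sketch with made-up share values (illustrative only, not Nutanix-recommended settings):
[code]
# Illustrative NIOC arithmetic: shares only matter when the adapter is
# saturated; each active pool is then guaranteed its proportional cut.
# Share values below are examples, not recommendations.
LINK_GBPS = 10.0  # e.g. one 10 GbE uplink
shares = {"virtual machine": 100, "NFS": 100, "vMotion": 50, "management": 25}

total = sum(shares.values())
for pool, s in sorted(shares.items()):
    print("%-16s guaranteed %.2f Gbps under contention"
          % (pool, LINK_GBPS * s / total))
[/code]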
The following tables show the recommended configuration of VMware Distributed Resource Scheduler (DRS). [b]DRS (General)[/b] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/258iC0B64EA0AB05913D.jpg[/img] [b]DRS Rules[/b] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/260iC36A3829678786A2.jpg[/img] [b]DRS Distributed Power Management (DPM)[/b] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/262i30C4C439D67776FB.jpg[/img]
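For those who prefer to script the rules from the DRS Rules table above, here is a minimal pyVmomi sketch (my own illustration, not Nutanix tooling) of adding a VM anti-affinity rule to a DRS cluster. Connection setup is as in the earlier clone sketch; the function and names are placeholders.
[code]
# Minimal pyVmomi sketch: add a VM anti-affinity ("separate virtual
# machines") rule to a DRS cluster. Names are placeholders.
from pyVmomi import vim

def add_anti_affinity_rule(cluster, rule_name, vms):
    # vms is a list of vim.VirtualMachine objects to keep on
    # different hosts.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name=rule_name, enabled=True, vm=vms)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
    # modify=True merges the rule into the existing cluster config.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
[/code]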
Is anyone running Microsoft Exchange on their Nutanix cluster? If so, approximately how many mailbox databases (EDBs) are you running, and what is the approximate size of each? Also, any overall feedback/thoughts on the performance of your Exchange environment on the Nutanix cluster?
Hello mates, We have 2x NX3350 and an Arista switch, which handle roughly 100 VMs. The workloads are: [list] [*]20x VMs with a heavy disk workload, used mainly for reporting and analysis. The R/W ratio is roughly 50/50. They generated a 20K IOPS workload when they lived on SAN storage with RAID 10 and a tiny flash tier, and average latency was always below 5 ms. [*]10x VMs used for application virtualization [*]70x VMs used for desktop virtualization[/list]Now that we have migrated to Nutanix, we see only 4K cluster IOPS and 20 ms latency, which does not seem very good to us. Trying to resolve the issue, we enabled inline compression and increased CVM memory up to 20 GB. We also tried to change the tier sequential write priority. Unfortunately, this did not help. ncc, cluster status, and Prism health all claim that everything is OK. Before we migrated our environment, we ran the diagnostics VM and the result was roughly 100K IOPS for reads. Here is the current configuration of the cluster: NOS Version: 4
When we want to reconfigure for HA, installation of the FDM agent always fails with the following error: syslog.log:2014-09-10T07:34:33Z fdm-installer: VIB Nutanix_bootbank_pynfs_5.0.0-0.0.1 violates extensibility rule checks: [u'(line 22: col 0) Element vib failed to validate content'] We need to remove pynfs, reconfigure for HA, and then we can re-install pynfs again. Is there a workaround instead of having to keep removing and reinstalling pynfs? Regards John Grinwis
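Not an official fix, but until a VIB that passes the extensibility checks is available, the remove/reconfigure/reinstall cycle described above can at least be scripted. A hedged Python sketch using paramiko, assuming SSH is enabled on the host; the host name, credentials, and VIB path are placeholders:
[code]
# Hedged sketch: script the remove / reconfigure-for-HA / reinstall
# cycle over SSH. Host name, credentials, and the VIB path are
# placeholders, not real values.
import paramiko

def run(host, cmd, user="root", pwd="secret"):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=pwd)
    _, out, err = ssh.exec_command(cmd)
    print(out.read().decode(), err.read().decode())
    ssh.close()

esx = "esxi01.example.com"
run(esx, "esxcli software vib remove -n pynfs")
# Now trigger "Reconfigure for vSphere HA" on the host, e.g. from the
# vSphere client or via pyVmomi: host.ReconfigureHostForDAS_Task()
run(esx, "esxcli software vib install -v /path/to/pynfs.vib")
[/code]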
I know this is a Mandriva notice, but I assume CentOS is also affected. _______________________________________________________________________ Mandriva Linux Security Advisory MDVSA-2014:097 http://www.mandriva.com/en/support/security/ _______________________________________________________________________ Package : libvirt Date : May 16, 2014 Affected: Business Server 1.0 _______________________________________________________________________ Problem Description: Multiple vulnerabilities has been discovered and corrected in libvirt: The LXC driver (lxc/lxc_driver.c) in libvirt 1.0.1 through 1.2.1 allows local users to (1) delete arbitrary host devices via the virDomainDeviceDettach API and a symlink attack on /dev in the container; (2) create arbitrary nodes (mknod) via the virDomainDeviceAttach API and a symlink attack on /dev in the container; and cause a denial of service (shutdown or reboot host OS) via the (3) virDomainShutdown or (4) virDomainReboot API and a symlink attack on /…
Hello everyone, Does anyone have an official date for the support of vSphere 5.5? And with 5.5 Update 1 right around the corner (next month or so), do you know if the U1 version will be supported on its release date, or will we need to wait some more? For what it's worth, our block has been running fine on 5.5 for about a month (although unsupported by Nutanix, we updated it with VUM without problems). Thanks in advance for your answers, Sylvain.