Nutanix Cloud Infrastructure
The Foundation for Your Hybrid Cloud
How heavily do you run your cluster memory usage? We are currently at 50% usage but still adding VMs. My boss wants to push this to 100% and doesn't understand why I keep telling him that's a bad idea. I am trying to keep it at 2/3 to 3/4 of capacity. Am I right or wrong on this? I am doing a bad job of explaining this to my boss, who doesn't seem to believe me, so do you guys have any resources that can help me explain it? Right now we are running application servers and no core functions, but we are looking to migrate our DHCP servers. We have 1.1 TB of memory capacity running at 51% usage. CPU usage is not a problem, at about 12%, and storage is also fine. We are running 6x Dell 740XC if that helps.
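One way to frame the argument for the boss is node-failure resiliency: if one host dies, its VMs can only restart if the remaining hosts have enough free memory to absorb them. A rough sketch of that ceiling, using the numbers from the post (6 homogeneous nodes, ~1.1 TB total; these figures are assumptions pulled from the question, and CVM overhead is ignored for simplicity):

```shell
# Hypothetical figures from the post above; adjust for your own cluster.
NODES=6
TOTAL_GB=1126          # ~1.1 TB of cluster memory

# Memory that disappears if one node fails (assumes identical nodes)
PER_NODE_GB=$((TOTAL_GB / NODES))

# The most you can consume and still restart every VM after losing
# one node (the "N-1" ceiling), as an absolute value and a percentage.
SAFE_GB=$((TOTAL_GB - PER_NODE_GB))
SAFE_PCT=$((100 * SAFE_GB / TOTAL_GB))

echo "Per-node memory:    ${PER_NODE_GB} GB"
echo "N-1 usable ceiling: ${SAFE_GB} GB (${SAFE_PCT}% of total)"
```

On a 6-node cluster the hard ceiling works out to roughly 5/6 (~83%) of total memory, and that is before leaving any headroom for growth, memory spikes, or maintenance windows where a second node is down. Staying at 2/3 to 3/4, as the post suggests, keeps a sensible buffer below that ceiling; running at 100% means a single node failure leaves nowhere for its VMs to go.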
Has anyone successfully gotten Hyper-V (or ESXi) nested under AHV? I do see that nested [b]KVM[/b] VMs are supported, but no mention of anything else. The only reason for exploring this is to get Cisco Umbrella VAs working under AHV so I don't have to re-purpose an old server to run Hyper-V/ESXi.
Here I am at London Luton airport waiting for my delayed flight to KubeCon 2019 in Barcelona. Three hours 'free' to write a new blog, this time about how to deploy Grafana on Nutanix Karbon.

When deploying a Kubernetes cluster with Karbon you get a few add-ons by default, like logging with EFK (Elasticsearch, Fluentd and Kibana) and monitoring with Prometheus. Today the Prometheus visualisation in Karbon is just for alerts. If you are looking to gather metrics information, you will need to deploy Grafana as the visualisation interface. In this blog I'll show you how easy and quick it is to deploy Grafana with Tillerless Helm. Because Karbon is open and upstream Kubernetes, there is no need for complex configuration.

[h1]Prerequisite[/h1]Before you can proceed with the installation of Grafana, you will need to install Helm. I wrote a blog about how to do it in a secure manner; refer to the Tillerless Helm on Nutanix Karbon blog for the installation.

[h1]Install Grafana[/h1]As mentioned before, every Kubernetes cluste
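The excerpt above cuts off before the install steps, but the general shape of a Helm-based Grafana deployment looks something like the following. This is a sketch only: it uses current Helm 3 syntax rather than the Tillerless Helm 2 setup the post describes, and the chart repo, release name, and namespace are assumptions, not taken from the original blog.

```shell
# Sketch of a Grafana install via Helm (Helm 3 syntax; names are
# illustrative, not from the original post).
kubectl create namespace monitoring

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# NodePort makes the dashboard reachable from outside the cluster
helm install grafana grafana/grafana \
  --namespace monitoring \
  --set service.type=NodePort

# The chart stores an auto-generated admin password in a Secret
kubectl get secret --namespace monitoring grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode
```

From there you would point Grafana at the Prometheus service that Karbon deploys as a data source and import or build dashboards on top of it.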
Hi, I'm looking to see if it is possible to manually deploy the CVM to Azure for use with Cloud Connect. Also, what is the reliance on the classic deployment model? Why can't the CVM be deployed using the Resource Manager model? Microsoft has been pushing customers away from the classic model over the last few years, so it's likely that the majority of customers with mature Azure deployments won't be using it.
We are moving away from VMware over to Acropolis, and I was wondering what others have been using to back up their VMs under Acropolis, third-party-wise. Under VMware we had been using Veeam, but that is no longer an option. Any comments/suggestions would be helpful.
Does anyone have thoughts on how backup and recovery will work with the Acropolis hypervisor? Our company is definitely interested in moving in this direction, but the simplicity of Veeam B&R has been great, and I do not want to move away from it unless there is a solution that provides all of Veeam's features.
Hello - I'm new to Nutanix and am in the middle of taking the online 5.5 course in the education portal. I have a simple question about Cloud connect: In the section about Cloud Connect features the very first thing you read is "data transmitted is already deduplicated and users can choose to enable compression on the local storage container.." Yet when I look at the section regarding General Recommendations and Limitations I see the following: "It is not recommended to enable deduplication on the source cluster..." Can you please help me understand this? This looks like a complete contradiction.
Hi, I'm looking to create clustered file servers using Windows Server Failover Clustering. The iSCSI initiators will be in-guest, and I'm using AHV. My understanding is that presenting Nutanix storage via iSCSI to WSFC servers is supported; however, I'm confused about how to set up the networking, and the Nutanix ABS documentation doesn't make this clear. I'm used to creating a separate storage network/VLAN for all storage traffic, yet the iSCSI target IP address is recommended to be on the same network as the CVMs and the rest of the Nutanix infrastructure. So the question is: do I create a separate network for storage and have it route to the iSCSI target IP address, or do I put the iSCSI initiators on the same network as the iSCSI target/Data Services IP? It doesn't seem logical to mix CVM traffic with guest storage traffic, but it's also not great to have to route storage traffic. Thanks, Adam
Tim Wallace, who leads our public sector solutions marketing efforts, has a Nutanix community blog coming out about the value of Xi Leap for the public sector, including education institutions. Nutanix also has numerous service provider partners (www.nutanix.com/x-powered) that offer Nutanix-powered DRaaS for customers' hosted and on-prem deployments. What are some of the reasons you would consider DRaaS and Xi Leap in your environment? Share with your peers.
We attempted to migrate our ODNS as well as set up new VMs in AHV by extracting the VMDKs, but each time we get the below screen on boot. Has anyone else successfully gotten the ODNS VA to run in AHV? I've tried editing the VM's XML file to change the video type and VRAM, but no luck as of yet:[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/df42756e-6765-4971-8d7f-63a99d3f8651.png[/img]
Hi guys, we're planning to move/migrate our MS SQL workload from a physical standalone host to a new Nutanix cluster. We have already engaged our local Nutanix representative for this. However, we encountered a few performance hurdles during testing, and we can't be sure whether we have the right sizing for our performance requirements.

Current production physical host:
[list]
[*]Processor: Intel Xeon E5-2690 v3 @ 2.6 GHz (single socket, 12C/24T)
[*]RAM: 96 GB (however, only 64 GB usable for the MS SQL 2008 R2 DB)
[*]Disks: database MDF resides on an SSD RAID 5 disk group (6 disks); OS, LDF etc. reside on other disk groups (10K/15K SAS, RAID 10, RAID 1, etc.)
[/list]
Test unit (3 nodes):
[list]
[*]Processor: Intel Xeon E5-2650 v4 @ 2.2 GHz (dual socket, 24C/48T) per node
[*]RAM: 512 GB per node
[*]Disks: 6 x 1.6 TB Intel S3610 SATA (all flash)
[*]Hypervisor: AHV 126.96.36.199
[*]Network: 10GbE RJ45
[/list]
SQL test VM created in the Nutanix test cluster:
[list]
[*]12 vCPU, 96 GB RAM
[*]CVM size: 12 vCPU, 32 GB RAM
[/list]
Test Result from MS SQL s
[b]Disclaimer[/b]: This post is intended for demo purposes and must not be used for production clusters. Istio is not included in Nutanix Karbon today, hence Nutanix support won’t handle any case related to Istio. [h1]What is a service mesh?[/h1]When transitioning from monolithic applications to a distributed microservice architecture the number of services dramatically increases. This decentralisation at scale makes it difficult for developers and operators to enable service-to-service communications. Service meshes enable service-to-service communications providing connectivity, security, control and observability. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/9d364aad-f1b0-444f-9759-ce0a6f83a210.png[/img] Source: [url=http://blog.microtica.com/2017/03/what-are-microservices-actually/]Microtica[/url] [h1]What is Istio?[/h1]From Istio website: [i]“At a high level, Istio helps reduce the complexity of these deployments, and eases the strain on your d
[video]https://youtu.be/iaP0G2cvaY4[/video] In this video, Michael Haigh walks through scaling a production grade Kubernetes cluster with Nutanix Karbon. Karbon simplifies the provisioning and life cycle management of Kubernetes clusters, freeing administrators and operators from manual and tedious tasks, and enabling developers to focus on their applications. :point_right: Are you running Nutanix Karbon? Be the first to hit reply and continue the conversation on this topic.
I'm trying to deploy the FortiAuthenticator v5.5 VM but am running into issues on AHV. I've tried creating the VM using the files from Fortinet, but there are no AHV-specific instructions in this guide: [url=https://s3.amazonaws.com/fortinetweb/docs.fortinet.com/v2/attachments/5795878a-1f78-11e9-b6f6-f8bc1258b856/fac-vm-install-guide-43.pdf]https://s3.amazonaws.com/fortinetweb/docs.fortinet.com/v2/attachments/5795878a-1f78-11e9-b6f6-f8bc1258b856/fac-vm-install-guide-43.pdf[/url] The files I've used are the following: [list] [*]KVM - fackvm.qcow2, datadrive.qcow2 [*]VMware - fac.vmdk, datadrive.vmdk [*]screenshot attached of my options from Fortinet [/list] [list=1] [*]Created vm adding cloned dis
[video]https://youtu.be/BUb9weyoLnk[/video] Are you interested in deploying production grade Kubernetes clusters in your on-prem data center? Well, with Nutanix Karbon, you can quickly provision, manage, and operate your Kubernetes clusters, all within Nutanix Prism. Let's get started. Now that Karbon is generally available, we're just gonna go ahead and click on the menu button, then down to Services, and then Karbon. If Karbon is not enabled in your environment, there'll be a button here to press to enable it, and it'll get spun up in about 5 to 10 minutes as a couple of Docker containers which live on the Prism Central VM. Since mine is already enabled, I'm gonna click on the link to take me to the Karbon console. Now that that's opened up, I'm gonna go ahead and create a Kubernetes cluster. We're gonna get started here working through the development cluster workflow. All of these settings are configurable, as we'll see. I'm gonna actually come back and deploy a produ
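Once a cluster created through the workflow above is up, a quick sanity check from a workstation looks roughly like this. The kubeconfig filename and namespace names are assumptions for illustration; use whatever the Karbon console actually gives you.

```shell
# Point kubectl at the kubeconfig downloaded from the Karbon console
# (filename is hypothetical -- use the one Karbon generated for you).
export KUBECONFIG=~/Downloads/mycluster-kubectl.cfg

# Confirm every master and worker node registered and is Ready
kubectl get nodes -o wide

# Inspect the bundled add-ons (system pods typically live in
# kube-system and a Nutanix-specific namespace such as ntnx-system)
kubectl get pods --all-namespaces
```

If the nodes all show `Ready` and the system pods are `Running`, the cluster is in a usable state and you can start deploying workloads.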
[video]https://youtu.be/IucbVL8lECk[/video] Are you looking to easily upgrade the host operating system of your Kubernetes nodes? With Nutanix Karbon, you can quickly and easily upgrade the host OS of your Kubernetes nodes, all within Nutanix Prism. So let's get started. In a previous video, we successfully deployed this production grade Kubernetes cluster. We see that it is healthy and also that there is an upgrade available. The reason for that is there are actually a couple of host OS versions available in the Karbon UI here. So you can imagine, if you deploy a Kubernetes cluster and then about a month later an updated version of the host operating system is released, you'll see a new option within the UI here and the ability to download it. So you'll just come in here and click a button to download it, and then once it is downloaded, like we see here, we get the option to upgrade the cluster. So I'm going to go ahead and select this, and if we click on upgrade available, we see tha
I’m planning to move a Windows Server 2012 R2 Datacenter Edition VM from Hyper-V 2012 R2 to AHV on AOS 5.10.2 today. I have created a similar (but not cloned) test VM in the environment and migrated it yesterday with no issues. UEFI worked fine. Today in my final prep I’m seeing that in newer versions of the Nutanix documentation (AOS 5+) this migration and VM type (AHV with UEFI) is listed as Limited Support and “Not recommended for Production”. [b]Is this a showstopper?[/b] Are other customers moving important production Gen 2 UEFI Hyper-V VMs to AHV?

Scenario and support level:
[list]
[*]Generation 2 VM (UEFI) migrated from Hyper-V to AHV: Limited support. For information about configuring UEFI for VMs migrated from Hyper-V, see [url=https://portal.nutanix.com/#/page/docs/details?targetId=Migration-Guide-AOS-v510:vmm-vm-migrate-post-migrate-windows-t.html#task_afb_jbt_vw]Post-Migration Tasks, Windows VMs[/url].
[*]UEFI VM migrated from ESXi to AHV: Not sup
[/list]
We are using Async Data Protection between two sites to replicate our VMs, but that only takes a new snapshot of an existing VM and replicates that snapshot. I would like to set up replication for some of our VMs so that the VM and all of the snapshots we've made of it outside of Data Protection are replicated as well. We may need a very specific state of a VM at our remote site, and the Async DP snapshots alone won't fully meet our needs. Is there any way to do this with the existing Nutanix software? I'd rather not purchase something third party to solve this problem.
Beyond simple Docker volume support, can Nutanix bring its expertise into the CSI space to make orchestration systems like k8s aware of cluster-wide storage resources? This also leads into what Nutanix is doing to enable a k8s-API-conformant solution on top of the platform. I recognize that we can layer on a k8s runtime as customers, but similar to what we see with Pivotal leveraging BOSH to make day-2 k8s work, there seems to be demand for integrated options. The institutional storage knowledge could really take StatefulSets to the next level, and BOSH has a lot of similarities to Acropolis. I know there may be overlap with Calm when it comes to service orchestration, but k8s is becoming the de facto language for orchestrating applications delivered as containers.
Hello all, I'm trying to establish the optimal disk layout for a VMware MS SQL 2016 two-node guest failover cluster that will have a single SQL instance hosting multiple small to medium sized databases. Unfortunately we don't really have the licensing funds for the recommended approach of creating lots of small VMs to host these databases, so we need to consolidate where possible. I've been reading through the Nutanix SQL 2016 best practice guide and it has raised a few questions. I've decided to allocate multiple vdisks specifically for database files, so I'd have a few database files on each vdisk instead of one vdisk holding all the database files. I'm assuming this approach might be better because each vdisk has a 6 GB oplog allocation, so more vdisks would prevent a single oplog from quickly overfilling. Is there any real benefit here to having more vdisks to spread the databases across, or does this only really come into play when you split indi