Topics started by dlink7
I would like to take full blame for people thinking self-healing solves all problems with Nutanix Cloud Clusters (NC2). The self-healing is pretty awesome for bad NICs, hard drives, and nodes, but there are some instances where the portal can't take action.

Status checks on AWS are performed every minute, returning a pass or fail status in the Cloud Portal. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. There are two types of status checks: system status checks and instance status checks. System status checks monitor the AWS systems on which the instance runs. Instance status checks monitor the software and network configuration of individual instances.

Notification Center in the Clusters Portal

The Clusters portal has a notification service that keeps track of all informational, warning, and critical alerts. Like most cloud-based services, the portal has no support for SNMP, but it does have the ability to send
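The pass/fail logic described above is simple to express in code. Here is a minimal sketch in pure Python (no AWS calls; in practice you would fetch the per-check results with the EC2 API, e.g. boto3's `describe_instance_status`):

```python
def overall_status(system_checks, instance_checks):
    """Combine AWS status checks the way the console does.

    system_checks   - results of system status checks (AWS-side hardware/network)
    instance_checks - results of instance status checks (guest software/network)
    Each entry is "passed" or "failed". The instance is OK only when
    every check of both types passes; any single failure makes it impaired.
    """
    if all(c == "passed" for c in system_checks + instance_checks):
        return "ok"
    return "impaired"

# All checks pass -> the instance is OK
print(overall_status(["passed"], ["passed"]))   # ok
# One failed system check is enough to mark it impaired
print(overall_status(["failed"], ["passed"]))   # impaired
```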
I made a short video on auto-scaling a Nutanix cluster in AWS using the new Nutanix Playbooks. Playbooks offer a visual way to start automating actions in your hybrid cloud. To keep the video short, I didn't spend a lot of time talking about the options you can set.

Prism Central has a lot of alerts that you can use to trigger the action you want to take. If you don't see the right “Alert”, you can customize one of the existing ones to meet your needs, and it will become a user-defined alert. In my example I used a critical memory-capacity alert at over 80% capacity to trigger adding nodes. If your clusters are typically larger than 10 nodes, you might try a higher percentage, since one node represents a smaller share of overall memory. Likewise, you could be more aggressive and pick a lower memory percentage. One thing you'll also want to set is “trigger alert if condition persists for". I didn't have it set so I could easily record the demo, but I think it would be
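A quick back-of-envelope check on that threshold advice (a sketch assuming homogeneous nodes and an evenly balanced workload, which real clusters only approximate): the larger the cluster, the less one extra node moves overall utilization, so a larger cluster can safely trigger at a higher percentage.

```python
def post_scale_utilization(trigger_pct, node_count):
    """Approximate memory utilization right after adding one node.

    Assumes identical nodes and evenly spread workload: the same memory
    demand is redistributed across node_count + 1 nodes.
    """
    return trigger_pct * node_count / (node_count + 1)

for n in (3, 10, 20):
    print(f"{n} nodes, 80% trigger -> ~{post_scale_utilization(80, n):.1f}% after scale-out")
# 3 nodes drop to 60%, but 20 nodes only drop to ~76%, so a bigger
# cluster gains little headroom per node and can wait for a higher trigger.
```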
Nutanix's support for bare metal on Azure is getting close. Please take a look at some of the foundational components for networking in Azure.

Nutanix Clusters on Azure utilizes Flow Networking to create an overlay network in Azure, easing administration for Nutanix administrators and reducing networking constraints across cloud vendors. Flow Networking abstracts the Azure native network by creating overlay virtual networks. On the one hand this abstracts the underlying network in Azure; at the same time, it allows the network substrate (and its associated features and functionality) to be consistent with the customer's on-premises Nutanix deployments.

You will be able to create new virtual networks (called Virtual Private Clouds, or VPCs) within Nutanix, create subnets in any address range, including those from the RFC 1918 (private) address space, and define DHCP, NAT, routing, and security policy right from the familiar Prism Central interface. Flow Networking can mask or reduce Cl
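Since subnets can come from the RFC 1918 private space, it can be handy to sanity-check a candidate CIDR before creating it. A small illustrative check using Python's standard `ipaddress` module (not part of the Flow Networking API, just a standalone helper):

```python
import ipaddress

# The three RFC 1918 private address blocks
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr):
    """Return True if the given CIDR falls entirely inside RFC 1918 space."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918_BLOCKS)

print(is_rfc1918("10.42.0.0/24"))     # True  - inside 10.0.0.0/8
print(is_rfc1918("203.0.113.0/24"))   # False - public documentation range
```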
I make no bones about it: bare-metal nodes in AWS are pricey. The question becomes: if we want to save money and have DR with AWS, how do we keep the number of nodes small until we fail over?

In AWS, all the Nutanix rules still apply. We need a minimum of a three-node cluster to get started. If we have enough storage capacity, we can keep accepting replications from an on-prem cluster and not worry about CPU and memory constraints. You could also use your on-prem AHV-supported backup software to back up directly to S3. Then, when you have an outage, you could quickly build a Nutanix cluster in AWS and start restoring the data to that cluster.

DR on the skinny

I was asked to automate the process of adding nodes to a small Nutanix cluster in AWS and then start a failover using Leap (DR runbooks) and X-Play (which automates actions based on events and alerts) in Prism Central (PC). The video below is what I came up with. The code to add a node via API from the Clusters portal is listed here: https://gith
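Since the link to the add-node script above is truncated, here is only an illustrative sketch of the shape of such an automation call. The payload field names here are assumptions for illustration, not the real Clusters portal schema; the linked script and the portal API documentation are the source of truth.

```python
import json

def build_add_node_request(cluster_uuid, nodes_to_add=1):
    # Hypothetical payload shape -- the real field names come from the
    # Clusters portal API, not from this sketch.
    return {
        "cluster_uuid": cluster_uuid,
        "node_count": nodes_to_add,
    }

payload = build_add_node_request("00000000-0000-0000-0000-000000000000")
print(json.dumps(payload))
# An X-Play playbook could POST a payload like this (authenticated as an
# API user) to the portal, wait for the new node to join the cluster,
# and then kick off the Leap recovery plan.
```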
Why a grizzly bear? Why not? A while ago I created a video showing how you could easily save lots of money on EC2 costs by using hibernate. Nutanix Clusters is the only on-prem vendor offering this for customers that want to lift and shift to the public cloud or use the public cloud for DR. I want to share the code snippet and how to create an API user in the Clusters portal.

You can first get familiar with hibernate here:

To create an API user to make the call:

Go to the Clusters portal
Under Organization, go to Users
Go to the user option
Create an API user
Add an API user

You can limit the scope of the user to a single cluster. Once you have an API user, you can use the script below to make an API call, or add the script to a VM.

#!/usr/local/bin/python
import hashlib
import hmac
import time
import requests
import base64

# Client credentials
# client_id from the user from the clusters portal
client_id = "*********.img.frame.nutanix.com"
# client_secret password from the user on
Hi Nutanix Community and fellow lurkers! If you're just getting started with AWS, we have made some videos to go along with the documentation to make the process even easier. I would appreciate any feedback on what else should be added based on your own experience. In the next week we will add a fourth video showing how to set up your VPN on the AWS side.
Xi Epoch generates live application maps to provide instantaneous visibility into your application health without any code instrumentation. Epoch provides visibility into the interactions between components in distributed architectures, without dependency on a specific language or framework implementation. As a result, operations teams can quickly ensure the reliability and availability of any application in any cloud environment.

The picture below shows two different Kubernetes clusters, Docker Enterprise and Nutanix Karbon. The collectors are deployed as DaemonSets, so they can be deployed to any Kubernetes cluster to start monitoring your applications. Epoch collectors can run in both containerized and non-containerized environments. Only one collector is needed per host (VM or bare-metal OS). Below is YugaDB running Cassandra commands; Epoch is able to look at the traffic, report on how fast commands are happening, and even give the most requested queries.
We have a new blog post that recently went live: Nutanix Releases New Kubernetes CSI-Based Driver.

CSI is an open, independent interface specification that describes how third-party storage providers can provide storage operations for container orchestration systems. CSI makes installing new volume plugins as easy as deploying a pod, and it enables third-party storage providers to develop their plugins without needing to add code to the core Kubernetes code base. The CSI driver for Kubernetes leverages Nutanix Volumes (formerly known as ABS) to provide scalable and persistent storage for stateful applications. It was authored by a great team of folks: Subodh Mathur (Engineering), Denis Guyadeen (PM), Dwayne Lessner (TME), and Christophe Jauffret (Architect). Continue the conversation from the article on this thread and let us know what you think!