Disclaimer: This post is intended for demo purposes and must not be considered production ready. Velero is not included in Nutanix Karbon, so Nutanix Support won't handle any case related to Velero.

Overview

One of the main principles of containerised applications is statelessness. The goal is to make the application portable and not dependent on any local data, so you can reuse the same container image on any platform with the same result. As containers gained popularity for their portability, scalability, and so on, the community found ways to containerise stateful applications such as databases by using local storage or shared volumes.

Recovering a stateless application is a very straightforward process: you just re-apply your manifest files on another cluster. You only need to make sure your manifest files are up to date with the latest state running in your cluster. Remember you shouldn't make changes directly in your cluster; you should update your manifests instead.
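As a sketch of the recovery paths described above: stateless recovery is plain kubectl, and Velero adds the stateful piece. The context, namespace, and backup names below are placeholders.

```shell
# Stateless recovery: re-apply the same manifests against another cluster.
# "dr-cluster" and the manifest path are placeholders.
kubectl --context dr-cluster apply -f ./manifests/

# Stateful recovery with Velero: back up the Kubernetes objects (and,
# with volume snapshots or a file-system backup configured, the volume
# data too), then restore them elsewhere.
velero backup create myapp-backup --include-namespaces myapp
velero restore create --from-backup myapp-backup
```

These are command fragments that assume a reachable cluster and an installed Velero server; they are meant to show the shape of the workflow, not a production runbook.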
We're in the process of testing Veeam for AHV on our Nutanix cluster running AOS 5.10. So far with Windows VMs we have had very few issues, and we worked them out quickly. However, with a Linux VM running Ubuntu we're getting errors on the Nutanix side where the snapshot checks for pre-freeze and post-thaw scripts. We've attempted a variety of ways to get those scripts in place, but it seems there is no clear path. Does anyone else have experience with Veeam for AHV and backing up Linux VMs? Does anyone have any info on setting up the pre-freeze and post-thaw scripts? While we have some experience with Linux, we're certainly not experts.
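Not an official answer, but here is a minimal sketch of what pre-freeze/post-thaw hooks usually look like. On a real VM the scripts would typically live under /usr/local/sbin with root-only permissions per the Nutanix Guest Tools convention as I understand it (verify the exact paths against your NGT/Veeam documentation); the demo below writes them into a scratch directory, and the `sync` call stands in for your own quiesce logic (e.g. a database flush).

```shell
# Demo directory; on a real VM this would be /usr/local/sbin (assumed path).
HOOK_DIR=$(mktemp -d)

cat > "$HOOK_DIR/pre_freeze" <<'EOF'
#!/bin/sh
# Runs just before the snapshot: flush buffers / quiesce the application.
sync
exit 0
EOF

cat > "$HOOK_DIR/post_thaw" <<'EOF'
#!/bin/sh
# Runs after the snapshot completes: resume normal operation.
exit 0
EOF

# The hooks must be executable (root-only, 700, per the convention).
chmod 700 "$HOOK_DIR/pre_freeze" "$HOOK_DIR/post_thaw"
"$HOOK_DIR/pre_freeze" && echo "pre_freeze ok"
```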
Trying to facilitate the use of Karbon for internal users, but they have requested a later version of CentOS. When launching the Karbon portal, the only OS image available for download is currently centos7.5.1804-ntnx-0.0. Is it possible to upgrade this to, say, CentOS-8.1905?
Good day,

I have a few questions regarding PVs on Karbon clusters. Is there a way to access Persistent Volumes created by Kubernetes other than from within the pod where the volume is mounted? If the PV has status Released, can I still access it without having to bind it again? Basically, I want to know whether the data on Karbon PVs can be accessed via SSH, or anything else other than K8s pods. If backups are saved on a PV, can I access the data from another VM besides the cluster worker nodes?
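For the first part of the question, you can at least see where a PV lives from outside the pod with kubectl. On Karbon, PVs are typically backed by Nutanix Volume Groups exposed over iSCSI, and the describe output points at the backing volume. The PV name below is a placeholder.

```shell
# List PVs with their claims and status (Bound, Released, ...).
kubectl get pv

# Inspect one PV; with the Nutanix CSI driver the source section
# references the backing Volume Group on the Nutanix cluster.
kubectl describe pv pvc-xxxx
```

These are command fragments that assume a reachable cluster; whether you can then mount that Volume Group from another VM is a separate question about the iSCSI/ABS side.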
Hi friends, I tried to deploy Karbon and the deployment failed. The error from the log file is below.

2019-09-18T12:23:03.941951000Z 2019/09/18 12:23:03.939732 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:05.943064000Z 2019/09/18 12:23:05.942059 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:07.944711000Z 2019/09/18 12:23:07.943488 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:09.945585000Z 2019/09/18 12:23:09.944754 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:11.946956000Z 2019/09/18 12:23:11.946091 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:13.948607000Z 2019/09/18 12:23:13.947674 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:15.949911000Z 2019/09/18 12:23:15.948958 etcd_deploy.go:116: [DEBUG] Waiting for connection...
2019-09-18T12:23:17.951693000Z 2019/09/18 12:23:17.950656 etcd_deploy.go:116: [D

Regards,
Ritchie James
NetBackup 8.1 was released yesterday. One of the features is "NetBackup for Nutanix – parallel, scale-out backups of Nutanix clusters". It seems to only support full backups. There is also a requirement for a Linux backup host: [url=https://www.veritas.com/content/support/en_US/doc-viewer.18716246-126559472-0.v127139966-126559472.html]https://www.veritas.com/content/support/en_US/doc-viewer.18716246-126559472-0.v127139966-126559472.html[/url] [list] [*][b]Backup_Host=[/b] The backup host must be a Linux machine. The backup host can be a NetBackup client or a media server. [*][b]Application_Server=[/b] [/list]
We have a dozen or so VMs/appliances that do not support AHV and thus cannot be migrated to our new AHV cluster, which runs 95% of our servers. We would like to expose a container/Volume Group from Nutanix to the VMware hosts as NFS datastores to build these VMs/appliances on. Our goal is then to replicate that container/Volume Group to our DR Nutanix cluster and connect our DR VMware cluster to it, so the same VMs/appliances can be used in case of failover. We have gotten conflicting information from all parties and want to know if anyone is already doing this and what your results are. These VMs/appliances would have fairly high I/O.
NetBackup 8.2 has announced NetBackup Accelerator support for Nutanix AHV.

[b]What is Accelerator?[/b] If the client has no previous backup, NetBackup performs a full backup and creates a track log. The track log contains information about the client's data for comparison at the next backup. At the next backup, NetBackup identifies data that has changed since the previous backup by comparing information from the track log against information from the file system for each file. For NTFS and ReFS file systems, it also uses the Windows change journal to help identify the data that has changed since the last backup. Accelerator uses the Windows change journal in two ways: to check for changes in the file system metadata, and to help detect which files have changed since the last backup. The NetBackup client sends to the media server a backup stream that consists of the following: the client's changed blocks, and the previous backup ID and data extents (block offset a
I have an issue with Rubrik backup. I have noticed that some VMs fail to back up on some days, always with the same alerts; on other days they work fine.[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/10508755-dbb2-4d48-a12a-4bb18e04410c.jpg[/img][img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/a6fad98e-7e72-44b8-8b57-38e325ea3cdf.jpg[/img]
Hi, I'm currently installing Ubuntu on a VM. I notice the installation process is extremely slow: downloading files is fast, but the actual installing isn't. Is this common when installing Ubuntu on a Nutanix VM? I tried different versions, but they are all very slow. Thanks!
Hello all, I am new to Nutanix and new to Kubernetes, so my apologies for the newbie questions. I've played with Docker and Kubernetes on my Windows machine but am having a brain freeze transitioning to the Karbon cluster. How do I get kubectl to use the config file I downloaded from Karbon? Currently kubectl is only talking to my Docker VMs. Many thanks in advance, J
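The usual pattern is to point kubectl at the downloaded file via the KUBECONFIG environment variable, or to merge it with your existing config. The file path and context name below are placeholders.

```shell
# Use only the Karbon kubeconfig for this shell session.
export KUBECONFIG=~/Downloads/mycluster-kubectl.cfg
kubectl config get-contexts
kubectl get nodes

# Or keep your Docker Desktop config and merge both files, then switch
# contexts as needed ("mycluster-context" is a placeholder name).
export KUBECONFIG=~/.kube/config:~/Downloads/mycluster-kubectl.cfg
kubectl config use-context mycluster-context
```

These commands assume a reachable Karbon cluster; on Windows, set the variable with `set`/`$env:KUBECONFIG` instead of `export`.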
I'm trying to configure a new Kubernetes cluster using Karbon. When I get to the stage of configuring the storage class, I'm prompted for a username and password. I'm not sure which user I should enter here, as the few I've tried don't seem to work. The cluster setup tutorial video shows the username `admin`, which suggests using a very highly privileged account; that seems unnecessary to me. How can I create a user with the minimum privileges necessary for the storage class?
I'm testing Karbon at a very basic level. I can't make it through the Karbon configuration testing, so I'd like to ask for help. I upgraded Karbon to the latest version through LCM. After that, I tried to set it up through "Create Kubernetes Cluster", and that's where the problem starts: the deployment ends in a "Deployment Failed" status, with the error occurring at 8% of the deployment. I have looked up various information, and it seems I may need to configure an HTTP proxy and etcd. How do I set those up? Please explain in detail why Karbon can't be deployed. Thank you.
I am migrating from ESX 6.0 to AHV 5.1.2. I have successfully mounted the container from VMware and migrated a disk. When trying to start the image from the vmdk file in Prism Console, I get stuck at the following message in the boot screen: "Booting from Hard Disk...". Regards, Onder Avcu
With the release of Nutanix Karbon TP, PC 5.9, you may want to deploy some of the traditional addons like [b]Kubernetes Dashboard with Heapster[/b]. This post walks you through the process to successfully deploy the Kubernetes Dashboard addon. Before you can start with the deployment of the addon you need a working Kubernetes cluster and the [b]kubectl[/b] CLI-tool. The steps to follow are: [list=1] [*]Deploy Heapster [*]Deploy Kubernetes Dashboard [*]Connect to Kubernetes Dashboard [/list] [h2]Deploying Heapster[/h2]From Heapster website: [i]Heapster enables Container Cluster Monitoring and Performance Analysis for Kubernetes (versions v1.0.6 and higher), and platforms which include it.[/i] [b][i]Heapster is deprecated[/i][/b][i]. Consider using metrics-server and a third party metrics pipeline to gather Prometheus-format metrics instead. See the[/i] [url=https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md][i]deprecation timeline[/i][/url][i] fo
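The three steps above come down to a couple of kubectl commands against the Karbon cluster. The manifest URL below is the historical upstream location for the Dashboard v1.x recommended deployment (verify it against the project repository for your version), and `kubectl proxy` is one way to reach the Dashboard from your workstation.

```shell
# Deploy the Kubernetes Dashboard from the upstream recommended manifest
# (historical v1.x URL; check the kubernetes/dashboard repo for yours).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Open a local proxy to the API server...
kubectl proxy
# ...then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```

These are command fragments that assume kubectl is already pointed at your Karbon kubeconfig; Heapster would be deployed the same way from its own manifests before this step.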
Deploying a Karbon cluster, we got this error at 8% of the process:

Deployment Failed
invalid argument: internal error: internal error: failed to deploy the ntnx dvp: Failed to configure with SSH: Failed to run command: on host: "xx,xx,xx,xx:22" error: "Process exited with status 1"

Has anyone seen the same problem? Any idea how to get past it?

Regards,
Javier
Hi all, when running NCC checks I get a report (example below) that some volume groups' usage is high (95%).

Detailed information for vg_space_usage_check:
Node 10.x.x.x:
FAIL: Volume Group pvc-xxxxxx-9xxx-11x9-bf59-xxxxx8d87c391 space usage (95 %) above 90 %

The volume group is mounted to a Kubernetes pod running in a Karbon cluster. My issue is that when I check usage of the volume group from within the Kubernetes pod, I get different results: currently 24% used, 76% free. The NCC check and the manual check don't match. NOTE: there is a clean-up mechanism, so disk usage does go up and down. I have re-run the NCC checks several times and they yield the same stats (95% usage), though the usage from the pod is currently 24%. Can you please elaborate on how the NCC vg_space_usage_check works, and what else to look at to further investigate the mismatch? Tx
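One common cause of this kind of mismatch on thin-provisioned block storage is that the backend counts every block that was ever written, while the filesystem inside the pod counts only live data; blocks freed by the clean-up mechanism are only returned to the backend when a discard/TRIM runs. A way to compare the two views and trigger a discard (pod and mount-path names are placeholders, and whether fstrim actually reclaims backend space depends on the mount options and driver stack):

```shell
# Filesystem view from inside the pod.
kubectl exec my-pod -- df -h /data

# Ask the filesystem to discard unused blocks so the backend's view
# can converge (requires discard support end-to-end).
kubectl exec my-pod -- fstrim -v /data
```

If the VG usage drops after the trim, the 95% was stale allocation rather than live data.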
Hi all, I am new to Nutanix and am looking for best practices around swap space on Nutanix AHV VMs. With SSD drives, it is understood that swap writes will decrease hardware life. Can swap be allocated to HDD drives only, and/or is it best to use a low swappiness value to reduce the amount of swapping? Thanks,
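On the swappiness side, the knob is a standard Linux sysctl inside the guest, independent of Nutanix. The value 10 below is illustrative, not a Nutanix recommendation.

```shell
# Read the current swappiness (0-100; lower means the kernel is less
# eager to swap anonymous memory out).
cat /proc/sys/vm/swappiness

# To change it at runtime (value 10 is illustrative):
#   sysctl vm.swappiness=10
# To persist it across reboots:
#   echo 'vm.swappiness=10' > /etc/sysctl.d/99-swappiness.conf
```

Placing swap on a separate virtual disk is a guest-OS partitioning choice; whether that disk lands on HDD depends on the cluster's storage tiering, not on the guest.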
Hello, first-timer here; I don't know if I'm posting in the right place, and I'm sorry if I'm not. I have Acronis Backup 12.5 installed on a Windows Server 2019 Essentials 64-bit machine (installed bare-metal on a Lenovo System x3650 M5). Attached to it is an IBM TS4300. I'm trying to figure out how to back up the VMs running on my AOS 5.10.1 cluster to the tape drives inside the TS4300. I know I can install Acronis agents on my Windows VMs, but I don't know if that's the answer to my question. And if a disaster happens and I need to restore the Acropolis VMs from the tapes, how can I do it? Sorry if my questions seem too basic, but I've been googling for this all day with no success, so I'm asking for help here. Best regards,
Hi all, since the kubeconfig is only valid for 24 hours, is there a way to renew it automatically on expiry? An API call, maybe, using the same credentials used to log in to the Karbon console? The idea is to automatically update the kubeconfig from our CI/CD pipeline to deploy to Kubernetes. Please also share, if there are any, best practices for integrating Karbon clusters with a Jenkins CI/CD pipeline. Thank you.
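A common approach is to fetch a fresh kubeconfig from the Karbon API at the start of each pipeline run, so the 24-hour expiry never matters. The host, cluster name, variable names, and the endpoint path below are assumptions; verify the exact path and response shape against the REST API explorer in your Prism Central version (some versions return JSON that needs the kubeconfig field extracted rather than a raw file).

```shell
# Prism Central host, cluster name, and credentials are placeholders;
# in Jenkins, PC_USER/PC_PASS would come from a credentials binding.
PC=prism-central.example.com
CLUSTER=mycluster

# Assumed Karbon endpoint; confirm in the PC REST API explorer.
curl -sk -u "$PC_USER:$PC_PASS" \
  "https://$PC:9440/karbon/v1/k8s/clusters/$CLUSTER/kubeconfig" \
  > "$HOME/.kube/$CLUSTER.cfg"

export KUBECONFIG="$HOME/.kube/$CLUSTER.cfg"
```

Running this as the first stage of the pipeline keeps every subsequent kubectl step authenticated without manual renewal.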
[user=149]joshodgers[/user] has posted an interesting blog on his site about a new AHV feature -- Compute Only nodes. The concept of Compute Only nodes will enable infra teams to give their application and DB admin counterparts more CPU/memory per node. Nutanix clusters can now be made up of compute-only and storage-only nodes, minimizing if not eliminating the licensing concerns they had in the past for workloads like Oracle, SQL Server, etc. Best practices for setting up high-performance databases will be coming soon. Reply in this thread to let our database SMEs know if you have specific questions you'd like them to address in their best practices. Excerpts from his blog -- http://www.joshodgers.com/2019/02/20/solving-oracle-sql-licensing-challenges-with-nutanix/ >> Compute only nodes complement the traditional HCI nodes (Compute+Storage) as well as our unique [url=http://www.joshodgers.com/2015/06/09/whats-next-scale-storage-separately-to-compute-on-nutanix/]Storage Only Node