Nutanix Community Podcast: The AI-Ready Platform: Nutanix Simplifies Enterprise AI
The Foundation for Your Hybrid Cloud
Hi, thank you as always for your support. I'll be upgrading Hyper-V 2012 R2 to 2016 on an XC630, so I have some questions:
・I can download the metadata (JSON) from the Nutanix Portal. Can I also get the hypervisor ISO file there?
・What is the difference between the various metadata (JSON) files?
・Is it OK to use an Evaluation-version hypervisor ISO?
・Will any language edition of the ISO work, or English only?
Sincerely,
Hi, I'm running vSphere 6.0 U3 on NX-series hardware with Async DR replication set up to AWS. Some protection domains' Async DR started erroring after the VCSA broke and was redeployed. Do Async DR and vCenter depend on each other? Sincerely,
Setup: 4-host cluster; every host has 1 SSD and 1 HDD.
Problem: 1 HDD died and the CVM doesn't start. "virsh start" shows "Failed to start domain" and "Cannot access storage file". The HDD has already been replaced.
Question: How do I recreate the CVM as part of the existing cluster?
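A minimal diagnostic sketch for anyone picking this up, assuming an AHV host (the CVM name below is a placeholder): this shows which storage file the CVM's libvirt definition still points at, which is usually the path the "Cannot access storage file" error refers to.

# On the affected AHV host: find the CVM's libvirt name, then check
# which storage file its definition references and whether that file
# actually exists after the disk replacement.
virsh list --all | grep -i cvm
virsh dumpxml NTNX-XYZ-CVM | grep -B2 -A2 "source file"   # hypothetical CVM name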
Hi, I'm trying to wrap my head around the differences between a snapshot and a recovery point from a functional perspective. My goal is to take VMware-like snapshots of my VMs. I see two possibilities:
- Recovery points, which are managed by PC and allow for easy replication.
- Snapshots, which (correct me if I'm wrong) can only be created in PE.
My issue is that a recovery point cannot be restored "in place", meaning into the VM object itself, but only by creating a clone of the VM. This creates extra administrative burden when I have to restore a VM from a recent recovery point, as I have to: restore/clone the recovery point into a new VM, delete the old one, and update the clone with the previous name, categories, etc. In addition, the newly restored/cloned VM has lost all of its previous recovery points. Snapshots, on the other hand, can be restored in place, but they are not managed by PC nor replicated to the backup cluster.
TL;DR: Is there a way to either create a snapshot via PC or restore a recovery point in place?
When performing a restoration of a VM with a volume group on a different cluster, changes are made to the guest OS in order to connect to the restored volume group. Since the primary site's data services IP address will no longer be valid, the guest OS will connect to the volume groups using the secondary site's data services IP address. Also, all other iSCSI target IP addresses on the guest OS will be removed. Another change that occurs is to the IQN of the VM. When the volume group is restored, the IQN is updated with the timestamp of when the volume group was recovered. This allows for multiple restorations of the volume group if necessary. Therefore, the IQN needs to be changed in the guest OS iSCSI configuration as well. The Nutanix Guest Tools (NGT) CD is mounted on the VM to facilitate this iSCSI configuration change so that no manual changes are required. More information about the above restoration process can be found on the Support Portal.
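For illustration, here is a minimal sketch of what that guest-side change amounts to on a Linux VM if you ever had to do it by hand (the IP, IQN, and paths below are placeholders; NGT normally automates all of this):

# Hypothetical example: re-point a Linux guest at the secondary site's
# data services IP after a volume group restore. All values are placeholders.

# 1. Update the initiator IQN to the recovery-timestamped IQN that the
#    restored volume group expects.
sudo sed -i 's/^InitiatorName=.*/InitiatorName=iqn.1991-05.com.example:vm01-restore-20240101/' \
    /etc/iscsi/initiatorname.iscsi
sudo systemctl restart iscsid

# 2. Remove the stale target records that still point at the primary
#    site's data services IP.
sudo iscsiadm -m node -o delete

# 3. Discover and log in via the secondary site's data services IP.
sudo iscsiadm -m discovery -t sendtargets -p 10.20.30.40:3260
sudo iscsiadm -m node -l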
Hi, our 3rd-party backup software does a poor job of cleaning vdisks off its proxy after it does its backups. It often leaves the vdisk from a snapshot mounted to its proxy server without deleting it. This is a known issue that has been raised with the vendor (Quest NetVault). Our question: what is the easiest way to see which disks are owned by the proxy VM and which are owned by a snapshot image? We want to be sure we don't remove a disk that belongs to the proxy server itself. Thanks,
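One approach that may help, as a sketch (assuming AHV; the proxy VM name is a placeholder, and the option should be verified against your AOS version):

# On a CVM: dump the proxy VM's disk list together with the backing
# vdisk paths. Disks the backup software attached from a snapshot
# typically show a path under a .snapshot directory, while the proxy's
# own disks live under the VM's folder on the container.
acli vm.get NetVault-Proxy include_vmdisk_paths=1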
I have two questions about our scenario: we have two AHV clusters, production and DR.
Q1: We need to stop the main site from becoming active automatically after it powers back on once the issue is resolved. Otherwise, if the main site goes down the DR site becomes active, and after the issue is resolved both clusters end up active-active?
Q2: How do I stop auto power-on for VMs on AHV? That is, when the main site comes back up after the issue is resolved, I don't want the VMs to power on automatically; I want to power them on manually.
The following tables show the recommended configuration of VMware High Availability.

Admission Control

The table below shows the percentage values required for N+1 through N+4 availability for vSphere clusters of up to 32 hosts (the current maximum cluster size for vSphere 5.5). The green highlighted values represent recommended values based on the level of availability. In summary, the recommendations are:

- Cluster of ≤8 hosts: N+1 redundancy
- Cluster of >8 and ≤16 hosts: N+2 redundancy
- Cluster of >16 and ≤24 hosts: N+3 redundancy
- Cluster of >24 and ≤32 hosts: N+4 redundancy

Host Isolation Response and Advanced Settings
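A worked sketch of where those percentages presumably come from (my assumption: the standard admission-control arithmetic of reserving one host's worth of capacity per tolerated failure, rounded up to a whole percent):

\[ \text{reserved \%} \;=\; \left\lceil \frac{N_{\text{failures}}}{N_{\text{hosts}}} \times 100 \right\rceil \]

For example, N+2 on a 16-host cluster reserves ⌈(2/16) × 100⌉ = 13% of CPU and memory, and N+1 on an 8-host cluster likewise works out to ⌈(1/8) × 100⌉ = 13%.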
Hello, I am an engineer with a security software vendor. We have a mutual customer that seems to be experiencing system hangs on their VMs running on the Nutanix Acropolis platform. Admittedly I am not very familiar with Nutanix, as I mostly work with VMware. These hangs reportedly started after they upgraded our security product. VMware has a process where we can create snapshot files and convert them into a memory dump file, which is helpful for troubleshooting when you cannot take any action inside the VM to force a dump. I am trying to find out whether the Nutanix Acropolis infrastructure has any comparable capability. I will be suggesting that our mutual customer reach out to you as well, but I am trying to investigate the capability in parallel. Your assistance and advice are welcome. David Alexander
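Since AHV is KVM/libvirt-based, the closest analogue I'm aware of is libvirt's virsh dump run on the AHV host. A sketch under that assumption (the VM name and output path are placeholders, and I'd confirm with Nutanix Support before running this against a production guest):

# On the AHV host that runs the hung VM: capture guest memory without
# powering the VM off. --memory-only writes an ELF core readable with
# crash/gdb, roughly what vmss2core produces on the VMware side.
virsh list --all                                    # find the VM's name
virsh dump --memory-only --live WIN-APP-01 /home/cores/WIN-APP-01.core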
We're standing up some Docker-based NVIDIA GPU compute workloads for the rapids.ai ecosystem, to replace/accelerate Spark & friends. However, we're lost in the Nutanix GPU virtualization docs, so I'm curious whether folks have ideas on the pieces needed for Nutanix to work here. Right now we're thinking P100/V100 GPU → AHV/ESXi → RHEL 8.x → Docker, and, as an optional stretch target, seeing whether multiple guest OSes can share the same GPU(s). We've successfully done GPU → Ubuntu+RHEL → Docker, but without AHV/ESXi in the mix. Most AHV+ESXi GPU articles seem more about VDI than compute, so we're uncertain. Experiences? Ideas? Tips?
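Not Nutanix-specific, but as a sanity check once passthrough or vGPU is wired up, something like the following inside the guest (assumes the NVIDIA guest driver and the NVIDIA Container Toolkit are already installed; the image tag is just an example):

# Inside the RHEL 8 guest: confirm the driver sees the GPU, then
# confirm a container can see it through the container runtime.
nvidia-smi
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubi8 nvidia-smi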
Dear Community

Scenario
I'm currently testing the Nutanix DVP (Link) on CentOS 7 with Netbox Docker (Link). I'm overriding the original volume definition in the Docker Compose file: instead of the local driver, I use nutanix:latest.

volumes:
  netbox-static-files:
    driver: nutanix:latest
  netbox-nginx-config:
    driver: nutanix:latest
  netbox-media-files:
    driver: nutanix:latest
  netbox-postgres-data:
    driver: nutanix:latest
  netbox-redis-data:
    driver: nutanix:latest

I'm then able to start and use Netbox Docker (docker compose up -d).

Problem/Symptoms
When I stop Netbox Docker with docker compose down, I can usually bring it up again. Sometimes this works up to three times, but after a few cycles of up and down it stops working. When it fails, the following error is shown on startup:

root@b2cntr-dckwor02[13:06:08]/var/projects/netbox-docker$ docker-compose up -d
Creating network "netbox-docker_default" with the default driver
Creating ne
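In case it helps someone reproduce this: the behaviour smells like stale volume state left behind by the plugin, and this is roughly how I'd inspect it (a sketch; the volume names assume the default Compose project prefix for the netbox-docker directory):

# List volumes created through the Nutanix driver and check whether any
# survive "docker compose down" in a half-removed state.
docker volume ls -f driver=nutanix:latest
docker volume inspect netbox-docker_netbox-postgres-data

# If a stale volume blocks re-creation, remove it explicitly before "up".
docker volume rm netbox-docker_netbox-postgres-data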
We have the following scenario at one of our customers. In the core there are 4 VSP8400s connected in a square; two of the devices are in a geo-redundant data center. Between them runs an Extreme L2 SPBm fabric, and between the data centers the connections run over active DWDM devices. A measurement showed that if I deactivate a DC connection in the core, it takes about 400 ms until the opposite interface reacts over the DWDM connection and also goes down. During those 400 ms, however, traffic is still sent over the link into a black hole. Now my question: do you see a problem on the Nutanix side if the switchover time is 400 ms, or are there ways to adjust the sensitivity (buffering) of the Nutanix datastore synchronization to these circumstances?
Hi, I'm trying to import a VM using Hyper-V Manager, but it always says "can't access folder, you might not have permission to access this folder". I checked all the VMs and the same issue shows up. Any idea why this error occurs? I'm thinking about modifying the permissions on the VM folder; can we do that? Note that I'm trying to access the VM folder via the container share path (example: \\container_name.........). We are running Hyper-V 2012 R2 on Nutanix 4.7.1. Thank you, Pierre
Nutanix Clusters Overview

Nutanix Clusters provides a single platform that can span private and public clouds yet operate as a single cloud using Prism Central, enabling a true hybrid cloud architecture. Because it uses the same platform on both clouds, Nutanix Clusters on AWS (NCA) reduces the operational complexity of extending, bursting, or migrating your applications and data between clouds. Because Nutanix Clusters runs Nutanix AOS and AHV with the same CLI, UI, and APIs, existing IT processes and third-party integrations that work on-premises continue to work regardless of where they are running. Nutanix Clusters resources are deployed in your cloud provider account, so you can use your existing cloud provider relationship, credits, commits, and discounts.

Figure: Overview of the Nutanix Enterprise Cloud Software

Nutanix Clusters places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2).
Hi Nutanix Community and fellow lurkers! If you're just getting started with AWS, we have made some videos to go along with the documentation to make the process even easier. I would appreciate any feedback on what else should be added based on your own experience. In the next week we will add a fourth video showing how to set up your VPN on the AWS side.
When I do a retrieve, I can do it for the entire snapshot, but the snapshot contains a lot of VMs and I only need one. How can I pull just one?
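If this is a protection domain snapshot, ncli may let you restore a single entity rather than the whole snapshot. A sketch from memory (names and IDs are placeholders; check "ncli pd restore-snapshot help" for the exact options in your AOS version):

# Find the snapshot ID, then restore just one VM out of it.
ncli pd list-snapshots name=MyPD
ncli pd restore-snapshot name=MyPD snap-id=12345 vm-names=my-vm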
Hello, after a new deployment I downloaded the kubeconfig, and upon trying to approve a Certificate Signing Request (CSR), the CSR just sits in an Approved state but never becomes Issued. Is there a restriction on the default-kubernetes-<clustername> user that I might not be familiar with? Or maybe my process is wrong; it's odd that it just sits in an Approved state. Any help is appreciated!
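For anyone else debugging this, the fields worth checking with plain kubectl (nothing Nutanix-specific; the CSR name is a placeholder):

# An Approved-but-not-Issued CSR usually means no controller is acting
# as the requested signer. Check which signerName the CSR asks for and
# whether status.certificate ever gets populated.
kubectl get csr
kubectl get csr my-csr -o yaml | grep -E "signerName|certificate:"
kubectl certificate approve my-csr   # approval alone does not issue the cert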