I am trying to figure out whether there is an option by which a webhook can be called when a Citrix client connects to or disconnects from a VDI. For example, when a user connects to the VDI from their thin client, the infrastructure should notify the hook with an HTTP call carrying information such as the thin client's computer name, the connected user, and so on. Could someone tell me if this is possible?
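For context, Citrix does not ship a generic outbound webhook for session events out of the box; a common workaround is to fire the HTTP call from a session logon/logoff script. Below is a minimal sketch of the notifier side in Python — the endpoint URL is a placeholder, and the use of the CLIENTNAME/USERNAME environment variables (the usual session variables inside a Citrix/RDS session on Windows) is an assumption about how the script would be wired up:

```python
import json
import urllib.request

def build_payload(event, client_name, user):
    """Assemble the JSON body for the webhook: event type plus session info."""
    return {"event": event, "client_name": client_name, "user": user}

def notify(url, payload, timeout=5):
    """POST the payload as JSON to the webhook URL; returns the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

# In a logon script, the session details would typically come from the
# session environment, e.g. (hypothetical endpoint URL):
#   import os
#   payload = build_payload("connect",
#                           os.environ.get("CLIENTNAME", "unknown"),
#                           os.environ.get("USERNAME", "unknown"))
#   notify("https://example.internal/hooks/vdi-session", payload)
```

The receiving side can be anything that accepts a JSON POST; the payload shape above is only an illustration.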
We’re standing up some Docker-based NVIDIA GPU compute workloads for the RAPIDS (rapids.ai) ecosystem to replace/accelerate Spark & friends. However, we’re lost in the Nutanix GPU virtualization docs, so we’re curious whether folks have ideas on the pieces needed for Nutanix to work here. Right now we’re thinking P100/V100 GPU → AHV/ESXi → RHEL 8.x → Docker, and as an optional stretch target, seeing whether we can have multiple guest OSes share the same GPU(s). We’ve successfully done GPU → Ubuntu+RHEL → Docker, but without AHV/ESXi in the mix. Most AHV+ESXi GPU articles seem to be more about VDI than compute, so we’re uncertain. Experiences? Ideas? Tips?
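For what it's worth, once the GPU is passed through to the guest and the NVIDIA driver is installed there, the Docker layer looks the same as on bare metal. A rough sketch, assuming the NVIDIA Container Toolkit is installed in the RHEL guest (the image tags are examples only):

```
# Inside the RHEL 8 guest, after GPU passthrough and driver install:
nvidia-smi                      # the GPU should be visible to the guest

# Sanity-check GPU access from a container (needs nvidia-container-toolkit):
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

# Then a RAPIDS container can be started the same way, e.g.:
docker run --rm --gpus all rapidsai/rapidsai:latest
```

The open question in your setup is really the hypervisor layer (full passthrough vs. vGPU for sharing across guests), not the container layer.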
Hi team, I have a question about the migration process between Nutanix and ESX. Can I join a Nutanix node to an existing cluster on an existing vCenter? The processor in my existing node and in my new Nutanix node is the same, and EVC is enabled. So can I have the Nutanix node and general servers in the same cluster?
I am looking for a way to programmatically add a directory user to a Project as part of a blueprint. The blueprint has a macro for the user identity in question. I just need to be able to set that user as a Project User, and ultimately as the VM Owner. I have the VM Owner part, but they need to be a Project User first, and adding domain users to Project Users sounds like a bad idea.
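On the API side, the Prism Central v3 REST API follows a read-modify-write pattern: GET the project, append a user reference to its spec, and PUT it back. A hedged sketch in Python, assuming a reachable Prism Central, an existing auth header, and a user UUID already known from the directory service (field names follow the v3 API but may differ slightly by version):

```python
import json
import urllib.request

def add_user_reference(project_spec, user_uuid):
    """Append a user reference to a v3 project spec (idempotent)."""
    refs = project_spec.setdefault("resources", {}).setdefault(
        "user_reference_list", []
    )
    if not any(r.get("uuid") == user_uuid for r in refs):
        refs.append({"kind": "user", "uuid": user_uuid})
    return project_spec

def add_user_to_project(pc_url, auth_header, project_uuid, user_uuid):
    """GET the project, add the user reference, PUT the updated spec back."""
    base = f"{pc_url}/api/nutanix/v3/projects/{project_uuid}"
    req = urllib.request.Request(base, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    body.pop("status", None)  # the PUT body takes spec + metadata only
    add_user_reference(body["spec"], user_uuid)
    put = urllib.request.Request(
        base,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(put) as resp:
        return resp.status
```

From a blueprint task, the same calls could be issued with the user macro substituted in; the helper is idempotent, so re-running the task is harmless.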
Hello, I have questions on the SSD pool and the ILM mechanism that I still have not found answers to after cross-reading dozens of websites and forums. The answer is probably obvious, but I feel like I am missing something here and would like certainty on these points. So, let's suppose I have a 4-node hybrid (1065) cluster and I am running out of SSD.

Q1) Can I technically add an All-Flash node to the same cluster container?
Q2) Will the VMs hosted on this specific AF node automatically be all-flash VMs (with no ILM and cold-tier drain mechanism), or will they use the cluster's cold storage pool (from the other hybrid nodes)?

=====

Regarding the ILM mechanism and the different SSD thresholds, I (think I) understood that:
- data will preferentially be stored on the local SSDs of each node until they reach 75% usage
- if one node reaches 75% SSD usage, ILM will send the "less hot" local SSD data over the network to other nodes' SSDs
- if all nodes fi
Hello, does anybody have experience with a container OS like RancherOS on AHV? In the Nutanix Compatibility Matrix I can see that RancherOS v1.5.5 is compatible with my AOS 5.10, but I don’t know which image of RancherOS I should use. There are some images for a specific cloud provider and some for a specific hypervisor, but none for Nutanix, AHV or KVM. I thought I could then simply use rancheros.iso, but there are some contradictions that I don’t understand. The good news is that the VM is working as expected, but RancherOS enables the hyperv-vm-tools on every startup, and the container os-hypervvmtools is restarted every few seconds. The logs of this container are also empty. Does anybody know if there is something special to consider? For example, do I need to enable the qemu-guest-tools or the kernel-extras? Any information or tips are welcome. Best regards, H. Budde
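One avenue that may be worth trying: RancherOS system services can be inspected and toggled with the `ros` CLI, so the misdetected Hyper-V tools service could potentially be disabled explicitly. A sketch — the exact service name should be taken from the `ros service list` output; `hyperv-vm-tools` below is an assumption:

```
# Show which system services RancherOS has enabled:
sudo ros service list

# Disable the Hyper-V tools service (name as reported above; assumed here):
sudo ros service disable hyperv-vm-tools
sudo reboot
```

Whether the service is re-enabled by hypervisor auto-detection on the next boot is exactly the behavior worth checking after this.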
HP DL360 G7 config: 2× 6-core processors, 96 GB RAM, HP 332T 2-port 1 GbE LAN card, 480 GB SSD connected to a SATA port, 62 GB class-10 memory card, 300 GB 10k RPM SAS (RAID 0). I’m getting a "Failed install" error message in ESXi (6.7, 7.0.1, 7.0.2U, 7.0.3). In ESXi I can see the devices (SSD, HDD and memory card), but 0 datastores and 0 VMs. I tried editing the get_disk_locations_ce function in esx_first_boot.py.

Old code:

    if wwn in disk:

New code:

    if (wwn is not None and wwn in disk) and (disk not in device_identifiers):

I also removed this elif statement from that function:

    elif disk[-3:] == ":10":
        print(disk[:-3])
        device_identifiers.remove(disk[:-3])
        disk_dict.pop(disk[:-3])
        location -= 1

Then, in /bootbank/Nutanix/firstboot/:

    rm firstboot_fail
    ./esx_first_boot.py

That did not work either. Can anyone help me? I can't find a fix for this. Thank you.
We are in the process of setting up LEAP in our environment. We have a Nutanix three-node cluster that contains our production VMs, and another Nutanix three-node cluster that is going to be used for DR. Right now the DR cluster doesn’t have any VMs on it besides the Controller VMs. Currently the DR cluster is located in the same data center as the production cluster; our plan is to move it to a different data center. What I’d like to do is set up LEAP and have everything replicate prior to moving the DR cluster to the new data center, which would allow the initial replication to run over our LAN instead of the WAN. Is it possible to set up LEAP to replicate locally like this and then, once we move the DR cluster to the new data center, reconfigure LEAP so that it points to the relocated cluster? The DR cluster will also be assigned new IP addresses once it moves to the new data center.
First off, Prism Central RBAC SUCKS compared to vSphere. After 11 years supporting VMware, entirely self-taught, the simple task of creating a role like VMware's 'VM Power User' role has me ready to update my resume. This is maddening. I need a role that can be assigned to team members to manage all aspects of VMs except creation/deletion, and they need to be able to mount ISO images to the VMs. I have gone through the granular list of permissions, and I don’t see anything like that under VM or Images. Where is this hidden?
Hello! I just want to ask whether anybody knows if Nutanix plans to automate processes such as shutting down the cluster, putting a node into maintenance mode, and exporting a VM that resides on a Nutanix cluster, and to integrate these basic operations into the Prism Element or Prism Central GUI, as other HCI vendors do. Currently I believe these things are done via the CLI only. I know these are easy tasks for technical people and admins, but I think it defies the "one-click simplicity / one-click operation" shown on presentation decks. It would be great, and probably an additional edge, if processes like this were automated. Thanks :)
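For reference, the CLI operations in question are short. A hedged sketch of the usual commands, run from a CVM — the host address is a placeholder, and exact syntax can vary by AOS version:

```
# Put an AHV host into maintenance mode (and bring it back out):
acli host.enter_maintenance_mode <host-ip> wait=true
acli host.exit_maintenance_mode <host-ip>

# Stop/start the whole cluster (guest VMs should be shut down first):
cluster stop
cluster start
```

The point of the question stands, of course: these could be one-click actions in the GUI rather than SSH sessions.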
Hi, I am currently doing my end-of-studies internship. My subject is "implementation of a hyperconverged infrastructure". I would like to know how to migrate VMs from a legacy infrastructure running VMware vSphere 5.6 to a Nutanix hyperconverged infrastructure with VMware vSphere. Thank you.
Hello Community, I'm involved in a PreSales action. The customer asks for information regarding deletion of user data from Audit Logs. Is it possible to delete user information from audit logs? Or does the audit log automatically delete entries older than XX months? How long are events stored in audit log? Maybe this is based on a parameter? Thanks for any feedback! Jan-Oliver Ohloff
Consider an application that requires very high IOPS, unattainable within the Nutanix environment due to the overhead of RF2 replication. Can a raw NVMe front-bay drive or workload accelerator be mounted on a VM as a logical drive, with no data replicated, in order to provide maximum performance? Obviously it would be node-locked, but that is acceptable. Is this technically feasible?
I have a setup with two subnets in the (same) native VLAN. The first subnet is (kind of) forced on me by Unifi (router, switch and APs are in the native VLAN), and I created the second subnet for Nutanix (the recommendation is to have only Nutanix devices/VMs in the same subnet). Now I need a router, or a routing configuration, in the Nutanix subnet to get access to the other subnet and the internet. What do you suggest to achieve this in a simple way, or what is the recommended way of doing this?
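To make the question concrete, here is one simple shape this can take — all addresses below are assumptions for illustration (Unifi subnet 192.168.1.0/24 with the gateway at 192.168.1.1, Nutanix subnet 192.168.10.0/24, gateway at 192.168.10.1). Since both subnets share the same L2 segment, the existing gateway can route between them if it holds an address in each:

```
# On a Linux VM in the Nutanix subnet, a quick one-off test:
ip route add 192.168.1.0/24 dev eth0       # same wire, direct route
ip route add default via 192.168.10.1      # assumed gateway in this subnet
```

The more permanent equivalent is giving the Unifi gateway a secondary IP in the Nutanix subnet (or defining it as an additional network) so DHCP/static clients in 192.168.10.0/24 can simply use it as their default gateway.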