Hello, when I use the OpenStack integration and try to use VNC via OpenStack, I get a problem in noVNC like the one below.

Command line used to enable VNC on OpenStack:

/usr/bin/prism_vnc_proxy --bind_address=0.0.0.0 --bind_port=6080 --prism_hostname=[My-IP] --prism_username=[My-Username] --prism_password=[My-Password] --docroot=/usr/share/nutanix_openstack/vnc/static &

/var/log/prism_vnc_proxy.out:

INFO:nutanix_openstack.vnc.wsgi_prism_websocket_proxy:Authenticating with Prism at [My_Cluster_IP]
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: Insecu
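The InsecureRequestWarning in that log means the proxy's Python client is talking to Prism over HTTPS without certificate verification. Independent of the proxy itself, here is a minimal stdlib sketch of what a verified HTTPS setup looks like; the commented-out CA bundle path is a hypothetical example, not a real file on the appliance:

```python
import ssl
import urllib.request

# The default context verifies server certificates and hostnames,
# which is the behaviour urllib3's InsecureRequestWarning is asking for.
ctx = ssl.create_default_context()

# To trust a private Prism CA instead of the system store, point at its
# bundle (hypothetical path):
# ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/prism-ca.pem")

# An opener built on this context performs verified HTTPS requests.
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open("https://[My_Cluster_IP]:9440/...")  # would fail on a bad cert
```

With a context like this, a request to a Prism endpoint presenting an untrusted certificate fails loudly instead of emitting a warning.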
I have a problem with a CVM that won’t boot. This is on a semi-retired production cluster (not CE) that has no workloads running on it.

I found the console output in /tmp/NTNX.serial.out.0 and I can see it trying to enable RAID devices, scan for a UUID marker and find two of them, then abort and unload the mpt3sas kernel module before trying again in 5 seconds. This repeats a few times before the hypervisor resets it and it starts booting again.

The most relevant sections of the log (copious kernel taint messages removed) are:

[ 9.543553] sd 2:0:3:0: [sdd] Attached SCSI disk
svmboot: === SVMBOOT
mdadm main: failed to get exclusive lock on mapfile
[ 9.790075] md: md127 stopped.
mdadm: ignoring /dev/sdb3 as it reports /dev/sda3 as failed
[ 9.794087] md/raid1:md127: active with 1 out of 2 mirrors
[ 9.796034] md127: detected capacity change from 0 to 42915069952
mdadm: /dev/md/phoenix:2 has been started with 1 drive (out of 2).
[ 9.808602] md: md126 stopped.
[ 9.813330] md/raid1:md126:
Hi folks, I want to virtualise two SQL servers on Cisco UCS hardware and migrate them to Nutanix. I found references to Xtract for DB, but it seems to be a legacy tool, as I am not able to find much information on it. Nutanix Move seems to be used for VM migration. On Nutanix I have ESXi, so I know we can use VMware Converter, but I would like to know if there is a solution from Nutanix for this. Can someone please guide me on the ways I can achieve this P2V of the SQL servers?
When I searched the documentation, it said that SCSI-3 persistent reservations (PR) are supported when attaching volumes from AOS 5.17. My questions:

1. In versions prior to 5.17, when allocating a shared volume for a VM redundancy configuration, does it not work if you use direct attach instead of the iSCSI initiator?
2. Considering I/O speed and stability, which do you think is better: mounting a volume through the iSCSI initiator, or connecting it to the VM as a direct attach?
Hi - I’m having an issue with a new installation of Nutanix on AHV (single-node cluster) that I’m hoping someone here can help with. Basically, using Prism, when I attempt to create a new VM the process fails with the error “VM with ID ‘*********’ was not found”. However, if I attempt to add a VM without a new disk, it successfully creates the VM, but I can’t add a disk to it; I get the error “Operation Failed: InternalException”. I’ve been able to upload my ISOs for installation etc., and of course the CVM is seemingly working fine. Any ideas what could be wrong? Many thanks.
Is there the possibility of using the microVM concept in Nutanix? Are solutions like Firecracker (https://firecracker-microvm.github.io/) possible with Nutanix? As I understand it, nested virtualization is not possible with Nutanix (https://next.nutanix.com/server-virtualization-27/nested-virtualization-33423), so is there any provision for a feature that enables the use of microVMs with Nutanix?
Virtual Machine High Availability (VMHA) ensures that VMs restart on another AHV host in the cluster if a host fails. VMHA considers RAM when calculating the resources available throughout the cluster for starting VMs. VMHA respects affinity and anti-affinity rules. For example, with VM-host affinity rules, VMHA does not start a VM pinned to AHV hosts 1 and 2 on another host when those two are down, unless the affinity rule specifies an alternate host. There are two VM high availability modes:

Default: This mode requires no configuration and is included by default when installing an AHV-based Nutanix cluster. When an AHV host becomes unavailable, the VMs that were running on the failed host restart on the remaining hosts, depending on the available resources. If the remaining hosts do not have sufficient resources, some of the failed VMs may not restart.

Guarantee: This non-default configuration reserves space throughout the AHV hosts in the cluster to guarantee that all VMs can restart on other hosts in the event of a host failure.
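The Default-mode caveat above ("if the remaining hosts do not have sufficient resources, some of the failed VMs may not restart") can be illustrated with a toy first-fit model. This is a simplification for intuition only, not VMHA's actual admission logic:

```python
def can_restart_all(failed_vm_ram, host_free_ram):
    """Toy first-fit check: can every failed VM's RAM (GB) be placed on
    some surviving host's free RAM? Largest VMs are placed first."""
    free = sorted(host_free_ram, reverse=True)
    placed = 0
    for vm in sorted(failed_vm_ram, reverse=True):
        for i, headroom in enumerate(free):
            if headroom >= vm:
                free[i] -= vm   # consume that host's free RAM
                placed += 1
                break
    return placed == len(failed_vm_ram)

# Two failed VMs (16 GB and 8 GB) fit on hosts with 20 GB and 10 GB free,
# but a single 32 GB VM cannot fit on hosts with only 20 GB free each.
```

In the second case the cluster has 40 GB of free RAM in aggregate, yet the 32 GB VM still cannot restart, because no single host can hold it; this is the kind of fragmentation that Guarantee mode's reservations are designed to avoid.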
I am looking for a patching product that will tie into Nutanix and take a snapshot before patching; we want to automate this process to get away from manually taking snapshots prior to deploying patches. We do have Protection Domains set up, but with the timing intervals it is not ideal to rely on those pre-deployment, as they can be hours behind the actual patching runs. If anyone is using a product that has this capability, please let me know, or if you have a better way to accomplish this within Nutanix.
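For anyone scripting this themselves rather than buying a product: a pre-patch snapshot can be requested through Prism's v2 REST API (POST to the snapshots endpoint). A minimal sketch that only builds the request body, where the VM UUID and the "prepatch-" naming scheme are hypothetical examples:

```python
import json
from datetime import datetime, timezone

def prepatch_snapshot_payload(vm_uuid: str) -> str:
    """Build the JSON body for a Prism v2 snapshot request, naming the
    snapshot after the patch window's UTC timestamp."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return json.dumps({
        "snapshot_specs": [
            {"vm_uuid": vm_uuid, "snapshot_name": f"prepatch-{stamp}"}
        ]
    })

# The resulting string would be POSTed to the cluster's v2 snapshots
# endpoint with basic auth before the patching job runs.
```

Calling this from the patch tool's pre-deployment hook gives a crash-consistent snapshot seconds before patching, instead of relying on the Protection Domain's last scheduled interval.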
So I use the following script to create a VM. I actually have a small batch file that opens PuTTY, logs into a CVM, prompts me for the $name and then runs the script below. The image is one I made, then sysprepped and uploaded as a disk image.

Everything works great, except that when I then open the console it goes through the Windows setup. I have an unattend file, but I am unsure how to use it with my script. If I go through the GUI to create a VM, I just point it to the unattend file in our file share.

Is there a way to attach the sysprep file? I thought it might be a parameter of vm.disk_create, but I am not clever enough to figure that part out and thought maybe someone smarter could help. First world problems, I know, but it could save me 30 extra seconds of not having to type the admin password, set the keyboard settings, etc.

#creates a vm
acli vm.create $name memory=$mem num_cores_per_vcpu=$core num_vcpus=$vcpu uefi_boot=TRUE &&
#Create C:
acli vm.disk_create $name c
The technical piece below found its way to us through our partner channels. Installation instructions for Red Hat OpenShift on Nutanix are detailed in the documentation below. Enjoy, and as always, feel free to provide us with feedback.

User Provisioned Installation of Red Hat OpenShift 4.3 on Nutanix AHV 5.15

This manual was created during a proof of concept using Nutanix AHV 5.15, the KVM-based hypervisor of Nutanix, with OpenShift 4.3 in combination with the Nutanix CSI driver. The Nutanix CSI driver provides scalable, persistent storage for stateful applications using Nutanix Files and Nutanix Volumes. Please note: at the time of writing, Nutanix AHV in combination with OpenShift is supported by Nutanix, but not certified by Red Hat. If certification is required, clients are advised to use one of the other hypervisors supported by Nutanix. The installation steps followed are documented in the IBM Cloud Architecture & Solution Engineering repository guide. The PoC envi
I want to introduce self-service VM operations via Prism Central for our end users (we do not have Calm). So I’ve used cloud-init (Linux VMs) and cloudbase-init (Windows VMs) along with customization code (Python) to deliver this self-service VM provisioning experience. You may refer to https://github.com/cybergavin/nutanix/tree/master/self-service for details of this setup. Any feedback is always welcome. Sorry if the code isn’t too Pythonic, as I’m a Python newbie.
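For anyone building something similar: Prism Central's v3 VM spec expects cloud-init user data base64-encoded inside the guest customization block. A minimal sketch of that encoding step, where the hostname and user-data content are just example values:

```python
import base64

def cloud_init_customization(user_data: str) -> dict:
    """Wrap cloud-init user data the way the v3 VM spec's
    guest_customization block expects it: base64-encoded."""
    return {
        "cloud_init": {
            "user_data": base64.b64encode(user_data.encode()).decode()
        }
    }

# Example payload for a self-service Linux VM request.
example = cloud_init_customization("#cloud-config\nhostname: selfservice-vm01\n")
```

The returned dict would be merged into the VM spec's resources before POSTing the create request; cloudbase-init on Windows is fed the same way via the sysprep branch of the spec.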
I’m new to Nutanix and we have a Nutanix cluster online. For testing purposes I’m running diskspd tests in a VM, with identical VM configs in several environments: 3-tier Hyper-V, 3-tier VMware, and HCI Nutanix AHV. Why is it that running a test with the -Sh flag (no software/hardware caching) gives almost 10 times lower results on Nutanix compared to the other platforms? Without the -Sh flag, Nutanix produces higher results than my other testing platforms. Is it because of how the CVM is designed to work? I’m just trying to understand.
We are new to Nutanix. We are migrating from ESXi to AHV. We currently have some level of CPU oversubscription in place on ESXi. We anticipate the need to oversubscribe CPU on some clusters and I am looking for ways to do this safely. I’ve looked in the Best practices guide but there was very little there. Any help would be appreciated.
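One common way to reason about a safe starting point is to track the cluster-wide vCPU:pCPU ratio, leaving one node's cores out as N+1 failover headroom. A hypothetical sketch of that arithmetic (the node counts and core counts in the example are made up):

```python
def vcpu_to_pcpu_ratio(total_vcpus: int, cores_per_node: int,
                       nodes: int, ha_reserve_nodes: int = 1) -> float:
    """Cluster-wide vCPU:pCPU oversubscription ratio, excluding the
    reserved nodes' cores so the ratio still holds after a host failure."""
    usable_cores = cores_per_node * (nodes - ha_reserve_nodes)
    if usable_cores <= 0:
        raise ValueError("no usable cores after HA reservation")
    return total_vcpus / usable_cores

# e.g. 100 provisioned vCPUs on a 4-node cluster of 32-core hosts,
# with one node's worth of cores held back for failover.
ratio = vcpu_to_pcpu_ratio(100, 32, 4)
```

Keeping this ratio modest, and watching per-host CPU ready time as it grows, is a reasonable way to oversubscribe incrementally rather than all at once; the acceptable ceiling depends heavily on the workload mix.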