How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Hi, in my previous post I published version v1.2 (https://next.nutanix.com/scripts-32/nutanix-tools-for-ahv-v1-2-32075). It has now been updated to v1.3 and extracts new information: the status of NGT tools, descriptions, etc., plus many new key validators to avoid problems. Additionally, I added a small script to obtain information about the cluster's connection to the TOR switch and the physical ports. Please check GitHub for more information: https://github.com/dlira2/Nutanix-tools-for-AHV I use the script in large accounts with more than 1,000 VMs and it works optimally.
Hi, I have a question from testing. The capacity reported inside Windows VMs (C: drive) and Linux VMs (df -h) differs from the capacity shown in Prism (VM table). Not all VMs are like this, only a few. These VMs deleted a lot of data and freed a lot of capacity in the guest, but although Curator has since run, the capacity has not decreased in the Prism VM table. Why can't Prism reflect the capacity a VM has freed? How do I get Prism to report this capacity? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/95360a51-c444-4541-8a0d-9d9376566a45.png[/img][img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/eaa67bca-e1c9-4651-b4c2-72dafc3cc28d.png[/img] Windows -> Server 2016, Linux -> CentOS 7.x. Thank you.
Hi all, did you know that AHV to ESXi conversion is only possible from Prism (the Convert Cluster option) if the cluster was originally ESXi, was converted to AHV, and is now being converted back to ESXi? Otherwise, AHV to ESXi conversion is not supported from Prism. Some prerequisites for converting AHV to ESXi via Prism are listed here: https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v55:man-cluster-conversion-requirements-limitations-r.html#nref_amd_hlp_k5 However, one could always re-image an AHV node to ESXi manually. Before converting the cluster, customers should always migrate the running VMs on the AHV cluster to their DR cluster. This can be done using Protection Domains. More information on Protection Domains can be found below: https://portal.nutanix.com/#/page/docs/details?targetId=Prism-Element-Data-Protection-Guide-v511:Prism-Element-Data-Protection-Guide-v511 or also refer to KB-3059: https://portal.nutanix.com/#/page/kbs/details?targetId=kA03200000098T7CAI Once cluster is c
Nowadays everyone is concerned about the security of their infrastructure, as they should be. The Nutanix document referenced below contains an overview of the security development life cycle (SecDL) and a host of security features supported by Nutanix. It also demonstrates how Nutanix complies with security regulations to streamline infrastructure security management. In addition, this guide addresses the site-specific or compliance-standard technical requirements (which should be adhered to) that are not enabled by default. https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Security-Guide-v510:Nutanix-Security-Guide-v510
All clusters will need to be upgraded at some point. If you have Metro Availability enabled in your environment, you will need to follow the best-practice guidelines for it: https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:wc-metro-availability-upgrade-considerations-r.html Specifically, Nutanix supports the following replication combinations: between N and N-2 major versions (and vice versa) for STS-to-STS or LTS-to-STS pairings, and between N and N-1 major versions (and vice versa) for LTS-to-LTS pairings.
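The version-compatibility rule above can be sketched as a small helper (a hypothetical illustration with toy integer major versions, not a Nutanix API; always confirm against the portal document linked above):

```python
def replication_supported(src_major: int, dst_major: int,
                          src_branch: str, dst_branch: str) -> bool:
    """Sketch of the Metro/replication version rule described above:
    LTS<->LTS tolerates a spread of 1 major version; STS<->STS and
    LTS<->STS tolerate a spread of 2. Branches are "LTS" or "STS"."""
    spread = abs(src_major - dst_major)
    if src_branch == "LTS" and dst_branch == "LTS":
        return spread <= 1          # N to N-1 and vice versa
    return spread <= 2              # N to N-2 and vice versa
```

For example, replicating between two LTS clusters two major versions apart would fall outside the supported window, while the same spread between STS clusters would be allowed.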
Nutanix Pulse HD provides diagnostic system data to Nutanix support teams to deliver proactive, context-aware support for Nutanix solutions. The Nutanix cluster automatically and unobtrusively collects this information with no effect on system performance. Pulse HD shares only the basic system-level information necessary for monitoring the health and status of a Nutanix cluster. OK, this is all great, but let's get to the real benefits, shall we? Q1. Why would you enable it? A. Well, for several reasons. So that if you raise an issue with Nutanix Support, we can start looking at the data about your cluster immediately, without making you answer the many questions that are often crucially important for prompt issue resolution; in essence, to reduce the amount of time it takes to fix the problem. And so that when a Nutanix engineer requires logs, they can collect them themselves while you focus on what you need to do. No frustration with logs collection,
Please help: as per this video, when new data is written to the cloned/snapshotted vDisk, what happens when we delete a snapshot? Say we created three snapshots and deleted the second one, which had new data written to it while the base vDisk was read-only — what happens to that newly written data? Will it be committed to the base vDisk, or will the current-state snapshot stay alive while the other is deleted? [url=https://www.youtube.com/watch?v=uK5wWR44UYE]https://www.youtube.com/watch?v=uK5wWR44UYE[/url]
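The general behaviour the question is asking about can be illustrated with a toy copy-on-write chain model (an illustration of the generic technique only, not Nutanix's actual extent-store implementation): each layer holds only the blocks written after it was taken, reads walk up the chain, and deleting a middle snapshot merges its unique blocks into its child, so no written data is lost and the base vDisk is untouched.

```python
class Layer:
    """One layer in a toy copy-on-write vDisk chain."""
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}                 # block number -> data

    def read(self, n):
        # Walk up the chain until some layer holds block n.
        layer = self
        while layer is not None:
            if n in layer.blocks:
                return layer.blocks[n]
            layer = layer.parent
        return None

def delete_middle(child, victim):
    """Delete a middle snapshot: merge its blocks into the child
    (unless the child already overwrote them), then relink the chain."""
    for n, data in victim.blocks.items():
        child.blocks.setdefault(n, data)
    child.parent = victim.parent
```

In this model, data written on top of a deleted middle snapshot survives in the child layer rather than being committed down into the base.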
Hello all, the infrastructure team within my organization has recently rolled out several Nutanix environments. Unfortunately we are only able to discover the individual VMs and not the actual appliances. When we tried to address this with the infrastructure team, we were told that no SSH credentials can be provided, as the only credentials come from the vendor. Our discovery tool only supports Nutanix discovery via SSH. Which leads me to my question: is anybody performing discovery on Nutanix appliances and VMs using discovery tools in their environment, and if so, how are you achieving this?
I just had a security scan done against my cluster running AOS 5.5.6, and the only 'high' risk that came back was CVE-1999-0548 (NFS Server Without Shares Detected): [b]Description:[/b] A superfluous NFS server that is not sharing any file systems has been detected. [b]How to Fix:[/b] Disable the NFS server. Obviously, I don't think I want to disable the NFS server service on all of my CVMs — is there any official documentation that I can share with my peers to support this, so that I can get an exemption from this risk on these systems?
Hi team, please let us know how to get the values below using API version v2. Previously we were using API version v1 and getting metrics via /vms: [url=https://hostname4:9440/PrismGateway/services/rest/v1/vms/]https://hostname4:9440/PrismGateway/services/rest/v1/vms/[/url] Now with API version v2 we are not able to get the metrics below: [b]hypervisor_cpu_usage_ppm[/b] — not found with either URL (vms, virtual_disks); [b]hypervisor_memory_usage_ppm[/b] — not found; [b]memoryCapacityInBytes[/b] — not found; [b]ipAddresses[/b] — not found, but we found requested_ip_address; are these the same? [b]controllerVm[/b] — not found. We are able to get a few metrics using virtual_disks, but not the ones mentioned above. Are these not available in v2, or have they moved to another URL?
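For anyone experimenting with this, a minimal stdlib-only sketch of polling a Prism endpoint is below. The endpoint path, the VM UUID, and whether v2.0 exposes these particular stats fields are assumptions — that availability is exactly the open question above. One detail that is certain: the `_ppm` counters are "parts per million", so they convert to percent by dividing by 10,000.

```python
import base64
import json
import ssl
import urllib.request

def ppm_to_percent(ppm: int) -> float:
    """Convert a parts-per-million counter (e.g. hypervisor_cpu_usage_ppm)
    to a percentage: 1,000,000 ppm == 100 %."""
    return ppm / 10_000.0

def get_vm(host: str, user: str, password: str, vm_uuid: str) -> dict:
    """Fetch one VM document from Prism (hypothetical v2.0 path --
    verify against your cluster's REST API Explorer)."""
    url = f"https://{host}:9440/api/nutanix/v2.0/vms/{vm_uuid}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url,
                                 headers={"Authorization": f"Basic {token}"})
    # Lab-only shortcut for self-signed certs; trust the cert in production.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)
```

Comparing the raw JSON returned by the v1 and v2.0 endpoints side by side is usually the quickest way to see which stats were renamed versus dropped.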
Below are new knowledge base articles published in the week of November 17-23, 2019.
KB 8193 - NCC Health Check: secure_boot_check
KB 8223 - NCC Health Check: sed_key_availability_check and sw_encryption_key_availability_check
KB 8513 - VSS Snapshot Fails for Windows VM having Dynamic Volume with Multiple Disk Extents
KB 8514 - NCC Health Check: fs_inconsistency_check
KB 8569 - Accessing Prism from a browser on MacOS 10.15 "Catalina" blocked by ERR_CERT_REVOKED error
KB 8571 - LCM firmware update unable to commence as VMs are unable to migrate off
KB 8580 - When configuring a remote site configuration (Physical Cluster), "vStore Name Mapping" field will show an error "Remote site is currently not reachable. Please try again later."
KB 8594 - How to change a snapshot's expiration time to indefinite
KB 8600 - Calm License Changes and Grandfathering of Existing Users
KB 8614 - Nutanix Files - Deleting TLDs in a NFS distributed share
Note: You may need to log in to the Support Portal to v
Hello, I need to understand how Nutanix snapshots coexist with VMware snapshots. Is it possible to protect a VM with Nutanix snapshots even though some users may take snapshots from the VMware side? Do they work together? Are there any contraindications or special precautions? Do they interfere with each other? I think the only caution is not to mix a snapshot restore from the Nutanix side with any backup/snapshot made on the VMware side (e.g. Veeam)? Thanks, Manuel
According to the Prism Web Console Guide, there is no mention of how to create a two-node cluster. [url=https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:wc-cluster-single-node-c.html#nconcept_fxk_fxl_pbb]https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:wc-cluster-single-node-c.html#nconcept_fxk_fxl_pbb[/url] Might it be this? "cvm$ cluster create -s --redundancy_factor=2" Thanks, Kazu
I'm new to Nutanix and learning as I go. I'm looking at monitoring and alerting and noticed an alert that has not auto-resolved. The specific alert is: Node Marked To Be Detached From Metadata Ring. I've run ALL NCC health checks on the node and it comes back healthy. This alert is set to auto-resolve, but for some reason it will not resolve. I checked the policy and the auto-resolve checkbox is checked. I did some digging and found a KB article (#000004269) for alert_auto_resolve_helper. I read about this NCC check, but I don't see it as an available check in my environment. Any thoughts and tips are appreciated. Thanks!
Hi, my customer is considering leveraging NearSync DR for their production clusters. It looks like I have to explain to them how NearSync should be configured and how it behaves, to match their needs for future planning. I have dug around the Nutanix portal for documents on this technology, but nothing clearly explains the behavior in the example below. If I configure a schedule that repeats every 1 minute with a local retention policy of 5 days, NearSync will: [list] [*]Create a snapshot per minute and retain it for 15 minutes. [*]Create hourly snapshots and retain them for 6 hours. [*]Create daily snapshots and retain them for 5 days. [/list]So the question is: if a snapshot A was created and then deleted 15 minutes later, and a user wanted to use snapshot A to recover a file, how would they do it? Can the snapshot retention time be changed with any command?
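The tiered roll-up described in the list above can be sketched as a small retention predicate (a toy illustration of the policy as stated, with times in minutes — not Nutanix's actual scheduler, and the roll-up boundaries are assumptions):

```python
MIN_WINDOW = 15            # per-minute snapshots kept 15 minutes
HOUR_WINDOW = 6 * 60       # hourly roll-ups kept 6 hours
DAY_WINDOW = 5 * 24 * 60   # daily roll-ups kept 5 days

def retained(snapshot_min: int, now_min: int) -> bool:
    """Is a snapshot taken at minute `snapshot_min` still retained at
    minute `now_min` under the tiered policy described above?"""
    age = now_min - snapshot_min
    if age <= MIN_WINDOW:
        return True                                   # minute tier
    if snapshot_min % 60 == 0 and age <= HOUR_WINDOW:
        return True                                   # hourly roll-up
    if snapshot_min % (24 * 60) == 0 and age <= DAY_WINDOW:
        return True                                   # daily roll-up
    return False
```

This makes the poster's scenario concrete: a minute-granularity snapshot that does not fall on an hourly boundary ages out after its 15-minute window, so a recovery from that exact point would have to happen within that window (or from the nearest surviving hourly/daily point).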
Hi friends, it's nice joining you in this wonderful community. I'm Ridwan Oladipupo from Algorism Limited, Nigeria. I'm representing my company as a System Engineer, saddled with the responsibility of handling the technicalities of Nutanix during a POC. I'm absolutely new to the system. I've started with Nutanix University, but I feel that is not sufficient to get the job done; I need to learn from the people doing the job. I would be glad if anyone could volunteer to mentor me on this solution. Thanks in anticipation.
Hi, I can see both NGT and VMware Tools in a VM deployed in a Nutanix ESXi cluster when accessing it via Prism. Which one should I install? Is there any difference between NGT and VMware Tools? Should I install NGT in a VM deployed in a Nutanix ESXi cluster? Please clarify. Regards, Jai
There appears to be an issue with the latest PowerShell cmdlets for version 5.11: they report themselves as version 5.1.1 (not 5.11), so there is a mismatch. This brings up a warning dialogue box asking if you want to continue with the mismatch (cluster version 5.11 vs. cmdlets version 5.1.1). This can affect any PowerShell automation scripts you have running, as they will sit waiting for user intervention! You can change the Nutanix connection method to force the connection using the following syntax: Connect-NTNXCluster -Server "Your server here" -UserName "Your user" -Password "Your password" -AcceptInvalidSSLCerts [b]-ForcedConnection[/b] [b]Note: it is the ForcedConnection switch that bypasses the mismatch for your scripts.[/b] Hopefully they will fix the version with the next iteration of the cmdlets, without the apparent typo of 5.1.1 instead of 5.11. Please feel free to "Like" to mark as "Best Answer". Kind regards, Andy [user=95361]andymlloyd[/user]
The Nutanix REST API gives a developer or an administrator the flexibility to create scripts that execute administrative jobs on a Nutanix cluster. Using the API, you can request information about different entities in the cluster or even change some configuration. Everyone, at the end of the day, wants to make sure their infrastructure is fully secure. So what about authentication? What kind of authentication does the REST API require? Nutanix supports multiple authentication options. Want to know about them and how to configure and use them in your scripts? Give the following KB a read: KB-2257. Want to know more about the Nutanix REST API and the different dev tools we provide? Log on to https://www.nutanix.dev/ Want to know more about the Nutanix REST API Explorer and play around (with caution) with the APIs in your Nutanix cluster? See the REST API Explorer.
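As a concrete starting point, the simplest option is HTTP Basic authentication: the client base64-encodes `user:password` into an `Authorization` header on every request. A minimal stdlib sketch (the header mechanics are standard HTTP; see KB-2257 for the full set of options Nutanix supports, such as session-based authentication):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic Authorization header from user/password.
    Basic auth sends credentials on every request, so it should only
    ever be used over HTTPS."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

The returned dict can be merged into the headers of any HTTP client call against the Prism gateway on port 9440.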
Hello, I will need to power off a "Metro Standby Site" which has VMs running on it. I would like to manually migrate those VMs to the primary (active) site and shut down / power off the nodes in the standby site for approx. 24-36 hours. All VMs need to keep running on the active side and no VMs are to be restarted (RTO + RPO = 0). The steps I propose to take to achieve this are as follows. I have included as much environment info as possible; any insight or corrections to this process are welcome. Thanks in advance. [b]Phase I (Configure Standby Site)[/b] [list=1] [*]Disable/suspend the (x1) async PD (from Primary) [*]Update VMware DRS rules (Manual) (from Primary) [*]Disable HA [*]Manually perform vMotion of all VMs to the active site [*]Configure VM affinities to the hosts of the primary site and set the DRS rules to Fully Automated [*]Create a new VM Group and Host Group if needed [*]I do not want to disable DRS and lose my DRS affinity rules [*]Disable Metro per PD (x3) (from
Can you use an FC card in one node of a Nutanix cluster to do a Storage vMotion / migration to the new ESXi hosts in the cluster? The FC card would present the current VMFS partition to the new ESXi host running in the cluster; we would then Storage vMotion the VMs across to the new storage (NFS container in the Nutanix cluster). Is this possible? It is the preferred option, as NFS would only be on a 1-gig network and we need to do this quickly. Any input will be appreciated!