How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Below are the top knowledge base articles for the month of June 2020.
- KB 4116 - Alert - A1187, A1188 - ECCErrorsLast1Day, ECCErrorsLast10Days
- KB 7503 - G6, G7 platforms - DIMM Error handling and replacement policy
- KB 1540 - What to do when /home partition or /home/nutanix directory is full
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 7604 - Disk space usage for root on Controller VM has exceeded 80%
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 1113 - HDD/SSD Troubleshooting
- KB 4409 - LCM: (LifeCycle Manager) Troubleshooting Guide
- KB 2090 - AHV | Host and Guest Networking
- KB 4519 - NCC Health Check: check_ntp
- KB 3357 - NCC Health Check: ipmi_sel_cecc_check
- KB 2473 - NCC Health Check: cvm_memory_usage_check
- KB 2486 - NCC Health Check: cvm_mtu_check
- KB 4273 - NCC Health Check: aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check
- KB 1863 - NCC Health Check: sufficient_disk_space_check
- KB 3741 - Nutanix Guest Tools Troubleshooting
Hi, I'm hoping someone has a similar setup. We have two sites connected by fast links, with one Nutanix cluster at each site (16 nodes in each cluster). We use Metro Availability to replicate between sites, and we have one stretched ESXi cluster hosting mainly Windows servers. We have a number of VM guest failover SQL clusters (2 nodes in each cluster) which require shared storage and therefore require the use of Nutanix volume groups. Each VM is connected to the volume group via iSCSI. We can't use Metro replication to replicate these VMs and their associated volume groups to the other site, because replicating volume groups with Metro Availability is not supported. So the only other option is to use Async replication. Here are my questions. In the manual, the conditions for Async say: "Do not have VMs with the same name on the primary and the secondary clusters. Otherwise, it may affect the recovery procedures." Q: If we failover the async protection domain containing the cluster node VM
I received this answer a few days ago: "Just plug your VM on a VLAN where there is a router connected to the internet. Then either use DHCP if there is some, or configure an IP address and set your router as gateway. That's it." But is there anyone who can explain in more detail? I am currently configuring a guest VM running Windows 10. The network settings are like this: the port of the node where the guest VM is installed is directly connected to the Internet-enabled LAN. Or should I change the settings inside the Windows VM? It is configured for DHCP.
Below are new knowledge base articles published on the week of August 23-29, 2020.
- KB 9683 - Move: User privileges required for migrating Resource Pool based VMs
- KB 9760 - When creating/updating a Recovery Plan on the Network Settings page, Production and Test failover subnets are not shown correctly and a message 'No options available' is displayed in the UI
- KB 9819 - VM power on failure on AWS. Error: Failed to PlugNic
- KB 9848 - SSP - Project creation failed error "user_capability mandatory field: username is missing"
- KB 9898 - LCM 2.3.3 - How to update lcm_family attribute for Intel DCB nodes?
- KB 9905 - AHV upgrade becomes stuck because host is in the process of an HA failover and could not be placed into Maintenance Mode
- KB 9916 - LCM: Inventory failed for release.smc.sata_update.update due to failed drive on node
- KB 9918 - Enabling HA on vSphere 7.0 cluster fails to enable VMCP because of ESXi host(s) with APD Timeout disabled
- KB 9925 - How to delete all data sent to Nutanix by Pulse
Below are new knowledge base articles published on the week of December 1-7, 2019.
- KB 8546 - Pre-check: test_nsx_configuration_in_esx_deployments
- KB 8624 - PulseHD shows RED if dmidecode.exe is missing
- KB 8631 - Accessing Prism via Citrix NetScaler (ADC)
- KB 8669 - Nutanix Files - long filename isn't supported yet
- KB 8671 - How to determine which M.2 device failed on the node
- KB 8680 - Metro - Recovery procedure after two-node down scenarios
Note: You may need to log in to the Support Portal to view some of these articles.
I was trying to get AlienVault (OSSIM) to run on Nutanix, but I have hit a roadblock. I was able to get the install dialog to run after setting the boot mode to legacy BIOS. However, after the install, it fails to boot. My question: do any of you use an open-source SIEM solution that works on Nutanix? I could accept a SIEM solution at a cost. However, after going through sales with LogRhythm and Splunk, I was frustrated because both of them incur log ingestion fees. We may have a good budget, but we are limited on operational cost. Thanks for any assistance you can provide.
What is happening to my 2-node cluster during a failover or an upgrade? What does the recovery process look like after a node failure? If you are wondering about the above, we have the answer for you! You can monitor the progress of your 2-node cluster in these situations through Prism Element. Registering a Witness is highly recommended to help the cluster handle the failover automatically and gracefully.

To monitor node recovery progress after a failover:
- Stand-alone mode: a failed node triggers the cluster to transition into stand-alone mode, during which the following occurs: the failed node is detached from the metadata ring, auto rebuild is in progress, and the surviving node continues to serve the data.
- Heartbeat: the surviving node continuously pings its peer. As soon as it gets a successful reply from its peer, a clock starts to ensure that the pings stay continuous for the next 15 minutes. If a ping fails after a successful ping, the timer is reset.
- The Prism Element Home page shows Critical
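The heartbeat timer described above can be sketched as a small state machine: the clock starts on the first successful ping, any failed ping resets it, and the peer is considered stable only after a full continuous window. This is a simplified illustration of the described behavior, not Nutanix code; the class and method names are hypothetical.

```python
from datetime import datetime, timedelta

class HeartbeatMonitor:
    """Tracks whether peer pings have stayed continuously successful
    for a required window (15 minutes in the 2-node cluster case)."""

    def __init__(self, window_minutes=15):
        self.window = timedelta(minutes=window_minutes)
        self.window_start = None  # set on the first successful ping

    def record_ping(self, success, now):
        if success:
            if self.window_start is None:
                self.window_start = now  # clock starts on first success
        else:
            self.window_start = None  # any failure resets the timer

    def peer_stable(self, now):
        """True once pings have been continuously successful for the full window."""
        return (self.window_start is not None
                and now - self.window_start >= self.window)
```

A single failed ping ten minutes in, for example, discards the accumulated time, and the 15-minute window starts over from the next successful reply.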
Most of us are aware of what maintenance mode is. But how does its execution differ between ESXi and AHV on Nutanix? Well… it's just a small tweak! When you try to put an ESXi host into maintenance mode, the operation will get stuck: the CVM cannot be migrated off to another host, so it must be powered off before the action can complete. In the case of AHV this is not necessary. Once you instruct the host to enter maintenance mode, the user VMs are migrated to other hosts and the host enters maintenance mode, irrespective of whether the CVM is powered off. In short: "The CVM need not be powered off to put an AHV host in maintenance mode, unlike with ESXi." Have any questions? Leave them below and let's start a discussion! Check out the community post https://next.nutanix.com/how-it-works-22/cluster-maintenance-or-relocation-33391 for more on cluster maintenance or relocation.
Hello, digital warriors. Our customer is in trouble because of a big problem. In order to install Windows 2019, the customer tried to upload an ISO file to Prism, but it keeps failing.

[Image of ISO file]

Error message:

Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/ergon/client/legacy/base_task.py", line 468, in _resume
  File "/home/jenkins.svc/workspace/postcommit-jobs/nos/euphrates-5.15.2-stable/x86_64-aos-release-euphrates-5.15.2-stable/builds/build-euphrates-5.15.2-stable-release/python-tree/bdist.linux-x86_64/egg/acropolis/image/update_task.py", line 57, in _run
  File "/home/jenkins.svc/workspace/postcommit-jobs/nos/euphrates-5.15.2-stable/x86_64-aos-release-euphrates-5.15.2-stable/builds/build-euphrates-5.15.2-stable-release/python-tree/bdist.linux-x86_64/egg/acropolis/image/modify_task.py", line 162, in _modify_image
  File "/home/jenkins.svc/wo
Below are the top knowledge base articles for the month of March 2022.
- KB 7503 - NX Hardware [Memory] - DIMM Error handling and replacement policy
- KB 3827 - Alert - A130087 - Node Degraded
- KB 1113 - HDD or SSD disk troubleshooting
- KB 1540 - [AOS Only] What to do when /home partition or /home/nutanix directory on a Controller VM (CVM) is full
- KB 4409 - LCM: (Life Cycle Manager) Troubleshooting Guide
- KB 2090 - AHV host networking
- KB 6153 - NCC Health Check: default_password_check and pc_default_password_check
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 4519 - NCC Health Check: check_ntp
- KB 2473 - NCC Health Check: cvm_memory_usage_check
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 4272 - Alert - A6516 - Average CPU load on Controller VM is critically high
- KB 6945 - How Upgrades Work at Nutanix
- KB 11663 - Unable to launch Prism Element from Prism Central due to missing "PrismUI.tar.gz" file after upgrade
- KB 3741 - Nutanix Guest Tools Troubleshooting Guide
- KB 8094 - NCC Health Check: disk_
I'm sure you have seen that one before. In most cases you expect it, or at least understand what caused it. In some instances you probably ignore it (we all do, no shame). But what if this happens when you log into the CVM or the host? Has cluster security been compromised? During an upgrade or rescue of AOS, new SSH keys are created for each node in the cluster. When you open an SSH session, these keys are compared to those that were recorded on the client previously, and since there is a mismatch, a warning is triggered. KB-2388, "Upgrade/Re-install of AOS changes the ssh key for remote host identification", explains how to clean up the keys to get rid of the warnings.
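The cleanup amounts to removing the stale entry for the affected host from the client's known_hosts file, so the next connection records the node's new key. A minimal sketch of that step is below (`ssh-keygen -R <host>` achieves the same result; the function name is illustrative, and note that hashed known_hosts entries would not match this plain-text comparison):

```python
def remove_known_host(known_hosts_path, host):
    """Drop any known_hosts lines recorded for the given host, so the
    next SSH connection can record the node's new key. Returns the
    number of stale entries removed."""
    with open(known_hosts_path) as f:
        lines = f.readlines()

    def mentions(line):
        if not line.strip() or line.startswith("#"):
            return False
        # The first field is a comma-separated list of host patterns.
        return host in line.split(None, 1)[0].split(",")

    kept = [l for l in lines if not mentions(l)]
    with open(known_hosts_path, "w") as f:
        f.writelines(kept)
    return len(lines) - len(kept)
```

After removing the entry, the first reconnection to the CVM or host will prompt to accept the new key, and the warning disappears.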
We have numerous clusters, each on its own hardware platform (and in a couple of cases we've even mixed hardware models within the same cluster). Our current Nutanix footprint:

- Supermicro NX-8155-G7 - 12 nodes
- Supermicro NX-1175S-G6 - 7 nodes
- Supermicro NX-3060-G7 - 6 nodes
- Dell XC630-10 - 14 nodes
- Supermicro NX-8035-G7 - 12 nodes
- Supermicro NX-8035-G6 - 36 nodes

Is there a preferred or recommended solution for hardware monitoring and alerting? For instance, Dell OpenManage (DOM) or Supermicro Server Manager (SSM)? I don't readily know whether either would support the other vendor's hardware (like DOM supporting Supermicro's IPMI, or SSM supporting iDRAC). Or would Prism Central (and Prism Element) be sufficient for hardware monitoring and alerting? We currently use DOM, but not for the Supermicro hardware, so I'm wondering whether I'm potentially missing out on a better solution.
Let's say you received an alert stating that all CVMs are not in the same timezone, or that all hosts are not in the same timezone. What does it mean? Just as the alert indicates, the CVMs or hosts are not in the same timezone. We need to ensure that the same timezone is configured across all the CVMs and hosts, as this ensures that all guest VM log messages are timestamped consistently. How will you know about a timezone issue? There is an NCC health check, "same_timezone_check", in place to report any discrepancy in the timezones. To learn more about the alerts and errors that can be seen, and how to change the timezone, take a look at https://support-portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008hm9CAA Have any questions? Leave a comment and let's start a discussion.
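The logic of such a check is essentially a consistency comparison across nodes. Here is a minimal sketch of that idea (illustrative only; it is not NCC's actual implementation, and the function signature and node names are hypothetical):

```python
def same_timezone_check(node_timezones):
    """Pass if every CVM/host reports the same timezone; otherwise
    fail and report the nodes outside the majority timezone.

    node_timezones: dict mapping node name -> timezone string.
    """
    zones = set(node_timezones.values())
    if len(zones) <= 1:
        return "PASS", []
    # Flag every node that is not in the most common timezone.
    majority = max(zones,
                   key=lambda z: sum(v == z for v in node_timezones.values()))
    outliers = [n for n, z in node_timezones.items() if z != majority]
    return "FAIL", outliers
```

A cluster where one CVM reports US/Pacific while the rest report UTC would fail the check with that one node flagged, which is exactly the discrepancy the alert is telling you to fix.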
Below are new knowledge base articles published on the week of September 20-26, 2020.
- KB 8911 - Alert - Flow Visualization Statistics Collector Service Restart Detected - ConntrackStatsCollectorServiceRestart
- KB 9622 - NCC Health Check: pc_fs_inconsistency_check
- KB 10023 - Push images from one PC cluster to another PC cluster
- KB 10036 - Windows Portable Foundation Application may fail to start without any error on Windows 10 1809 and above
- KB 10048 - Metric-server cannot scrape metrics on K8s cluster
- KB 10049 - Prism Central throwing "BEARER_TOKEN_BAD_SIGNATURE" error
- KB 10056 - Troubleshooting common issues when discovering a node's BMC information for out-of-band power management in X-Ray
Note: You may need to log in to the Support Portal to view some of these articles.
Many customers buy #hci technology from the usual suspects (#nutanix, #vmware, #dell, etc.) without any consideration of what happens when things break. Support becomes REALLY important when your systems are not operating as expected. Thankfully, they don't have to worry if they bought Nutanix. Hear what #nutanix customers say about our award-winning #nps90 #support.
Below are new knowledge base articles published on the week of December 15-21, 2019.
- KB 8204 - Alert - A1061 - vDisk Block Map Usage High Critical
- KB 8284 - Alert - A130151 - Two node cluster state change to
- KB 8640 - Prism one-click upgrade: Preupgrade/Upgrade options not available after manually uploading metadata JSON and upgrade bundle
- KB 8743 - Alerts relating to IPMI sensors report that the component cannot be monitored or may be permanently damaged
- KB 8747 - Anonymous IPMI user
Note: You may need to log in to the Support Portal to view some of these articles.
Below are new knowledge base articles published on the week of November 8-14, 2020.
- KB 9233 - Alert - A150005 - AcropolisDefaultVSwitchError
- KB 9406 - Alert - A150004 - AcropolisVSwitchConfigFailed
- KB 9544 - NCC Health Check: conntrack_connection_limit_check
- KB 9675 - NCC Health Check: ahv_bridge_config_check
- KB 10180 - VM migration failures between AHV hosts due to network MTU mismatch
- KB 10208 - Alert - A130160 - Host Network Uplink Configuration Failed
- KB 10209 - A6416 - Common port group between ESXi hosts is absent
- KB 10222 - Alert - A111076 - OVA Upload Interrupted
- KB 10223 - Alert - A200402 - vNUMA VM Pinning Failure
- KB 10238 - Nutanix Files Unavailable due to Stale ARP entries when ARP Flooding is disabled in Cisco ACI
- KB 10239 - Citrix VDI and Daylight Savings Time
- KB 10241 - Alert ID - A500104 - Entity Sync failed for the Availability Zone
- KB 10242 - Alert ID - A130149 - Guest Power Operation Failed
- KB 10245 - Unable to upgrade Era from 2.0 to 126.96.36.199 and above
- KB 10249 - Alert
Do you already have many security policies defined within one instance of Flow and have another instance where you need the same set of policies, but do not wish to recreate them? Or do you simply wish to have a backup of your existing security policies, just in case you ever need to restore them in the future? Flow has the native ability to export (and subsequently import) security policies that have already been defined. Policies are exported into a single binary file, which can then be transferred to a different instance of Flow or stored away for backup purposes. Please also note that when importing a previously exported binary file, any existing policies already defined within a given Flow instance are automatically removed in favor of the newly imported policies. You can find more information in the Exporting and Importing Security Policies section of the Flow Microsegmentation Guide.
Files supports AOS software encryption and in-flight message encryption for SMB3 shares. You can apply AOS software encryption to Files only by activating it through Prism. Refer to the Files release notes to ensure that you are running a compatible version of AOS. SMB3 Message Encryption: after enabling message encryption, the file server encrypts messages on the server side and decrypts them on the client side (only on new connections for the share). Clients that do not support encryption (Linux, Mac, Windows 7) cannot access a share with encryption enabled. Authentication: Active Directory with Kerberos provides secure authentication. Files supports the following Kerberos encryption types: Advanced Encryption Standard 128 with the Keyed-Hash Message Authentication Code and Cryptographic Hash Function SHA-1; Advanced Encryption Standard 256 with the Keyed-Hash Message Authentication Code and Cryptographic Hash Function SHA-1 (AES-256-HMAC-SHA1); Rivest cipher