I have been trying to chase down and eliminate this error for a while now. The CVM leader only wants to use itself as the time source. When I run allssh ntpq -pn, it shows:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 x.x.x.x         y.y.y.y          2 u  603 1024  377    0.337  55090.5 419.722
 x.x.x.x         184.105.182.16   3 u  200 1024    4    0.358  75064.8   0.000
*127.127.1.0     .LOCL.          10 l  587 1024  377    0.000    0.000   0.000

(The asterisk indicates that it is using itself as a time source, is that correct?)

I can run ntpdate successfully against the configured NTP servers, and the CVM can connect to them. (I can even run ntpdate against an external time server that is not configured, for that matter.) How do I get the CVM leader to use the configured NTP servers as its source, and not itself?
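For anyone trying to read output like the above programmatically, here is a small sketch that parses ntpq -pn peer lines and flags the situation described in the question, where the selected source (the line marked with *) is the 127.127.1.0 local-clock driver rather than a configured server. This is generic ntpq parsing, not a Nutanix tool; the helper names are my own.

```python
import re

def parse_ntpq(output: str):
    """Parse `ntpq -pn` peer lines into (tally, remote, offset_ms) tuples.

    The tally character before the remote address encodes peer state:
    '*' = current sync source, 'x' = falseticker (rejected), ' ' = discarded.
    """
    peers = []
    for line in output.splitlines():
        m = re.match(r'^([ x*#+o.-]?)(\d+\.\d+\.\d+\.\d+)\s+\S+\s+\d+\s+\w\s+'
                     r'\S+\s+\d+\s+\d+\s+[\d.]+\s+(-?[\d.]+)\s+[\d.]+', line)
        if m:
            peers.append((m.group(1), m.group(2), float(m.group(3))))
    return peers

def synced_to_local_clock(peers) -> bool:
    """True when the selected source (*) is the 127.127.x.x local driver."""
    return any(t == '*' and r.startswith('127.127.') for t, r, _ in peers)
```

Note the offsets in the posted output (55090.5 ms and 75064.8 ms, i.e. roughly 55 to 75 seconds): with disagreement that large, ntpd can reject the external servers and fall back to the local clock, which is consistent with what the question describes.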
Currently running Prism Central pc.2021.3.0.2. After upgrading to this version, the /home partition is supposed to expand from 40 GB to 50 GB. That unfortunately did not happen.

I found KB9525 and tried to run this command to manually expand the partition:

sudo python /usr/local/nutanix/bootstrap/bin/pc_config_home_partition.py --skip_end_sector_validation

That did not succeed in expanding the /home partition (which was sitting at 91% utilisation). In the pc_config_home_partition.log file, I found this error:

:178 BLKRRPART: Device or resource busy
sfdisk: The command to re-read the partition table failed.
Run partprobe(8), kpartx(8) or reboot your system now, before using mkfs

I ran partprobe; it didn't help. kpartx requires parameters, and I wasn't sure what inputs to provide, so I didn't run it. I did reboot Prism Central, but that didn't help either, except that it somehow reduced /home utilisation from 91% to 76%. I still couldn't expand the disk, though.

Has anyone faced a similar situation, or have
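The BLKRRPART error means the kernel refused to re-read the partition table because the device was busy, so the kernel may still believe the old partition size even after the script rewrote the table (standard tooling like partprobe /dev/sdX or kpartx -u /dev/sdX asks it to refresh; the device name depends on the PCVM). A quick way to check what size the kernel currently sees is /proc/partitions. Here is a small sketch (the function name and sample device names are my own) that extracts a device's size from that file's contents:

```python
def kernel_partition_kib(proc_partitions: str, name: str) -> int:
    """Return the size in 1-KiB blocks that the kernel reports for a block
    device, as listed in /proc/partitions.

    If this still shows the old size after the resize script ran, the
    BLKRRPART re-read failed and the kernel needs partprobe/kpartx (or a
    reboot) before the filesystem can actually be grown.
    """
    for line in proc_partitions.splitlines():
        fields = line.split()
        # Data lines have exactly: major, minor, #blocks, name
        if len(fields) == 4 and fields[3] == name:
            return int(fields[2])
    raise KeyError(name)
```

In practice you would feed it open("/proc/partitions").read() and compare the reported size against the expected 50 GB.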
We added a couple of storage nodes to a modest 3-node cluster that was running out of disk space. The alerts about running out of storage, and about not having enough capacity for redundancy, are gone. A few months in, every time I look in Prism under Hardware, I see that the storage nodes' "Total Disk Usage" has barely gone up. The compute nodes' disk usage is about 5-6 times that of the storage nodes (and the compute nodes actually have more storage onboard than the storage nodes). I understand that Nutanix tries to keep a VM's storage on the same host that provides its compute resources (with the replica sharded and spread out among the other nodes). Is that why I am seeing so little utilisation of the new storage nodes in Prism?
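The placement behaviour described above can be illustrated with a toy model. This is only a sketch of the general idea (one local replica for data locality, one remote replica for redundancy), not Nutanix's actual extent-placement algorithm, and the node names are made up:

```python
import random

def place_rf2(num_extents, local_node, nodes, rng=random):
    """Toy replication-factor-2 placement with data locality:
    one replica of every extent lands on the node hosting the VM,
    the second on a randomly chosen other node."""
    usage = {n: 0 for n in nodes}
    others = [n for n in nodes if n != local_node]
    for _ in range(num_extents):
        usage[local_node] += 1          # local replica (data locality)
        usage[rng.choice(others)] += 1  # remote replica for redundancy
    return usage
```

Under this model, if VMs run only on the three compute nodes, each storage-only node receives just a fraction of the second replicas, so its usage grows several times more slowly than a compute node's, which matches what the question describes.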