Replies posted by bbbburns
I’m not very familiar with API troubleshooting, but I know a little bit about it. I have also heard that the v4 API offers even more flexible filtering. Is it possible for you to write this using the v4 API instead?

https://developers.nutanix.com/api-reference?namespace=clustermgmt&version=v4.0.a2
https://www.nutanix.dev/api-reference-v4/

Could you also paste the JSON body of your v3 API request here so we can try to reproduce it? Also, what Prism Central and Prism Element versions are you using?
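For reference, v3 list endpoints take the filter in a POST body. Here’s a minimal sketch of the kind of request we’re asking about, using a hypothetical VM list call (the Prism Central address, credentials, and filter expression are placeholders, not your actual request):

nutanix@cvm$ curl -k -u admin -X POST "https://<prism-central-ip>:9440/api/nutanix/v3/vms/list" \
  -H "Content-Type: application/json" \
  -d '{"kind": "vm", "filter": "vm_name==myvm", "length": 20, "offset": 0}'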
The short answer is that the maximum cluster size documents what Nutanix has tested and what Nutanix supports as a result.

The longer answer is that many factors go into choosing the maximum cluster size. The Hybrid Cloud Reference Architecture has a great section about this called “Choosing the Optimum Cluster Size”. If you factor in maintenance windows, upgrades, failure domains, power, cooling, and network ports, you might find that the operationally optimum size for a compute and storage cluster is far smaller than the advertised maximum of just a hypervisor cluster.
The current maximum cluster sizes are documented on the Nutanix Support Portal under the “Configuration Maximums” section, which lists the currently tested and supported configuration maximums. For AHV, the maximum cluster size as of 2023-03-14 is 32 nodes per cluster.

https://portal.nutanix.com/page/documents/configuration-maximum/list?software=AHV&version=AHV-20201105.30398_6.5

In the past, this was documented as unlimited because there was no theoretical guard rail. However, it’s impossible to test a cluster of infinite size, so changing these to an integer value that aligns with tested numbers seems more rational to me.

In practice, you will often find your max cluster size is smaller than 32 because of other practical constraints, as discussed in the Nutanix Validated Design.

https://portal.nutanix.com/page/documents/solutions/details?targetId=NVD-2099-Hybrid-Cloud:cluster-design.html
Hello! Can you share some details about your setup so we can help identify what’s happening?

What version of Prism Central? What version of AOS? What version of AHV?
Can you share a screenshot of the policy in question?
What traffic are you trying to send between the two VMs, on which ports or protocols, and is that traffic already blocked by a VM-level firewall?
Hello @hyoung ho, there is another flag in manage_ovs called --host that may help you here. You can run manage_ovs from a CVM already in the cluster and specify the --host you’d like the operation to execute on.

You could use ovs-vsctl as a last resort if you can’t get manage_ovs to work for you, but you should use caution there and make sure the configuration matches our best practices.

We’re adding a feature in the future that will allow you to configure new nodes automatically as they are added to the cluster from the GUI, so keep an eye out for that in a future release.
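For example, a sketch of what that might look like when run from an existing CVM (the host IP and interface list here are placeholders - check manage_ovs --help on your AOS version for the exact flags):

nutanix@cvm$ manage_ovs --host <new_node_host_ip> --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks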
Hi Moises,

This configuration will ONLY work if both VMs are running on the same AHV host. The network that you created is VLAN-backed, and that means the VMs can’t communicate across nodes unless the VLAN exists on the top-of-rack switches.

If you run both VMs on the same AHV host, the VLAN exists inside the vSwitch, and the VMs should be able to talk to each other.
Jayakumar, you can have only one bond inside a bridge. This bond is created by default during the imaging process. If you edit the bond members with manage_ovs after the bond already exists, manage_ovs deletes the bond and then recreates it with the desired name and members. That’s why you need to specify the name. You can choose any name you like, but please use br0-up. Bonds in br0 used to be named bond0 in older versions of code, but now we name the bond br0-up.
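As a sketch, assuming eth2 and eth3 are the adapters you want in the bond (placeholders - adjust for your hardware and verify the flags against your AOS version):

nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth2,eth3 update_uplinks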
Dennis, please connect to the CVM first. Then you can connect to the AHV host with “ssh root@192.168.5.1”. It should not prompt you for a password. 192.168.5.1 is the special address used for the AHV host’s internal interface connecting it to the CVM. The CVM uses the IP addresses 192.168.5.2 and 192.168.5.254 on that internal link.
If you use balance-slb, your VM source MAC addresses are converted to a hash (0-255) by OVS. This hash value is then balanced between the active physical adapters in the bond at the bond_rebalance_interval (10 seconds by default; Nutanix recommends 30 seconds). You can view the active hashes on each adapter by using the following command from the CVM:

nutanix@cvm$ ssh root@192.168.5.1 "ovs-appctl bond/show"

You’ll see output like:

---- br0-up ----
bond_mode: balance-slb
next rebalance: 4762 ms
active slave mac: 00:e0:ed:73:f3:5f(eth3)

slave eth2: enabled
  may_enable: true
  hash 10: 5 kB load
  hash 22: 1 kB load
  hash 34: 1 kB load
  hash 58: 1 kB load
  hash 60: 6 kB load
  hash 68: 6 kB load
  hash 78: 21 kB load
  hash 81: 1 kB load
  hash 83: 1574 kB load
  ...

slave eth3: enabled
  active slave
  may_enable: true
  hash 2: 2 kB load
  hash 3: 11 kB load
  hash 4: 23 kB load
  hash 8: 3 kB load
  hash 13: 48 kB load
  hash 17: 25 kB load
  hash 19: 1 kB load

As far as I know, there
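To move the rebalance interval to the recommended 30 seconds, here’s a sketch of the OVS command, assuming br0-up is your bond name (the value is in milliseconds, and it would need to be applied on every AHV host; on recent AOS versions prefer the managed virtual switch workflow over direct ovs-vsctl edits):

nutanix@cvm$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000"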
Hello, the Nutanix AHV Networking Best Practices Guide outlines how to configure your VM NICs in both access and trunk mode and how to move between these modes. Look for the section titled “VM NIC VLAN Modes”. The only way to do this today is with the aCLI command line (or with REST if desired); it is not currently possible through a GUI.
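For example, a sketch of the aCLI commands, assuming a VM named myvm, a network named vlan.100, and VLANs 100 and 200 to trunk (the names and parameters here are illustrative - confirm them against the guide and your AOS version’s acli help):

nutanix@cvm$ acli vm.nic_create myvm network=vlan.100 vlan_mode=kTrunked trunked_networks=100,200
nutanix@cvm$ acli vm.nic_update myvm <nic_mac_address> update_vlan_trunk_info=true vlan_mode=kTrunked trunked_networks=100,200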
Hello, There are absolutely no special requirements needed to install F5 BIGIP on Nutanix AHV. Simply move the F5 BIGIP qcow KVM disk image to the Nutanix image service. Then create a VM using this disk image and connect as many network adapters to the VM as you need. At that point you’re ready to power up the VM and follow the F5 installation and configuration instructions.
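If you prefer the command line, here’s a sketch of those steps in aCLI, assuming the qcow2 image is reachable over NFS and using placeholder names and sizes (check F5’s sizing guidance for real vCPU and memory values):

nutanix@cvm$ acli image.create f5-bigip source_url=nfs://<server>/<path>/BIGIP.qcow2 container=<container_name> image_type=kDiskImage
nutanix@cvm$ acli vm.create f5-bigip-vm num_vcpus=4 memory=8G
nutanix@cvm$ acli vm.disk_create f5-bigip-vm clone_from_image=f5-bigip
nutanix@cvm$ acli vm.nic_create f5-bigip-vm network=<network_name>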
Hello! We don’t maintain an explicit list of supported switches; it would be MUCH too long. We do make recommendations about what is required in switches for high-performance environments or ROBO environments, and we give examples for each of these environments. You are free to use any switch vendor or model you like as long as it meets our general requirements. Finally - we do list some switches we know should never be used for high-performance storage networking. Take a look at the Physical Networking Best Practices Guide and find the section called “Choosing a Physical Switch”.
@fredzone.net I would recommend changing the subnet mask so you’re able to use the same VLAN for multiple clusters if you desire. This allows you to dedicate a subnet to the backplane network of each cluster and avoid overlapping backplane IP addresses. For example:

Cluster 1 Backplane:
Subnet: 172.16.250.0
Netmask: 255.255.255.128
VLAN: 202
Usable IPs: 172.16.250.1 - 172.16.250.126

Cluster 2 Backplane:
Subnet: 172.16.250.128
Netmask: 255.255.255.128
VLAN: 202
Usable IPs: 172.16.250.129 - 172.16.250.254

That gives each cluster’s backplane 126 usable addresses - however - that may not be practical, so you could subnet even further into smaller subnets (with a longer prefix) if you require.
Yes, your understanding is correct. You’d remove the proxy configuration from the PE. Then PE would automatically use the PC for sending Pulse data. PC would in turn send that data out over whatever mechanism is available: PC will try to reach the destination directly (or use your configured proxy), and if those mechanisms don’t succeed, it will send data using the configured SMTP server. You can find more info in the Prism Central guide here: https://portal.nutanix.com/#/page/docs/details?targetId=Prism-Central-Guide-Prism-v511:mul-support-pulse-recommend-pc-c.html
Starting in AOS 5.6.1 with NCC 3.5.2, Prism Central should by default act as a proxy for any connected PE clusters that have Pulse enabled. There is no configuration required beyond enabling Pulse in PC and enabling Pulse in PE. Make sure that there is NO manual proxy configuration in the PE. Here is the official documentation: https://portal.nutanix.com/#/page/docs/details?targetId=Prism-Central-Guide-Prism-v511:mul-pulse-proxy-server-c.html

The logic on the PE looks like the following flowchart: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/254107da-bf4e-4775-b68d-94ad048bad8d.png