Thanks for your answer.
@Neel Kotak Hi Neel, thanks for your response. Yes, I understand that, but currently my customers have limited infrastructure, which will be improved next year.
Hi @Sudhir9, great, thanks for your answer.
Hi @Sudhir9, thanks for your answer. But in a real production environment, is there any experience with the topology above (using 2 different switches in one cluster)?
Haha, it's true Sergei, thanks for your answer :)
Ohhh my god :D, so if I have one node fail (RF2), I need to repair it as soon as possible :D. Thanks for your clarification, Sergei :)
OK, thanks Sergei for your awesome answer. I need to clarify “If another node goes down while the data rebuild is not finished, the cluster will go down.” Does this mean the whole cluster goes down, or do only the VMs on the failed node go down while the VMs on the other nodes stay up?
There are 2 options for fault tolerance: RF2 and RF3. RF2 means there are 2 copies of all data; with RF2, one node can be down at a given time. RF3 means there are 3 copies of all data; with RF3, two nodes can be down at a given time. If you have 10 nodes and an RF2 configuration, one node can go down and the cluster will stay up. When the node goes down, a rebuild starts and the cluster recreates the copies of data that went missing. If another node goes down while the data rebuild is not finished, the cluster will go down. If you have enough free space in the cluster, then after the rebuild completes, one more node can go down, and so on.

Hi Sergei, thanks for your clear answer. One more thing: do you have an estimated time for rebuilding the data? Could you give an example based on used space? I also need to clarify “If another node goes down while the data rebuild is not finished, the cluster will go down.” Does this mean the whole cluster goes down, or do only the VMs on the failed node go down while the VMs on the other nodes stay up?
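The relationship between replication factor and tolerated failures, and a back-of-the-envelope rebuild-time estimate, can be sketched as below. This is a hypothetical illustration, not a Nutanix tool: the function names, the 100 MB/s per-node rebuild rate, and the example sizes are all assumptions. Real rebuild times depend on cluster load, disk and network speed, and AOS internals.

```python
def max_simultaneous_failures(replication_factor: int) -> int:
    """With RF copies of each piece of data, RF-1 nodes may be down at once."""
    return replication_factor - 1

def rebuild_time_hours(data_on_failed_node_tb: float,
                       surviving_nodes: int,
                       per_node_rebuild_mbps: float = 100.0) -> float:
    """Rough estimate only: the rebuild is distributed, so every surviving
    node re-replicates a share of the missing copies in parallel."""
    total_mb = data_on_failed_node_tb * 1024 * 1024      # TB -> MB
    aggregate_mbps = per_node_rebuild_mbps * surviving_nodes
    return total_mb / aggregate_mbps / 3600              # seconds -> hours

print(max_simultaneous_failures(2))            # RF2 -> 1
print(max_simultaneous_failures(3))            # RF3 -> 2
# e.g. 4 TB of data on the failed node, 9 surviving nodes:
print(round(rebuild_time_hours(4, 9), 2))      # ~1.29 hours under these assumptions
```

The key point the estimate illustrates: rebuild time shrinks as the cluster grows, because more surviving nodes share the re-replication work.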
Please, is there any answer for my question?
Hi, at the moment there is no storage motion in AHV. There is already a feature request open for this, and it will be addressed in a future AOS release. To move VMs between containers you can follow the KB below: https://portal.nutanix.com/kbs/2663

Hi Alejandra, thank you for the answer.
Hi team, thanks for the answer. So if I already have an isolated connection from category A to category B and my single Prism Central goes down, will my isolated connection still work?
Hi HITESH0801, OK, thanks for your help.
Hi HITESH0801, thanks for your answer, bro :D. Once again, do you have something like a table comparing which features are enabled if we use AHV, ESXi, and Hyper-V?
Hello @randyramandani The minimum memory requirement for a worker node is 8 GiB and for a master node is 4 GiB. If we try to create a Karbon cluster with less memory than required, the pre-checks will fail and show an error, like in the image below. You can go through the following guide to learn more about Karbon: https://portal.nutanix.com/#/page/docs/details?targetId=Karbon-v10:kar-karbon-deploy-karbon-t.html

Hi HITESH, thanks for your answer. Yes, I know these requirements for Karbon, but my problem is that I only have 48 GB of memory for my CE host. Can I modify the requirements for Karbon, for example by downgrading the CVM memory in CE?
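The memory budget above can be checked with a small sketch. This is a hypothetical pre-check illustration, not the actual Karbon validator: the function names, the 20 GB and 32 GB CVM sizes, and the node counts are assumptions; only the 8 GiB worker and 4 GiB master minimums come from the answer above.

```python
# Minimums quoted in the answer above (per node).
MASTER_MIN_GIB = 4
WORKER_MIN_GIB = 8

def karbon_memory_needed(masters: int, workers: int) -> int:
    """Total memory the Kubernetes node VMs need at the stated minimums."""
    return masters * MASTER_MIN_GIB + workers * WORKER_MIN_GIB

def fits(host_gib: int, cvm_gib: int, masters: int, workers: int) -> bool:
    """True if the host has enough memory left for the node VMs after the CVM."""
    return karbon_memory_needed(masters, workers) <= host_gib - cvm_gib

# Hypothetical 48 GB CE host:
print(fits(48, 20, 1, 2))   # 4 + 16 = 20 GiB needed, 28 GiB free -> True
print(fits(48, 32, 1, 3))   # 4 + 24 = 28 GiB needed, 16 GiB free -> False
```

The sketch shows why a 48 GB CE host is tight: a large CVM can leave too little memory for even a minimal master-plus-workers cluster, which is why shrinking the CVM is tempting, even though going below the documented minimums would make the pre-checks fail.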
"do the new node(s) have more storage than the existing ones? if yes, you would have to add two of them to the cluster." can you explain to me more detail? this is my existing environtment, i have 4 node existing with XC630-10 and evc in cluster already enabled[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/20aa3056-57de-4aaf-8d31-b8199dbc71b8.png[/img]