Solved

Can't try Microsegmentation


yamachan

I want to test Microsegmentation in Prism Central, but the precheck "AHV Free Memory Minimum 1GB for each Prism Central VM" shows up and I can't tick the Enable Microsegmentation check box. 20 GB of memory is assigned to Prism Central. What is the problem?
 

 

Best answer by AnishWalia20 (see the first reply below).

9 replies

AnishWalia20 (Nutanix Employee) · June 26, 2020 · Best answer

Hey @yamachan, we generally see this pre-check fail when less than 1 GB of RAM is available on the AHV host on which the PC-VM is running.

 

Please make sure that the AHV hosts where Prism Central VMs are running have at least 1 GB of memory free. You can check this on the Hardware page in Prism. If you prefer the command line, check the "Memory Free" column in the "Hosts" section of the Scheduler page on the Acropolis master for the particular host where the PC-VM is running.

You can identify which CVM is the Acropolis master using the commands mentioned in this KB: http://portal.nutanix.com/kb/2305

SSH to the CVM that is the Acropolis master and run the command below to open the Acropolis Scheduler page:

cvm$ links http:0:2030/sched    <----- on the Acropolis master
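If you don't want to dig through the KB, a quick shortcut (from memory, so verify against KB-2305) is to dump the Acropolis 2030 page from any CVM and look for the master entry:

cvm$ links -dump http:0:2030 | grep -i master

The 2030 page names the current Acropolis master, and its /sched sub-page carries the per-host "Memory Free" column mentioned above.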

NOTE: Do not use the "free -h", "top", or "cat /proc/meminfo" commands on the AHV host to judge this; AHV reserves all free memory, so the output of these commands will in most cases show that almost no free memory is left.

If there is enough memory available on the AHV host where the Prism Central VMs are running, then please make sure that the time on the Prism Central VMs and the Prism Element CVMs is in sync. If it is not, it needs to be fixed: the Flow enablement workflow relies on Prism Element statistics stored in IDF, so if time is not synced, we will not be able to query the most recent host memory usage and the precheck will fail.
To troubleshoot NTP issues you can use this awesome KB: http://portal.nutanix.com/kb/4519
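Before diving into the KB, two quick sanity checks usually narrow it down; treat the exact NCC check path as a sketch, since check names can move around between NCC versions:

pcvm$ ncc health_checks network_checks check_ntp
pcvm$ ntpq -pn

The ntpq output should show an NTP peer with a * prefix (the currently selected source) and a small offset, on both the PC-VM and the CVMs.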

 

If the issue still persists and the cause is not clear, you can always go ahead and open a support case with us so that a technical expert can take a closer look and help.

 

Let me know if you need anything else. :relaxed:

 


AnishWalia20 (Nutanix Employee) · June 28, 2020

Hey @yamachan, just following up. Were you able to resolve the issue with the information above?

Let me know if you need help with anything, as I would be more than happy to assist. :smile:

 


yamachan (Author) · June 29, 2020

@AnishWalia20 thank you for your thoughtful response.

Among the four nodes, the smallest "Memory Free" value was 56983 MB. The NTP server referenced by Prism Central was 127.0.0.1, so I fixed that.
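(In case anyone else lands here with the same 127.0.0.1 problem: the NTP server list can be edited from the PC-VM with ncli, roughly like the below; double-check the exact syntax in KB-4519 before running it, and "<your-ntp-server>" is of course a placeholder.

pcvm$ ncli cluster remove-from-ntp-servers servers="127.0.0.1"
pcvm$ ncli cluster add-to-ntp-servers servers="<your-ntp-server>")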

But the "AHV Free Memory Minimum 1GB for each Prism Central VM" message still showed up, so I rebooted Prism Central.

Then another problem came up: when I open the Prism Central URL in my browser, I see "Prism Services have not started yet. Later." and the Prism Element dashboard shows "Prism Central – Disconnected". When I logged in to Prism Central over SSH and ran ncli, it replied "Connection refused".

What's going on?

Prism Central – Disconnected
nutanix@NTNX-10-10-13-197-A-PCVM:~$ ncli
Error: Connection refused (Connection refused)

 


AnishWalia20 (Nutanix Employee) · June 29, 2020

Hey @yamachan, so this happened when you rebooted the PC-VM, and you are now seeing these messages about the Prism service being down.

How long has it been since you rebooted the PC-VM?

 

Is it up now, and are you able to connect to NCLI from the PC-VM? Also, are you able to manage the PE cluster from PC now, and is the status of PC back to OK on the PE cluster dashboard?

 

Can you send me the output of the below commands from the PC-VM:

1) PC-VM$ ncc health_checks run_all

2) PC-VM$ cs | grep -v UP

 
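(For anyone following along: cs should just be the stock CVM/PC-VM alias for cluster status, so the grep leaves only the services that are not UP. The long form would be:

PC-VM$ cluster status | grep -v UP)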


yamachan (Author) · June 29, 2020

Running ncli from PC still gives "Connection refused".

nutanix@NTNX-10-10-13-197-A-PCVM:~$ date; uptime
Mon Jun 29 17:15:11 JST 2020
 17:15:11 up  2:10,  2 users,  load average: 0.03, 0.21, 0.16
nutanix@NTNX-10-10-13-197-A-PCVM:~$ cs | grep -v UP
2020-06-29 17:10:48 INFO zookeeper_session.py:143 cluster is attempting to connect to Zookeeper
2020-06-29 17:10:48 INFO cluster:2784 Executing action status on SVMs 10.10.13.197
2020-06-29 17:10:49 INFO cluster:2935 Success!
The state of the cluster: stop
Lockdown mode: Disabled
        CVM: 10.10.13.197 Up, ZeusLeader
                       SSLTerminator DOWN       []
                              Medusa DOWN       []
                  DynamicRingChanger DOWN       []
                          InsightsDB DOWN       []
                InsightsDataTransfer DOWN       []
                               Ergon DOWN       []
                              Athena DOWN       []
                               Prism DOWN       []
                        AlertManager DOWN       []
                             Catalog DOWN       []
                               Atlas DOWN       []
                               Uhura DOWN       []
                    SysStatCollector DOWN       []
                       ClusterConfig DOWN       []
                             Mercury DOWN       []
                         APLOSEngine DOWN       []
                               APLOS DOWN       []
                               Lazan DOWN       []
                               Kanon DOWN       []
                              Delphi DOWN       []
                          Metropolis DOWN       []
                                Flow DOWN       []
                             Magneto DOWN       []
                              Search DOWN       []
                               XPlay DOWN       []
                            KarbonUI DOWN       []
                          KarbonCore DOWN       []
                       ClusterHealth DOWN       []
                              Neuron DOWN       []
nutanix@NTNX-10-10-13-197-A-PCVM:~$ ncc health_checks run_all
-- snip --
nutanix@NTNX-10-10-13-197-A-PCVM:~$ cat  /home/nutanix/data/logs/ncc-output-latest.log
-- snip --

The results were long, so I'll leave them here.


AnishWalia20 (Nutanix Employee) · June 29, 2020

Thanks for attaching the output, @yamachan. It seems the PC-VM services failed to come up after the reboot.

 

Can you run the below commands and send the output:

 

1) PC-VM$ cluster start

Then check the genesis logs in ~/data/logs/genesis.out and tail them:

2) PC-VM$ tailf ~/data/logs/genesis.out
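(If tailf is not available, since it was dropped from newer util-linux builds, tail -f does the same job:

PC-VM$ tail -f ~/data/logs/genesis.out)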

 


yamachan (Author) · June 29, 2020

Thanks. The cluster start command worked, and the PC web management screen works again.
But the "AHV Free Memory Minimum 1GB for each Prism Central VM" message still showed up.

nutanix@NTNX-10-10-13-197-A-PCVM:~$ cluster start
2020-06-29 20:40:47 INFO zookeeper_session.py:143 cluster is attempting to connect to Zookeeper
2020-06-29 20:40:47 INFO cluster:2784 Executing action start on SVMs 10.10.13.197
Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  SSLTerminator Medusa DynamicRingChanger InsightsDB InsightsDataTransfer Ergon Athena Prism AlertManager Catalog Atlas Uhura SysStatCollector ClusterConfig Mercury APLOSEngine APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  SSLTerminator Medusa DynamicRingChanger InsightsDB InsightsDataTransfer Ergon Athena Prism AlertManager Catalog Atlas Uhura SysStatCollector ClusterConfig Mercury APLOSEngine APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

-- 15 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  DynamicRingChanger InsightsDB InsightsDataTransfer Ergon Athena Prism AlertManager Catalog Atlas Uhura SysStatCollector ClusterConfig Mercury APLOSEngine APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

-- 19 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  Prism AlertManager Catalog Atlas Uhura SysStatCollector ClusterConfig Mercury APLOSEngine APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  APLOSEngine APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  APLOS Lazan Kanon Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

-- 4 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  Delphi Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

-- 4 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  Metropolis Flow Magneto Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  Search XPlay KarbonUI KarbonCore ClusterHealth Neuron

-- 5 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  KarbonUI KarbonCore ClusterHealth Neuron

-- 6 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  KarbonCore ClusterHealth Neuron

-- 5 times --

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:  ClusterHealth

Waiting on 10.10.13.197 (Up, ZeusLeader) to start:

The state of the cluster: start
Lockdown mode: Disabled

        CVM: 10.10.13.197 Up, ZeusLeader
                                Zeus   UP       [5930, 5963, 5964, 5965, 5975, 5992]
                           Scavenger   UP       [4277, 4307, 4308, 4309]
                       SSLTerminator   UP       [213205, 213233, 213234, 213235]
                              Medusa   UP       [213285, 213351, 213352, 213664, 214362]
                  DynamicRingChanger   UP       [216931, 216971, 216972, 217434]
                          InsightsDB   UP       [216935, 217004, 217005, 217221]
                InsightsDataTransfer   UP       [216939, 217025, 217026, 217122, 217123, 217124, 217125, 217126, 217127, 217128]
                               Ergon   UP       [216961, 217053, 217054, 217055]
                              Athena   UP       [216994, 217155, 217156, 217157]
                               Prism   UP       [217270, 217429, 217430, 217762, 218725, 218790]
                        AlertManager   UP       [217306, 217664, 217666, 217783, 223238]
                             Catalog   UP       [217352, 217747, 217748, 217749]
                               Atlas   UP       [217378, 217514, 217515, 217516]
                               Uhura   UP       [217400, 217608, 217609, 217613]
                    SysStatCollector   UP       [217418, 217581, 217582, 217588]
                       ClusterConfig   UP       [217433, 217623, 217625, 217627]
                             Mercury   UP       [217456, 217701, 217702, 217769]
                         APLOSEngine   UP       [217580, 217752, 217753, 217754]
                               APLOS   UP       [218488, 218583, 218584, 218586, 219144, 219156, 219157]
                               Lazan   UP       [218510, 218632, 218633, 218636]
                               Kanon   UP       [218533, 218645, 218646, 218647]
                              Delphi   UP       [219454, 219488, 219489, 219490]
                          Metropolis   UP       [219493, 219572, 219573, 219694]
                                Flow   UP       [219528, 219621, 219622, 219623]
                             Magneto   UP       [219567, 219637, 219638, 219639]
                              Search   UP       [220470, 220543, 220544, 220545, 220555, 220556, 220557, 220792]
                               XPlay   UP       [220495, 220586, 220587, 220676]
                            KarbonUI   UP       [221512, 221627, 221628]
                          KarbonCore   UP       [222947, 222991, 222992]
                       ClusterHealth   UP       [222987, 223134, 223135]
                              Neuron   UP       [223048, 223178, 223179]
nutanix@NTNX-10-10-13-197-A-PCVM:~$ tailf data/logs/genesis.out
2020-06-29 04:43:25 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:25 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:27 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:27 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:27 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:27 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:27 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:27 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:28 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:28 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:38 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:38 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:38 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:38 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:43 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:43 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:56 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:56 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:56 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:56 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:43:58 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:43:58 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:02 INFO zookeeper_session.py:143 genesis is attempting to connect to Zookeeper
2020-06-29 04:44:03 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:03 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:03 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:03 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:03 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:03 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:03 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:03 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:03 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:03 INFO operation_queue.py:145 Operation queue is empty
2020-06-29 04:44:08 INFO framework_updater.py:342 Intent version unset
2020-06-29 04:44:08 INFO operation_queue.py:145 Operation queue is empty
-- snip --

 


yamachan (Author) · June 30, 2020

Today, when I selected Microsegmentation on PC, "Precheck Successful" appeared after a short time and I was able to tick Enable Microsegmentation.

The failure came down to NTP not being configured and the cluster services failing to come up.

As for why PC was a little sluggish: when I first synchronized NTP on PC, the clock was off by about 30,000 seconds, roughly eight hours (I forget whether it was ahead or behind).

But, well, thanks to your help, I can now test Microsegmentation.


AnishWalia20 (Nutanix Employee) · June 30, 2020

Hey @yamachan, I am really happy that you resolved the issue. Good luck.

Exactly as I said, the time between the Prism Central VMs and the Prism Element cluster should be in sync, and if it is not, it needs to be fixed. Flow enablement relies on Prism Element statistics stored in IDF (Insights Data Fabric, a central database for storing metrics and cluster statistics), so if time is not synced, PC will not be able to query the most recent host memory usage, or will fetch memory usage for the host that is no longer correct, and the precheck will fail.
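A quick way to eyeball any skew, for the record, is simply to compare the clocks side by side and check that they agree to within a second or two:

pcvm$ date
cvm$ allssh date    <----- runs date on every CVM in the PE cluster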

 

Anyways, you can definitely reach out to me again in the future if you need help with anything else, as I would be more than happy to help again. :smile: