Solved

Progress Database optimization for Nutanix cluster - Monster VM on Acropolis

  • 6 August 2015
  • 2 replies
  • 5476 views

I have 3 x 3000 series nodes, each with 256GB of RAM, in one cluster:
- 7.2TB total SSD flash (2.4TB x 3 nodes)
- 12TB usable HDD (before deduplication/compression)
- 2.6GHz / 20 cores per node, 6 physical CPUs: 60 total cores
- 768GB total memory

We are running pure Acropolis.

The monster VM workload is a Progress database that is 7TB in size and has 200GB RAM allocated to it.

We are setting this up as a PoC to migrate off a standalone ESXi host with a Violin all-flash array. Previously, the VM on VMware was only pushing 12,000 IOPS.

What should we be doing to optimize the performance for this VM?

Should we use multiple virtual SCSI adapters? Also, is there a way to see whether the working set is exceeding the node's cache capacity?

Best answer by vcdxnz001 6 August 2015, 23:11

Hi Daemon,

There shouldn't be anything that needs tuning on the Acropolis side of things. There is no need for multiple vSCSI adapters or anything similar with Acropolis. The one thing I would recommend is setting the container to use inline compression (delay 0). This benefits performance overall as well as saving valuable SSD and HDD capacity.
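A sketch of how that container setting might be applied from a CVM. The container name is a placeholder, and the exact ncli parameter names should be treated as assumptions to verify against your NOS version's ncli help before running:

```shell
# Enable inline compression (delay 0) on the container backing the VM's
# vDisks. Run from any CVM. "prod-ctr" is a placeholder container name;
# confirm the parameter names with `ncli container edit help` first.
ncli container edit name=prod-ctr enable-compression=true compression-delay=0

# Verify the container settings afterwards:
ncli container ls name=prod-ctr
```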

If you want to see the working set and also the IO pattern of the VM, you can do that through the Stargate 2009 pages. Browse to http://<CVM_IP>:2009, scroll to one of the vDisks attached to the VM, then click the link on the vDisk ID. This displays all of the disk characteristics and IO patterns, including working set size over the last 2 minutes and the last hour. By default this page is protected by the CVM firewall, so you can either change the iptables settings or use the links text-mode browser locally on the CVM to view it.
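Both options above can be sketched as follows. The WORLDLIST chain name is from memory of NOS-era CVM firewalls and the source subnet is a placeholder, so treat the iptables line as an assumption to verify on your cluster:

```shell
# Option 1: view the Stargate 2009 page locally on the CVM, with no
# firewall changes, using the links text-mode browser shipped on the CVM:
links http://127.0.0.1:2009

# Option 2: temporarily open port 2009 so a management workstation can
# browse it directly. Chain name WORLDLIST is an assumption for this
# NOS era; check `sudo iptables -L` on your CVM first.
sudo iptables -t filter -A WORLDLIST -p tcp -m tcp --dport 2009 -j ACCEPT
```

Remember to remove the iptables rule (or reboot the CVM) once the PoC analysis is done, since the 2009 page is firewalled by default for a reason.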

General recommendations would be to run the latest version of NOS, and to use multiple vDisks in the guest OS where it makes sense, for example splitting the data files across different vDisks and putting the log files on another vDisk / mount point. If using Linux, add elevator=noop and iommu=soft to the kernel boot options, and change max_sectors_kb to 1024.
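The Linux guest tuning above might look like this on a typical RHEL/CentOS guest of that era. The device name and GRUB file paths are assumptions for illustration:

```shell
# 1. Kernel boot options: append to the kernel line in /boot/grub/grub.conf
#    (GRUB legacy) or to GRUB_CMDLINE_LINUX in /etc/default/grub (GRUB2):
#      elevator=noop iommu=soft

# 2. Set max_sectors_kb per data vDisk ("sdb" is a placeholder device);
#    persist it via /etc/rc.local or a udev rule, since it resets on boot:
echo 1024 > /sys/block/sdb/queue/max_sectors_kb

# 3. Verify the active IO scheduler and the new setting:
cat /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/max_sectors_kb
```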

I look forward to seeing how this goes. What type of workload or test do you intend to run across the database? It's been a long time since I've done anything at all with Progress. So I'd be interested to hear your experience. If you need any help my team and I are here and your local Nutanix team knows how to reach us. We have team members distributed around the globe in different timezones, so someone is always available to assist.

Kind regards,

Michael

2 replies

The workload is the backend database for a global financial company. They are looking at alternatives to their legacy vendors that can provide the same performance in a smaller footprint and at reduced cost. I'll let you know the result.
