No Such Thing as a Quick Buck

19 November 2015
The ability to make fast, intelligent decisions is a valuable trait regardless of your role, device, or system. If you were a financial investor and only looked at your portfolio once a month, you could find yourself in a sad state of affairs. Companies and the people who run them spend a lot of time and money to make trades at nearly the speed of light. Sure, there may be times when you make a bad trade, or market forces necessitate dumping a stock and reinvesting elsewhere. Nothing is perfect, but you want the ability to react quickly so you can recover before it's too late. While the "set and forget" mentality may reduce stress for most of the month, you probably don't want to be the one telling your loved ones the college tuition is gone.

What I have just described is what Nutanix is doing in the hyperconverged infrastructure space, but instead of your money, I am talking about your data. Nutanix is always spreading your investments (data) across the cluster. One copy of data is stored locally to deliver consistent performance and reduce network congestion. The second copies of data are spread throughout the cluster for availability. Just like a good investor, you want to diversify your stocks; you don't want to rely on just a few stocks to get you to retirement. Those second copies protect you from dying hard drives and node failures. With your investments evenly spread out, you can quickly rebuild your data from multiple sources without running into a bottleneck.
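Here is a minimal sketch of that placement idea in Python. It is purely illustrative, not Nutanix's actual implementation; the function name, the random choice of remote nodes, and the node labels are all assumptions made for the example.

```python
# Illustrative sketch of RF=2 placement, not Nutanix code: the first copy
# stays on the local node for fast reads, and the remaining copies are
# scattered across randomly chosen other nodes so replicas stay evenly
# spread over the cluster.
import random

def place_replicas(local_node: str, cluster_nodes: list[str], rf: int = 2) -> list[str]:
    """Return the nodes that should each hold one copy of an extent."""
    remote_candidates = [n for n in cluster_nodes if n != local_node]
    if len(remote_candidates) < rf - 1:
        raise RuntimeError("not enough nodes to satisfy the replication factor")
    return [local_node] + random.sample(remote_candidates, rf - 1)

# Example: a write from node "A" in a four-node cluster keeps one copy
# local and lands the second on a random peer.
print(place_replicas("A", ["A", "B", "C", "D"]))  # e.g. ['A', 'C']
```

Because every node ends up holding second copies of someone else's data, a rebuild after a failure can pull from many sources at once, which is the anti-bottleneck effect described above.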

Just as the ability to rebuild quickly is important, you also do not want to keep shoveling money into a bad investment or wait around to pull your money out. This is why Nutanix has a strict requirement to always write a minimum of two copies of data. If a node is down due to failure or maintenance, two copies are still always saved. You don't have to move data around or wait for a timeout value to expire. There are multiple vendors in the hyperconverged space where this is not true.
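A rough sketch, again in Python with invented names rather than Nutanix's real write path, of what "always two copies, no waiting on timeouts" could look like: the write skips any node that is down, fills the replica slots from healthy nodes, and refuses to acknowledge with fewer than two copies.

```python
# Hypothetical write path (not Nutanix's actual code): a write is only
# acknowledged once `rf` copies sit on healthy nodes. A node that is down
# is skipped immediately in favor of another healthy one -- no waiting
# for a timeout, and never a degraded single-copy write.
storage: dict[str, list[bytes]] = {}  # node name -> extents, standing in for disks

def write_extent(data: bytes, local_node: str, healthy: set[str], rf: int = 2) -> list[str]:
    # Prefer the local node for the first copy, then fill the remaining
    # slots from the other healthy nodes.
    candidates = ([local_node] if local_node in healthy else []) \
        + sorted(healthy - {local_node})
    targets = candidates[:rf]
    if len(targets) < rf:
        raise IOError(f"only {len(targets)} healthy targets; refusing the write")
    for node in targets:
        storage.setdefault(node, []).append(data)
    return targets  # acknowledged only now, with rf copies safely stored

# Node "B" is down for maintenance, so its slot goes straight to "C":
print(write_extent(b"block-42", "A", healthy={"A", "C", "D"}))  # ['A', 'C']
```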

When you are paranoid about protecting data, you need to think in terms of worst-case scenarios. For example, what happens when a node is offline for an extended period and a second node goes down? If the second node is where the second copy of data resided, a system that doesn't always generate two copies could face data unavailability, corruption, or data loss. Hoping for the best case, that the first failed node will come back online soon, becomes a costly decision. With Nutanix, the worst-case scenario is that the cluster chooses to stop writing data if it is unable to save it on at least two different nodes. Once one of the two failed nodes returns to an online state, the cluster can resume operations. If you're really risk averse, set up your Nutanix cluster to save three copies of data; the same logic applies. The Nutanix cluster would rather suspend I/O than commit I/O and store a single copy. The investment in raw data capacity can be recovered by using compression and dedupe plus erasure coding to lower the overhead. RF=3 plus EC-X is a nice combo for clusters of five nodes or larger.
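The capacity math behind that last claim is simple enough to sketch. The stripe width below (four data blocks plus two parity blocks) is an assumed example for illustration; the actual stripe width chosen depends on cluster size.

```python
# Back-of-the-envelope overhead math. RF=3 keeps three full copies, so it
# burns 3x raw capacity per usable byte; an erasure-coded stripe only pays
# for the parity blocks on top of the data blocks.
def rf_overhead(rf: int) -> float:
    """Raw capacity consumed per usable byte under pure replication."""
    return float(rf)

def ec_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Raw capacity consumed per usable byte for an erasure-coded stripe."""
    return (data_blocks + parity_blocks) / data_blocks

print(rf_overhead(3))     # 3.0 -> three full copies
print(ec_overhead(4, 2))  # 1.5 -> assumed 4+2 stripe, half the raw cost of RF=3
```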

The ability to select new nodes for secondary writes is a luxury afforded by the rich metadata layer of Nutanix. An in-depth overview of how Nutanix's metadata works is available here: http://www.nutanix.com/2015/09/24/the-wonderful-world-of-distributed-systems-and-metadata-management/

The rich metadata services also allow the flash tier to be used effectively and avoid hot spots, which keeps performance very consistent even if the working set is larger than the flash tier capacity of one node. This can help make a hybrid solution just as performant as an all-flash solution. The metadata powering the solution is truly the best investment broker you can have on your side.
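To make that concrete, here is a toy illustration of metadata-directed reads. The map structure and names are invented for the example, not a real Nutanix API; the point is only that when metadata records where every replica lives and in which tier, a hot extent can be served from whichever node holds it in flash instead of hammering a single node's SSDs.

```python
# Toy illustration (invented structures, not a Nutanix API): the metadata
# map tracks every replica's node and tier, so reads can be steered to a
# flash copy anywhere in the cluster, spreading hot data across many SSDs.
metadata = {
    "extent-7": [("node-A", "flash"), ("node-C", "hdd")],
    "extent-8": [("node-B", "flash"), ("node-D", "flash")],
}

def pick_read_source(extent_id: str) -> str:
    """Prefer a flash replica anywhere in the cluster; fall back to disk."""
    locations = metadata[extent_id]
    flash_nodes = [node for node, tier in locations if tier == "flash"]
    return flash_nodes[0] if flash_nodes else locations[0][0]

print(pick_read_source("extent-7"))  # node-A serves the read from flash
```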
