I do not see an explanation as to why a single container is the best practice. Does having multiple containers cause more overhead (other than the memory use)?
It seems like an obvious choice would be to put your OS VMs, which are virtually identical, in a deduped container to maximize space savings, then put any user-data vDisks in a compressed container.
Just looking for some input on why this is not a good idea.
You can have multiple containers depending on your requirements.
Dedupe and compression features are applied at the container level.
A container is like a folder that provides a datastore to your hypervisor.
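As a concrete sketch of per-container settings like the original poster describes, a two-container layout could be created from a CVM with `ncli`. The flag names below are from memory and may differ across AOS versions, and `sp1` is a placeholder storage-pool name, so verify against `ncli ctr create help` on your own cluster before running anything:

```shell
# Sketch only: option names may vary by AOS version; check
# "ncli ctr create help" on your cluster. "sp1" is a placeholder pool name.

# Deduped container for the near-identical OS vDisks
ncli ctr create name=os-images sp-name=sp1 \
    fingerprint-on-write=on on-disk-dedup=post-process

# Compressed container for user-data vDisks
# (compression-delay=0 is typically inline compression)
ncli ctr create name=user-data sp-name=sp1 \
    compression-enabled=true compression-delay=0
```

Both containers sit on the same storage pool; only the data-reduction policy differs, which is exactly what "features are applied at the container level" means in practice.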
Happy to sync up locally or on webex if you'd like, but here's my 2 cents for now.
You can have a ton of containers if you'd like, but like the LUNs/volumes of yesteryear, it tends to make things more obfuscated and complicated.
Especially in the AHV world, having one container just makes life easy. Also, in 4.5+, you can run all storage features simultaneously (e.g. dedupe, compression, erasure coding), and the system just "figures it out".
At minimum, my KISS principle here (and what consulting recommends across the board) is to at least turn on inline compression. It's magic and, quite frankly, works darn great.
The hypervisor doesn't see one common datastore with files on it, just the VMs and their respective disks.
Honestly, in AHV, the container is just a "policy object" rather than a traditional volume or datastore. This goes for data policies like compression/dedupe, as well as for things like the AHV Image Service.
In the real-world AHV deployments I've seen, it's pretty rare to have more than one container because of all of this.