We've implemented Microsoft's MFA Authentication via LDAP in our Nutanix environment with mixed results.
To become truly enterprise-ready, though, we need a better solution. SAML would be ideal, but a two-factor solution would be sufficient. I'm not sure if this is on the roadmap, but you might want to consider it.
I got some feedback, and it's on the roadmap. Thanks for the interest.
While managing VMs in AHV through the Prism platform, we realised that it is easy to click the delete VM button by accident: the VM can be gone with a single confirmation pop-up, even while it is still powered on.
I know we could protect the VMs with snapshots or remote-site replication, but is there something Nutanix could add for extra protection of VMs, e.g. a fail-safe lock?
Hey @jack0729 - Thanks for reaching out, and sorry for the slow response here. I see you filed an RFE ticket with support just yesterday, actually. I pinged the case owner; since we already have an open enhancement request for this within engineering, he will link your case with that ticket.
I've also linked this product idea thread to that backend ticket as well, so we're covered.
I'm actually the one who filed that enhancement request long ago, so this issue is close to my heart. We've been exploring a few ideas but haven't come to a conclusion yet. This is one of those issues that sounds really clear-cut but is actually incredibly nuanced. As an example, modern public cloud providers behave the same way we do when deleting instances, last I checked, so there is precedent for both the "immediate delete" approach and the "only delete powered-off VMs" approach.
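For illustration, the "fail-safe lock" idea can be sketched in a few lines. Everything here is hypothetical (the `VM` object, the `delete_locked` flag, the `force` override) and is not a Nutanix API; it just shows the guard behavior being requested: a delete that refuses to act on a powered-on or locked VM unless explicitly overridden.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    powered_on: bool
    delete_locked: bool = False  # hypothetical per-VM "fail-safe lock" flag

class DeleteBlocked(Exception):
    """Raised when a delete is refused by the guard."""

def delete_vm(vm: VM, inventory: dict, force: bool = False) -> None:
    """Remove a VM from the inventory, guarding against accidental deletes."""
    if vm.delete_locked:
        raise DeleteBlocked(f"{vm.name} is delete-locked; clear the lock first")
    if vm.powered_on and not force:
        raise DeleteBlocked(f"{vm.name} is powered on; pass force=True to override")
    del inventory[vm.name]

# Usage: a powered-on VM is rejected unless force=True is passed.
inv = {"web01": VM("web01", powered_on=True)}
try:
    delete_vm(inv["web01"], inv)
except DeleteBlocked as e:
    print(e)
delete_vm(inv["web01"], inv, force=True)  # explicit override succeeds
print(inv)  # {}
```

Either behavior from the post above fits this shape: "only delete powered-off VMs" is the guard with no override, and "immediate delete" is `force` defaulting to true.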
New Dell Nutanix customer here and loving it! However, I wanted to add a feature request. I am unsure whether I can file a support ticket here with Nutanix, since I deal directly with Dell for support.
My request is for the protection snapshot schedule to have a little more flexibility. Basically, we are an 8-6 shop, Monday through Friday. Say I want to take snaps every 2 hours from 8 to 6, which equals six snaps a day, Mon-Fri. At the moment, if I set snaps at a 2-hour interval, they run all day (12 snaps a day) and even on weekends, when I don't need them either. I realize that since not much is changing after hours, those snaps take up almost no space, but it would be nice not to hang onto these unnecessary snaps and to be able to keep more useful ones instead.
So, basically: be able to choose snaps every x minutes/hours, between x and x time, on x day(s). Your snap schedule is very close now; it's just missing the "between x and x time" part.
First off, welcome to the Nutanix family! It's always nice to see new customers hit up the forums. Second, we sure can add a request for something like that into a newer release. I can take a look and see if we have anything similar to that potentially coming down the pipe, as well.
As a temporary workaround, you can set up weekly schedules that take snapshots at your intervals. So, in this case, setting up six weekly schedules (each running Mon-Fri) at the specified times would look like this:
It's a little more manual to set it up that way, but it's a way to achieve what you're looking for right now.
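To make the requested "windowed" schedule concrete, here is a small sketch (not Nutanix code; the function and its parameters are made up for illustration) that enumerates snapshot times for one week under an "every 2 hours, 8-to-6, Mon-Fri" policy and compares it with a flat all-day, all-week interval:

```python
from datetime import datetime, timedelta

def snapshot_times(week_start, interval_h, start_h, end_h, days):
    """Enumerate snapshot times in one week for a windowed schedule.

    week_start: a Monday at 00:00
    days: weekday numbers to include (0=Mon .. 6=Sun)
    The window is inclusive of both endpoints (8 and 18 both fire).
    """
    times = []
    for day in range(7):
        if day not in days:
            continue
        h = start_h
        while h <= end_h:
            times.append(week_start + timedelta(days=day, hours=h))
            h += interval_h
    return times

week = datetime(2024, 1, 1)  # 2024-01-01 was a Monday

# Requested policy: 8:00-18:00, every 2h, weekdays only -> 6 snaps/day.
windowed = snapshot_times(week, interval_h=2, start_h=8, end_h=18, days=range(5))
print(len(windowed))  # 30 snapshots per week

# Today's flat 2-hour interval: all day, every day -> 12 snaps/day.
flat = snapshot_times(week, interval_h=2, start_h=0, end_h=22, days=range(7))
print(len(flat))  # 84 snapshots per week
```

That's the gap the request is describing: 30 useful snapshots versus 84, most of them taken when nothing is changing.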
So, I came up with an idea to automate the process of taking snapshots at regular intervals. There is nothing better than being able to take a task off your weekly calendar. So I went into a CVM and created an automated task to do the snapshots.
I ran into a problem, though: should that CVM ever go down, the auto-snapshot task I created would stop running.
So I ventured to do the next best thing: create a VM, which has the ability to move around, that would SSH into an available CVM and run the task for me. The problem with that one is that I'm taking up resources I would not otherwise need.
So why not do the next best thing, right? Take the script, copy it to all the CVMs, and schedule the task on each. I'm sure you can see the problem immediately: now I'm creating three snapshots instead of one. I could fix this with some error checking: have the script look at the existing snapshots and, if one already exists within a certain date and time range, skip it. Turns out this doesn't work quite like I had hoped, for a few reasons.
So, it needs to be integrated into, or live inside of, Ergon. Basically, make it the same type of object as a health check: one that is replicated and run across all of the nodes and managed by Acropolis. Maybe even a function of Cerebro.
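The "runs on exactly one node, fails over automatically" property being chased above can be sketched with a simple deterministic leader check: every node runs the scheduled task, but only the node whose ID is lowest among the currently-live nodes acts. This is an illustrative pattern only, with stub names, not how Ergon or Acropolis actually coordinate:

```python
def am_leader(my_id: str, live_ids: set) -> bool:
    """Deterministic leader election: the lowest live node ID wins.

    Every node evaluates the same rule against the same membership set,
    so exactly one node acts without any extra coordination service.
    If the leader dies, the next-lowest live node takes over on the
    next scheduled tick.
    """
    return my_id == min(live_ids)

def maybe_snapshot(my_id: str, live_ids: set, take_snapshot) -> bool:
    """Run on every node; only the leader actually takes the snapshot."""
    if am_leader(my_id, live_ids):
        take_snapshot()
        return True
    return False

# Usage with a stub in place of the real snapshot call:
taken = []
live = {"cvm-a", "cvm-b", "cvm-c"}
for node in sorted(live):
    maybe_snapshot(node, live, lambda: taken.append("snap"))
print(taken)  # ['snap'] -- only cvm-a (the leader) acted

# If cvm-a goes down, cvm-b becomes leader automatically:
live = {"cvm-b", "cvm-c"}
print(am_leader("cvm-b", live))  # True
```

The hard part the post ran into, of course, is agreeing on the `live_ids` set; that is exactly what a cluster service like Ergon provides and a bag of copied cron scripts does not.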
Just in case you haven't tried it. http://next.nutanix.com/t5/Nutanix-Education-Blog/Configuring-a-Protection-Domain/bc-p/5458
I've been running performance tests with Microsoft DISKSPD on Nutanix. By default, DISKSPD uses data sets which are "empty", filled with zero characters. Nutanix detects this and stores the data in a special "Estore Zero" or "Oplog Zero" bucket; no data is written to disk. Only metadata gets updated with the information "this block contains only zeros". When the data is read, the block content is generated in CPU/memory rather than read from back-end storage.
Using DISKSPD or similar products that write only zeros will give overly positive performance figures and, if used as the basis for performance sizing, can lead to grossly undersized environments.
Since the data is already available in Stargate statistics, how about alerting on this?
"Hey there, it seems that you are running some sort of performance test. The tool you are using generates data with all-zero content. Using all-zero content will give overly positive performance figures. XX% of current workload for virtual machine X is all-zero content"
i.e a "BS detector alert"
The challenge here is that it's possible for a real application to spit out all-zero content, so getting Stargate (or anything else) to tell when you're actually just running a BS storage test, versus when the application is doing something that merely resembles a BS test (rare, but it could happen), could be tricky.
I love the zero-suppression technology that we have; it's actually quite a huge strength, and in my mind no amount of fancy alerting is going to undo the fundamental misunderstanding that people have around running performance tests (i.e., not everyone knows that DISKSPD output is generally a bunch of zeros).
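For what it's worth, the raw heuristic behind such a "BS detector" is cheap to sketch: count the fraction of all-zero blocks in a write stream and flag it when it crosses a threshold. The block size and threshold below are made up for illustration, and this is not how Stargate classifies zero data:

```python
def zero_block_fraction(data: bytes, block_size: int = 4096) -> float:
    """Fraction of fixed-size blocks in `data` that are entirely zero."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    if not blocks:
        return 0.0
    zero = sum(1 for b in blocks if not any(b))  # any(b): True if any nonzero byte
    return zero / len(blocks)

def looks_like_zero_benchmark(data: bytes, threshold: float = 0.9) -> bool:
    """Flag a workload whose written blocks are almost entirely zeros."""
    return zero_block_fraction(data) >= threshold

# A default-DISKSPD-style all-zero workload trips the alert; mixed data does not.
zeros = bytes(4096 * 10)                               # 10 all-zero blocks
mixed = bytes(4096 * 5) + bytes(range(256)) * 16 * 5   # 5 zero + 5 patterned blocks
print(looks_like_zero_benchmark(zeros))  # True
print(zero_block_fraction(mixed))        # 0.5
print(looks_like_zero_benchmark(mixed))  # False
```

The reply above is exactly right that the threshold is the hard part: a freshly formatted filesystem or a sparse database file can legitimately sit well above 50% zeros.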
I would love to see the ability to disable a NIC in Prism when running AHV. VMware has this feature, and I find it incredibly useful when you want to bring up a server that you don't want on the network just yet.
AFAIK the closest thing AHV has to this is deleting the NIC and then booting the system, but that creates more work.
Good point. We've got this on the short term roadmap in 2017. We can talk about it more over a beer if you want, I'm around all December in town.
Can you do me a solid and submit a support ticket on portal.nutanix.com with priority "RFE request for enhancement", ask this same question, and ask them to associate the ticket with these two tickets?
This will help us track real world volume for these requests.
Got it. Right now, you can define an exception to exclude a cluster from an alert policy in Prism Central. What you're asking for is an exception for a VM. Good suggestion.
So, as we try to figure out how to displace a lot of things, one of the things that gets caught up is the backup/restore function, for which good things have been created and are coming (web interface for file-based restore!).
The thought here is to be able to create a remote cluster (in AWS or on premises) that doesn't require RF2, allowing maximum storage capacity for data that is essentially replacing tape or, more likely, "nearline" storage. Perhaps there is an existing answer to this, but I'm looking for a model that is different from primary storage under a running workload set, namely just a storage-target cluster. I think the cluster minimum number of nodes is already being looked at, too, but that is another lever to keep driving down cost where a storage target is the only need. Looking to get the cost as low as possible, while ensuring all the rich function of Prism and NOS is not lost!
I hope this will be considered, and keep the innovation coming!
Definitely leverage in-line compression for pretty much everything, it works really well.
Speaking of that, we've got a huge release (the next release out), code-named Asterix, where we've done work to make compression even better. The internal numbers I've seen thus far are quite awesome, though of course it's subject to workload and data types. But comparing old compression vs. new compression, we're seeing even better savings at the same or better performance (and in-line already performed really well on existing code).
TL;DR: do in-line now, keep an eye out for the next big release (you'll see it make a splash), and know that it's projected to be even better.
Besides the HTML web interface for managing Nutanix clusters, it would be nice to also have a mobile/tablet app to manage the environment.
I agree with you, and we've actually demoed a ton of concepts around this, both at the .NEXT conference and at our internal engineering "Hackathons". We've just never pushed it out as a product ... as, well, no one's ever asked us to.
Looks like you guys are an NX customer. Do me a favor: go to portal.nutanix.com and file a ticket with priority "RFE Request for Enhancement", and mention both this thread and engineering ticket FEAT-1726.
FEAT-1726 is our internal feature request that we opened as a placeholder a while back; we can link your request there, and it helps us track real user demand for such a feature.
Hello - with the addition of Volume Groups for in-guest iSCSI mappings, are there any plans to introduce support for CHAP or IPsec iSCSI security features? If any such features exist today that I don't know about, please guide me to the light. I'd even be happy if I could whitelist the allowed IPs per iSCSI target - just something to work in tandem with, or go the next step beyond, initiator-identifier whitelisting.
Great question, and a very fair ask.
Pleased to say we're one step ahead of you: CHAP support is targeted for the next major release, and we committed it to the code base literally yesterday.