Nutanix Unified Storage
Unified Multicloud Data Management
Hello. After upgrading AOS and AHV (from AOS 5.10.1 to 220.127.116.11), I attempted to upgrade from Files 3.20 to 18.104.22.168. The first attempt to upgrade one File Server (out of a total of four File Servers) failed. After the VM update and VM NIC update, the VM power-state change hung for a long time, and after 30 minutes the "File server upgrade task Hung for too long" alert was generated. An hour later, the "File Server upgrade task stuck" alert was generated and the File Server task stopped at 44%. I tried force-restarting that server and restarting all the File Servers, but it was not resolved. After restarting all the File Servers, the afs command was also unavailable on the CVM. The cluster status of the File Server that attempted the upgrade is currently DOWN. The Minerva service does not start, so the cluster does not start. The Files shares are served over NFS, connections are normal, and data can be read/written normally. However, Data Protection (DP) has st
Hi, we are looking to take an application which is currently running in AWS using S3 storage and deploy it on-prem on Nutanix. So I'm looking at Nutanix Objects, not because we need massive scale or throughput (a few TB and a few operations per second), but simply so we get an S3 interface and don't need any application changes. However, high availability is important: the service needs to stay up through VM crashes, node failures, upgrades, etc. I can't seem to find any advice on sizing or many specifics about HA in the docs. Going through the create object store wizard, if I select 2 worker nodes I get only one load balancer, which is presumably a single point of failure; when you get to 3 worker nodes you get 2 load balancers. So would I be correct that 3 workers & 2 LBs is the minimum HA setup? This wants a total of 104 GB of RAM & 34 cores, which feels massively excessive for a small deployment. Any advice is appreciated. Thanks, Tim
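Not an official sizing answer, but on the "no application changes" point: with any S3-compatible object store the usual approach is to keep the existing AWS SDK code and only change the endpoint and credentials. A minimal sketch with boto3, assuming a hypothetical Objects endpoint URL, bucket name, and access-key pair:

```python
import boto3
from botocore.config import Config

# Hypothetical values -- substitute the endpoint and access-key pair
# generated for your own object store.
OBJECTS_ENDPOINT = "https://objects.example.local"
ACCESS_KEY = "..."
SECRET_KEY = "..."

s3 = boto3.client(
    "s3",
    endpoint_url=OBJECTS_ENDPOINT,            # point the SDK at the on-prem store instead of AWS
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    config=Config(s3={"addressing_style": "path"}),  # path-style addressing; safe default on-prem
)

# The same calls the application already makes against AWS S3
s3.upload_file("report.csv", "my-bucket", "reports/report.csv")
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="reports/")
print(resp.get("KeyCount", 0))
```

Path-style addressing is used in the sketch because virtual-hosted-style bucket DNS is not always configured for on-prem object stores; adjust if your deployment resolves bucket subdomains.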
I have encountered a peculiar situation here. I am synchronizing two file shares, and everything appears to have been copied successfully. However, there is one folder that doesn't show up in the target file explorer. I am certain it was copied because I can see it in the log. Additionally, if I attempt to create the same folder in the target location, I receive an error stating that the folder already exists, and it creates a new one with a '(1)' appended at the end. I have ensured that the hidden view is enabled in the file explorer. Any thoughts on what might be causing this issue?
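One thing worth ruling out: a folder copied with both the Hidden and System attributes set is suppressed by Explorer even when "show hidden items" is enabled (it additionally requires unticking "Hide protected operating system files"). A small sketch to list the attributes on the target, assuming the target is a Windows/SMB path and the hypothetical UNC path below is replaced with your own:

```python
import os
import stat

def list_with_attributes(path):
    """Print every entry under `path`, flagging the Hidden/System attributes
    that keep a folder out of a default Explorer view (Windows only)."""
    for entry in os.scandir(path):
        st = entry.stat(follow_symlinks=False)
        attrs = getattr(st, "st_file_attributes", 0)  # populated on Windows, 0 elsewhere
        hidden = bool(attrs & stat.FILE_ATTRIBUTE_HIDDEN)
        system = bool(attrs & stat.FILE_ATTRIBUTE_SYSTEM)
        print(f"{entry.name:50} dir={entry.is_dir()} hidden={hidden} system={system}")

# Hypothetical UNC path to the target share
list_with_attributes(r"\\fileserver\target-share")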
Hi, I can no longer access the File Analytics web page; I'm afraid some service has not started. I tried to SSH into the VM console but I can't log in with any password, with either the root or the nutanix user. How can I force a password change, or alternatively remove the current File Analytics deployment and reinstall it?
L.S., first time here with a question (there must be a first time). We want to create an environment with a Splunk/Nutanix combination. The storage for Splunk will be SmartStore, so the warm storage is S3-compliant storage, something Nutanix can provide. The question comes from the fact that we will span Splunk across two sites. So each site will have a storage cluster and a compute cluster (multiple clusters because of more than 3 hops). Site A has a compute cluster and a storage cluster, and site B has the same. The storage clusters will be synced with each other. The Splunk Validated Architecture says to have a load balancer between the Splunk indexers and the S3 storage, so when the storage in site B fails, site B can use site A as storage. The load balancer also directs each compute cluster to the nearest available storage. Does Nutanix have something for this load-balancer question? Can two clusters be addressed by one IP, or is it indeed necessary to use an external load balancer? Thanks in advance
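Not a definitive answer on the Nutanix side, but for illustration of what that layer does: Splunk SmartStore points each remote volume at a single endpoint, which is why the validated architecture places a load balancer (or a health-checked DNS name) in front of the two object stores. The sketch below shows the kind of endpoint health-check and failover decision that layer provides, written client-side with boto3 purely to make the logic concrete; the two endpoint URLs are hypothetical placeholders and credentials are assumed to come from the environment.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical endpoints for the object stores at site A and site B
ENDPOINTS = [
    "https://objects-site-a.example.local",
    "https://objects-site-b.example.local",
]

def first_healthy_s3_client():
    """Return an S3 client for the first endpoint that answers a cheap
    liveness probe, falling back to the next one -- the decision a load
    balancer or health-checked DNS record would normally make."""
    for endpoint in ENDPOINTS:
        client = boto3.client(
            "s3",
            endpoint_url=endpoint,
            config=Config(retries={"max_attempts": 2}, connect_timeout=5, read_timeout=10),
        )
        try:
            client.list_buckets()  # liveness probe
            return client
        except (BotoCoreError, ClientError):
            continue  # endpoint down or unreachable, try the other site
    raise RuntimeError("No object store endpoint is reachable")
```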
Hi Team, what if someone runs a feature like Nutanix Objects without having a proper license for it, and all they have is an AOS license? 1. Will they be able to use it successfully in the cluster? 2. Will the Objects feature be affected when the active AOS license expires? 3. Nutanix licenses are perpetual, right? So when a license expires, all features keep working except for support from TAC; but what about features that are running without entitlement, will they be affected once the active license that doesn't cover them expires? Thank you so much for the clarification.
How do I fix the Objects configuration when redeploying PC? AOS 6.5.2, AHV 20220304.342, Objects 3.6, MSP controller 4.2.2. The Objects client can still access the buckets. How do I restore the Objects configuration information on the newly deployed PC?
Hello guys, Nutanix is offering the Mine product integrated with Commvault; however, I do not see a guide in the Nutanix Portal, though I found the following guide on the Commvault website: https://documentation.commvault.com/fujitsu/v11/essential/144801_configuring_commvault_plug_in_for_nutanix_mine.html It describes the steps to install and configure the Mine Plug-in for Commvault. Still, a step-by-step procedure to install and configure the entire product (Mine with Commvault) is missing. I wonder if someone can help me with that?
> Do I install CommServe manually as a VM? In Mine with Veeam, there is a Foundation Mine with Veeam wizard that orchestrates the installation and configuration.
> Do I create the S3 Object Store manually?
> If I want to externalise the backups on the S3 store to a tape server, for example, what about replicating that storage, is that possible?
> Is the NUS Pro license the only license I need for this to work? I also have a few Commvault Backup & Rec
Hi everyone! I have an existing Veeam Backup Server running version 9.5, and I wonder if I can use it with the Foundation Mine with Veeam installer, or do I have to upgrade to v10? (As far as I know there's no documentation on Mine compatibility with Veeam Backup & Replication versions.) Also, I currently have a perpetual socket licence for the Veeam server. Will that work? Thank you in advance,
I need assistance in resolving a problem with File Analytics, which indicates that its memory usage has exceeded 90%. I updated the VM and added 8 GB after it displayed the critical error "One or more File Analytics VM components have failed." I did a power reboot, but nothing changed. Please assist.