One of the awesome things about community is the sharing and learning that can happen. It's always encouraging to see others want to help solve problems they are not directly involved in, offer new ideas for old problems, and look for new ways of doing things.
It also helps me connect people who may have similar interests, and can benefit from sharing their experiences.
I thought this would be a great place to share what you are working on, get introduced to the community and let us know how we can help. Add your comments below, and let's begin the conversation.
We have an EPIC community, let's connect and help each other power the next generation of enterprise computing.
Community Manager, 
Just joined Xtribe, working on expanding my Nutanix skillset.
I'm working on migrating our existing workload from VMware to AHV. The migration process is actually pretty painless, and with the exception of a couple of 'missing features', I think it's going to work out great!
I'm working on transferring disk images from one cluster to another so that our Ansible plays run against the same template.
We just finished migrating our last remote site off of old hardware and onto a Nutanix cluster. We also just updated both of our datacenters to a new version of AOS, as well as ESXi, using the one-click upgrade!
We are currently working on migrating our existing VMware platform to Nutanix.
Working on expanding my cluster!
AFS migration
I am consolidating 4 clusters into a single 41-node cluster. The workloads on these clusters are Linux VMs providing Dev/Test and Production environments, and a large number of them are Oracle RAC. I may write up the project in more detail, along with the lessons learned.
The justification is that scheduling and conducting upgrades on 5 clusters multiple times per year is costly. We will also gain capacity, since fewer resources are committed to redundancy.
One of the surprises was that RF3 is recommended for clusters larger than 24 nodes. That has me considering keeping some key production workloads on a separate cluster, but as time passes we'll probably end up with a cluster of more than 24 nodes anyway.
Tonight is a big night: we are consolidating 2 production clusters, which will free up 2 blocks and 8 nodes... finally enough headroom that we can start rolling through clusters. There are some big challenges coming up, as we have multiple storage nodes in 2 clusters, so finding the capacity required to swing the remaining workloads off those clusters will be a little interesting... we're not there yet.
Sorry for the long post, but I thought I'd share. If anyone has experience with this, or wants to hear about the journey, let me know, and maybe there will be a more fitting forum for an ongoing discussion on the topic of migrations.
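The capacity gain mentioned above comes down to how many copies of data the cluster keeps. A quick back-of-envelope sketch (the node count matches the post, but the per-node raw capacity is a made-up illustrative number):

```python
# Rough usable-capacity estimate under RF2 vs RF3.
# RF keeps `rf` copies of every piece of data, so usable space
# is roughly raw capacity divided by rf (ignoring CVM overhead,
# reserved failover capacity, and compression/dedupe savings).

def usable_tb(nodes, raw_tb_per_node, rf):
    return nodes * raw_tb_per_node / rf

# 41 nodes x 20 TB raw each (20 TB is a hypothetical figure)
print(usable_tb(41, 20, 2))  # 410.0 TB usable under RF2
print(usable_tb(41, 20, 3))  # ~273.3 TB usable under RF3
```

So moving a large cluster to RF3 trades roughly a third of its usable capacity for the extra fault tolerance, which is why the >24-node recommendation is worth factoring into consolidation planning.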
We have done ESXi upgrades via Prism, and it works very well. You can upgrade using a full image, or using an update bundle (zip) for a specific patch. Just remember to have DRS enabled on your cluster; otherwise you have to use VUM. It will evacuate each host one at a time, place it into maintenance mode, patch/upgrade it, restart the host, bring it out of maintenance mode, and move VMs back to that host.
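The per-host sequence described above can be sketched as a simple loop. This is illustrative only — the host names and stub actions are hypothetical, not a Nutanix or vSphere API:

```python
# Hedged sketch of the rolling-upgrade sequence Prism drives per host.
# Each "step" below stands in for the real action Prism/DRS performs.

def upgrade_cluster(hosts):
    log = []
    for host in hosts:
        log.append(f"evacuate VMs from {host}")       # DRS migrates VMs off
        log.append(f"enter maintenance mode: {host}")
        log.append(f"apply ESXi bundle on {host}")
        log.append(f"reboot {host}")
        log.append(f"exit maintenance mode: {host}")  # VMs migrate back
    return log

for step in upgrade_cluster(["esxi-01", "esxi-02", "esxi-03"]):
    print(step)
```

The key property is that only one host is out of service at a time, which is why DRS (to evacuate and rebalance VMs automatically) is a prerequisite.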
At this time, Nutanix is recommending that everyone hold off on upgrading to 6.5. I would definitely reference their AOS-to-ESXi compatibility matrix and watch for updates on when they plan to have a compatible AOS version.
Automating snapshot monitoring of the VMs.
Hi everyone!
We'll be migrating our datacenter to a colo using Nutanix later this year. As I'm brand new to Nutanix the first step for me will be quickly learning as much as possible. After perusing some of the other discussions it looks like jjdurrant had some excellent recommendations to get started:
- Nutanix Bible
- nu.School YouTube Channel
- longwhiteclouds
- Self Paced Study (Once I get access)
- NEXT community, of course!
The goal is to hit the ground running. I look forward to learning from you, and one day returning the favor!
Researching an upgrade of ESXi from 5.5 to 6 via Prism.
Currently piloting our new VDI infrastructure... We've had the NTNX clusters for over a year, but constant struggles with Unidesk and Windows 10 compatibility have slowed things down...
We've now dropped Unidesk from this project and are instead using full-clone VMs on a NTNX Metro cluster for active/active support across our 2 primary sites. Today is day 1 of the staff pilot, and it's looking good so far, though we only have a limited number of users.
Upgrading a customer's Nutanix to 5.5. Yay, new releases!
We're deploying two new Nutanix clusters to facilities overseas.
We are working on standing up our initial order of Nutanix blocks.
Working on Xtribe profile
Converting an old Red Hat web server to virtual.
Hi AstainHellbring Jedidavid dsimmons chrisbarnett
Sounds like you folks are all super busy. Be sure to check out the forums and community blogs when you have time; we are always looking for members to help contribute. If you are a podcast listener, check out the Nutanix community podcast on iTunes, Stitcher, or SoundCloud.
Consolidating all the servers onto Nutanix.
Operation Falcon: consolidating 3 labs into 1.
I'm working on getting up to speed with Nutanix and what it can offer in the serverless space.
Working on spinning up our 3rd and 4th clusters (Express nodes this time) with AHV, to run an internal control system for sales that we will be deploying in the near future. AHV is a different beast than the VMware I'm used to.
I'm building a test image for our new Xen environment. Citrix on AHV is working great so far!
We are working on standing up our 3rd Nutanix cluster. I'm also looking into some of the Hashicorp tools (Packer and Terraform) to automate all the things.