Connecting Cloud Innovators: Building Community at .NEXT 2024
Hi guys, I'm still digging to give a correct answer to my manager, who is weighing the pros and cons of Nutanix. Since I need to produce a design without knowing exactly where he wants to go, I'm making assumptions. As I understand it, they want to run NSX on it, along with vRA and vCloud SP. I've found a couple of recommendations/tips on NSX over Nutanix, but nothing that can be used for the design part (it's more about implementation, which is quite cool!). Has anyone integrated it on a 4.5 cluster? Does Nutanix plan to give us a technical note or any other cool stuff on NSX? Cheers,
Hi all, my current client has me working on a design that involves deploying an XaaS platform with a lot of security baked in (NSX is on the menu). In our team, I was declared the Nutanix man. My mission is to determine the best way to isolate storage between different workloads: we'll have an integration zone, a development zone, and a production zone as PaaS. Some nodes will run heavy VDI (K2 graphics cards will handle this), and then we'll have IaaS with vCloud SP for the sandbox zone. So we can imagine some sort of multi-cluster setup with three clusters dedicated to three zones: a 3D VDI zone, a sandbox IaaS zone, and a PaaS zone. That implies at least 9 nodes, which is a bit too much for a starting platform. My question is: can I build a 6-node Nutanix cluster for the storage side, with 3 volume pools spanning all 6 nodes? And beyond that, what would it imply in terms of storage performance, data protection, and data segregation (the security team will challenge me heavily on this part)? Last but not
Hi folks! My client asked me a quite tough question today: since we are bound to Cisco Nexus for physical switches, we dug into a low-latency Nexus with SFP+ connections. The 3064-X seems like a good choice, but as far as I know, Cisco's SFP+ compatibility with third-party optics is hazardous. Two questions: Are there any hardware restrictions with the Nutanix Supermicro gear apart from the SFP+ cable (MOLEX 074752 series, as far as I know)? Is there a compatibility matrix at this level of granularity somewhere? Best regards,
Hi all! My client encountered strange behaviour on some of its Nutanix-Dell blocks, and I'd like to share the resolution. Thanks to the Nutanix/Dell support teams for their 'patience' with the IT team on this false-positive critical alert.

Context: multiple critical alerts seen in Prism: FAN SPEED LOW for a whole cluster (meaning all fans on all blocks). No alerts detected in IPMI/iDRAC or the VI client (checking hardware messaging directly on ESXi).
HW: Dell XC630-10
SW: NOS 4.6.4
HV: ESXi 6.0 U2

Resolution: this is a false-positive message which occurs when NOS cannot correctly interpret IPMI messaging, due to a misconfiguration in /etc/nutanix/hardware_config.json: the sensor of type "fans: rpm" has a wrong address. The GOOD address is "ipmi_sensor:FAN"; the BAD address is "ipmi_sensor:FAN RPM". The fix is to modify this descriptor for each fan (14 on an XC630-10 model) on each host. If you encounter this kind of issue, please contact Nutanix/Dell support.
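The resolution above amounts to rewriting one sensor address in /etc/nutanix/hardware_config.json on every host. As a minimal sketch (the exact JSON layout of that file varies by platform, so the recursive walk here is an assumption, not the official procedure — follow support's guidance on a production cluster), something like this could patch the descriptors:

```python
import json

# Hypothetical addresses, taken from the post above; the real file may
# nest these under different keys, so we walk the whole JSON tree.
BAD_PREFIX = "ipmi_sensor:FAN RPM"
GOOD_PREFIX = "ipmi_sensor:FAN"

def fix_fan_addresses(node):
    """Recursively rewrite any bad fan sensor address in a parsed JSON tree."""
    if isinstance(node, dict):
        return {k: fix_fan_addresses(v) for k, v in node.items()}
    if isinstance(node, list):
        return [fix_fan_addresses(v) for v in node]
    if isinstance(node, str) and node.startswith(BAD_PREFIX):
        # e.g. "ipmi_sensor:FAN RPM" -> "ipmi_sensor:FAN"
        return GOOD_PREFIX + node[len(BAD_PREFIX):]
    return node

def fix_file(path="/etc/nutanix/hardware_config.json"):
    """Load, patch, and write back the hardware config (back it up first!)."""
    with open(path) as f:
        config = json.load(f)
    with open(path, "w") as f:
        json.dump(fix_fan_addresses(config), f, indent=2)
```

This would need to be run on each host in the cluster, and the affected services restarted afterwards, for the alert to clear.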