The distributed nature of the Kubernetes platform offers advantages for running machine learning (ML) and AI workloads. But while AI and ML resources in the public cloud are easily provisioned and operated, achieving the same success on-premises requires properly configured and networked GPU nodes, along with data storage that delivers consistently high throughput. This session offers expert guidance on how to use Nutanix to build and configure GPU-based Kubernetes clusters and storage that support distributed AI/ML models and datasets.
Speakers
Pranav Desai
Debojyoti Dutta