Earlier in the year we shared our Nutanix Docker Volume Plugin on the Docker Store. The Nutanix Docker Volume Plugin (DVP) enables Docker containers to use storage persistently; normally, if a container is moved from one container host to another, its storage does not move with it.
Acropolis Container Services (ACS) provides a storage volume plugin that lets Docker deployments integrate with Nutanix external storage and allows data volumes to persist beyond the lifetime of a single container host.
You can find it here: Nutanix DVP (Docker Volume Plug-in)
Hit REPLY and share your experience with others in the community!
Do you need to be running Acropolis on your Nutanix Nodes to use ACS? Could this plugin work with Nodes that are running VMware NSX instead of Acropolis?
mp3 wrote: Do you need to be running Acropolis on your Nutanix Nodes to use ACS? Could this plugin work with Nodes that are running VMware NSX instead of Acropolis?
Hi mp3, I suspect you meant ESXi in the above comment. If so, then yes, it will work with nodes running ESXi.
Today I got this Docker Volume Plugin working. None of the instructions out there warn you what to do when something goes wrong. The error from Docker itself is that 'nutanix.sock' does not exist.
- Check your password and make sure it is updated and accurate in the command.
- When they say default container, it means the name of the Nutanix storage container. It is not a volume group name or anything else.
- You must have one volume group, named anything you want, that is configured to allow your iSCSI initiator name to connect. I created one called "docker-initiators" and just added the IQN to the client list. If you do not do this, the Docker Volume Plugin will let you create volumes in Nutanix with attached disks, but when you go to use one you'll get an error from the iSCSI initiator saying that it could not find the disk.
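To add your host's IQN to the volume group's client list, you first need to find it. A minimal sketch, assuming open-iscsi on a CentOS/RHEL host (the file path is the standard open-iscsi location; adjust for other distributions):

```shell
# Extract this Docker host's iSCSI initiator name (IQN) so it can be
# added to the volume group's client allowlist in Prism.
extract_iqn() {
  awk -F= '/^InitiatorName=/{print $2}' "$1"
}

# /etc/iscsi/initiatorname.iscsi is the standard open-iscsi location.
if [ -f /etc/iscsi/initiatorname.iscsi ]; then
  extract_iqn /etc/iscsi/initiatorname.iscsi
fi
```

The printed value (something like `iqn.1994-05.com.redhat:abcdef123456`) is what goes into the volume group's client list.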
[bjackson@localhost ~]$ docker run -it -v TestVol:/opt ubuntu /bin/bash
docker: Error response from daemon: iSCSI initiator error: Could not find disk on host.
Example command to get the plugin installed:
docker plugin install ntnx/nutanix_volume_plugin:latest PRISM_IP="172.30.0.50" DATASERVICES_IP="172.30.3.9" PRISM_PASSWORD="Nutanix/4u&" PRISM_USERNAME="bjackson" DEFAULT_CONTAINER="BI_Storage" --alias nutanix
The PRISM_IP is obvious. The DATASERVICES_IP is something the existing documents make sound super complicated and special. In the cluster details within Prism, you just pick a free IP within your subnet (IP range) that you want to use for all VMs that access iSCSI on the Nutanix cluster, and that is it. If you change it later, you will probably have to remove and reinstall the Docker volume plugin with the updated IP address.
If you leave off the '--alias nutanix', you'll have to use the ugly full name of the driver to reference it (ex: ntnx/nutanix_volume_plugin:1.1.1 or some other tag name).
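With the alias in place, everyday volume operations use the short driver name. A sketch of typical usage (the volume name and size are examples, not required values; these commands need a working plugin install and cluster, so treat them as illustrative):

```shell
# Create a Nutanix-backed volume by the aliased driver name.
# sizeMB is the size option shown elsewhere in this thread.
docker volume create --driver nutanix --opt sizeMB=10240 TestVol

# List volumes to confirm it was created.
docker volume ls

# Mount it into a container; data in /opt persists across containers.
docker run --rm -it -v TestVol:/opt ubuntu /bin/bash
```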
I'm not sure if it is possible to update an existing plugin's settings once it is installed. So far, I had to do 'docker plugin rm nutanix', then rerun the install command above. Success means you don't get any messages back beyond the prompt asking for permission for the plugin to access various things.
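The remove-and-reinstall cycle above can be sketched as follows. Note that Docker requires a plugin to be disabled before removal (or forced with `rm -f`); Docker also has a `docker plugin set` command that may let you update a single setting, such as a changed password, without a full reinstall, though I have not tested it with this plugin:

```shell
# A plugin must be disabled before it can be removed.
docker plugin disable nutanix
docker plugin rm nutanix
docker plugin install ntnx/nutanix_volume_plugin:latest \
  PRISM_IP="172.30.0.50" DATASERVICES_IP="172.30.3.9" \
  PRISM_USERNAME="bjackson" PRISM_PASSWORD="Nutanix/4u&" \
  DEFAULT_CONTAINER="BI_Storage" --alias nutanix

# Possibly lighter-weight alternative (untested with this plugin):
# settings can only be changed while the plugin is disabled.
docker plugin disable nutanix
docker plugin set nutanix PRISM_PASSWORD="newpassword"
docker plugin enable nutanix
```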
Absolutely nothing is documented about these settings. One password change and things could randomly go south with your containers and attached volumes. Hope this helps someone over the hill.
Continuing the experimentation with the Docker Volume Plugin: what I said about the iSCSI initiator volume group may not always be true. My first CentOS 7 host reported the iSCSI initiator error, and the error went away after I created a volume group and added the initiator name as a client to that group. Secondary Docker hosts did not seem to have this problem. I would run "docker volume create --driver nutanix --opt sizeMB=100000 mydatavol" and it would get created and be immediately usable. It would also be visible on every Docker host where I ran the 'docker volume list' command. That is expected behavior. The learning journey continues.
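A quick way to confirm the cross-host visibility described above is to check the volume from a second Docker host ("mydatavol" is the example name from the command above):

```shell
# List volumes; a Nutanix-backed volume created on another host
# should appear here once this host has the plugin installed.
docker volume ls

# Inspect it to see the driver and mountpoint details.
docker volume inspect mydatavol
```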