Docker Plugins
https://docs.docker.com/engine/extend/legacy_plugins/
Plugins are used to extend Docker's functionality, most commonly network drivers, volume drivers, and authorization plugins.
Consider, for example, a remote filesystem such as AWS S3 backing a volume: supporting it requires extending Docker with a volume plugin.
Plugins are distributed as Docker images, so you can find Docker plugins on Docker Hub itself.
Plugins are not widely used and tend to be discontinued, which is why the URL above calls them legacy_plugins.
Plugins are managed with the docker plugin command, which has the following subcommands:
- create
- disable
- enable
- inspect
- install
- ls
- push
- rm
- set
- upgrade
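As a sketch of a plugin's lifecycle with these subcommands (the plugin name and the DEBUG setting are just examples): a plugin must be disabled before its settings can be changed with set, then re-enabled.

```shell
# Sketch of the plugin lifecycle; the plugin name and the DEBUG setting
# are examples (DEBUG is a setting documented for vieux/sshfs).
PLUGIN="vieux/sshfs"

docker plugin install "$PLUGIN"      # pulls the plugin image and enables it
docker plugin disable "$PLUGIN"      # 'set' only works while the plugin is disabled
docker plugin set "$PLUGIN" DEBUG=1  # change a plugin setting
docker plugin enable "$PLUGIN"
docker plugin rm --force "$PLUGIN"   # remove the plugin entirely
```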
Let's install a well-known plugin called sshfs, which mounts a filesystem from another machine over SSH. Note that we reference it like an image, so we could even pin a specific version. Remember that this setup is just for study and should not be used in production; the goal is only to understand how plugins work.
vagrant@worker1:~$ docker plugin install vieux/sshfs
Plugin "vieux/sshfs" is requesting the following privileges:
- network: [host]
- mount: [/var/lib/docker/plugins/]
- mount: []
- device: [/dev/fuse]
- capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
latest: Pulling from vieux/sshfs
Digest: sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811
52d435ada6a4: Complete
Installed plugin vieux/sshfs
vagrant@worker1:~$
vagrant@worker1:~$ docker plugin ls
ID             NAME                 DESCRIPTION               ENABLED
6915ebc10c2d   vieux/sshfs:latest   sshFS plugin for Docker   true
https://github.com/vieux/docker-volume-sshfs
To create a volume over SSH with this driver pointing at worker2, we need to enable PasswordAuthentication in the SSH configuration on worker2 (note that the server-side setting normally lives in /etc/ssh/sshd_config; the transcript below edits /etc/ssh/ssh_config, which is the client config).
Then restart the SSH daemon with sudo systemctl restart sshd.
# Uncomment the line so PasswordAuthentication is set to yes
[vagrant@worker2 ~]$ sudo vi /etc/ssh/ssh_config
[vagrant@worker2 ~]$ sudo systemctl restart sshd
[vagrant@worker2 ~]$ cat /etc/ssh/ssh_config
# $OpenBSD: ssh_config,v 1.34 2019/02/04 02:39:42 dtucker Exp $
#......
Host *
# ForwardAgent no
# ForwardX11 no
PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# ....
# /etc/ssh/ssh_config.d/ which will be automatically included below
Include /etc/ssh/ssh_config.d/*.conf
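The interactive vi edit above can also be done non-interactively with sed. A minimal sketch on a scratch copy (on the real host the server-side file is normally /etc/ssh/sshd_config):

```shell
# Work on a scratch copy so the sketch is safe to run anywhere.
cat > /tmp/sshd_config.demo <<'EOF'
#PasswordAuthentication no
#HostbasedAuthentication no
EOF

# Uncomment the PasswordAuthentication line and force its value to yes.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /tmp/sshd_config.demo

grep PasswordAuthentication /tmp/sshd_config.demo
# On the real host the equivalent would be the same sed against
# /etc/ssh/sshd_config (with sudo), followed by: sudo systemctl restart sshd
```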
[vagrant@worker2 ~]$ ip -c -br a
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 10.0.2.15/24 fe80::5054:ff:fe27:8b50/64
# To get this machine's IP
eth1 UP 192.168.56.120/24 fe80::a00:27ff:fe50:4d11/64
Now let's create a volume using this driver, but from worker1. Since we're inside Vagrant, every machine has a vagrant user whose password is also vagrant; we'll use this user.
# -d selects the driver
# -o passes driver options
vagrant@worker1:~$ docker volume create -d vieux/sshfs --name remotevolume -o sshcmd=vagrant@192.168.56.120:/vagrant -o password=vagrant
remotevolume
# Checking if the volume is available
vagrant@worker1:~$ docker volume ls
DRIVER               VOLUME NAME
vieux/sshfs:latest   remotevolume
vagrant@worker1:~$ docker volume inspect remotevolume | jq
[
{
"CreatedAt": "0001-01-01T00:00:00Z",
"Driver": "vieux/sshfs:latest",
"Labels": {},
"Mountpoint": "/mnt/volumes/39cc9d0f99226f9dcafb42710622b6d6",
"Name": "remotevolume",
"Options": {
# Note the insecurity here: the password is stored in plain text
"password": "vagrant",
"sshcmd": "vagrant@192.168.56.120:/vagrant"
},
"Scope": "local"
}
]
vagrant@worker1:~$
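Since the password sits in plain text in the volume options, the vieux/sshfs README offers key-based authentication instead: pass an sshkey.source directory at install time. The path below is an assumption for this Vagrant setup.

```shell
# Hand the plugin an SSH key directory at install time (path is an assumption
# for this Vagrant lab), so no password is stored in the volume options.
SSH_KEY_DIR="/home/vagrant/.ssh/"

docker plugin install vieux/sshfs sshkey.source="$SSH_KEY_DIR"
docker volume create -d vieux/sshfs --name remotevolume -o sshcmd=vagrant@192.168.56.120:/vagrant
```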
Checking what we have in the folder.
vagrant@worker1:~$ docker container run -it --rm -v remotevolume:/data alpine ls /data
NFS Plugin
https://registry.hub.docker.com/r/trajano/nfs-volume-plugin
Now let's test NFS, with the master machine acting as the file server. On the master it's necessary to install the nfs-server package, and on the workers nfs-common.
master
vagrant@master:~$ sudo apt-get install nfs-server -y
vagrant@master:~$ mkdir -p /home/vagrant/storage
# Careful: sudo applies only to echo, not to the > redirection, so this write
# to /etc/exports runs as the unprivileged user and fails with "Permission denied"
vagrant@master:~$ sudo echo "/home/vagrant/storage 192.168.56.0/24(rw)" > /etc/exports
vagrant@master:~$ cat /etc/exports
vagrant@master:~$ mkdir -p /home/vagrant/storage
# Editing the file with sudo vi instead (echo ... | sudo tee /etc/exports would also work)
vagrant@master:~$ sudo vi /etc/exports
vagrant@master:~$ cat /etc/exports
/home/vagrant/storage 192.168.56.0/24(rw)
# Creating a test file, restarting the NFS service, and checking what is exported
vagrant@master:~$ echo "just testing" > /home/vagrant/storage/arquivo
vagrant@master:~$ sudo systemctl restart nfs-server.service
vagrant@master:~$ showmount -e
Export list for master.docker-dca.example:
/home/vagrant/storage 192.168.56.0/24
vagrant@master:~$
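The same export can be set up without the interactive editor. A sketch, assuming the IPs from this lab: piping through sudo tee avoids the redirection pitfall, exportfs -ra re-reads /etc/exports, and showmount can point at the server from any worker.

```shell
# 'sudo tee' writes the file as root, unlike 'sudo echo ... >' where the
# redirection runs as the unprivileged user.
EXPORT_LINE="/home/vagrant/storage 192.168.56.0/24(rw)"
echo "$EXPORT_LINE" | sudo tee /etc/exports

sudo exportfs -ra    # re-read /etc/exports without restarting the service

# From any worker, confirm the export is visible (master IP from this lab):
showmount -e 192.168.56.100
```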
worker
sudo apt-get install nfs-common -y
# The inspect output below shows the driver actually in use is
# trajano/nfs-volume-plugin, configured with device/nfsopts options
docker plugin install trajano/nfs-volume-plugin
docker volume create -d trajano/nfs-volume-plugin --opt device=192.168.56.100:/home/vagrant/storage --opt nfsopts=hard,proto=tcp,nfsvers=3,intr,nolock volume_nfs
vagrant@worker1:~$ docker volume inspect volume_nfs | jq
[
{
"CreatedAt": "0001-01-01T00:00:00Z",
"Driver": "trajano/nfs-volume-plugin:latest",
"Labels": {},
"Mountpoint": "",
"Name": "volume_nfs",
"Options": {
"device": "192.168.56.100:/home/vagrant/storage",
"nfsopts": "hard,proto=tcp,nfsvers=3,intr,nolock"
},
# Local scope is for this machine only, but with global any machine in the cluster can use it
"Scope": "global",
"Status": {
"args": [
"-t",
"nfs",
"-o",
"hard,proto=tcp,nfsvers=3,intr,nolock",
"192.168.56.100:/home/vagrant/storage"
],
"mounted": false
}
}
]
vagrant@worker1:~$
docker run --rm -v volume_nfs:/data alpine ls /data
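Because the volume reports "Scope": "global", every swarm node with the plugin installed can resolve it by name; a sketch of consuming it from a swarm service (the service name is just an example):

```shell
# Any node running the plugin can resolve volume_nfs by name, so a swarm
# service can mount it no matter where its tasks land. Name is an example.
SERVICE_NAME="nfs-test"

docker service create --name "$SERVICE_NAME" \
  --mount type=volume,source=volume_nfs,target=/data,volume-driver=trajano/nfs-volume-plugin \
  alpine sleep 1d
```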