I am running Docker on macOS, which means that 'boot2docker' is required. Because Docker relies on Linux kernel features, a Linux VM with a Docker daemon runs in the background, and this VM hosts our containers.
- Install 'boot2docker'
- Start it via the application menu, which opens a shell that has all required environment variables set (boot2docker init --> boot2docker start --> boot2docker shellinit)
- docker run -d -P --name couchbase-1 couchbase
This starts a container in detached mode with the name 'couchbase-1', using the Couchbase image. The container's ports are exposed and mapped to ports on the host.
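The setup steps above can be sketched as a shell session (a sketch, not a transcript of my exact machine):

```shell
# One-time VM setup, then boot it and point this shell at its Docker daemon
boot2docker init
boot2docker start
eval "$(boot2docker shellinit)"   # exports DOCKER_HOST etc.

# Run a Couchbase container:
#   -d      detached mode
#   -P      map all exposed container ports to random host ports
#   --name  assign the name 'couchbase-1'
docker run -d -P --name couchbase-1 couchbase
```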
In order to double-check which containers are running, use the following command:
docker ps
To see the port mappings, the following command can be used:
docker port couchbase-1
This gives me the following output for my first Couchbase node:
11207/tcp -> 0.0.0.0:32772
11210/tcp -> 0.0.0.0:32773
11211/tcp -> 0.0.0.0:32774
18091/tcp -> 0.0.0.0:32768
18092/tcp -> 0.0.0.0:32769
8091/tcp -> 0.0.0.0:32770
8092/tcp -> 0.0.0.0:32771
The IP address of the Docker host VM can be retrieved with the following command, which returns 192.168.59.103 in my case:
boot2docker ip
So we can access the Admin-UI of our node by using http://192.168.59.103:32770 (instead of using port 8091).
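Instead of looking up the mapping by hand, the host IP and the mapped port can be combined into the Admin-UI URL. The following one-liner is a sketch; it assumes that 'docker port couchbase-1 8091' prints a single line of the form '0.0.0.0:32770':

```shell
# Build the Admin-UI URL from the boot2docker host IP and the host port
# that Docker mapped to the container's port 8091
UI_PORT=$(docker port couchbase-1 8091 | cut -d: -f2)
echo "http://$(boot2docker ip):${UI_PORT}"
```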
The internal IP can be determined by using Docker's exec command:
docker exec -it couchbase-1 ifconfig
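If only the IPv4 address is needed, the ifconfig output can be filtered. This sketch assumes the container's interface is eth0 and that ifconfig uses the older 'inet addr:' output format common in such images:

```shell
# Extract the internal IPv4 address from the ifconfig output
# (assumes interface eth0 and the 'inet addr:172.17.0.2 ...' format)
docker exec couchbase-1 ifconfig eth0 \
  | grep 'inet addr' \
  | awk -F'[: ]+' '{print $4}'
```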
So far so good. But how to build a cluster?
- Set up Couchbase within the container 'couchbase-1' via the Admin-UI by creating a new cluster. I used the internal IP address as the node name; production environments should take care of proper name resolution.
- Just run a second Couchbase container and name it 'couchbase-2'
- Retrieve the port mapping for 'couchbase-2' and access the Admin-UI via the port which is mapped to 8091.
- Join the existing cluster by pointing to the internal IP of the container named 'couchbase-1'. The default ports are reachable internally (between containers).
- Repeat the steps for a third node, 'couchbase-3'
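The cluster-building steps above can be sketched as follows (the exact mapped ports will differ on your machine; joining the cluster itself happens in the Admin-UI):

```shell
# Second node: same image, new name; -P maps its ports to fresh host ports
docker run -d -P --name couchbase-2 couchbase
docker port couchbase-2 8091      # host port of the second node's Admin-UI
docker exec couchbase-1 ifconfig  # internal IP of the first node, to join against
# In the Admin-UI of couchbase-2, join the existing cluster by entering the
# internal IP of couchbase-1 (the default port 8091 works container-to-container)

# Third node: repeat with a new name
docker run -d -P --name couchbase-3 couchbase
```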
Additional containers could host applications. By default, my containers can reach each other via their internal IP addresses. Accessing the cluster from the outside will be more challenging even though the ports are mapped, but the idea here is to deploy your applications via Docker anyway. Access from the outside would require configuring the Couchbase client library to take these port mappings into account. Further details about Docker's networking can be found here: https://docs.docker.com/articles/networking/ .
!Important! The Docker trend leads to containers being used inside VMs (which are deployed via IaaS cloud systems like Amazon EC2). I have even seen this in a VMware environment where the intention was to use the VMs more efficiently and to unify the application deployment process; what you see there are multiple containers per VM. Unlike application servers, Couchbase is designed to use dedicated resources, for instance a specific per-node RAM quota. The cluster is sized to fit your requirements, so no real waste of resources is expected in this case. Especially for performance-critical use cases, it would NOT follow Couchbase's best practices to run Couchbase Server in a container (Docker) which is in turn running in another container (the VM).
- Docker in general is useful for setting up test and development environments (even for containers hosted in VMs)
- As soon as your use case is performance critical, my opinion is that you should run Docker at least on bare metal for your production Couchbase Server deployments
This is underlined by the documentation of the Docker image: https://registry.hub.docker.com/_/couchbase/ . There you can see that the only deployment recommended for production is the one named 'Multiple hosts, single container on each host'.
Feedback is highly appreciated! :-)