If you follow the CoreOS documentation and try to get a Vagrant cluster up and running, you will find that the etcd cluster is not created by default.
After cloning the Vagrant CoreOS repo, cd into the cluster folder and run vagrant up. This will start three CoreOS VMs, numbered core-01 to core-03. You can ssh into one of them by running:
vagrant ssh core-01
In my case the IP addresses of those CoreOS machines are:
- core-01: 192.168.65.2
- core-02: 192.168.65.3
- core-03: 192.168.65.4
On core-01, start the first node:

etcd -s 192.168.65.2:7001 -cl 0.0.0.0 -c 192.168.65.2:4001 -d nodes/node1 -n node1 &
This starts the etcd peer server (port 7001) and the client endpoint (port 4001), stores the node's files (configuration, logs) in nodes/node1, and names it node1. The -cl 0.0.0.0 option makes etcd listen for clients on all interfaces, so it is reachable on both internal and external IPs.
On core-02, start the second node:

etcd -s 192.168.65.3:7001 -cl 0.0.0.0 -c 192.168.65.3:4001 -C 192.168.65.2:7001 -d nodes/node2 -n node2 &
The parameters are much the same as on core-01, except for the -C option, which points the new node at the peer address of core-01 so that it joins the existing cluster.
On core-03, start the third node, joining it through core-02 (any existing peer will do):

etcd -s 192.168.65.4:7001 -cl 0.0.0.0 -c 192.168.65.4:4001 -C 192.168.65.3:7001 -d nodes/node3 -n node3 &
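The three invocations differ only in the node's IP, its name, and (for joining nodes) the peer to contact. A small helper function (my own sketch, not part of etcd) makes the pattern explicit:

```shell
# Build the etcd command line for a node; peer is empty for the first node.
# This only prints the command, so the pattern is easy to check.
etcd_cmd() {
  ip=$1; name=$2; peer=$3
  cmd="etcd -s ${ip}:7001 -cl 0.0.0.0 -c ${ip}:4001 -d nodes/${name} -n ${name}"
  [ -n "$peer" ] && cmd="$cmd -C ${peer}:7001"
  echo "$cmd"
}

etcd_cmd 192.168.65.2 node1
etcd_cmd 192.168.65.3 node2 192.168.65.2
etcd_cmd 192.168.65.4 node3 192.168.65.3
```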
Note: backgrounding the etcd processes with & the way I did is definitely not the way to go. I've seen examples where they run inside a Docker container, but this will do for a quick test run.
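Since CoreOS is built around systemd, a cleaner approach would be to let systemd supervise the process. A minimal sketch of a unit for node1 (the unit name, file path, and binary path are my assumptions, not part of the original setup):

```ini
# /etc/systemd/system/etcd-node1.service (hypothetical name and path)
[Unit]
Description=etcd node1
After=network.target

[Service]
# Same flags as the manual invocation above
ExecStart=/usr/bin/etcd -s 192.168.65.2:7001 -cl 0.0.0.0 -c 192.168.65.2:4001 -d /home/core/nodes/node1 -n node1
Restart=always

[Install]
WantedBy=multi-user.target
```

You would then start it with systemctl start etcd-node1 instead of backgrounding it from a shell.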
Now you have an etcd cluster. To test that the nodes all see each other, you can run:
curl -L http://192.168.65.3:4001/v1/machines
The output should list the three machines:
http://192.168.65.2:4001 http://192.168.65.3:4001 http://192.168.65.4:4001
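A quick sanity check is to count the peers in that output. The sketch below runs against a sample string; in practice you would replace it with the live curl output from your cluster:

```shell
# Sample /v1/machines output; in practice capture it with:
#   machines=$(curl -sL http://192.168.65.3:4001/v1/machines)
machines="http://192.168.65.2:4001 http://192.168.65.3:4001 http://192.168.65.4:4001"

# Split on commas/whitespace and count the peer URLs
count=$(echo "$machines" | tr ', ' '\n' | grep -c 'http')
echo "cluster size: $count"   # prints: cluster size: 3
```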
You can also access the etcd dashboard by pointing your browser (on your host machine) at one of the nodes.
You can now follow the rest of the documentation: write keys to etcd on one node and read them back from all the others.
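For example, with the v1 keys API of that etcd generation (the key name "message" and the exact response shape are my assumptions based on the old documentation):

```shell
# Against the live cluster you would write on one node and read from another:
#   curl -L http://192.168.65.2:4001/v1/keys/message -d value="Hello world"
#   curl -L http://192.168.65.4:4001/v1/keys/message
# The read returns JSON; a sample response (assumed shape) parsed with sed:
response='{"action":"GET","key":"/message","value":"Hello world","index":6}'
value=$(echo "$response" | sed 's/.*"value":"\([^"]*\)".*/\1/')
echo "$value"   # prints: Hello world
```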