Containers have been at the forefront of development for some time, largely thanks to CoreOS and their container-optimized approach, but others are also working to bring containers to the mainstream. A more recent entry to the game comes from Google in the form of Kubernetes, an open-source platform for automating container deployment, scaling, and operations across clusters of hosts.

Kubernetes on CoreOS Cluster

Kubernetes, though a fairly new system, is billed as production-grade container orchestration, aiming to further ease the management and discovery of containerized applications by grouping them into logical units. Its strengths lie in flexible growth, environment-agnostic portability, and practically unlimited scaling. Getting started with Kubernetes might seem daunting due to the number of possible configurations and environments, so this guide will help you deploy your first Kubernetes installation on a CoreOS cluster.

Getting started

Below you will find the steps and example files needed to successfully deploy a master and two worker nodes on a CoreOS cluster. If you are not already familiar with CoreOS or would like some help booting up a new cluster, follow our guide for Getting Started with a CoreOS Cluster and use the following cloud-config to enable flannel networking. This guide sets up Kubernetes on a three-node cluster, but it works just as well on a single CoreOS host when the discovery and peer components in the cloud-config are omitted.

#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: "http://$private_ipv4:2379"
    initial-advertise-peer-urls: "http://$private_ipv4:2380"
    listen-client-urls: "http://0.0.0.0:2379,http://0.0.0.0:4001"
    listen-peer-urls: "http://$private_ipv4:2380,http://$private_ipv4:7001"
    data-dir: /var/lib/etcd2
  fleet:
    public-ip: $private_ipv4
    etcd_servers: "http://$private_ipv4:2379"
  flannel:
    interface: $private_ipv4
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: flanneld.service
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config \
              '{ "Network": "10.10.0.0/16" }'
      command: start
ssh_authorized_keys:
  - "<SSH-key>"

Deploying a Kubernetes cluster involves the following processes.

  • Generate certificates for communication between Kubernetes components.
  • Configure flannel networking for the cluster.
  • Set up a Kubernetes master node.
  • Set up Kubernetes worker nodes.
  • Configure kubectl to work on the cluster.
  • Test the configuration.

Some variables will be used throughout this guide. The provided defaults can be left unchanged, but the etcd servers, node IPs, and the master IP will need to be customized to match your infrastructure. Each value that needs editing is pointed out in the guide.
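
For reference, the defaults used in this guide are:

  • Flannel overlay network: 10.12.0.0/16 (set with etcdctl later in this guide)
  • Kubernetes service IP range: 10.13.0.0/24
  • Kubernetes API service IP: 10.13.0.1
  • Cluster DNS service IP: 10.13.0.10
  • Cluster domain: cluster.local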

Generate cluster public and private keys

You will need to generate the following keys to allow secure communication between Kubernetes components.

Root CA public and private key

  • ca.pem
  • ca-key.pem

API server public and private key

  • apiserver.pem
  • apiserver-key.pem

Worker node public and private key

  • <FQDN>-worker.pem
  • <FQDN>-worker-key.pem

To start off, use the command below to create a new directory on every node in your cluster.

mkdir ~/kube-ssl && cd ~/kube-ssl

Next, you will need to choose the master node for your cluster, as the master and the worker nodes are configured differently. There are no special requirements for the master node; just pick one, for example the first node in your CoreOS cluster.

The following is done only on the master node; you will later copy the worker keys onto their corresponding nodes.

Generate the root keys with the following commands.

openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

Some of the options for the keys cannot be set through flags, so instead, create a configuration file with the name and content shown below. Replace the <master public IP> and the <master private IP> at the bottom of the file with the public and private IP addresses of your master node.

vi openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.13.0.1
IP.2 = <master public IP>
IP.3 = <master private IP>

You can then use the configuration file to generate the API server keys with the commands below. This is still done only on the master node.

openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

Next, create a similar configuration file for the worker keys.

vi worker-openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP

Then generate the worker keys for every node using the following two variables. Replace <node hostname>, for example with $HOSTNAME, to give each key a unique name matching its node. If the nodes do not have a routable hostname, set the FQDN to a unique, per-node placeholder name. Also, set the <node private IP> for each worker node.

FQDN=<node hostname>
WORKER_IP=<node private IP>
openssl genrsa -out ${FQDN}-worker-key.pem 2048
WORKER_IP=${WORKER_IP} openssl req -new -key ${FQDN}-worker-key.pem -out ${FQDN}-worker.csr -subj "/CN=${FQDN}" -config worker-openssl.cnf
WORKER_IP=${WORKER_IP} openssl x509 -req -in ${FQDN}-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ${FQDN}-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

Repeat the steps above to generate a new key for every worker node in turn.
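
If you prefer to script this, a minimal sketch along the following lines could generate all worker keys in one pass. The hostnames and private IPs are placeholders; substitute the real values for your nodes.

# Placeholder worker names and private IPs, replace with your own
WORKERS="worker1:10.1.7.73 worker2:10.1.8.155"
for worker in $WORKERS; do
  FQDN=${worker%%:*}
  WORKER_IP=${worker##*:}
  openssl genrsa -out ${FQDN}-worker-key.pem 2048
  WORKER_IP=${WORKER_IP} openssl req -new -key ${FQDN}-worker-key.pem -out ${FQDN}-worker.csr -subj "/CN=${FQDN}" -config worker-openssl.cnf
  WORKER_IP=${WORKER_IP} openssl x509 -req -in ${FQDN}-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ${FQDN}-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
done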

Copy the appropriate keys to the worker nodes by running the following commands on each of the worker nodes.

scp core@<master node IP>:~/kube-ssl/ca.pem ~/kube-ssl/
scp core@<master node IP>:~/kube-ssl/${HOSTNAME}* ~/kube-ssl/

You may also want to create an admin key pair for remote management from your own computer. This is optional.

openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

Apply the TLS assets

Create a directory for the assets and copy the keys to it on every node.

sudo mkdir -p /etc/kubernetes/ssl
sudo cp ~/kube-ssl/*.pem /etc/kubernetes/ssl

Make a symlink to the worker keys on each worker node to simplify the setup later on.

cd /etc/kubernetes/ssl/
sudo ln -s ${HOSTNAME}-worker.pem worker.pem
sudo ln -s ${HOSTNAME}-worker-key.pem worker-key.pem

And set proper permissions for the private keys.

sudo chmod 600 /etc/kubernetes/ssl/*-key.pem
sudo chown root:root /etc/kubernetes/ssl/*-key.pem

Flannel network configuration

Next up is configuring flannel networking to allow Kubernetes to assign virtual IP addresses to pods. Perform these steps on all nodes in your cluster.

Make a new directory for storing flannel options.

sudo mkdir /etc/flannel

Then create an options file with the following command and save the content shown below. Replace the <node private IP> and each of the private IP segments with the values applicable to your cluster. The etcd endpoints are the URLs at which etcd is reachable; list the private IPs of each of your nodes.

sudo vi /etc/flannel/options.env
FLANNELD_IFACE=<node private IP>
FLANNELD_ETCD_ENDPOINTS=http://<master private IP>:2379,http://<node1 private IP>:2379,http://<node2 private IP>:2379

You will also need to tell flannel to use these options. Create the following directory and a new flannel drop-in configuration file.

sudo mkdir -p /etc/systemd/system/flanneld.service.d/
sudo vi /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf

Add the service segment to the file as shown underneath, then save and exit the editor.

[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env

With the config files created, add the following network configuration to etcd from any one node. This overrides the initial flannel network value set by the cloud-config drop-in.

etcdctl set /coreos.com/network/config "{\"Network\":\"10.12.0.0/16\",\"Backend\":{\"Type\":\"vxlan\"}}"

When employing the vxlan backend, the kernel uses UDP port 8472 for sending encapsulated packets. Make sure that any firewall rules you set allow this traffic for all hosts participating in the overlay network.
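
For example, if you manage the firewall with iptables directly, a rule along the following lines would allow the traffic. The source network 10.1.0.0/16 is a placeholder; adjust it to the private subnet your hosts share.

# Allow flannel vxlan traffic between cluster hosts (adjust the subnet)
sudo iptables -A INPUT -p udp -s 10.1.0.0/16 --dport 8472 -j ACCEPT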

Configure the master node

Kubernetes clusters operate on a master-worker principle, where the master node(s) schedule application pods for the worker nodes to run.

Deploy Kubernetes cluster master

Create a kubelet unit file with the following command and content. Replace <master private IP> with the master's private IP address to set the hostname override.

sudo vi /etc/systemd/system/kubelet.service
[Service]
Environment=KUBELET_VERSION=v1.3.5_coreos.0
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=<master private IP> \
  --cluster-dns=10.13.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Then make a new directory for Kubernetes manifests which define the rest of the services that run on the master node.

sudo mkdir -p /etc/kubernetes/manifests

Create a manifest file for the API server as shown below. Set the etcd server IP addresses again and replace the <master private IP> with the private IP for the master node.

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.3.5_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://<master private IP>:2379,http://<node1 private IP>:2379,http://<node2 private IP>:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.13.0.0/24
    - --secure-port=443
    - --advertise-address=<master private IP>
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1=true,extensions/v1beta1/networkpolicies=true
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Kubernetes directs traffic from outside the cluster to the pods using kube-proxy, which is needed on every node. Create one on the master node using the yaml file below. The workers will be set up later with slightly different settings. No customization is needed for this file.

sudo vi /etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.3.5_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Create a new file for the Kubernetes controller manager on the master node.

sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Save the pod definition below in the yaml file as is.

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.3.5_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

And lastly, create a manifest for the scheduler as shown below.

sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.3.5_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1

Start the master kubelet

With all of the configuration and manifest files in place, go ahead and reload the systemd daemon to apply the changes, then start and enable the kubelet service.

sudo systemctl daemon-reload
sudo systemctl start kubelet
sudo systemctl enable kubelet

Check that the service started using the status command.

systemctl status kubelet

You should see the service as active along with a list of recent actions. If everything is working correctly, the kubelet will start downloading the required files. Wait until the downloads are complete and the node is registered to the cluster, as shown in the example output below.

Sep 12 12:55:24 <hostname> kubelet-wrapper[1320]: I0912 12:55:24.419978 1320 kubelet.go:1202] Successfully registered node <private IP>

Test the API server with a simple curl request for version details.

curl http://127.0.0.1:8080/version
{
  "major": "1",
  "minor": "3",
  "gitVersion": "v1.3.5+coreos.0",
  "gitCommit": "d7a04b1c6044647f5919fadf3cecb9ee70c10fc5",
  "gitTreeState": "clean",
  "buildDate": "2016-08-15T21:01:42Z",
  "goVersion": "go1.6.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}

If the API server is responding as expected, add the following namespace to it.

curl -H "Content-Type: application/json" -XPOST -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"

You can see the Kubernetes components started by the kubelet running in Docker.

docker ps
CONTAINER ID  IMAGE                                     COMMAND                 CREATED
5ea5587bf3b8  quay.io/coreos/hyperkube:v1.3.5_coreos.0  "/hyperkube proxy --m"  ...
32a03abd7532  quay.io/coreos/hyperkube:v1.3.5_coreos.0  "/hyperkube controlle"  ...
dfc6e9b523df  quay.io/coreos/hyperkube:v1.3.5_coreos.0  "/hyperkube apiserver"  ...
a6f66f99794c  quay.io/coreos/hyperkube:v1.3.5_coreos.0  "/hyperkube scheduler"  ...
ec185e157467  gcr.io/google_containers/pause-amd64:3.0  "/pause"                ...
7d960a6beebf  gcr.io/google_containers/pause-amd64:3.0  "/pause"                ...
111b6d3cb963  gcr.io/google_containers/pause-amd64:3.0  "/pause"                ...
1370f56d902b  gcr.io/google_containers/pause-amd64:3.0  "/pause"                ...

Hyperkube is an all-in-one wrapper for the Kubernetes server components. It combines all of the services into a single binary that can run the different components as individual processes.
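
If you are curious, you can inspect the bundled components from the same image. Running the binary with --help should print the list of available servers, though the exact output depends on the image version used above.

docker run --rm quay.io/coreos/hyperkube:v1.3.5_coreos.0 /hyperkube --help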

Download kubectl

Kubernetes is given instructions through a separate command-line utility called kubectl. Download the application using the following command on the master node. Optionally, you can also download kubectl to your own computer if you generated the optional admin keys and wish to manage the cluster without having to SSH to your master node; a configuration sketch for this is shown after the readiness check below.

curl https://storage.googleapis.com/kubernetes-release/release/v1.3.5/bin/linux/amd64/kubectl -o ~/kubectl

Make the application executable.

sudo chmod +x ~/kubectl

Then use the following commands to make a new directory, move the now executable file to it, and add the location to your PATH environment variable.

sudo mkdir -p /opt/bin
sudo mv ~/kubectl /opt/bin/kubectl
PATH="$PATH:/opt/bin/"
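
Note that the PATH change above only applies to your current shell session. If you want it to persist, you could, for example, append it to your shell profile:

echo 'export PATH="$PATH:/opt/bin"' >> ~/.bashrc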

You can now test the Kubernetes cluster readiness with the command below.

kubectl get nodes
NAME        STATUS  AGE
10.1.5.104  Ready   2m
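
If you also downloaded kubectl to your own computer and generated the optional admin keys earlier, you could point it at the master along the following lines. The cluster, user, and context names are arbitrary; replace <master public IP> and the key paths with your own.

kubectl config set-cluster coreos-cluster --server=https://<master public IP> --certificate-authority=ca.pem
kubectl config set-credentials kube-admin --client-certificate=admin.pem --client-key=admin-key.pem
kubectl config set-context coreos --cluster=coreos-cluster --user=kube-admin
kubectl config use-context coreos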

Configure the worker nodes

With the master node ready you can add your worker nodes to the Kubernetes cluster.

Deploy Kubernetes cluster worker

Create a kubelet service on the workers, as you did on the master node.

sudo vi /etc/systemd/system/kubelet.service

Enter the following into the unit file. Replace <master private IP> with the private IP address of your master node, and <node private IP> with the private IP address of the worker node.

[Service]
Environment=KUBELET_VERSION=v1.3.5_coreos.0
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=https://<master private IP> \
  --register-node=true \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=<node private IP> \
  --cluster-dns=10.13.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Next, make a manifest directory for the rest of the kubelet files.

sudo mkdir -p /etc/kubernetes/manifests

Then create a proxy file with the name and content shown below. Again, replace <master private IP> with the master node's private IP address in the file.

sudo vi /etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.3.5_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=https://<master private IP>
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"

Lastly, create a configuration file that tells the node the relevant information about the cluster. With the TLS assets already in place, the file can be used as is.

sudo vi /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

After saving the last configuration file, reload the systemd daemon on the worker nodes, then start and enable kubelet.

sudo systemctl daemon-reload
sudo systemctl start kubelet
sudo systemctl enable kubelet

Check that the downloads start on the worker nodes as they did on the master.

systemctl status kubelet

Once the downloads finish, you should see the nodes added to the cluster. On the master, check the nodes again to see that the workers joined successfully.

kubectl get nodes
NAME        STATUS AGE
10.1.5.104  Ready  5m
10.1.7.73   Ready  1m
10.1.8.155  Ready  1m

Summary

Congratulations! With a fully working Kubernetes cluster up and running on your CoreOS servers, you are now free to deploy Kubernetes-ready applications. The possibilities with Kubernetes are nearly endless, and it might be difficult to decide what to do next. Check out the Kubernetes documentation for inspiration to keep going.
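
For example, a quick way to verify that scheduling works end to end is to launch a simple test deployment and check that its pods land on the worker nodes. The image and names here are only an example.

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide
kubectl expose deployment nginx --port=80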

Mastering Kubernetes clusters will take some time and practice, but it can be a powerful new tool in your arsenal, with the potential to run anywhere and never be outgrown.