Wordpress and Varnish on Kubernetes Step by Step
Overview
My goal with this post is to explore Kubernetes' Ingress Controller and Ingress resources. The best way to do that is to deploy a real application: a Wordpress site with a Varnish HTTP cache and a MariaDB database.
If you're following along, I'm using minikube (v0.19.0).
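If you don't have a local cluster running yet, spinning one up is quick (a minimal sketch; any reasonably recent minikube release should behave the same):
# start a single-node local cluster and confirm kubectl can reach it
minikube start
kubectl cluster-info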
Let's dive right in. This is the overall setup we want to end up with: requests hit the Ingress Controller, get routed to the Varnish cache, which proxies to the Wordpress service, which in turn talks to the MariaDB database.
Ingress Controller
The Ingress Controller is a control loop that handles inbound traffic and applies rules to deliver that traffic to applications within our Kubernetes cluster.
It runs separately from the Kubernetes master and allows you to write your own controller code. This is great because we can decouple the routing rules from our application.
Minikube ships with an Nginx Ingress Controller that you can enable by running:
minikube addons enable ingress
If you are using GCE, the controller will be an L7 Cloud Load Balancer, while on AWS you will get an Application Load Balancer (ALB). In either case, the controller watches for Ingress events in the Kubernetes API and creates all the resources needed to route the traffic.
Unless you plan to write your own controller, this will be done behind the scenes. All you need to do is create an Ingress resource that will be picked up by that controller. Take a look at this sample ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  tls:
    - secretName: tls-certificate
      hosts:
        - example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
The Ingress will route the traffic based on domain and path. This example handles HTTPS traffic as well and receives the certificate via a Secret resource.
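One thing to note: the tls-certificate Secret referenced above isn't created for you. Assuming you already have a certificate and key pair on disk (the file names below are placeholders), you can create it like this:
# create the TLS Secret the Ingress refers to; cert/key paths are placeholders
kubectl create secret tls tls-certificate --cert=example.com.crt --key=example.com.key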
Minikube Ingress Controller
With minikube this means the Ingress Controller will detect any Ingress we create and update the virtual hosts with the appropriate server config blocks.
kubectl -n kube-system get all
You should see a few objects that the ingress addon created:
- rc/nginx-ingress-controller
- po/nginx-ingress-controller-***
- svc/default-http-backend
Let's connect to the nginx-ingress-controller pod to see what minikube created for us:
kubectl -n kube-system exec -it nginx-ingress-controller-*** -- bash
We know this is an Nginx server so let's check out the main nginx.conf:
cat /etc/nginx/nginx.conf
This is pretty standard; a few items worth mentioning are the default SSL certificate and the default backend:
upstream upstream-default-backend {
    least_conn;
    server 172.17.0.2:8080 max_fails=0 fail_timeout=0;
}

server {
    ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;

    location / {
        proxy_pass http://upstream-default-backend;
    }
}
Notice that Nginx proxies all requests to upstream-default-backend. This service returns a 404 error when a request is not host-matched by any of the existing Ingresses. We can check this by curl-ing the IP associated with minikube:
~ $ curl -I $(minikube ip)
HTTP/1.1 404 Not Found
Server: nginx/1.11.12
Date: Thu, 18 May 2017 20:35:55 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: keep-alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;
Database
Let's deploy the MariaDB instance. I added everything in a single YAML file for this writeup. How you split these resources depends on how you plan to upgrade them in your cluster. If you expect to modify your services more often than your deployments, it makes sense to keep them separate.
We have a Persistent Volume and a Claim for it. The ReplicationController defines which container image our pods should run, how many replicas we want, and so on.
I will use the official MariaDB Docker image. Well-documented images usually list the ENV variables they accept; in this case we can set the root password, an initial database, and a user.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mariadb
spec:
  ports:
    - port: 3306
  selector:
    app: mariadb
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mariadb-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mariadb-rc
spec:
  replicas: 1
  selector:
    app: mariadb
  template:
    metadata:
      name: mariadb-rc
      labels:
        app: mariadb
    spec:
      volumes:
        - name: mariadb-pv-storage
          persistentVolumeClaim:
            claimName: mariadb-pv-claim
      containers:
        - name: mariadb
          image: mariadb:latest
          env:
            - name: MYSQL_DATABASE
              value: wp-kube-db
            - name: MYSQL_USER
              value: wp-kube-usr
            - name: MYSQL_PASSWORD
              value: q1w2e3r4t5
            - name: MYSQL_ROOT_PASSWORD
              value: root-pass
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mariadb-pv-storage
Now let's create the resources:
kubectl create -f mariadb.yaml
$ kubectl get all
NAME                  READY     STATUS    RESTARTS   AGE
po/mariadb-rc-hr35k   1/1       Running   0          47s

NAME            DESIRED   CURRENT   READY     AGE
rc/mariadb-rc   1         1         1         47s

NAME                    CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
svc/kubernetes          10.0.0.1     <none>        443/TCP    1d
svc/wordpress-mariadb   None         <none>        3306/TCP   47s
We can connect to the pod and try the database connection (remember to change the pod id):
kubectl exec -it mariadb-rc-hr35k -- bash
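Once inside the container, a quick sanity check is to connect with the bundled mysql client using the credentials we passed in via the environment (values below match the manifest above; adjust them if you changed yours):
# run inside the mariadb pod: list tables in the freshly created database
mysql -u wp-kube-usr -pq1w2e3r4t5 wp-kube-db -e 'SHOW TABLES;'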
Wordpress
Just like in the previous step, we will use the official Wordpress Docker image. There are multiple PHP versions and a choice between Apache and Nginx. For simplicity we'll go with the latest tag, which builds on php:5.6-apache.
We can also set up a lot of configuration options through ENV variables. For our setup we need the database name, host, user and password. For the host we can use the name of the MariaDB service we just created (wordpress-mariadb) and let Kubernetes resolve it for us.
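If you want to see that DNS resolution in action, one option is a throw-away pod that just looks up the service name (a rough sketch; it assumes the busybox image can be pulled and is not part of the manifests in this post):
# temporary pod for a one-off DNS lookup, deleted when it exits
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup wordpress-mariadb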
We are using a Persistent Volume and we're mounting that at the location where Wordpress installs the app. To find that out I had to check the Dockerfile of the latest tag.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-site
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: wordpress-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-rc
spec:
  replicas: 1
  selector:
    app: wordpress
  template:
    metadata:
      name: wordpress-rc
      labels:
        app: wordpress
    spec:
      volumes:
        - name: wordpress-pv-storage
          persistentVolumeClaim:
            claimName: wordpress-pv-claim
      containers:
        - name: wordpress
          image: wordpress:latest
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mariadb
            - name: WORDPRESS_DB_USER
              value: wp-kube-usr
            - name: WORDPRESS_DB_PASSWORD
              value: q1w2e3r4t5
            - name: WORDPRESS_DB_NAME
              value: wp-kube-db
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/www/html"
              name: wordpress-pv-storage
Let's create the Wordpress-related objects as well:
kubectl create -f wordpress.yaml
When the container creation is done and we see everything in the Ready state we can connect directly to the Wordpress service and check that it's working:
minikube service wordpress-site
To get the port number and IP you can run minikube ip and kubectl get all:
svc/wordpress-site 10.0.0.24 <nodes> 80:31761/TCP 2m
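With the NodePort shown above (31761 here, yours will almost certainly differ), you can also hit the service directly with curl:
# replace 31761 with the NodePort from your own kubectl get all output
curl -I http://$(minikube ip):31761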
Ingress
We explored the Ingress Controller earlier and now it's time to look at the Ingress resource that defines the host matching rules.
Let's create our initial Ingress resource and direct the traffic to the wordpress service (we'll add Varnish in between in the next step):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-ingress
spec:
  rules:
    - host: wordpress.local
      http:
        paths:
          - path: /
            backend:
              serviceName: wordpress-site
              servicePort: 80
Note that I'm using wordpress.local to match requests for our Wordpress site. I had to add our cluster's IP to my /etc/hosts file:
192.168.64.2 wordpress.local
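If you prefer not to edit the file by hand, something like this works too (the IP comes from minikube ip and will likely differ on your machine):
# append the minikube IP -> wordpress.local mapping to /etc/hosts
echo "$(minikube ip) wordpress.local" | sudo tee -a /etc/hosts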
We create the Ingress by running the classic kubectl create -f ingress.yaml command. We should now be able to see the site if we browse to http://wordpress.local.
Let's revisit the nginx-ingress-controller pod we inspected earlier and see what changed:
kubectl -n kube-system exec -it nginx-ingress-controller-*** -- bash
Let's cat nginx.conf again:
server {
    server_name wordpress.local;
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://default-wordpress-site-80;
    }
}
It looks like the Ingress Controller added our newly created Ingress.
Varnish
Varnish is an in-memory HTTP cache. You can have it listen on port 80 and direct traffic to a backend running your application. In most cases it will sit behind an Nginx or HAProxy instance that handles SSL termination. As long as your application responds with proper cache headers, Varnish will work with minimal configuration.
We will use this default.vcl config for Wordpress. I picked million12/varnish for the Varnish Docker image because it makes it very easy to override the default.vcl config.
Let's create the configmap for the default.vcl file:
kubectl create configmap varnish-vcl --from-file=default.vcl
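It's worth double-checking that the file actually made it into the configmap before wiring it into a pod:
# the default.vcl contents should appear under the data section
kubectl get configmap varnish-vcl -o yaml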
Now we can create the varnish service and RC:
kubectl create -f varnish.yaml
apiVersion: v1
kind: Service
metadata:
  name: varnish-svc
spec:
  ports:
    - port: 80
  selector:
    app: varnish-proxy
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: varnish-proxy
spec:
  replicas: 1
  selector:
    app: varnish-proxy
  template:
    metadata:
      name: varnish-proxy
      labels:
        app: varnish-proxy
    spec:
      volumes:
        - name: varnish-config
          configMap:
            name: varnish-vcl
            items:
              - key: default.vcl
                path: default.vcl
      containers:
        - name: varnish
          image: million12/varnish
          env:
            - name: VCL_CONFIG
              value: /etc/varnish/default.vcl
          volumeMounts:
            - name: varnish-config
              mountPath: /etc/varnish/
          ports:
            - containerPort: 80
Ingress Updates
Now that we have the Varnish service up and running we can direct the traffic to it. All we have to do is update our ingress resource and specify the Varnish service instead of the Wordpress one.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-ingress
spec:
  rules:
    - host: wordpress.local
      http:
        paths:
          - path: /
            backend:
              serviceName: varnish-svc
              servicePort: 80
Now if we curl http://wordpress.local we can check that Varnish handles the request:
$ curl -I http://wordpress.local
HTTP/1.1 302 Found
Access-Control-Allow-Origin: *
Age: 0
Cache-Control: no-cache, must-revalidate, max-age=0
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Fri, 19 May 2017 13:58:49 GMT
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Location: https://wordpress.local/wp-admin/install.php
Server: Apache/2.4.10 (Debian)
Via: 1.1 varnish-v4
X-Cache: MISS
X-Cacheable: NO:Not Cacheable
X-Powered-By: PHP/5.6.30
X-Unsetcookies: TRUE
X-Varnish: 2
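The 302 above is the redirect to the installer and is deliberately not cacheable (note the Cache-Control: no-cache). Once Wordpress is installed, a rough way to confirm caching works is to request a regular page twice and compare the headers; assuming the default.vcl lets the page be cached, the second response should show a non-zero Age:
# the first request warms the cache, the second should be served by Varnish
curl -sI http://wordpress.local/ | grep -iE 'age|x-cache'
curl -sI http://wordpress.local/ | grep -iE 'age|x-cache'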
Conclusion
Did you notice how easy it was to update the route mapping? After we added the Varnish service, all we had to do was point our Ingress at it. We didn't have to touch the Wordpress deployment at all! It may seem trivial for this example, but if you are running a complex set of services with different domains and sub-domains, this separation will make your life a lot easier.
Future posts
Stay tuned for Part 2, where we will handle HTTPS traffic and improve the Wordpress deployment with an Nginx/PHP-FPM Docker image.