For a lot of reasons, VPNs can be useful. Personally, as an expat, I regularly run into geo-restricted video content, which can be very frustrating at times. Subscribing to a VPN provider could be a solution, but I have always found it a bit expensive for my usage. Since I rent a dedicated server in France, I have been using it as a SOCKS proxy for years when needed. However, I now miss some content from another country I lived in: the UK. I don’t have any server there I could hop onto. In short, I need a cheap VPN solution that I can use from time to time, with the ability to select the target location.

It looks like public clouds fit those requirements perfectly! We only pay for what we use, by creating and destroying the whole setup on demand. With a bit of automation, let’s see how to create our own VPN.

Disclaimer: this solution is by no means meant to be as convenient as a commercial VPN, nor is it a comprehensive one. Finally, since we’re dealing with a public cloud, mind the billing! I obviously won’t be responsible for any excessive bill you might run into by following this post or using my templates.

OK, let’s go! For this post I will use Kubernetes on Google Cloud (GCP). You can of course do something similar on AWS, but I chose GCP because they give away one zonal Kubernetes cluster (GKE) per billing account for free (excluding the cost of the worker nodes).

I know what you might think: wow, why use Kubernetes for a simple VPN used by a single person? It sounds overkill. Well, it’s true we don’t really need Kubernetes here. But since we only pay for the worker node, it is a no-brainer for easily deploying containers, which is made even simpler by Helm. In a word, I chose Kubernetes for one reason: simplicity.

Prerequisites

You’ll need the following tools on your machine:

  • gcloud
  • terraform
  • kubectl
  • helm

I won’t cover the install and setup of those tools here, please refer to their respective documentation.
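If you want a quick sanity check that everything is installed and on your PATH, each of these should print a version:

# each command should print a version string
gcloud version
terraform version
kubectl version --client
helm version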

Set up the GCP project & infra with Terraform

I wrote a small terraform template that creates a project, network and GKE cluster in GCP. You can get it on GitHub: jmthvt/ownVPN.

This template relies mainly on official (as in, supported by HashiCorp) modules from the Terraform Registry.

A few highlights:

  • It spins everything up in us-central1 (one of the regions where one zonal GKE cluster per account is free). We could parameterize the region to allow an easier switch, but for now it’s hardcoded.
  • We’re using a preemptible node for the GKE worker. That’s a bit similar to AWS spot instances: cheaper instances than on-demand, but short-lived (up to 24 hours).
  • The g1-small instance type is very small for GKE. So small that we will need to lower the OpenVPN CPU request later. Feel free to bump it if you don’t mind spending.
  • One node and no autoscaling, again for saving money.
  • In order to create the project, you’ll need your billing account ID. You can get it with gcloud alpha billing accounts list (see the sketch right after this list).
  • If you’re using an organisation, you’ll also have to provide the org ID.
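For reference, fetching the billing account ID and passing both values during the plan/apply step below would look roughly like this; the variable names (billing_account, org_id) are illustrative, so check the template’s variables.tf for the actual ones:

gcloud alpha billing accounts list
# variable names are illustrative, see variables.tf for the real ones
terraform plan -var="billing_account=XXXXXX-XXXXXX-XXXXXX" -var="org_id=123456789012"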

OK, once you’ve cloned the repo:

cd terraform/
terraform init
terraform plan
terraform apply

After a moment, the ownvpn project and the infra should be provisioned.

Apply complete! Resources: 30 added, 0 changed, 0 destroyed.

Set up OpenVPN on Kubernetes with Helm

Let’s list the projects and switch to the new ownvpn project.

gcloud projects list
gcloud config set project <PROJECT_ID>
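If you’d rather not eyeball the list, gcloud can also filter and format the output; assuming the project is named ownvpn (as created by the template), something like this grabs the project ID directly:

# assumes the project display name is ownvpn
PROJECT_ID=$(gcloud projects list --filter="name:ownvpn" --format="value(projectId)")
gcloud config set project "$PROJECT_ID"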

Let’s jump onto the Kubernetes cluster:

gcloud container clusters get-credentials gke-1 --region us-central1

Quick check that we can indeed reach the Kubernetes API:

kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
gke-gke-1-default-node-pool-fda1c71d-79dt   Ready    <none>   10m   v1.15.11-gke.11

Looks good, so let’s add the helm stable repo and install the OpenVPN chart:

helm repo add stable http://storage.googleapis.com/kubernetes-charts
helm install openvpn stable/openvpn --set resources.requests.cpu=100m --set ipForwardInitContainer=true

We have to lower the CPU request, otherwise the pod wouldn’t be able to get scheduled on our super small node. Once the chart is installed, wait a few more minutes to let OpenVPN generate the certs/keys and the load balancer become ready.
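If you prefer keeping those overrides in a file rather than on the command line, the same two settings can also be written as a values file; this is just the YAML equivalent of the --set flags above:

# YAML equivalent of the --set overrides above
cat > openvpn-values.yaml <<EOF
resources:
  requests:
    cpu: 100m
ipForwardInitContainer: true
EOF
helm install openvpn stable/openvpn -f openvpn-values.yaml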

Then, run the following in a shell:

POD_NAME=$(kubectl get pods --namespace "default" -l "app=openvpn,release=openvpn" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_NAME=$(kubectl get svc --namespace "default" -l "app=openvpn,release=openvpn" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace "default" "$SERVICE_NAME" -o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}')
KEY_NAME=kubeVPN
kubectl --namespace "default" exec -it "$POD_NAME" -- /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace "default" exec -it "$POD_NAME" -- cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"

You can now import the kubeVPN.ovpn file into your OpenVPN client and connect!
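If you’d rather test from the command line than a GUI client, something along these lines should do (openvpn needs root, and ifconfig.me is just one of many what’s-my-IP services):

# public IP before connecting
curl ifconfig.me
sudo openvpn --config kubeVPN.ovpn
# in another terminal, once connected, this should now show a Google Cloud IP
curl ifconfig.me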

Wrap-up

While it’s been quite straightforward to get our own VPN in the cloud, it took quite some time to spin up. I haven’t timed it properly, but it easily took 10-15 minutes overall, mostly waiting for the infrastructure. To save time next time, we could simply downscale the node pool to 0 instead of destroying the whole infra. Since the GKE cluster itself is free, we would pay nothing until we scale the node pool up again.
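As a sketch, scaling down and back up would look something like this; the node pool name is inferred from the node name earlier and might differ in your setup:

# scale the worker pool down to 0 nodes (the control plane stays free)
gcloud container clusters resize gke-1 --node-pool default-node-pool --num-nodes 0 --region us-central1
# ...and back up to 1 when you need the VPN again
gcloud container clusters resize gke-1 --node-pool default-node-pool --num-nodes 1 --region us-central1

Keep in mind that a later terraform plan would likely flag this as drift, since the node count is set in the template.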

Otherwise, do not forget to destroy the infra:

terraform destroy

I’m well aware that a lot of things could be streamlined and enhanced to get the VPN up and running quicker. I might (or might not) improve things in the future, in which case I would update this post. I also don’t really know how much this will cost me. Given my low usage I expect it to be very low, but with how public clouds bill everything, it’s quite tricky to estimate.

It’s pretty cool to be able to get this working though, that was fun!