Setting up minikube and docker
The following are some notes that I put together when trying to get Docker and minikube to play nicely. Unless otherwise indicated, assume that these notes were generated on a local machine running Ubuntu 20.04, Docker 20.10.6 and minikube 1.20.0.
Standard minikube commands
The following are the basic commands required to spin up a minikube and deploy some services on it.
minikube start --insecure-registry localhost:5000
Kick off minikube (creating a new container if necessary), allowing insecure access to a registry on port 5000. This lets us push containers onto the minikube from a local Docker registry; we'll need this later.
minikube stop
Stop the minikube (but don't delete its state)
minikube delete
Delete the minikube and its state
minikube dashboard
This will open up a browser showing the status of the minikube: the available services, and the deployments and pods that are active.
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
This creates a deployment named "hello-minikube", using that image (this is the sample one from the minikube website). The deployment will fire up a pod automatically.
kubectl expose deployment hello-minikube --type=NodePort --port=8080
Pods created by this deployment will serve port 8080, so that other pods can connect to it.
minikube service hello-minikube
minikube service allows you to access the exposed pod from your local machine.
minikube service list
This lists the services running on the minikube that have been exposed for local examination. The minikube service command above already opens a local port that forwards to the port exposed by the expose deployment call, so you shouldn't need the following command, but just in case:
kubectl port-forward service/hello-minikube 7080:8080
Forward local port 7080 to port 8080 on the service.
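With either the minikube service tunnel or the port-forward above running, a quick sanity check looks something like the following (7080 here assumes the port-forward; substitute whatever local port you ended up with):
# The echoserver image simply echoes back details of the request it receives
curl http://localhost:7080/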
Necessary add-ons
minikube addons enable registry
This will allow you to run a local registry (which means you'll be able to create your own Docker images and push them to the kube without uploading them to a third-party registry). Follow the Windows section of the minikube registry add-on documentation.
In separate terminals, you will want to run the following two commands. For the first, you will need the local IP of the minikube. For the second, you'll need to find the name of the currently active minikube pod for the registry-proxy (set to rk5kw below). Both of these will be visible via the minikube dashboard.
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"
kubectl port-forward --namespace kube-system registry-proxy-rk5kw 5000:5000
The combination of these two commands means that any Docker image tagged with a localhost:5000 prefix and pushed will end up on the kube. It is worth running curl http://localhost:5000 just to make sure you can get through.
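To confirm that pushed images are actually landing in the registry, you can also query the standard Docker Registry HTTP API v2 catalog endpoint:
# Should return a JSON object listing the repositories held by the kube's registry
curl http://localhost:5000/v2/_catalog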
Building a new application
First, note that
docker app init xyz
WILL NOT WORK. Docker app images do not play nicely with minikube. Instead, follow the plain Dockerfile approach (e.g. building images with Python) to create a Docker setup that will work; a sketch follows.
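As an illustration only, a minimal Dockerfile of that sort might look like the following. The app.py file, its listening port of 4000 and the python:3.9-slim base image are assumptions for the sake of the example, not part of the original setup.
# Hypothetical example: a small Python app that listens on port 4000
FROM python:3.9-slim
WORKDIR /app
COPY app.py .
EXPOSE 4000
CMD ["python", "app.py"]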
# Build your image
docker build -t testregister .
# When your image is built, the output will mention the id of the new image (or get it from the output of "docker images"). Tag that id with localhost:5000/xyz - this indicates it has to go onto the kube
docker tag 840fea4fb027 localhost:5000/testregister
# Push the image onto the kube
docker push localhost:5000/testregister
# Create a pod for the image. Note the image identifier is prepended with localhost:5000
kubectl run testregisterfromlocal --image=localhost:5000/testregister
# Look at logs (or do this via the minikube dashboard)
kubectl logs testregisterfromlocal
# Expose a port for the pod
kubectl expose pod testregisterfromlocal --port=4000 --type=NodePort
# Open the service that the expose call just created, tunnelling to it from your local machine
minikube service testregisterfromlocal
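If you just want the URL rather than a browser window, minikube service takes a --url flag. Assuming the test app actually serves HTTP on the exposed port, something like this checks it end to end:
# Print the NodePort URL for the service and hit it with curl
curl "$(minikube service testregisterfromlocal --url)"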
Setting up postgres
Postgres is a bit more involved, because we have to set up a persistent volume for it to store its data in, so that the data survives if we restart the kube. First, however, we need to create the standard Dockerfile.
# syntax=docker/dockerfile:1
FROM postgres:13.2
WORKDIR /pg_server
ENV POSTGRES_PASSWORD abc123
ENV POSTGRES_USER myuser
ENV POSTGRES_DB default_db
ENV PGDATA /data/postgresql
Then we build, tag and push the image to the kube as above.
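Spelled out, and assuming we name the image localhost:5000/postgres so that it matches the deployment further down, that looks something like:
# Tag the image against the local registry and push it through to the kube
docker build -t localhost:5000/postgres .
docker push localhost:5000/postgres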
Next, we need to create a persistent volume. This is created via a yaml file, something along the following lines:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
The important parts here are the storage amount (10Gi is more than enough) and the hostPath (the data will be stored under /data on the minikube node). Apply this file using, e.g.
kubectl apply -f persist_volume.yaml
A persistent volume should become visible in the minikube dashboard.
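If you prefer the command line to the dashboard, the same check can be done with kubectl (task-pv-volume being the name given in the yaml above):
# The volume should be listed, with status Available until a claim binds to it
kubectl get pv task-pv-volume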
Next, we need to create a claim for this space.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
This claim will be used by a deployment to access the persistent volume. Apply this file using, e.g.
kubectl apply -f persist_volume_claim.yaml
A persistent volume claim should become visible in the minikube dashboard, and it should bind to the persistent volume from the previous section.
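Again, the claim can be checked from the command line:
# The claim should show status Bound, with task-pv-volume as its volume
kubectl get pvc postgres-claim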
Finally, we deploy the database itself. This yaml file replaces the kubectl create deployment command used earlier. Note that it uses the image we pushed above and mounts the storage via the claim we just created.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: minikube-postgres
  labels:
    app: minikube-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minikube-postgres
  template:
    metadata:
      labels:
        app: minikube-postgres
    spec:
      containers:
        - name: postgres
          image: 'localhost:5000/postgres'
          env:
            - name: PGDATA
              value: /data/postgresql
            - name: POSTGRES_USER
              value: myuser
            - name: POSTGRES_PASSWORD
              value: abc123
            - name: POSTGRES_DB
              value: defaultdb
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: "/data"
              name: postgres-storage
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-claim
Note that the claimName above is the same as the name of the claim in the previous yaml file. Run this deployment using:
kubectl apply -f deploy_postgres.yaml
This should then start up.
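To check that the database really is up, look at the pod's logs and try connecting to it. The port-forward-plus-psql combination below is just one way of doing that, and assumes you have a local psql client installed; run the port-forward in a separate terminal, since it blocks.
# Check the postgres pod came up cleanly
kubectl get pods
kubectl logs deployment/minikube-postgres
# In another terminal: forward 5432 locally and connect (password abc123, as in the yaml)
kubectl port-forward deployment/minikube-postgres 5432:5432
psql -h localhost -p 5432 -U myuser defaultdb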
Note that there are all sorts of issues with minikube and file permissions on persistent volumes, which can cause a lot of trouble. That isn't a problem here, because postgres does the sensible thing. pgAdmin, however, runs most of its commands under a second uid/gid (5050/5050), and the hostPath persistent volume type that minikube uses doesn't make writing files as anything other than root easy (I didn't manage to get chown commands to execute correctly to solve that issue), which makes deploying a persistent volume for pgAdmin difficult. The dockerfile for pgadmin is this one. Code along similar lines:
kubectl apply -f deploy_pgadmin.yaml
kubectl expose deployment minikube-pgadmin4 --port=5005 --type=NodePort
minikube service minikube-pgadmin4
should do the trick.
Using services to connect two deployments
Services allow you to set up standard DNS routing so that the pods of one deployment are always reachable at the same hostname. The following service aliases port 2181 at kafkazookeeperservicelocal to port 2181 of the pods labelled as the kafka-zookeeper app. Note that the difference between this service and the ones created from the command line in the notes above is that this one has the "ClusterIP" type; it's this type that allows DNS requests to be resolved inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: kafkazookeeperservicelocal
spec:
  selector:
    app: kafka-zookeeper
  ports:
    - protocol: TCP
      port: 2181
      targetPort: 2181
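A quick way to confirm the DNS name resolves inside the cluster is to run a throwaway pod and look it up; busybox here is just a convenient image for the check, not part of the setup above.
# One-off pod that resolves the service name and is then removed
kubectl run -it --rm dnscheck --image=busybox --restart=Never -- nslookup kafkazookeeperservicelocal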
Apache Kafka - Listener settings on Minikube
If you are getting timeouts when trying to connect to an Apache Kafka broker (e.g. "Local: Timed Out" when using kafkacat to fetch metadata, or "org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment" when using kafka-console-consumer.sh or kafka-topics.sh), then run the following. PLAINTEXT-mode listeners can't communicate without it. From this bug report.
minikube ssh
sudo ip link set docker0 promisc on
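This setting doesn't survive a restart of the minikube node, so it's worth knowing how to check it. While still inside the minikube ssh session, the flags in the ip link output should now include PROMISC:
# Still inside minikube ssh: confirm the flag took effect
ip link show docker0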