Podman ships with built-in support for Kubernetes configuration files but not for Docker Compose. As described in Podman Compose, the Podman Compose utility can use Docker Compose files to create Podman containers. However, you might want to migrate to the Kubernetes format, eschewing Podman Compose and Docker Compose entirely. This is what I ended up doing, and I describe the process here.
This tutorial provides the steps necessary to convert a simple Docker Compose file to an equivalent Kubernetes configuration using Podman Compose and Podman. It continues where the Podman Compose post left off, which created a Podman container from the Docker Compose file for the UniFi Controller covered in the UniFi Controller post. So, complete that tutorial before following the steps below. This tutorial targets Ubuntu 18.04, and you should be familiar with Linux Containers, Docker Compose, Podman, the command-line, and the Kubernetes configuration format.
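Before starting, confirm that both tools are installed and on your PATH; any reasonably recent versions should work.
podman --version
podman-compose --version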
Change into the directory containing the UniFi Controller’s Docker Compose file.
cd ~/Projects/unifi-controller
Check for the previously created UniFi Controller pod with podman-pod-ps(1).
podman pod ps
POD ID        NAME              STATUS   CREATED      INFRA ID      # OF CONTAINERS
241f0bf222a3  unifi-controller  Running  2 hours ago  d5eaaf6d5625  2
Okay, it’s present and accounted for!
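To see the individual containers that make up the pod, pass the --pod flag to podman-ps(1).
podman ps --pod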
To generate the Kubernetes configuration from a Podman pod, use podman-kube-generate(1). Here I write the configuration to the file unifi-controller.yml with the -f flag. The -s flag produces the necessary network service configuration alongside the pod.
podman kube generate -s -f unifi-controller.yml unifi-controller
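Before going further, it doesn’t hurt to confirm that the output parses as valid YAML. This one-liner is just a quick sketch and assumes Python 3 with PyYAML installed.
python3 -c 'import sys, yaml; list(yaml.safe_load_all(open(sys.argv[1]))); print("OK")' unifi-controller.yml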
Examine the generated YAML file, reproduced below.
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.0.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-14T15:41:03Z"
  labels:
    app: unifi-controller
  name: unifi-controller
spec:
  containers:
  - command:
    - /init
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: container
      value: podman
    - name: HOME
      value: /root
    - name: LANGUAGE
      value: en_US.UTF-8
    - name: LANG
      value: en_US.UTF-8
    - name: MEM_LIMIT
      value: 1024M
    image: ghcr.io/linuxserver/unifi-controller
    name: unifi-controllerunifi-controller1
    ports:
    - containerPort: 6789
      hostPort: 6789
      protocol: TCP
    - containerPort: 3478
      hostPort: 3478
      protocol: UDP
    - containerPort: 5514
      hostPort: 5514
      protocol: UDP
    - containerPort: 8880
      hostPort: 8880
      protocol: TCP
    - containerPort: 8080
      hostPort: 8080
      protocol: TCP
    - containerPort: 8443
      hostPort: 8443
      protocol: TCP
    - containerPort: 10001
      hostPort: 10001
      protocol: UDP
    - containerPort: 8843
      hostPort: 8843
      protocol: TCP
    - containerPort: 1900
      hostPort: 1900
      protocol: UDP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /usr/lib/unifi
  dnsConfig: {}
  restartPolicy: Never
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-03-14T15:41:03Z"
  labels:
    app: unifi-controller
  name: unifi-controller
spec:
  ports:
  - name: "6789"
    nodePort: 32062
    port: 6789
    protocol: TCP
    targetPort: 0
  - name: "3478"
    nodePort: 32030
    port: 3478
    protocol: UDP
    targetPort: 0
  - name: "5514"
    nodePort: 30747
    port: 5514
    protocol: UDP
    targetPort: 0
  - name: "8880"
    nodePort: 30295
    port: 8880
    protocol: TCP
    targetPort: 0
  - name: "8080"
    nodePort: 32396
    port: 8080
    protocol: TCP
    targetPort: 0
  - name: "8443"
    nodePort: 32319
    port: 8443
    protocol: TCP
    targetPort: 0
  - name: "10001"
    nodePort: 30786
    port: 10001
    protocol: UDP
    targetPort: 0
  - name: "8843"
    nodePort: 31695
    port: 8843
    protocol: TCP
    targetPort: 0
  - name: "1900"
    nodePort: 31076
    port: 1900
    protocol: UDP
    targetPort: 0
  selector:
    app: unifi-controller
  type: NodePort
status:
  loadBalancer: {}
This generated file warrants some additional attention. Most importantly, the generated Kubernetes configuration is conspicuously lacking any volumes.
Add a section for an associated named volume that will hold the persistent data.
In the Docker Compose file, a volume was created like so.
version: "2.1"
services:
unifi-controller:
...
volumes:
- data:/config (1)
...
volumes:
data: (2)
(1) Associate the unifi-controller service with the volume dubbed data, which is mounted at /config inside the container.
(2) Declare the named volume data, which will be created automatically if it doesn’t exist.
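Before wiring up the equivalent in Kubernetes YAML, it can be reassuring to confirm what Podman Compose actually created. The volume name below is an assumption based on podman-compose’s usual <project>_<volume> naming; check the output of podman volume ls for the exact name on your system.
podman volume ls
podman volume inspect unifi-controller_data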
The way to accomplish the same behavior in the Kubernetes YAML is to use a Persistent Volume Claim. Podman has recently added support for using Persistent Volume Claims to associate Podman containers with named Podman volumes. See Podman pull request #8497 for details. This wasn’t in the generated YAML because the functionality to generate the corresponding YAML is still outstanding per Podman issue #5788.
For the time being, we’ll just have to add this manually.
spec:
  containers:
  - command:
    - /init
    ...
    volumeMounts: (1)
    - mountPath: /config
      name: unifi-data
  volumes:
  - name: unifi-data (2)
    persistentVolumeClaim:
      claimName: unifi-controller-data
(1) Mount the volume dubbed unifi-data at /config inside the container.
(2) Declare the Persistent Volume Claim, unifi-data, using the claim name unifi-controller-data. Podman associates the claim name with the name of the Podman named volume to use for this particular pod.
In an attempt to preserve what little sanity remains in my possession at this moment, I named the claim after the pod, unifi-controller-data.
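If a volume with that name doesn’t exist yet, you can create it up front with podman-volume-create(1); newer Podman releases should also create it automatically when the pod is played, so the explicit create shown here is just a precaution.
podman volume create unifi-controller-data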
Optionally, you can remove some of the environment variable cruft in the env section. I reduced this to just the values below.
env:
- name: container
  value: podman
- name: MEM_LIMIT
  value: 1024M
If you want to allow automatic updates of the image, add the appropriate label.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-13T17:21:54Z"
  labels:
    app: unifi-controller
    io.containers.autoupdate: image (1)
  name: unifi-controller
(1) Add the label io.containers.autoupdate and set it to image to enable automatic updates for the containers herein.
This is a bit of a tease for an upcoming blog post which will describe this in more detail. You’ll need to make sure that Podman’s auto-update systemd timer is enabled. Details forthcoming.
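As a minimal preview, assuming a rootless setup, enabling the timer looks like this; drop --user and run as root for system-wide containers.
systemctl --user enable --now podman-auto-update.timer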
Before starting this pod up, use podman-compose to destroy the existing unifi-controller pod.
podman-compose down
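Verify that the old pod is gone; unifi-controller should no longer appear in the output.
podman pod ps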
Provide the generated Kubernetes YAML to podman-kube-play(1) to create and launch the pod.
podman kube play ~/Projects/unifi-controller/unifi-controller.yml
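Then confirm that the pod and its containers came back up.
podman pod ps
podman ps --pod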
Access the controller’s web console at https://127.0.0.1:8443/. Use open on macOS or xdg-open on Linux.
open https://127.0.0.1:8443
xdg-open https://127.0.0.1:8443
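If the console doesn’t load, the pod’s logs are the first place to look; recent Podman releases provide podman-pod-logs(1) to stream logs from every container in the pod.
podman pod logs unifi-controller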
I have a GitHub repository for this Kubernetes configuration file which you might find helpful. Red Hat has several blog posts related to Podman and Kubernetes YAML, including Podman can now ease the transition to Kubernetes and CRI-O, From Docker Compose to Kubernetes with Podman, and The podman play kube command now supports deployments.
You should now have a better idea of how the Docker Compose format translates to the Kubernetes format plus how to get the conversion started with Podman and Podman Compose. This also sets the stage for transitioning to using Kubernetes for managing container deployments. Hopefully you’ve found this post helpful. Posts on automatic image updates and setting up a Podman container as a systemd service to follow.