Traditional container solutions (Docker/Podman) are great for single-host deployments, but Kubernetes seems to be where most people are headed these days.
There are many reasons for that, the biggest being that devs can deploy containers to Kubernetes in a more abstracted way than they can by running a container directly. A dev can write a YAML file, toss it at their server, and Kubernetes will do all the ingress and silly sysadmin stuff for them.
My understanding of containers though comes from the opposite direction. I'm very familiar with how Docker and Podman tools work. But I didn't know how to translate a traditional Docker or Podman container to a Kubernetes pod. Where do you even begin?
Oh, Podman will do it for you: https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml#enough_teasing__show_me_the_goods
[NOTE] This post makes a lot of assumptions, mainly that you have already set up your Kubernetes cluster. For context, I am using k3s with k3sup.
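(If you're starting from zero, k3sup boils that setup down to roughly one command against a fresh host. The IP, user, and key below are placeholders for your own machine, not anything from this post:)

# Hypothetical example: bootstrap a single-node k3s cluster with k3sup
k3sup install --ip 192.168.1.50 --user ubuntu --ssh-key ~/.ssh/id_rsa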
How to Create it #
Red Hat's blog post covers this really well, but the gist is that you run podman generate kube yourcontainername > yourcontainername.yml against a container you already have running with Podman. It'll spit out a config file like this:
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.4.4
# NOTE: If you generated this yaml from an unprivileged and rootless podman container on an SELinux
# enabled system, check the podman generate kube man page for steps to follow to ensure that your pod/container
# has the right permissions to access the volumes added.
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-10-26T03:47:14Z"
  labels:
    app: lumel-unifi
  name: lumel-unifi
spec:
  containers:
  - args:
    - start
    image: docker.io/lumel/unifi-controller:latest
    name: lumel-unifi
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /usr/lib/unifi/data
      name: legacy_unifi-controller_f1abbfc5ac2fc189b2e1a8329c2d15aaca77268a40dc6301a1b876c5d2d8ef51-pvc
  volumes:
  - name: legacy_unifi-controller_f1abbfc5ac2fc189b2e1a8329c2d15aaca77268a40dc6301a1b876c5d2d8ef51-pvc
    persistentVolumeClaim:
      claimName: legacy_unifi-controller_f1abbfc5ac2fc189b2e1a8329c2d15aaca77268a40dc6301a1b876c5d2d8ef51
But you don't really want to use it exactly as-is. I mean you could, but you'll likely need to massage it for your own needs; the names, paths, and claims all need updating.
For a basic app like this, it's OK to stick with a Pod type of workload because I don't need fancy management of the app. However, it may be worth converting your app into a Deployment instead if you need things like replicas and automatic redeployments; that's out of scope for this post, but there's a rough sketch of the shape below.
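For reference, an untested sketch of what the same container might look like wrapped in a Deployment, just to show where the Pod spec ends up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lumel-unifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lumel-unifi
  template:
    metadata:
      labels:
        app: lumel-unifi
    spec:
      # Same container definition as the generated Pod above
      containers:
      - name: lumel-unifi
        image: docker.io/lumel/unifi-controller:latest
        args:
        - start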
I would highly suggest cleaning up any paths, mount paths, claims, etc. That may have to come later though, because...
You need to define a volume too #
There are a few things you want to focus on in the above config, and you may have caught some new terms, persistentVolumeClaim for instance. Docker and Podman have the concept of local file paths, shared volumes, and so on; Kubernetes is not much different in that regard.
My use case for this particular container only needed a local file path. So I did some searching, and ended up finding out that I would need multiple YAML configs in order to get this working.
- The container itself, which we got above with some cleanup
- The "volume" local or otherwise defined. I suppose you can throw this in the first config, but I've found it easier to deal with separated.
- The concept of defining ingress to your container
After some DuckDuckGo searches, I found a Stack Overflow post that gave me the basis for my own config.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: unifi-pvc
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /data/unifi
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: workload
          operator: In
          values:
          - production
For storage capacity, I opted for a low amount; my app (Unifi) doesn't need much for its configuration. The storageClassName is, so to speak, the driver you're telling Kubernetes to use. In my case I'm doing local filesystem storage, so it's simply local-storage.
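Worth flagging: local-storage is just a name here, not one of k3s's built-in classes (k3s ships with local-path). Depending on your cluster you may also want the no-provisioner StorageClass that the upstream Kubernetes docs describe for local volumes, something along these lines; the name only has to match what the PV and the claim reference:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer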
An aside on nodeAffinity #
I think path is self-explanatory here, but nodeAffinity was something I had to search for separately. There are legitimate uses for nodeAffinity, but for me it's mostly a shortcut. I'm not using Kubernetes for its scalability or anything; I'm simply grinding my skillset on a homelab. So I used nodeAffinity to make my container stick to a specific node, so I can assign it a static IP address without having to deal with a more complicated ingress. Simple as that.
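Two small prerequisites for that to actually work, roughly (the node name is whatever kubectl get nodes shows for you; the path matches the PV above):

# Label the node so it matches the nodeAffinity selector (workload=production)
kubectl label node my-node workload=production
# The local path in the PersistentVolume has to exist on that node
ssh my-node 'sudo mkdir -p /data/unifi'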
Now claim it #
Once you have figured out how your storage will work, you need to make a quick config to claim it. Here is an example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: unifi-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
You are telling Kubernetes "hey, I want ALL of that good stuff I set up before" (in my case 10Gi). I think most of these look the same, and again I bet you can combine these configs into a single YAML file... I've opted to keep them separate.
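This is also where the cleanup from earlier pays off: in the Pod config, the long legacy_... volume and claim names can be swapped out for the new claim. A sketch of what that section ends up looking like with my names:

spec:
  containers:
  - name: lumel-unifi
    image: docker.io/lumel/unifi-controller:latest
    args:
    - start
    volumeMounts:
    - mountPath: /usr/lib/unifi/data
      name: unifi-data
  volumes:
  - name: unifi-data
    persistentVolumeClaim:
      claimName: unifi-claim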
If you were following along, you'd probably have something like this:
- yourcontainerkube.yml = the actual container
- yourcontainervolume.yml = the volume
- yourcontainerclaim.yml = the claim
Cool, now set up a way to get to it. #
I mentioned above the idea of "ingress" in Kubernetes. There is also the concept of "services" to direct where traffic goes within the cluster. This is where the rubber meets the road, so to speak. You could have been deploying each of these files already with kubectl apply -f yourcontainer.yml and you wouldn't have anything to show for it yet. Maybe a pod that's been started when you run kubectl get pods, but nothing you can interact with. To be honest, because of the way my app is set up I don't need an ingress or a service (since it's using host networking; there's a sketch of that after the Service example below), but most apps will need something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: unifi-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: unifi-controller
            port:
              number: 8443
I am using the built-in "traefik" ingress here with k3s, as I mentioned above. This is the bare basic ingress you'll likely find when searching on your own. The main thing is to change the port to whatever is appropriate for your app.
And borrowing the kubernetes.io example to show what your service may look like with this same setup (note that the service's name and port are what the ingress backend above points at, so they need to match):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 8443 # This can be whatever port makes sense, for Unifi the clients expect to connect to 8443
    targetPort: 8443
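As for the host-networking shortcut I mentioned, that isn't a separate resource at all, just a flag in the Pod spec. A sketch of the relevant bit, using my container from earlier:

spec:
  # Share the node's network namespace, so the container's ports bind directly on the host
  hostNetwork: true
  containers:
  - name: lumel-unifi
    image: docker.io/lumel/unifi-controller:latest
    args:
    - start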
Putting it all together. #
So all in all you end up with four or five config files, depending on whether you need both an ingress and a service. You'll likely want to kubectl apply -f them in this order:
- Volume
- Claim
- Container
- Ingress
- Service
I have no doubt that it would still work out of this order, but I think it makes sense to make the volume first so that your app can use it.
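Concretely, using the naming pattern from earlier (the ingress and service file names here are just placeholders I made up in the same style):

kubectl apply -f yourcontainervolume.yml
kubectl apply -f yourcontainerclaim.yml
kubectl apply -f yourcontainerkube.yml
kubectl apply -f yourcontaineringress.yml   # if you need an ingress
kubectl apply -f yourcontainerservice.yml   # if you need a service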
If your first go at this doesn't result in a working Kubernetes pod, keep trying! While the YAML can seem daunting at first, you can run kubectl apply -f yourfile.yml as many times as you want; it'll even pick up whatever you've tweaked as incremental updates. Kubernetes is meant to be easier than building a full-on container straight away.
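When a pod doesn't come up, the usual first stops are something like this (using my pod name as the example):

kubectl get pods                   # is it Running, Pending, or CrashLoopBackOff?
kubectl describe pod lumel-unifi   # the Events section usually explains Pending pods
kubectl logs lumel-unifi           # the container's own output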
Other paths to take #
Realistically... you could also just stick with Docker and/or Podman for your container. But if you want to learn Kubernetes, or you want to develop a more resilient and scalable app deployment, I think it's worth learning.
Or, ya know, just put it up on PikaPods. These are the same methods they use.
#kubernetes #podman #containers