In Full Metal Kubernetes we talked about Cilium, a container network interface (CNI) with some great features, such as BGP and layer-2 service announcements. Either of those works well when you have BGP available or you control the layer-2 network.

If you use cloud-managed Kubernetes, you probably have a native LoadBalancer Service type that can provision IP addresses. If you roll your own k8s, however, whether on cloud instances or on bare metal, you don’t have anything to provision those addresses.

If this sounds like something you need, Cilium has a neat trick: Node IPAM LB, inspired by the k3s “ServiceLB”. It works by taking the addresses of the selected Nodes and advertising them. It respects .spec.ipFamilies to decide whether IPv4 or IPv6 addresses should be used, and it uses the ExternalIP addresses if any are set, falling back to the InternalIP addresses otherwise.

If you’re already using Cilium, all that’s required is to enable it:

helm upgrade cilium ./cilium \
  --namespace kube-system \
  --reuse-values \
  --set nodeIPAM.enabled=true
kubectl -n kube-system rollout restart deployment/cilium-operator
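
One detail worth flagging: enabling the feature isn’t quite the whole story for a Service. As I read the Cilium docs, a Service opts in to Node IPAM by setting its loadBalancerClass, roughly like so:

# Assumed from the Cilium Node IPAM docs: ask Cilium's node IPAM (rather than
# a cloud controller) to fill in the LoadBalancer address for this Service.
spec:
  type: LoadBalancer
  loadBalancerClass: io.cilium/node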

If you’re running in the cloud, you can use tools like ACK, the AWS Controllers for Kubernetes.

If you use a different provider, they’ll probably have a similar Helm chart to install their own integration toolkit. Without such a toolkit, your nodes only have the InternalIP address available for LoadBalancer use, which is not optimal. Most bare metal Kubernetes clusters have this problem… until now.

Our preferred self-managed k8s distro, k0s, has its own solution to this problem: a stub cloud integration toolkit, the k0s cloud provider.
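
Enabling it amounts to a couple of flags on the controller and workers; something like the following, though the exact flag names here are from my reading of the k0s docs, so check the k0s cloud provider documentation for your version:

# Controller side: run k0s's stub cloud provider
k0s controller --enable-k0s-cloud-provider=true

# Worker side: tell the kubelet to defer to an external cloud provider
k0s worker --enable-cloud-provider=true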

With those flags set on your k0s install, you can annotate a node to indicate its external IP address:

kubectl annotate \
    node <node> \
    k0sproject.io/node-ip-external=<external IP>[,<external IP 2>][,<external IP 3>]
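
For example, using the cn3 node and the address that shows up in the output below:

kubectl annotate node cn3 k0sproject.io/node-ip-external=203.0.113.13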

Finally, we bare metallers get access to an ExternalIP for our LoadBalancer services! You can see it on the nodes via kubectl like so:

kubectl get nodes -o wide
NAME   STATUS   ROLES           AGE    VERSION       INTERNAL-IP    EXTERNAL-IP    OS-IMAGE
cn1    Ready    control-plane   128d   v1.31.1+k0s   192.168.1.52   203.0.113.11   Flatcar Container Linux
cn2    Ready    control-plane   128d   v1.31.1+k0s   192.168.1.51   203.0.113.12   Flatcar Container Linux
cn3    Ready    control-plane   128d   v1.31.1+k0s   192.168.1.50   203.0.113.13   Flatcar Container Linux

and you can see a resulting service:

Name:                     whoami
Namespace:                who
Labels:                   <none>
Annotations:              io.cilium.nodeipam/match-node-labels: kubernetes.io/hostname=cn3
Selector:                 app=whoami
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.128.104
IPs:                      10.106.128.104
LoadBalancer Ingress:     203.0.113.13 (VIP)
Port:                     web  8888/TCP
TargetPort:               80/TCP
NodePort:                 web  32203/TCP
Endpoints:                10.0.1.160:80,10.0.2.180:80
Session Affinity:         None
External Traffic Policy:  Local
Internal Traffic Policy:  Cluster
HealthCheck NodePort:     30282
Events:                   <none>
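
For reference, here’s a sketch of the manifest behind that Service, reconstructed from the describe output above (the loadBalancerClass line is the piece I’ve assumed from the Cilium docs rather than taken from the output):

apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: who
  annotations:
    # only advertise addresses from nodes matching this label
    io.cilium.nodeipam/match-node-labels: kubernetes.io/hostname=cn3
spec:
  type: LoadBalancer
  loadBalancerClass: io.cilium/node   # assumed: opts the Service in to node IPAM
  selector:
    app: whoami
  externalTrafficPolicy: Local
  ports:
    - name: web
      port: 8888
      targetPort: 80

A quick check from outside the cluster hits the node’s external address directly:

curl http://203.0.113.13:8888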

Et voilà! More reasons to give bare metal k8s a shot.