* Cilium v1.14 seems to have problems reliably accessing the apiserver
  after some time when using the default in-cluster service discovery
  (which relies on kube-proxy rather than DNS)
* Configure Cilium agents to use the DNS name that resolves to the
  cluster's load balanced apiserver and port. Regrettably, this relies
  on external DNS rather than being self-contained, but it's what
  Cilium pushes users towards
* Cilium KubeProxyReplacement mode used to support a partial
option, but in v1.14 it became true or false
* Emulate the old partial mode by disabling KubeProxyReplacement but
  turning on the individual features (see the ConfigMap sketch after
  the docs link below)
* The alternative of enabling KubeProxyReplacement has ramifications:
  Cilium then needs to be configured with the apiserver address, which
  creates a dependency on the cloud provider's DNS and clashes with
  kube-proxy. Removing kube-proxy in turn complicates how node health
  is assessed, and kube-proxy is still used by the other supported
  CNIs, which creates a tricky support matrix
Docs: https://docs.cilium.io/en/latest/network/kubernetes/kubeproxy-free/#kube-proxy-hybrid-modes
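A minimal sketch of how both points could look in the `cilium-config`
ConfigMap; the key names follow typical Cilium Helm output and the DNS
name and port are placeholders, so treat the exact values as
assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Reach the apiserver via the load balanced DNS name instead of the
  # in-cluster Service (which depends on kube-proxy)
  k8s-service-host: "cluster.example.com"   # placeholder apiserver DNS name
  k8s-service-port: "6443"
  # Emulate the removed partial mode: keep kube-proxy, but turn on the
  # individual features it used to cover
  kube-proxy-replacement: "false"
  enable-node-port: "true"
  enable-external-ips: "true"
  enable-host-port: "true"
```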
* In Cilium v1.14, kube-proxy-replacement mode again changed its valid
  values, this time from partial to true/false. The value should be
  true for Cilium to support HostPort features as expected, as shown
  in the status output and ConfigMap sketch below
```
cilium status --verbose
Services:
- ClusterIP: Enabled
- NodePort: Enabled (Range: 30000-32767)
- LoadBalancer: Enabled
- externalIPs: Enabled
- HostPort: Enabled
```
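In ConfigMap terms that is just the boolean key (a sketch; the key name
is assumed from the setting described above):

```yaml
# cilium-config ConfigMap data (sketch): the old probe/partial values
# give way to a plain boolean
kube-proxy-replacement: "true"
```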
14ced84f7e introduced `enable-ipv6-masquerade` and
`enable-ipv4-masquerade`. This updates the Cilium ConfigMap to align
with the expected flags: `enable-ipv4-masquerade` is enabled and
`enable-ipv6-masquerade` is disabled, as sketched below.
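A sketch of the resulting ConfigMap entries, using the flag names from
the commit above:

```yaml
# cilium-config ConfigMap data (sketch): masquerade IPv4 pod traffic,
# leave IPv6 masquerading off
enable-ipv4-masquerade: "true"
enable-ipv6-masquerade: "false"
```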
* https://github.com/cilium/cilium/releases/tag/v1.13.0
* Change kube-proxy-replacement from probe (deprecated) to
partial and disable nodeport health checks as recommended
* Add ciliumloadbalancerippools to the ClusterRole (see the RBAC
  sketch after the PR link below)
* Enable BPF socket load balancing from host namespace
* Add init container to auto-mount /sys/fs/cgroup cgroup2
at /run/cilium/cgroupv2 for the Cilium agent
* Enable CNI exclusive mode to disable other configs found in
  /etc/cni/net.d/
* https://github.com/cilium/cilium/pull/16259
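As one concrete piece of the above, the ClusterRole addition could look
roughly like this; the verbs are an assumption based on typical Cilium
RBAC:

```yaml
# ClusterRole rules addition (sketch): allow reading CiliumLoadBalancerIPPools
rules:
  - apiGroups: ["cilium.io"]
    resources: ["ciliumloadbalancerippools"]
    verbs: ["get", "list", "watch"]
```

CNI exclusive mode typically corresponds to a `cni-exclusive: "true"`
key in the cilium-config ConfigMap, though the exact key name may vary
by version.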
* seccomp graduated to GA in Kubernetes v1.19. Support
for seccomp alpha annotations will be removed in v1.22
* Replace seccomp annotations with the GA seccompProfile field in the
  PodTemplate securityContext (sketched after the related link below)
* Switch profile from `docker/default` to `runtime/default`
(no effective change, since docker is the runtime)
* Verify with docker inspect SecurityOpt. Without the
profile, you'd see `seccomp=unconfined`
Related:
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#seccomp-graduates-to-general-availability
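The annotation-to-field migration looks roughly like this on the
PodTemplate (a sketch; `RuntimeDefault` is the field equivalent of the
`runtime/default` profile):

```yaml
# PodTemplate fragment (sketch): replace the deprecated alpha annotation
#   seccomp.security.alpha.kubernetes.io/pod: runtime/default
# with the GA securityContext field
spec:
  template:
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
```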
* Allow Cilium operator Pods to perform leader election when the
  Deployment has more than one replica
* Use a topology spread constraint to keep multiple operators from
  running on the same node (pods bind hostNetwork ports); see the
  sketch below
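A sketch of what such a constraint could look like on the operator
Deployment's pod template; the label selector is illustrative and
should match the operator's actual pod labels:

```yaml
# Spread operator replicas across nodes so two pods never contend for
# the same hostNetwork ports on one node
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        name: cilium-operator   # illustrative label
```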
* Accept experimental CNI `networking` mode "cilium"
* Run Cilium v1.8.0 with overlay vxlan tunnels and a minimal set of
  features (a ConfigMap sketch follows the firewall notes). We're
  interested in:
* IPAM: Divide pod_cidr into /24 subnets per node
* CNI networking pod-to-pod, pod-to-external
* BPF masquerade
* NetworkPolicy as defined by Kubernetes (no L7)
* Continue using kube-proxy with Cilium probe mode
* Firewall changes:
* Require UDP 8472 for vxlan (Linux kernel default) between nodes
* Optional ICMP echo(8) between nodes for host reachability (health)
* Optional TCP 4240 between nodes for host reachability (health)
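A sketch of the corresponding `cilium-config` ConfigMap data for this
feature set; the key names follow the v1.8-era agent flags and the pod
CIDR is a placeholder:

```yaml
# cilium-config ConfigMap data (sketch)
ipam: "cluster-pool"                    # carve pod_cidr into per-node subnets
cluster-pool-ipv4-cidr: "10.2.0.0/16"   # placeholder pod_cidr
cluster-pool-ipv4-mask-size: "24"       # one /24 per node
tunnel: "vxlan"                         # overlay via UDP 8472
enable-bpf-masquerade: "true"
kube-proxy-replacement: "probe"         # keep kube-proxy; probe for BPF features
enable-health-checking: "true"          # ICMP echo / TCP 4240 between nodes
```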