* AWS IPv4 address pricing is high compared to other clouds,
  and an NLB unavoidably uses at least 3 (one per enabled
  availability zone)
* Unlike Azure, which offers outbound connectivity through the
  load balancer, AWS offers only NAT options, which cost even
  more than IPv4 addresses in budget clusters. Another option is
  to simply forgo IPv4 access to nodes and outbound IPv4 internet
  access entirely (tradeoff: GitHub is a notable site that serves
  only via IPv4)
* Several v6 SKU types come with ephemeral OS disks backed by
  local NVMe, so you get faster local storage and avoid managed
  disk costs
* Ensure `worker_disk_size` is set to the size of the SKU's
  ephemeral storage, since you pay for that capacity either way
  (see the sketch below)
* Requires https://github.com/hashicorp/terraform-provider-azurerm/pull/30044
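
As a rough sketch, an ephemeral OS disk placed on local NVMe looks
roughly like the following (the resource shape and the `NvmeDisk`
placement are assumptions, presumably what the linked provider PR
enables; unrelated required arguments are omitted):

```hcl
resource "azurerm_linux_virtual_machine_scale_set" "workers" {
  name = "workers"
  sku  = "Standard_D4ads_v6" # hypothetical v6 SKU with local NVMe
  # ... other required arguments omitted

  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadOnly"
    # match the SKU's ephemeral capacity; it's included in the SKU
    # price whether used or not
    disk_size_gb = var.worker_disk_size

    diff_disk_settings {
      option    = "Local"
      placement = "NvmeDisk"
    }
  }
}
```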
* Set a rolling upgrade policy so that changes to the worker node
  pool are rolled out gradually (see the sketch below). Previously,
  the VMSS model could change, but existing instances would not
  pick up the change until manually replaced
* Align Azure node pool behaviors more closely with AWS and GCP:
* On AWS, worker instance template changes trigger an instance refresh
* On GCP, worker instance template changes roll out via a
  proactive rolling update policy
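
On the Azure side, a minimal sketch of such a rolling upgrade policy
(the batch sizes and pause time are illustrative; `Rolling` mode also
needs a health signal, covered by the next item):

```hcl
resource "azurerm_linux_virtual_machine_scale_set" "workers" {
  # ...
  upgrade_mode = "Rolling"

  rolling_upgrade_policy {
    max_batch_instance_percent              = 20 # replace 20% per batch
    max_unhealthy_instance_percent          = 20
    max_unhealthy_upgraded_instance_percent = 20
    pause_time_between_batches              = "PT1M"
  }
}
```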
* Define Azure automatic instance repair using an Application
  Health Extension that probes port 10256 (kube-proxy or Cilium's
  equivalent) to match the strategy used on Google Cloud
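
A sketch of the repair and health reporting pieces (the `PT30M`
grace period is illustrative):

```hcl
resource "azurerm_linux_virtual_machine_scale_set" "workers" {
  # ...

  # replace instances that remain unhealthy past the grace period
  automatic_instance_repair {
    enabled      = true
    grace_period = "PT30M"
  }

  # report health by probing kube-proxy (or Cilium's equivalent
  # endpoint) on port 10256
  extension {
    name                 = "ApplicationHealth"
    publisher            = "Microsoft.ManagedServices"
    type                 = "ApplicationHealthLinux"
    type_handler_version = "1.0"
    settings = jsonencode({
      protocol    = "http"
      port        = 10256
      requestPath = "/healthz"
    })
  }
}
```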
* Azure Load Balancers charge per load balancer rule (the first 5
  are included), so it's useful to provide ways to stay under that
  number, either by dropping support for port 80 traffic or for
  IPv6 traffic. When fronting the cluster with global proxies, you
  can usually serve IPv6 or HTTP->HTTPS redirects separately anyway
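
As an illustration of gating the rule count, a hypothetical toggle
(the variable and resource names here are made up, not this module's
actual interface):

```hcl
variable "enable_http_lb" {
  type        = bool
  description = "Create the TCP/80 rule (counts toward the 5 included rules)"
  default     = false
}

# HTTP ingress rule, only created when enabled
resource "azurerm_lb_rule" "ingress_http" {
  count                          = var.enable_http_lb ? 1 : 0
  name                           = "ingress-http"
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "ingress-ipv4"
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 80
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.worker.id]
  probe_id                       = azurerm_lb_probe.ingress.id
}
```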
* Update Google Cloud TCP proxies from classic to current
* Google Cloud TCP proxies no longer restrict which frontend
ports may be used
* Switch apiserver to listen on 6443 to match other cloud
platforms
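
A sketch of the current (non-classic) proxy arrangement with
illustrative names; the key change is the `EXTERNAL_MANAGED` scheme,
which lifts the classic frontend port restrictions:

```hcl
# backend instance group and health check are assumed to exist
# elsewhere in the configuration
resource "google_compute_backend_service" "apiserver" {
  name                  = "apiserver"
  protocol              = "TCP"
  load_balancing_scheme = "EXTERNAL_MANAGED" # classic used "EXTERNAL"
  timeout_sec           = 300

  backend {
    group = google_compute_instance_group.controllers.id
  }

  health_checks = [google_compute_health_check.apiserver.id]
}

resource "google_compute_target_tcp_proxy" "apiserver" {
  name            = "apiserver"
  backend_service = google_compute_backend_service.apiserver.id
}

resource "google_compute_global_forwarding_rule" "apiserver" {
  name                  = "apiserver"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  ip_protocol           = "TCP"
  port_range            = "6443" # no longer limited to a fixed port list
  target                = google_compute_target_tcp_proxy.apiserver.id
}
```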
* Switch the HTTP (port 80) proxy to a TCP proxy to match
what's done for HTTPS traffic to ingress/gateway controllers
* Add a variable `enable_http_lb` to make the TCP/80 IPv4/IPv6
  forwarding rules optional. Defaults to false. Google Cloud
  charges per forwarding rule, so dropping support for plaintext
  HTTP traffic can save costs. And if you front traffic with a
  global load balancer provider, you may handle HTTP->HTTPS
  redirects there anyway, so there's no loss
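
A sketch of how the toggle gates the rules (`enable_http_lb` is the
variable named above; the resource names are illustrative):

```hcl
variable "enable_http_lb" {
  type        = bool
  description = "Create TCP/80 forwarding rules (each rule is billed)"
  default     = false
}

# IPv4 TCP/80 forwarding rule, created only when enabled; the IPv6
# rule (not shown) is gated the same way
resource "google_compute_global_forwarding_rule" "ingress_http_ipv4" {
  count                 = var.enable_http_lb ? 1 : 0
  name                  = "ingress-http-ipv4"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  ip_protocol           = "TCP"
  ip_address            = google_compute_global_address.ingress_ipv4.address
  port_range            = "80"
  target                = google_compute_target_tcp_proxy.ingress.id
}
```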