19 Commits

Author SHA1 Message Date
Dalton Hubble
dbfb11c6ea Update assets generation for bootkube v0.6.2
* Update hyperkube to v1.7.5_coreos.0
* Update etcd-operator to v0.5.0
* Update pod-checkpointer
* Update flannel-cni to v0.2.0
* Change etcd-operator TPR to CRD
2017-09-08 13:46:28 -07:00
Dalton Hubble
5ffbfec46d Configure the Calico MTU
* Add a network_mtu input variable (default 1500)
* Set the Calico CNI config (i.e. workload network interfaces)
* Set the Calico IP in IP MTU (for tunnel network interfaces)
2017-09-05 10:50:26 -07:00
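The MTU change above is consumed as a plain module input. A minimal sketch of a caller setting it, assuming illustrative values and a pinned ref:

```hcl
module "bootkube" {
  source = "git::https://github.com/dghubble/bootkube-terraform.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir    = "/home/core/mycluster"

  networking  = "calico"
  network_mtu = "1480" # e.g. leave headroom for IP-in-IP encapsulation overhead
}
```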
Dalton Hubble
a52f99e8cc Add support for calico networking
* Add support for using Calico pod networking instead of flannel
* Add variable "networking" which may be "calico" or "flannel"
* Users MUST move the contents of assets_dir/manifests-networking
into the assets_dir/manifests directory before running bootkube
start. This is needed because Terraform cannot generate conditional
files into a template_dir; other resources write to the same
directory and would delete them.
https://github.com/terraform-providers/terraform-provider-template/issues/10
2017-09-01 10:27:43 -07:00
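One way to script the required move from Terraform itself, as a sketch (the null_resource helper, its trigger, the module output name, and the paths are assumptions, not part of this module):

```hcl
# Sketch: merge the conditional networking manifests after rendering,
# rather than moving them by hand before running bootkube start.
resource "null_resource" "merge_networking_manifests" {
  # Re-run whenever the rendered kubeconfig changes; this also creates an
  # implicit ordering after the bootkube module (output name is an assumption).
  triggers {
    kubeconfig = "${module.bootkube.kubeconfig}"
  }

  provisioner "local-exec" {
    command = "mv /home/core/mycluster/manifests-networking/* /home/core/mycluster/manifests/"
  }
}
```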
Dalton Hubble
1c1c4b36f8 Enable hairpin mode on cbr0 in kube-flannel-cfg 2017-08-16 18:22:42 -07:00
Dalton Hubble
c4e87f9695 Update assets generation for bootkube v0.6.1 2017-08-16 18:20:40 -07:00
Dalton Hubble
4cd0360a1a Add MIT License 2017-08-02 00:05:04 -07:00
Dalton Hubble
e7d2c1e597 Update assets generation for bootkube v0.6.0 2017-07-24 13:12:32 -07:00
Dalton Hubble
ce1cc6ae34 Update assets generation for bootkube v0.5.1 2017-07-19 10:46:24 -07:00
Dalton Hubble
498a7b0aea Merge pull request #5 from dghubble/bootkube-v0.5.0
Update assets generation for bootkube v0.5.0
2017-07-12 20:07:23 -07:00
Dalton Hubble
c8c56ca64a Update assets generation for bootkube v0.5.0 2017-07-12 19:17:11 -07:00
Dalton Hubble
99f50c5317 *: Upgrade manifests for Kubernetes v1.6.6 and bootkube v0.4.5
* Enable TLS for experimental self-hosted etcd
* Update the flannel Daemonset based on upstream
* Switch control plane components to run as non-root
* Add UpdateStrategy to control plane components
2017-06-24 14:05:32 -07:00
Dalton Hubble
dd26460395 Fix bootkube version mentioned in the README 2017-06-12 14:11:43 -07:00
Dalton Hubble
21131aa65e Add generated etcd credentials to kube-apiserver-secret.yaml 2017-06-07 16:11:19 -07:00
Dalton Hubble
f03b4c1c60 Update etcd_servers example in README.md 2017-06-07 13:55:11 -07:00
Dalton Hubble
99bf97aa79 Fix etcd_servers interpolation and example 2017-06-07 13:51:07 -07:00
Dalton Hubble
4cadd6f873 Output etcd TLS assets and fmt configs 2017-06-07 11:33:56 -07:00
Dalton Hubble
dc66e59fb2 Merge pull request #3 from dghubble/on-host-etcd
Generate on-host etcd CA, client, and peer TLS cert/key pairs
2017-06-07 10:56:24 -07:00
Dalton Hubble
6e8f0f9a1d Generate on-host etcd CA, client, and peer TLS cert/key pairs 2017-06-06 18:01:36 -07:00
Dalton Hubble
3720aff28a Bump hyperkube to v1.6.4_coreos.0 for bootkube v0.4.4 2017-05-19 16:26:52 -07:00
39 changed files with 863 additions and 153 deletions

2
.gitignore vendored
View File

@@ -1,2 +1,4 @@
*.tfvars
.terraform
*.tfstate*
assets

21
LICENSE Normal file
View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md
View File

@@ -1,18 +1,22 @@
# bootkube-terraform
`bootkube-terraform` is a Terraform module that renders [bootkube](https://github.com/kubernetes-incubator/bootkube) assets, just like running the binary `bootkube render`. It aims to provide the same variable names, defaults, features, and outputs.
`bootkube-terraform` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrapping assets. It functions as a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution.
The module provides many of the same variable names, defaults, features, and outputs as running `bootkube render` directly.
## Usage
Use the `bootkube-terraform` module within your existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
Use [Typhoon](https://github.com/poseidon/typhoon) to create and manage Kubernetes clusters in different environments. Use `bootkube-terraform` if you require low-level customizations to the control plane or wish to build your own distribution.
Add the `bootkube-terraform` module alongside existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
```hcl
module "bootkube" {
source = "git://https://github.com/dghubble/bootkube-terraform.git"
source = "git://https://github.com/dghubble/bootkube-terraform.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["http://127.0.0.1:2379"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/clusters/mycluster"
experimental_self_hosted_etcd = false
}
@@ -30,18 +34,21 @@ terraform apply
### Comparison
Render bootkube assets directly with bootkube v0.4.2.
Render bootkube assets directly with bootkube v0.6.2.
#### On-host etcd
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=http://127.0.0.1:2379
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. The only diffs you should see are TLS credentials.
```sh
diff -rw assets /home/core/cluster/mycluster
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```
#### Self-hosted etcd
@@ -53,6 +60,11 @@ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 -
Compare assets. Note that experimental assets must be generated to a separate directory for `terraform apply` runs to sync. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
```sh
diff -rw assets /home/core/cluster/mycluster
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```

assets.tf
View File

@@ -1,11 +1,11 @@
# Self-hosted Kubernetes bootstrap manifests
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("http://%s:2379,http://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", var.etcd_servers)}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
@@ -20,12 +20,11 @@ resource "template_dir" "manifests" {
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("http://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", var.etcd_servers)}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
@@ -33,11 +32,25 @@ resource "template_dir" "manifests" {
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
}
}
# Generated kubeconfig
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated kubeconfig with user-context
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
# Generated kubeconfig (auth/kubeconfig)
data "template_file" "kubeconfig" {
template = "${file("${path.module}/resources/kubeconfig")}"
@@ -49,12 +62,6 @@ data "template_file" "kubeconfig" {
}
}
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated user kubeconfig (auth/<cluster_name>-config)
data "template_file" "user-kubeconfig" {
template = "${file("${path.module}/resources/user-kubeconfig")}"
@@ -66,8 +73,3 @@ data "template_file" "user-kubeconfig" {
server = "${format("https://%s:443", element(var.api_servers, 0))}"
}
}
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
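The `etcd_servers` expressions above turn bare hostnames into a comma-separated list of HTTPS endpoints. A standalone sketch of the same `formatlist`/`join` pattern, with illustrative hostnames:

```hcl
variable "etcd_servers" {
  type    = "list"
  default = ["node1.example.com", "node2.example.com"]
}

# formatlist applies the format string to each list element; join then
# flattens the result into the single comma-separated string that the
# kube-apiserver --etcd-servers flag expects.
output "etcd_endpoints" {
  # => "https://node1.example.com:2379,https://node2.example.com:2379"
  value = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
}
```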

66
conditional.tf Normal file
View File

@@ -0,0 +1,66 @@
# Assets generated only for the chosen networking provider
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
pod_cidr = "${var.pod_cidr}"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
network_mtu = "${var.network_mtu}"
pod_cidr = "${var.pod_cidr}"
}
}
# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/etcd"
destination_dir = "${var.asset_dir}/etcd"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/manifests"
destination_dir = "${var.asset_dir}/experimental/manifests"
vars {
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
# Self-hosted etcd TLS certs / keys
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
}
}
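The file above gates whole resources on input variables. Terraform 0.x has no conditional resources, so a ternary on `count` is the idiom; a minimal standalone sketch:

```hcl
variable "networking" {
  type    = "string"
  default = "flannel"
}

# A ternary on count creates the resource exactly when the condition
# holds (count = 1) and skips it entirely otherwise (count = 0).
resource "null_resource" "flannel_selected" {
  count = "${var.networking == "flannel" ? 1 : 0}"
}
```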

View File

@@ -1,68 +0,0 @@
# Experimental self-hosted etcd
# etcd pod and service bootstrap-manifests
data "template_file" "bootstrap-etcd" {
template = "${file("${path.module}/resources/experimental/bootstrap-manifests/bootstrap-etcd.yaml")}"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "bootstrap-etcd" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.bootstrap-etcd.rendered}"
filename = "${var.asset_dir}/experimental/bootstrap-manifests/bootstrap-etcd.yaml"
}
data "template_file" "bootstrap-etcd-service" {
template = "${file("${path.module}/resources/etcd/bootstrap-etcd-service.json")}"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "bootstrap-etcd-service" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.bootstrap-etcd-service.rendered}"
filename = "${var.asset_dir}/etcd/bootstrap-etcd-service.json"
}
data "template_file" "etcd-tpr" {
template = "${file("${path.module}/resources/etcd/migrate-etcd-cluster.json")}"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "etcd-tpr" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.etcd-tpr.rendered}"
filename = "${var.asset_dir}/etcd/migrate-etcd-cluster.json"
}
# etcd operator deployment and service manifests
resource "local_file" "etcd-operator" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
depends_on = ["template_dir.manifests"]
content = "${file("${path.module}/resources/experimental/manifests/etcd-operator.yaml")}"
filename = "${var.asset_dir}/experimental/manifests/etcd-operator.yaml"
}
data "template_file" "etcd-service" {
template = "${file("${path.module}/resources/experimental/manifests/etcd-service.yaml")}"
vars {
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
}
}
resource "local_file" "etcd-service" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
depends_on = ["template_dir.manifests"]
content = "${data.template_file.etcd-service.rendered}"
filename = "${var.asset_dir}/experimental/manifests/etcd-service.yaml"
}

outputs.tf
View File

@@ -22,6 +22,36 @@ output "user-kubeconfig" {
value = "${local_file.user-kubeconfig.filename}"
}
# etcd TLS assets
output "etcd_ca_cert" {
value = "${tls_self_signed_cert.etcd-ca.cert_pem}"
}
output "etcd_client_cert" {
value = "${tls_locally_signed_cert.client.cert_pem}"
}
output "etcd_client_key" {
value = "${tls_private_key.client.private_key_pem}"
}
output "etcd_server_cert" {
value = "${tls_locally_signed_cert.server.cert_pem}"
}
output "etcd_server_key" {
value = "${tls_private_key.server.private_key_pem}"
}
output "etcd_peer_cert" {
value = "${tls_locally_signed_cert.peer.cert_pem}"
}
output "etcd_peer_key" {
value = "${tls_private_key.peer.private_key_pem}"
}
# Some platforms may need to reconstruct the kubeconfig directly in user-data.
# That can't be done with the way template_file interpolates multi-line
# contents, so the raw components of the kubeconfig may be needed.
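A sketch of consuming these new outputs from a parent configuration; the resource and destination path are hypothetical:

```hcl
# Hypothetical consumer: persist the module's etcd client credentials
# somewhere another provisioner or tool can pick them up.
resource "local_file" "etcd_client_cert_copy" {
  content  = "${module.bootkube.etcd_client_cert}"
  filename = "/tmp/etcd-client.crt"
}
```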

resources/bootstrap-manifests/kube-apiserver.yaml
View File

@@ -9,8 +9,6 @@ spec:
image: ${hyperkube_image}
command:
- /usr/bin/flock
- --exclusive
- --timeout=30
- /var/lock/api-server.lock
- /hyperkube
- apiserver
@@ -20,6 +18,10 @@ spec:
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Peers
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system

View File

@@ -0,0 +1,53 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: calico-node
namespace: kube-system
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- pods/status
verbs:
- update
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- update
- watch
- apiGroups: ["extensions"]
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- bgppeers
- globalbgpconfigs
- ippools
- globalnetworkpolicies
verbs:
- create
- get
- list
- update
- watch

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calico-config
namespace: kube-system
data:
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"type": "calico",
"log_level": "debug",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": ${network_mtu},
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
}
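For reference, a minimal sketch of rendering this template with the `template_file` data source instead of the `template_dir` the module actually uses (the file path and literal MTU value are assumptions):

```hcl
data "template_file" "calico_config" {
  template = "${file("${path.module}/resources/calico/calico-config.yaml")}"

  # Substitutes ${network_mtu} in the CNI network config above.
  vars {
    network_mtu = "1500"
  }
}
```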

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Felix Configuration
kind: CustomResourceDefinition
metadata:
name: globalfelixconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalFelixConfig
plural: globalfelixconfigs
singular: globalfelixconfig

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global BGP Configuration
kind: CustomResourceDefinition
metadata:
name: globalbgpconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalBGPConfig
plural: globalbgpconfigs
singular: globalbgpconfig

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico IP Pools
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Policies
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system

View File

@@ -0,0 +1,138 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
template:
metadata:
labels:
k8s-app: calico-node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
serviceAccountName: calico-node
tolerations:
# Allow the pod to run on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Mark the pod as a critical add-on for rescheduling
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: calico-node
image: quay.io/calico/node:v2.5.1
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Enable felix info logging.
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPV6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
value: "${network_mtu}"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# The Calico IPv4 pool CIDR (should match `--cluster-cidr`).
- name: CALICO_IPV4POOL_CIDR
value: "${pod_cidr}"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "always"
# Enable IP-in-IP within Felix.
- name: FELIX_IPINIPENABLED
value: "true"
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: quay.io/calico/cni:v1.10.0
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/kubernetes/cni/net.d
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -1,13 +1,13 @@
{
"apiVersion": "etcd.coreos.com/v1beta1",
"kind": "Cluster",
"apiVersion": "etcd.database.coreos.com/v1beta2",
"kind": "EtcdCluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"
},
"spec": {
"size": 1,
"version": "v3.1.6",
"version": "v3.1.8",
"pod": {
"nodeSelector": {
"node-role.kubernetes.io/master": ""
@@ -21,7 +21,16 @@
]
},
"selfHosted": {
"bootMemberClientEndpoint": "http://${bootstrap_etcd_service_ip}:12379"
"bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
},
"TLS": {
"static": {
"member": {
"peerSecret": "etcd-peer-tls",
"serverSecret": "etcd-server-tls"
},
"operatorSecret": "etcd-client-tls"
}
}
}
}

View File

@@ -12,18 +12,30 @@ spec:
command:
- /usr/local/bin/etcd
- --name=boot-etcd
- --listen-client-urls=http://0.0.0.0:12379
- --listen-peer-urls=http://0.0.0.0:12380
- --advertise-client-urls=http://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=http://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=http://${bootstrap_etcd_service_ip}:12380
- --listen-client-urls=https://0.0.0.0:12379
- --listen-peer-urls=https://0.0.0.0:12380
- --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster-token=bootkube
- --initial-cluster-state=new
- --data-dir=/var/etcd/data
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
- --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
- --cert-file=/etc/kubernetes/secrets/etcd/server.crt
- --key-file=/etc/kubernetes/secrets/etcd/server.key
volumeMounts:
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
hostNetwork: true
restartPolicy: Never
dnsPolicy: ClusterFirstWithHostNet

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
namespace: kube-system
type: Opaque
data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

View File

@@ -6,6 +6,11 @@ metadata:
labels:
k8s-app: etcd-operator
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
replicas: 1
template:
metadata:
@@ -14,10 +19,10 @@ spec:
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.3.0
image: quay.io/coreos/etcd-operator:v0.5.0
command:
- /usr/local/bin/etcd-operator
- --analytics=false
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
@@ -29,6 +34,9 @@ spec:
fieldPath: metadata.name
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-peer-tls
namespace: kube-system
type: Opaque
data:
peer-ca.crt: ${etcd_ca_cert}
peer.crt: ${etcd_peer_cert}
peer.key: ${etcd_peer_key}

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-server-tls
namespace: kube-system
type: Opaque
data:
server-ca.crt: ${etcd_ca_cert}
server.crt: ${etcd_server_cert}
server.key: ${etcd_server_key}

View File

@@ -3,6 +3,10 @@ kind: Service
metadata:
name: etcd-service
namespace: kube-system
# This alpha annotation will retain the endpoints even if the etcd pod isn't ready.
# This feature is always enabled in the endpoints controller in k8s even though it is alpha.
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: etcd

View File

@@ -12,7 +12,8 @@ data:
"name": "cbr0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
"isDefaultGateway": true,
"hairpinMode": true
}
}
net-conf.json: |

View File

@@ -15,7 +15,7 @@ spec:
spec:
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.7.1-amd64
image: quay.io/coreos/flannel:v0.8.0-amd64
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
@@ -40,13 +40,19 @@ spec:
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: busybox
command: [ "/bin/sh", "-c", "set -e -x; TMP=/etc/cni/net.d/.tmp-flannel-cfg; cp /etc/kube-flannel/cni-conf.json $TMP; mv $TMP /etc/cni/net.d/10-flannel.conf; while :; do sleep 3600; done" ]
image: quay.io/coreos/flannel-cni:v0.2.0
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-flannel-cfg
key: cni-conf.json
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
mountPath: /host/etc/cni/net.d
- name: host-cni-bin
mountPath: /host/opt/cni/bin/
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
@@ -62,3 +68,10 @@ spec:
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: host-cni-bin
hostPath:
path: /opt/cni/bin
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -9,3 +9,6 @@ data:
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

resources/manifests/kube-apiserver.yaml
View File

@@ -21,8 +21,6 @@ spec:
image: ${hyperkube_image}
command:
- /usr/bin/flock
- --exclusive
- --timeout=30
- /var/lock/api-server.lock
- /hyperkube
- apiserver
@@ -34,6 +32,10 @@ spec:
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
@@ -79,3 +81,7 @@ spec:
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -30,7 +30,7 @@ spec:
- key: k8s-app
operator: In
values:
- kube-contoller-manager
- kube-controller-manager
topologyKey: kubernetes.io/hostname
containers:
- name: kube-controller-manager
@@ -60,6 +60,9 @@ spec:
readOnly: true
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists

View File

@@ -6,6 +6,7 @@ metadata:
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
@@ -25,9 +26,22 @@ spec:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@@ -78,7 +92,7 @@ spec:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
@@ -116,7 +130,7 @@ spec:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
livenessProbe:
httpGet:
path: /metrics
@@ -140,16 +154,3 @@ spec:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true

View File

@@ -16,7 +16,7 @@ spec:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: quay.io/coreos/kenc:48b6feceeee56c657ea9263f47b6ea091e8d3035
- image: quay.io/coreos/kenc:0.0.2
name: kube-etcd-network-checkpointer
securityContext:
privileged: true
@@ -24,6 +24,9 @@ spec:
- mountPath: /etc/kubernetes/selfhosted-etcd
name: checkpoint-dir
readOnly: false
- mountPath: /var/etcd
name: etcd-dir
readOnly: false
- mountPath: /var/lock
name: var-lock
readOnly: false
@@ -43,6 +46,13 @@ spec:
- name: checkpoint-dir
hostPath:
path: /etc/kubernetes/checkpoint-iptables
- name: etcd-dir
hostPath:
path: /var/etcd
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -19,7 +19,7 @@ spec:
- name: kube-proxy
image: ${hyperkube_image}
command:
- /hyperkube
- ./hyperkube
- proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
@@ -53,3 +53,7 @@ spec:
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -47,6 +47,9 @@ spec:
timeoutSeconds: 15
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists

View File

@@ -16,8 +16,8 @@ spec:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- name: checkpoint
image: quay.io/coreos/pod-checkpointer:2cad4cac4186611a79de1969e3ea4924f02f459e
- name: pod-checkpointer
image: quay.io/coreos/pod-checkpointer:0cd390e0bc1dcdcc714b20eda3435c3d00669d0e
command:
- /checkpoint
- --v=4
@@ -56,3 +56,7 @@ spec:
- name: var-run
hostPath:
path: /var/run
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

terraform.tfvars.example
View File

@@ -1,5 +1,6 @@
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["http://127.0.0.1:2379"]
asset_dir = "/home/core/clusters/mycluster"
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"
experimental_self_hosted_etcd = false
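For a Calico cluster, the same tfvars might instead read (values are illustrative):

```hcl
cluster_name = "example"
api_servers  = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir    = "/home/core/mycluster"
networking   = "calico"
network_mtu  = "1480"
experimental_self_hosted_etcd = false
```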

216
tls-etcd.tf Normal file
View File

@@ -0,0 +1,216 @@
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client-ca.crt"
}
# etcd-client.crt
resource "local_file" "etcd_client_crt" {
content = "${tls_locally_signed_cert.client.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client.crt"
}
# etcd-client.key
resource "local_file" "etcd_client_key" {
content = "${tls_private_key.client.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-client.key"
}
# server-ca.crt
resource "local_file" "etcd_server_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server-ca.crt"
}
# server.crt
resource "local_file" "etcd_server_crt" {
content = "${tls_locally_signed_cert.server.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server.crt"
}
# server.key
resource "local_file" "etcd_server_key" {
content = "${tls_private_key.server.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/server.key"
}
# peer-ca.crt
resource "local_file" "etcd_peer_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer-ca.crt"
}
# peer.crt
resource "local_file" "etcd_peer_crt" {
content = "${tls_locally_signed_cert.peer.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.crt"
}
# peer.key
resource "local_file" "etcd_peer_key" {
content = "${tls_private_key.peer.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.key"
}
# certificates and keys
resource "tls_private_key" "etcd-ca" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "etcd-ca" {
key_algorithm = "${tls_private_key.etcd-ca.algorithm}"
private_key_pem = "${tls_private_key.etcd-ca.private_key_pem}"
subject {
common_name = "etcd-ca"
organization = "etcd"
}
is_ca_certificate = true
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"cert_signing",
]
}
# client certs are used for client (apiserver, locksmith, etcd-operator)
# to etcd communication
resource "tls_private_key" "client" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "client" {
key_algorithm = "${tls_private_key.client.algorithm}"
private_key_pem = "${tls_private_key.client.private_key_pem}"
subject {
common_name = "etcd-client"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
}
resource "tls_locally_signed_cert" "client" {
cert_request_pem = "${tls_cert_request.client.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "tls_private_key" "server" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "server" {
key_algorithm = "${tls_private_key.server.algorithm}"
private_key_pem = "${tls_private_key.server.private_key_pem}"
subject {
common_name = "etcd-server"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
}
resource "tls_locally_signed_cert" "server" {
cert_request_pem = "${tls_cert_request.server.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "tls_private_key" "peer" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "peer" {
key_algorithm = "${tls_private_key.peer.algorithm}"
private_key_pem = "${tls_private_key.peer.private_key_pem}"
subject {
common_name = "etcd-peer"
organization = "etcd"
}
ip_addresses = [
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
var.etcd_servers,
list(
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
}
resource "tls_locally_signed_cert" "peer" {
cert_request_pem = "${tls_cert_request.peer.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
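The cert requests in this file build their SAN lists by concatenating user-supplied names with fixed in-cluster DNS names; a standalone sketch of that `concat`/`list` pattern with an illustrative server list:

```hcl
variable "etcd_servers" {
  type    = "list"
  default = ["node1.example.com"]
}

# concat() merges the user-provided etcd server names with the fixed
# service DNS names used by self-hosted etcd.
output "etcd_san_dns_names" {
  value = "${concat(var.etcd_servers, list("localhost", "*.kube-etcd.kube-system.svc.cluster.local", "kube-etcd-client.kube-system.svc.cluster.local"))}"
}
```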

variables.tf
View File

@@ -9,13 +9,13 @@ variable "api_servers" {
}
variable "etcd_servers" {
description = "List of etcd server URLs including protocol, host, and port"
description = "List of etcd server URLs including protocol, host, and port. Ignored if experimental self-hosted etcd is enabled."
type = "list"
}
variable "experimental_self_hosted_etcd" {
description = "(Experimental) Create self-hosted etcd assets"
default = false
default = false
}
variable "asset_dir" {
@@ -29,6 +29,18 @@ variable "cloud_provider" {
default = ""
}
variable "networking" {
description = "Choice of networking provider (flannel or calico)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (applies to calico only)"
type = "string"
default = "1500"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
@@ -38,10 +50,11 @@ variable "pod_cidr" {
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 200th IP will be reserved for bootstrap self-hosted etcd.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/24"
type = "string"
default = "10.3.0.0/24"
}
variable "container_images" {
@@ -49,8 +62,8 @@ variable "container_images" {
type = "map"
default = {
hyperkube = "quay.io/coreos/hyperkube:v1.6.2_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.6"
hyperkube = "quay.io/coreos/hyperkube:v1.7.5_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.8"
}
}
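The reserved offsets documented in `service_cidr` map to concrete addresses via `cidrhost`; a standalone sketch with the default CIDR (output names are illustrative):

```hcl
variable "service_cidr" {
  type    = "string"
  default = "10.3.0.0/24"
}

# cidrhost(range, n) returns the nth address in the range; these match
# the reservations described in the service_cidr docs above.
output "apiserver_service_ip" {
  value = "${cidrhost(var.service_cidr, 1)}" # 10.3.0.1
}

output "kube_dns_service_ip" {
  value = "${cidrhost(var.service_cidr, 10)}" # 10.3.0.10
}

output "self_hosted_etcd_service_ip" {
  value = "${cidrhost(var.service_cidr, 15)}" # 10.3.0.15
}

output "bootstrap_etcd_service_ip" {
  value = "${cidrhost(var.service_cidr, 20)}" # 10.3.0.20
}
```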