Add eksctl method of cluster creation, and disable terraform method

Signed-off-by: Carsten Schafer <Carsten.Schafer@kinarasystems.com>
Carsten Schafer
2023-11-10 11:54:47 -05:00
parent 08e5db2822
commit 7a044cd33a
14 changed files with 1021 additions and 2 deletions


@@ -5,7 +5,8 @@ This repository is used for Telecom Infra Project WiFi infrastructure configurat
- [helm-values/assembly-ucentral](./helm-values/assembly-ucentral) - contains helm values used for Cloud SDK deployments, encrypted by SOPS;
- [helmfile/cloud-sdk](./helmfile/cloud-sdk) - contains Helmfile definition for infrastructure deployed to EKS cluster;
- [terraform](./terraform) - contains Terraform manifests for AWS accounts and all resources deployed to them.
- [eksctl](./eksctl) - contains scripts to create EKS clusters using eksctl and awscli.
Repository has CI/CD pipelines for automated Helmfile and Terraform validation and deployment using Atlantis and GitHub Actions that allows get changes diffs in Pull Requests before pushing them into master branch.
This repository has CI/CD pipelines for automated Helmfile and Terraform validation and deployment using Atlantis and GitHub Actions.
All changes to the repository should be made through PRs from branches in this repository to the master branch and should be approved by at least one of the repository administrators.


@@ -0,0 +1,8 @@
.cluster.yaml.*
*-kube-config
*-logs
env_cs
future
id_rsa*
kms-key-for-encryption-on-ebs.json
route53policy.json


@@ -0,0 +1,57 @@
# EKSCTL Based Cluster Installation
The script and associated files should make it possible to deploy an EKS cluster and
a few nodes. It sets up the EKS cluster based on the provided environment variables.
The scripts should work on macOS and Linux (as yet untested).
## Requirements
### macOS
- Homebrew
- gettext v0.21.1 (provides envsubst; install via Homebrew)
### General
- eksctl (v0.157.0+)
- aws-cli (v2.13.19)
## Setup
- Prepare an environment file (see [env\_example](./env_example)).
- Make sure all required utilities are installed.
- Make sure that you can run "aws --version" and "eksctl version" (a quick check is shown after this list).
- Make sure that any AWS SSO environment variables are set.
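A quick pre-flight check might look like this (a sketch; `env_mycluster` is a hypothetical file created from [env\_example](./env_example)):
```bash
# Confirm the required tooling is on the PATH
aws --version
eksctl version
envsubst --version   # provided by gettext

# Load the cluster settings and confirm which AWS identity will be used
source env_mycluster
aws sts get-caller-identity
```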
## Installation
- Run "source env\_FILE ; ./installer" (using the env file you created above)
- If the identity check succeeds the installer will create the following resources:
- EKS cluster
- Policy and service accounts for EBS, ALB and Route 53 access.
- EBS addon and OIDC identity providers
- Reads cluster config into a temporary file.
- Shows some information about the created cluster.
- Shows how to run "aws eks update-kubeconfig" command to update your .kube/config file in place.
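For example (the env file name is a placeholder; on failure the installer prints the exact command to resume with):
```bash
# Fresh install
source env_mycluster
./installer

# Resume an interrupted install at step 3 (steps 1-2 are echoed instead of executed);
# use the step number the installer printed after the failure
./installer 3
```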
## Scaling nodegroups
Set the desiredCapacity for the nodegroup in cluster.$CLUSTER_NAME.yaml and run:
```bash
source env_FILE
eksctl scale nodegroup -f cluster.$CLUSTER_NAME.yaml
```
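Alternatively, a nodegroup can be scaled directly from the command line (a sketch; the nodegroup name `def` matches the one in the generated cluster config, and the node count is only an example):
```bash
source env_FILE
eksctl scale nodegroup --cluster "$CLUSTER_NAME" --name def --nodes 5
```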
## Next Steps
After creating the cluster proceed to [helmfile/cloud-sdk](../../../helmfile/cloud-sdk) to install
shared services.
## Cleanup
- Run "source env\_FILE ; ./cleaner" (using the env file you created above)
Note that sometimes AWS has trouble cleaning up when things are or appear in-use. The eksctl
command to delete the cluster may thus fail requiring chasing down the noted rewsources. One of the
resources that seems to always linger are LBs. Deleting these manually and restarting cleanup,
sometimes works. Other times inspecting the CloudFormation resource for this cluster for errors
will lead to discovery of the problematic resources. After you delete these resources manually, you may retry deletion of the CloudFormation stack. That should take care of deleting any remaining resources.
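A minimal sketch of the manual chase (assuming the lingering resources are load balancers, that placeholders such as `<load-balancer-arn>` are filled in, and that eksctl used its usual `eksctl-$CLUSTER_NAME-cluster` stack name):
```bash
source env_FILE

# List remaining load balancers; ones created for this cluster usually carry
# an "elbv2.k8s.aws/cluster" or "kubernetes.io/cluster/<name>" tag
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,Arn:LoadBalancerArn}' \
  --output table

# Delete a lingering load balancer by ARN, then re-run ./cleaner
aws elbv2 delete-load-balancer --load-balancer-arn <load-balancer-arn>

# Or inspect the cluster's CloudFormation stack for DELETE_FAILED resources
# and retry the stack deletion once they are gone
aws cloudformation describe-stack-events \
  --stack-name "eksctl-${CLUSTER_NAME}-cluster" \
  --query 'StackEvents[?ResourceStatus==`DELETE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
  --output table
aws cloudformation delete-stack --stack-name "eksctl-${CLUSTER_NAME}-cluster"
```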


@@ -0,0 +1,241 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInternetGateways",
"ec2:DescribeVpcs",
"ec2:DescribeVpcPeeringConnections",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeInstances",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeTags",
"ec2:GetCoipPoolUsage",
"ec2:DescribeCoipPools",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:DescribeSSLPolicies",
"elasticloadbalancing:DescribeRules",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:DescribeTags"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cognito-idp:DescribeUserPoolClient",
"acm:ListCertificates",
"acm:DescribeCertificate",
"iam:ListServerCertificates",
"iam:GetServerCertificate",
"waf-regional:GetWebACL",
"waf-regional:GetWebACLForResource",
"waf-regional:AssociateWebACL",
"waf-regional:DisassociateWebACL",
"wafv2:GetWebACL",
"wafv2:GetWebACLForResource",
"wafv2:AssociateWebACL",
"wafv2:DisassociateWebACL",
"shield:GetSubscriptionState",
"shield:DescribeProtection",
"shield:CreateProtection",
"shield:DeleteProtection"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateSecurityGroup"
},
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:DeleteTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "true",
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteSecurityGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateTargetGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:CreateRule",
"elasticloadbalancing:DeleteRule"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags",
"elasticloadbalancing:RemoveTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
],
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "true",
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags",
"elasticloadbalancing:RemoveTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:SetIpAddressType",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:DeleteTargetGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
],
"Condition": {
"StringEquals": {
"elasticloadbalancing:CreateAction": [
"CreateTargetGroup",
"CreateLoadBalancer"
]
},
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets"
],
"Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:SetWebAcl",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:RemoveListenerCertificates",
"elasticloadbalancing:ModifyRule"
],
"Resource": "*"
}
]
}


@@ -0,0 +1,53 @@
#!/bin/bash
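# Tear down the EKS cluster created by ./installer along with the IAM policies
# it created for the ALB ingress controller and external-dns.
# Usage: source env_FILE ; ./cleaner [N]
#   Passing a number N resumes at step N; earlier steps are echoed instead of executed.
# Progress is logged via logv (utils.sh) to ${CLUSTER_NAME}-logs.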
. ./utils.sh
check_env
echo "Cleaning up cluster:"
show_env
echo "Press ENTER to continue [or CTRL-C to exit]"
read enter
declare -a steps
max_steps=10
for ((i=0; i < $max_steps; i++)) ; do
steps[$i]=""
done
if [ -n "$1" ] ; then
for ((i=0; i < $1; i++)) ; do
steps[$i]="echo"
done
fi
cstep=1
logv startclean "$(date)"
#set -x
echo "Determine caller identity"
if [ -n "$AWS_PROFILE" ] ; then
account_id=$(aws sts get-caller-identity --query Account --output text --profile $AWS_PROFILE)
else
account_id=$(aws sts get-caller-identity --query Account --output text)
fi
logv accountid $account_id
if [ -z "$account_id" ] ; then
echo "Unable to determine caller-identity!"
exit 1
fi
${steps[$cstep]} eksctl \
delete cluster --name $CLUSTER_NAME --region $AWS_REGION --wait
logv deleted $CLUSTER_NAME
#----------------------------------
((cstep++))
role_name="${CLUSTER_NAME}-alb-ingress"
arn="arn:aws:iam::${account_id}:policy/${role_name}"
logv delete "$arn"
${steps[$cstep]} aws iam delete-policy \
--policy-arn $arn
role_name="${CLUSTER_NAME}-external-dns"
arn="arn:aws:iam::${account_id}:policy/${role_name}"
logv delete "$arn"
${steps[$cstep]} aws iam delete-policy \
--policy-arn $arn
#set +x
cstep=-1
logv endclean "$(date)"


@@ -0,0 +1,150 @@
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: tip-wlan-main
region: ap-south-1
version: "1.27"
iam:
withOIDC: true
serviceAccounts:
- metadata:
name: aws-load-balancer-controller
namespace: kube-system
wellKnownPolicies:
awsLoadBalancerController: true
#- metadata:
# name: ebs-csi-controller-sa
# namespace: kube-system
# wellKnownPolicies:
# ebsCSIController: true
#- metadata:
# name: efs-csi-controller-sa
# namespace: kube-system
# wellKnownPolicies:
# efsCSIController: true
#- metadata:
# name: external-dns
# namespace: kube-system
# wellKnownPolicies:
# externalDNS: true
#- metadata:
# name: cert-manager
# namespace: cert-manager
# wellKnownPolicies:
# certManager: true
- metadata:
name: cluster-autoscaler
namespace: kube-system
labels: {aws-usage: "cluster-ops"}
wellKnownPolicies:
autoScaler: true
- metadata:
name: autoscaler-service
namespace: kube-system
attachPolicy: # inline policy can be defined along with `attachPolicyARNs`
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- "autoscaling:DescribeAutoScalingGroups"
- "autoscaling:DescribeAutoScalingInstances"
- "autoscaling:DescribeLaunchConfigurations"
- "autoscaling:DescribeTags"
- "autoscaling:SetDesiredCapacity"
- "autoscaling:TerminateInstanceInAutoScalingGroup"
- "ec2:DescribeLaunchTemplateVersions"
Resource: '*'
availabilityZones:
- ap-south-1a
- ap-south-1b
- ap-south-1c
vpc:
cidr: 10.10.0.0/16
clusterEndpoints:
publicAccess: true
privateAccess: true
#managedNodeGroups:
#- name: def
# instanceType: c5.xlarge
# amiFamily: AmazonLinux2
# #Try this next time with unsafe-sysctls:
# #ami: ami-0c92ea9c7c0380b66
# #ami: ami-03a6eaae9938c858c
# minSize: 3
# maxSize: 8
# volumeSize: 100
# ssh: # import public key from file
# allow: true
# publicKeyPath: id_rsa_tip-wlan-main.pub
# # This does not work for managed node groups:
# #overrideBootstrapCommand: |
# # #!/bin/bash
# # /etc/eks/bootstrap.sh tip-wlan-main --kubelet-extra-args "--allowed-unsafe-sysctls 'net.*'"
# tags:
# # EC2 tags required for cluster-autoscaler auto-discovery
# k8s.io/cluster-autoscaler/enabled: "true"
# k8s.io/cluster-autoscaler/tip-wlan-main: "owned"
# kubernetes.io/cluster-autoscaler/enabled: "true"
# kubernetes.io/cluster-autoscaler/tip-wlan-main: "owned"
nodeGroups:
- name: def
instanceType: c5.xlarge
amiFamily: AmazonLinux2
minSize: 3
maxSize: 8
desiredCapacity: 4
volumeSize: 100
ssh: # import public key from file
allow: true
publicKeyPath: id_rsa_tip-wlan-main.pub
kubeletExtraConfig:
allowedUnsafeSysctls:
- "net.ipv4.tcp_keepalive_intvl"
- "net.ipv4.tcp_keepalive_probes"
- "net.ipv4.tcp_keepalive_time"
tags:
# EC2 tags required for cluster-autoscaler auto-discovery
k8s.io/cluster-autoscaler/enabled: "true"
k8s.io/cluster-autoscaler/tip-wlan-main: "owned"
kubernetes.io/cluster-autoscaler/enabled: "true"
kubernetes.io/cluster-autoscaler/tip-wlan-main: "owned"
iamIdentityMappings:
- arn: arn:aws:iam::289708231103:user/gha-wlan-testing
username: gha-wlan-testing
noDuplicateARNs: true # prevents shadowing of ARNs
groups:
- system:masters
- arn: arn:aws:iam::289708231103:user/gha-toolsmith
username: gha-toolsmith
noDuplicateARNs: true
groups:
- system:masters
- arn: arn:aws:iam::289708231103:user/gha-wlan-cloud-helm
username: gha-wlan-cloud-helm
noDuplicateARNs: true
groups:
- system:masters
- arn: arn:aws:iam::289708231103:role/AWSReservedSSO_SystemAdministrator_622371b0ceece6f8
groups:
- system:masters
username: admin
noDuplicateARNs: true # prevents shadowing of ARNs
addons:
- name: vpc-cni # no version is specified so it deploys the default version
attachPolicyARNs:
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- name: coredns
version: latest # auto discovers the latest available
- name: kube-proxy
version: latest
#- name: aws-ebs-csi-driver
# wellKnownPolicies: # add IAM and service account
# ebsCSIController: true


@@ -0,0 +1,150 @@
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: ${CLUSTER_NAME}
region: ${AWS_REGION}
version: "1.27"
iam:
withOIDC: true
serviceAccounts:
- metadata:
name: aws-load-balancer-controller
namespace: kube-system
wellKnownPolicies:
awsLoadBalancerController: true
#- metadata:
# name: ebs-csi-controller-sa
# namespace: kube-system
# wellKnownPolicies:
# ebsCSIController: true
#- metadata:
# name: efs-csi-controller-sa
# namespace: kube-system
# wellKnownPolicies:
# efsCSIController: true
#- metadata:
# name: external-dns
# namespace: kube-system
# wellKnownPolicies:
# externalDNS: true
#- metadata:
# name: cert-manager
# namespace: cert-manager
# wellKnownPolicies:
# certManager: true
- metadata:
name: cluster-autoscaler
namespace: kube-system
labels: {aws-usage: "cluster-ops"}
wellKnownPolicies:
autoScaler: true
- metadata:
name: autoscaler-service
namespace: kube-system
attachPolicy: # inline policy can be defined along with `attachPolicyARNs`
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- "autoscaling:DescribeAutoScalingGroups"
- "autoscaling:DescribeAutoScalingInstances"
- "autoscaling:DescribeLaunchConfigurations"
- "autoscaling:DescribeTags"
- "autoscaling:SetDesiredCapacity"
- "autoscaling:TerminateInstanceInAutoScalingGroup"
- "ec2:DescribeLaunchTemplateVersions"
Resource: '*'
availabilityZones:
- ${AWS_REGION}a
- ${AWS_REGION}b
- ${AWS_REGION}c
vpc:
cidr: 10.10.0.0/16
clusterEndpoints:
publicAccess: true
privateAccess: true
#managedNodeGroups:
#- name: def
# instanceType: ${CLUSTER_INSTANCE_TYPE}
# amiFamily: AmazonLinux2
# #Try this next time with unsafe-sysctls:
# #ami: ami-0c92ea9c7c0380b66
# #ami: ami-03a6eaae9938c858c
# minSize: ${CLUSTER_NODES}
# maxSize: ${CLUSTER_MAX_NODES}
# volumeSize: ${CLUSTER_VOLUME_SIZE}
# ssh: # import public key from file
# allow: true
# publicKeyPath: id_rsa_${CLUSTER_NAME}.pub
# # This does not work for managed node groups:
# #overrideBootstrapCommand: |
# # #!/bin/bash
# # /etc/eks/bootstrap.sh ${CLUSTER_NAME} --kubelet-extra-args "--allowed-unsafe-sysctls 'net.*'"
# tags:
# # EC2 tags required for cluster-autoscaler auto-discovery
# k8s.io/cluster-autoscaler/enabled: "true"
# k8s.io/cluster-autoscaler/${CLUSTER_NAME}: "owned"
# kubernetes.io/cluster-autoscaler/enabled: "true"
# kubernetes.io/cluster-autoscaler/${CLUSTER_NAME}: "owned"
nodeGroups:
- name: def
instanceType: ${CLUSTER_INSTANCE_TYPE}
amiFamily: AmazonLinux2
minSize: ${CLUSTER_MIN_NODES}
maxSize: ${CLUSTER_MAX_NODES}
desiredCapacity: ${CLUSTER_NODES}
volumeSize: ${CLUSTER_VOLUME_SIZE}
ssh: # import public key from file
allow: true
publicKeyPath: id_rsa_${CLUSTER_NAME}.pub
kubeletExtraConfig:
allowedUnsafeSysctls:
- "net.ipv4.tcp_keepalive_intvl"
- "net.ipv4.tcp_keepalive_probes"
- "net.ipv4.tcp_keepalive_time"
tags:
# EC2 tags required for cluster-autoscaler auto-discovery
k8s.io/cluster-autoscaler/enabled: "true"
k8s.io/cluster-autoscaler/${CLUSTER_NAME}: "owned"
kubernetes.io/cluster-autoscaler/enabled: "true"
kubernetes.io/cluster-autoscaler/${CLUSTER_NAME}: "owned"
iamIdentityMappings:
- arn: arn:aws:iam::${AWS_ACCOUNT_ID}:user/gha-wlan-testing
username: gha-wlan-testing
noDuplicateARNs: true # prevents shadowing of ARNs
groups:
- system:masters
- arn: arn:aws:iam::${AWS_ACCOUNT_ID}:user/gha-toolsmith
username: gha-toolsmith
noDuplicateARNs: true
groups:
- system:masters
- arn: arn:aws:iam::${AWS_ACCOUNT_ID}:user/gha-wlan-cloud-helm
username: gha-wlan-cloud-helm
noDuplicateARNs: true
groups:
- system:masters
- arn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/AWSReservedSSO_SystemAdministrator_622371b0ceece6f8
groups:
- system:masters
username: admin
noDuplicateARNs: true
addons:
- name: vpc-cni # no version is specified so it deploys the default version
attachPolicyARNs:
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- name: coredns
version: latest # auto discovers the latest available
- name: kube-proxy
version: latest
#- name: aws-ebs-csi-driver
# wellKnownPolicies: # add IAM and service account
# ebsCSIController: true


@@ -0,0 +1,18 @@
# if using a SAML profile:
#export AWS_PROFILE="personal"
#otherwise unset this and make sure session token and other access key information is set:
unset AWS_PROFILE
# eg.
#export AWS_ACCESS_KEY_ID="ASKU..."
#export AWS_SECRET_ACCESS_KEY="z6bl3..."
#export AWS_SESSION_TOKEN="Igo..."
export AWS_REGION="ap-south-1"       # required by check_env and the cluster.yaml template
export AWS_DEFAULT_REGION="ap-south-1"
export AWS_ACCOUNT_ID="289708231103"
export CLUSTER_DOMAIN="lab.wlan.tip.build"
export CLUSTER_ZONE_ID="Z213ADJASKDA1345" # zone id of $CLUSTER_DOMAIN zone
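# (look it up with e.g.: aws route53 list-hosted-zones-by-name --dns-name "$CLUSTER_DOMAIN")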
export CLUSTER_INSTANCE_TYPE="c5.xlarge"
export CLUSTER_NAME="tip-wlan-main"
export CLUSTER_NODES=3
export CLUSTER_MIN_NODES=3
export CLUSTER_MAX_NODES=8
export CLUSTER_VOLUME_SIZE=100


@@ -0,0 +1,11 @@
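# Environment for the tip-wlan-main cluster; source this file before running ./installer or ./cleaner.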
unset AWS_PROFILE
export AWS_REGION="ap-south-1"       # required by check_env and the cluster.yaml template
export AWS_DEFAULT_REGION="ap-south-1"
export AWS_ACCOUNT_ID="289708231103"
export CLUSTER_DOMAIN="lab.wlan.tip.build"
export CLUSTER_ZONE_ID="Z09534373UTXT2L1YL912"
export CLUSTER_INSTANCE_TYPE="c5.xlarge"
export CLUSTER_NAME="tip-wlan-main"
export CLUSTER_NODES=4
export CLUSTER_MIN_NODES=3
export CLUSTER_MAX_NODES=8
export CLUSTER_VOLUME_SIZE=100


@@ -0,0 +1,206 @@
#!/bin/bash
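# Create an EKS cluster with eksctl, then set up the IAM policies, service accounts
# and addons needed for the EBS CSI driver, external-dns and the ALB ingress controller,
# and finally write a kubeconfig to ./${CLUSTER_NAME}-kube-config.
# Usage: source env_FILE ; ./installer [N]
#   Passing a number N resumes at step N; earlier steps are echoed instead of executed.
#   On failure the script prints the exact command to resume with.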
. ./utils.sh
check_env
echo "Creating cluster:"
show_env
echo "Press ENTER to continue [or CTRL-C to exit]"
read enter
function cleanup()
{
#echo "Cleanup $cstep err $have_err!"
if [[ "$cstep" -ge 0 && "$have_err" -eq 1 ]] ; then
local nextstep
((nextstep=cstep + 1))
echo "To retry after the failed step, resume your install via $0 $nextstep"
fi
}
function nextstep()
{
((cstep++))
if [[ "${steps[$cstep]}" == "echo" ]] ; then
f=" - SKIPPED"
else
f=""
fi
logx "[$cstep] Starting step: $1$f"
}
function enabled()
{
if [[ "${steps[$cstep]}" == "echo" ]] ; then
return 1
fi
[ -n "$1" ] && logx "[$cstep] $1"
return 0
}
function err_handler()
{
have_err=1
#echo "Error!"
}
have_err=0
cstep=-1
trap cleanup EXIT
trap err_handler ERR
declare -a steps
max_steps=10
for ((i=0; i < $max_steps; i++)) ; do
steps[$i]=""
done
if [ -n "$1" ] ; then
for ((i=0; i < $1; i++)) ; do
steps[$i]="echo"
done
fi
logv start_install "$(date)"
cstep=0
#----------------------------------
# start the show:
set -e
set -x
#----------------------------------
echo "Determine caller identity"
if [ -n "$AWS_PROFILE" ] ; then
account_id=$(aws sts get-caller-identity --query Account --output text --profile $AWS_PROFILE)
else
account_id=$(aws sts get-caller-identity --query Account --output text)
fi
logv accountid $account_id
if [ -z "$account_id" ] ; then
echo "Unable to determine caller-identity!"
exit 1
fi
#----------------------------------
nextstep "Skip generating SSH Keypair id_rsa_${CLUSTER_NAME}"
if [ ! -f "id_rsa_${CLUSTER_NAME}" ] ; then
if enabled ; then
ssh-keygen -q -t rsa -N '' -f id_rsa_${CLUSTER_NAME} <<<y >/dev/null 2>&1
fi
else
echo "Skip generating SSH Keypair id_rsa_${CLUSTER_NAME} - exists"
fi
#----------------------------------
config_file="cluster.$CLUSTER_NAME.yaml"
nextstep "Generating cluster.yml file -> $config_file"
if enabled ; then
envsubst < cluster.yaml > $config_file
fi
#----------------------------------
nextstep "Creating $CLUSTER_NAME EKS cluster"
${steps[$cstep]} eksctl create cluster -f $config_file
#echo "Press ENTER to continue" ; read a
#----------------------------------
nextstep "Creating EBS CSI policy and SA"
role_name="${CLUSTER_NAME}-ebs-csi"
sa_name=ebs-csi-controller-sa
arn="arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
${steps[$cstep]} eksctl create iamserviceaccount \
--name $sa_name \
--namespace kube-system \
--cluster $CLUSTER_NAME \
--role-name $role_name \
--role-only \
--attach-policy-arn $arn \
--approve
#aws iam create-policy \
# --policy-name KMS_Key_For_Encryption_On_EBS_Policy \
# --policy-document file://kms-key-for-encryption-on-ebs.json \
# --no-cli-pager
#aws iam attach-role-policy \
# --policy-arn arn:aws:iam::$account_id:policy/KMS_Key_For_Encryption_On_EBS_Policy \
# --role-name AmazonEKS_EBS_CSI_DriverRole
arn="arn:aws:iam::${account_id}:role/${role_name}"
${steps[$cstep]} eksctl create addon \
--name aws-ebs-csi-driver \
--cluster $CLUSTER_NAME \
--service-account-role-arn $arn \
--force
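# Associate an IAM OIDC provider with the cluster only if one is not already registered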
oidc_id=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
if [ -n "$oidc_id" ] ; then
oidc_id=$(aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4)
fi
if [ -z "$oidc_id" ] ; then
${steps[$cstep]} eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve
fi
#----------------------------------
nextstep "Creating External DNS policy"
role_name="${CLUSTER_NAME}-external-dns"
sa_name="${role_name}-sa"
arn="arn:aws:iam::${account_id}:policy/${role_name}"
# Render the Route 53 policy from its template; fall back to all hosted zones ('*') if no zone id is set
[ -z "$CLUSTER_ZONE_ID" ] && CLUSTER_ZONE_ID='*'
envsubst < route53policy.json.tpl > route53policy.json
${steps[$cstep]} aws iam create-policy \
--policy-name $role_name \
--policy-document file://route53policy.json \
--no-cli-pager
${steps[$cstep]} eksctl create iamserviceaccount \
--name $sa_name \
--namespace kube-system \
--cluster $CLUSTER_NAME \
--role-name $role_name \
--attach-policy-arn $arn \
--override-existing-serviceaccounts \
--approve
#----------------------------------
nextstep "Creating ALB policy"
role_name="${CLUSTER_NAME}-alb-ingress"
sa_name="${role_name}-sa"
arn="arn:aws:iam::${account_id}:policy/${role_name}"
${steps[$cstep]} aws iam create-policy \
--policy-name $role_name \
--policy-document file://alb_ingress_policy.json \
--no-cli-pager
${steps[$cstep]} eksctl create iamserviceaccount \
--cluster $CLUSTER_NAME \
--namespace kube-system \
--name $sa_name \
--role-name $role_name \
--attach-policy-arn $arn \
--override-existing-serviceaccounts \
--approve
#----------------------------------
nextstep "Updating kube config file"
#aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION
${steps[$cstep]} aws eks update-kubeconfig \
--kubeconfig ./${CLUSTER_NAME}-kube-config \
--region $AWS_REGION \
--name $CLUSTER_NAME
#----------------------------------
set +xe
cstep=-1
logv endinstall "$(date)"
echo
echo "Cluster creation completed!"
echo
echo "Cluster info:"
# Point kubectl at the kubeconfig written above (the default kubeconfig is not modified)
kubectl --kubeconfig "./${CLUSTER_NAME}-kube-config" cluster-info
echo
echo "Nodes:"
kubectl --kubeconfig "./${CLUSTER_NAME}-kube-config" get nodes
echo
echo "Storage classes:"
kubectl --kubeconfig "./${CLUSTER_NAME}-kube-config" get sc
echo
echo "All pods:"
kubectl --kubeconfig "./${CLUSTER_NAME}-kube-config" get po -A
echo
echo "To update your current kube config run:"
echo " aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION"


@@ -0,0 +1,34 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ChangeResourceRecordSets"
],
"Resource": [
"arn:aws:route53:::hostedzone/${CLUSTER_ZONE_ID}"
]
},
{
"Effect": "Allow",
"Action": [
"route53:ListHostedZones",
"route53:ListHostedZonesByName",
"route53:ListResourceRecordSets"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"route53:GetChange"
],
"Resource": [
"arn:aws:route53:::change/*"
]
}
]
}


@@ -0,0 +1,84 @@
#!/bin/bash
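# Shared helpers sourced by ./installer and ./cleaner:
#   check_env - validate required environment variables and apply defaults
#   show_env  - print the effective cluster settings
#   logx/logv - log progress; logv also appends name=value pairs to ${CLUSTER_NAME}-logs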
function check_env()
{
if [ -z "$CLUSTER_NAME" ] ; then
echo "Missing CLUSTER_NAME definition"
echo "Make sure to set environment variables eg. source env_file"
exit 1
elif [ -z "$CLUSTER_INSTANCE_TYPE" ] ; then
echo "Missing CLUSTER_INSTANCE_TYPE definition"
echo "Make sure to set environment variables eg. source env_file"
exit 1
elif [ -z "$AWS_REGION" ] ; then
echo "Missing AWS_REGION definition"
echo "Make sure to set environment variables eg. source env_file"
exit 1
# elif [ -z "$AWS_REGION_REGISTRY" ] ; then
# echo "Missing AWS_REGION_REGISTRY definition"
# echo "Make sure to set environment variables eg. source env_file"
# exit 1
fi
if [ -z "$AWS_DEFAULT_REGION" ] ; then
export AWS_DEFAULT_REGION="$AWS_REGION"
#echo "Default AWS_DEFAULT_REGION to $AWS_DEFAULT_REGION"
fi
if [ -z "$CLUSTER_VERSION" ] ; then
export CLUSTER_VERSION="1.27"
echo "Default CLUSTER_VERSION to $CLUSTER_VERSION"
fi
if [ -z "$CLUSTER_NODES" ] ; then
export CLUSTER_NODES="1"
echo "Default CLUSTER_NODES to $CLUSTER_NODES"
fi
if [ -z "$CLUSTER_MIN_NODES" ] ; then
export CLUSTER_MIN_NODES="1"
echo "Default CLUSTER_MIN_NODES to $CLUSTER_MIN_NODES"
fi
if [ -z "$CLUSTER_MAX_NODES" ] ; then
export CLUSTER_MAX_NODES="3"
echo "Default CLUSTER_MAX_NODES to $CLUSTER_MAX_NODES"
fi
if [ -z "$CLUSTER_VOLUME_SIZE" ] ; then
export CLUSTER_VOLUME_SIZE="100"
echo "Default CLUSTER_VOLUME_SIZE to $CLUSTER_VOLUME_SIZE"
fi
if [ -z "$CLUSTER_ZONE_ID" ] ; then
echo "CLUSTER_ZONE_ID not set - external-dns may not work!"
fi
# if [ -z "$CLUSTER_FS_DRIVER" ] ; then
# export CLUSTER_FS_DRIVER="efs"
# echo "Default CLUSTER_FS_DRIVER to $CLUSTER_FS_DRIVER"
# fi
}
function show_env()
{
echo " - AWS profile: $AWS_PROFILE"
echo " - Region: $AWS_REGION"
echo " - Name: $CLUSTER_NAME"
echo " - Instance type: $CLUSTER_INSTANCE_TYPE"
echo " - Volume size: $CLUSTER_VOLUME_SIZE GiB"
echo " - Kubernetes version: $CLUSTER_VERSION"
echo " - # of nodes: $CLUSTER_NODES"
echo " - Min # of nodes: $CLUSTER_MIN_NODES"
echo " - Max # of nodes: $CLUSTER_MAX_NODES"
#echo " - AWS region registry: $AWS_REGION_REGISTRY"
#echo " - File System Driver: $CLUSTER_FS_DRIVER"
}
function logx()
{
local x="$1"
echo "-> $x"
}
function logv()
{
local nm="$1"
local val="$2"
echo "-> $nm = $val"
echo "${nm}=\"$val\"" >> ${CLUSTER_NAME}-logs
}


@@ -0,0 +1,4 @@
Attention: please do not use this Terraform module!
The EKS cluster is now created with ../../../eksctl/wifi-289708231103/tip-wlan-main instead!


@@ -12,7 +12,9 @@ terraform {
dynamodb_table = "terraform-state-lock"
encrypt = true
}
}
#}
# DISABLED - It's not safe to run any of this terraform
# EKS cluster was built with ../../../eksctl/wifi-289708231103/tip-wlan-main instead of this
resource "aws_key_pair" "wlan" {
key_name = "wlan"