initial commit

bsctl
2025-06-04 12:31:09 +02:00
commit 173b190ace
70 changed files with 4727 additions and 0 deletions

36
.gitignore vendored Normal file

@@ -0,0 +1,36 @@
# Ignore all main.auto.tfvars files in the main and subfolders
**/main.auto.tfvars
# Ignore Terraform state files
**/tfstate/
**/terraform.tfstate
**/terraform.tfstate.backup
# Ignore .terraform directories
**/.terraform/
**/.terraform.lock.hcl
# Ignore other common files
*.tfstate
*.tfstate.backup
*.tfvars
*.tfvars.json
*.log
*.bak
*.swp
*.tmp
# Ignore .envrc file
.envrc
**/.envrc
# Ignore cloud-init configuration files (contain sensitive data)
**/*.cfg
**/files/*.cfg
modules/*/files/*.cfg
# Ignore any generated cloud-init files
**/cloud-init-*.yml
**/cloud-init-*.yaml
**/userdata-*.yml
**/userdata-*.yaml

139
README.md Normal file

@@ -0,0 +1,139 @@
# Terraform Kamaji Node Pools
A comprehensive collection of Terraform modules for creating Kubernetes worker node pools across multiple cloud providers for [Kamaji](https://kamaji.clastix.io), the Control Plane Manager for Kubernetes.
The worker nodes created by this project automatically join Kamaji tenant clusters using secure bootstrap tokens and the [`yaki`](https://goyaki.clastix.io/) bootstrap script, providing a complete Hosted Managed Kubernetes solution.
## Supported Providers
| Provider | Technology | Description | Scaling | Status |
|----------|------------|-------------|---------|---------|
| **AWS** | Auto Scaling Groups | EC2 instances with automatic scaling and high availability | Automatic | Available |
| **Proxmox** | Virtual Machines | Direct VM management on Proxmox VE with flexible resource allocation | Manual | Available |
| **vSphere** | Virtual Machines | Enterprise-grade VMs on VMware vSphere/vCenter | Manual | Available |
| **vCloud** | vApps | Multi-tenant VMs on VMware Cloud Director with vApp isolation | Manual | Available |
| **Azure** | Virtual Machine Scale Sets | Azure VMs with automatic scaling and availability zones | Automatic | Planned |
## Bootstrap Token Management
This project includes a [bootstrap-token module](modules/bootstrap-token/README.md) that automatically connects to the Kamaji tenant cluster using the provided kubeconfig, generates a bootstrap token, and constructs the join command with the [`yaki`](https://goyaki.clastix.io/) bootstrap script.
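For reference, the join command assembled by the module takes roughly this shape; the endpoint, token, and Kubernetes version below are illustrative placeholders that are resolved at apply time:
```bash
# Placeholder values: JOIN_URL, JOIN_TOKEN and KUBERNETES_VERSION are generated per tenant cluster
wget -O- https://goyaki.clastix.io | \
  JOIN_URL=tenant-api.example.com:6443 \
  JOIN_TOKEN=abcdef.0123456789abcdef \
  KUBERNETES_VERSION=v1.32.0 \
  bash -s join
```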
## Naming Convention
Assuming you have a tenant called `foo` and a tenant cluster `tcp-charlie`, you can create several node pools (`application`, `default`, and `system`) as shown in the following structure:
```sh
foo
├── tcp-alpha
├── tcp-beta
└── tcp-charlie
├── application-pool
│ ├── tcp-charlie-application-node-00
│ └── tcp-charlie-application-node-01
├── default-pool
│ ├── tcp-charlie-default-node-00
│ ├── tcp-charlie-default-node-01
│ └── tcp-charlie-default-node-02
└── system-pool
├── tcp-charlie-system-node-00
└── tcp-charlie-system-node-01
```
## Project Structure
```
terraform-kamaji-node-pool/
├── modules/
│ ├── bootstrap-token/ # Shared bootstrap token generation
│ ├── aws-node-pool/ # AWS Auto Scaling Groups
│ ├── azure-node-pool/ # Azure Virtual Machine Scale Sets
│ ├── proxmox-node-pool/ # Proxmox VE virtual machines
│ ├── vsphere-node-pool/ # VMware vSphere VMs
│ ├── vcloud-node-pool/ # VMware Cloud Director vApps
│ ├── templates/ # Shared cloud-init templates
│ └── common/ # Common variable definitions
├── providers/
│ ├── aws/ # AWS provider implementation
│ ├── azure/ # Azure provider implementation
│ ├── proxmox/ # Proxmox provider implementation
│ ├── vsphere/ # vSphere provider implementation
│ └── vcloud/ # vCloud provider implementation
└── examples/
├── aws/ # AWS usage examples
├── azure/ # Azure usage examples
├── proxmox/ # Proxmox usage examples
├── vsphere/ # vSphere usage examples
└── vcloud/ # vCloud usage examples
```
## Quick Start
1. **Choose your provider**:
```bash
# Navigate to your preferred provider
cd providers/aws # for AWS Auto Scaling Groups
cd providers/azure # for Azure Virtual Machine Scale Sets
cd providers/proxmox # for Proxmox VE virtual machines
cd providers/vsphere # for VMware vSphere VMs
cd providers/vcloud # for VMware Cloud Director vApps
```
2. **Choose your deployment approach**:
- Use `providers/` for complete, ready-to-use implementations
- Use `modules/` for custom integrations
- Use `examples/` for reference configurations
3. **Configure your environment**:
```bash
# Copy sample configuration
cp main.auto.tfvars.sample main.auto.tfvars
# Edit configuration
vim main.auto.tfvars
```
4. **Set up authentication**:
```bash
# AWS
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
# Azure
export ARM_CLIENT_ID="your-client-id"
export ARM_CLIENT_SECRET="your-client-secret"
export ARM_SUBSCRIPTION_ID="your-subscription-id"
export ARM_TENANT_ID="your-tenant-id"
# Proxmox
export TF_VAR_proxmox_user="terraform@pve"
export TF_VAR_proxmox_password="your-password"
# vSphere
export TF_VAR_vsphere_username="your-username"
export TF_VAR_vsphere_password="your-password"
# vCloud
export TF_VAR_vcd_username="your-username"
export TF_VAR_vcd_password="your-password"
```
5. **Deploy**:
```bash
terraform init
terraform plan
terraform apply
```
## License
This project is released under the Apache License, Version 2.0.
## Contributing
This project follows infrastructure-as-code best practices and welcomes contributions.
Please ensure:
- Consistent module structure across providers
- Comprehensive variable documentation
- Proper output definitions
- Security-conscious defaults

67
examples/README.md Normal file

@@ -0,0 +1,67 @@
# Examples
Usage examples for Terraform Kamaji node pool modules.
## Available Examples
| Provider | Description |
|----------|-------------|
| `aws/example.tf` | EC2 Auto Scaling Groups |
| `proxmox/example.tf` | Proxmox VE virtual machines |
| `vsphere/example.tf` | VMware vSphere VMs |
| `vcloud/example.tf` | VMware Cloud Director vApps |
## Usage
1. **Copy example**:
```bash
cd examples/aws # or proxmox, vsphere, vcloud
cp example.tf main.tf
```
2. **Customize configuration**:
- Update cluster names and paths
- Configure provider credentials
- Adjust node pool specifications
3. **Set authentication**:
```bash
# AWS
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
# Proxmox
export TF_VAR_proxmox_password="your-password"
# vSphere
export TF_VAR_vsphere_password="your-password"
# vCloud
export TF_VAR_vcd_password="your-password"
```
4. **Deploy**:
```bash
terraform init
terraform apply
```
## Requirements
- Terraform >= 1.0
- Provider-specific credentials and infrastructure
- Existing Kamaji tenant cluster with kubeconfig
## Common Configuration
All examples require:
- `tenant_cluster_name` - Name of your Kamaji tenant cluster
- `kubeconfig_path` - Path to tenant cluster kubeconfig
- Provider-specific authentication variables
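In the bundled example files these surface as module arguments; a minimal illustration using the argument names from `examples/aws/example.tf`:
```hcl
module "aws_kamaji_node_pools" {
  source                 = "../../providers/aws"
  tenant_cluster_name    = "my-aws-cluster"
  tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
  # ...plus provider-specific settings such as aws_region, aws_zones and node_pools
}
```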
## Troubleshooting
- **Authentication**: Verify credentials and permissions
- **Templates**: Ensure VM/AMI templates exist and are accessible
- **Network**: Check connectivity and network configuration
- **Resources**: Verify sufficient resources in target environment

39
examples/aws/README.md Normal file

@@ -0,0 +1,39 @@
# AWS Example
Example configuration for deploying Kamaji node pools on AWS using Auto Scaling Groups.
## Usage
1. **Copy and customize**:
```bash
cp example.tf main.tf
# Edit main.tf with your configuration
```
2. **Set AWS credentials**:
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration
Update the following in `main.tf`:
- `tenant_cluster_name` - Your Kamaji tenant cluster name
- `kubeconfig_path` - Path to your tenant cluster kubeconfig
- `aws_region` and `aws_zones` - Your target AWS region and zones
- `ami_id` - Ubuntu AMI ID for your region
- Node pool specifications (instance type, size, etc.)
## Requirements
- Terraform >= 1.0
- AWS CLI configured with appropriate permissions
- Existing VPC with subnets
- Valid kubeconfig for Kamaji tenant cluster

88
examples/aws/example.tf Normal file

@@ -0,0 +1,88 @@
# Example: AWS Provider Usage
# This example shows how to use the AWS provider wrapper
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
# Configure the AWS provider
provider "aws" {
region = var.aws_region
}
# Configure the Kubernetes provider
provider "kubernetes" {
config_path = var.tenant_kubeconfig_path
}
# Use the AWS provider module
module "aws_kamaji_node_pools" {
source = "../../providers/aws"
# Cluster configuration
tenant_cluster_name = "my-aws-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
yaki_url = "https://goyaki.clastix.io"
# Node pools configuration
node_pools = [
{
name = "workers"
size = 3
min_size = 2
max_size = 10
node_disk_size = 50
disk_type = "gp3"
instance_type = "t3a.large"
ami_id = "ami-06147ccec7237575f"
public = true
}
]
# AWS configuration
aws_region = var.aws_region
aws_zones = ["eu-south-1a", "eu-south-1b", "eu-south-1c"]
aws_vpc_name = ["kamaji"]
tags = {
"ManagedBy" = "Terraform"
"Environment" = "production"
"Provider" = "AWS"
}
# SSH configuration
ssh_user = "ubuntu"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
}
# Variables
variable "aws_region" {
description = "AWS region"
type = string
default = "eu-south-1"
}
variable "tenant_kubeconfig_path" {
description = "Path to tenant cluster kubeconfig"
type = string
default = "~/.kube/config"
}
# Outputs
output "deployment_summary" {
description = "Deployment summary"
value = module.aws_kamaji_node_pools.deployment_summary
}
output "autoscaling_groups" {
description = "Auto Scaling Group details"
value = module.aws_kamaji_node_pools.autoscaling_groups
}


@@ -0,0 +1,39 @@
# Proxmox Example
Example configuration for deploying Kamaji node pools on Proxmox VE using virtual machines.
## Usage
1. **Copy and customize**:
```bash
cp example.tf main.tf
# Edit main.tf with your configuration
```
2. **Set Proxmox credentials**:
```bash
export TF_VAR_proxmox_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration
Update the following in `main.tf`:
- `tenant_cluster_name` - Your Kamaji tenant cluster name
- `kubeconfig_path` - Path to your tenant cluster kubeconfig
- `proxmox_host`, `proxmox_node`, `proxmox_api_url` - Your Proxmox configuration
- `proxmox_user` - Your Proxmox username
- VM template and network configuration
- Node pool specifications (memory, cores, disk size, etc.)
## Requirements
- Terraform >= 1.0
- Proxmox provider (Telmate/proxmox) >= 3.0.1-rc6
- SSH access to Proxmox host
- VM template with cloud-init support

118
examples/proxmox/example.tf Normal file

@@ -0,0 +1,118 @@
# Example: Proxmox Provider Usage
# This example shows how to use the Proxmox provider wrapper
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
proxmox = {
source = "Telmate/proxmox"
version = "3.0.1-rc6"
}
}
}
# Configure the Proxmox provider
provider "proxmox" {
pm_api_url = var.proxmox_api_url
pm_user = var.proxmox_user
pm_password = var.proxmox_password
pm_parallel = 1
pm_tls_insecure = true
pm_log_enable = false
pm_timeout = 600
}
# Configure the Kubernetes provider
provider "kubernetes" {
config_path = var.tenant_kubeconfig_path
}
# Use the Proxmox provider module
module "proxmox_kamaji_node_pools" {
source = "../../providers/proxmox"
# Cluster configuration
tenant_cluster_name = "my-proxmox-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
yaki_url = "https://goyaki.clastix.io"
# Node pools configuration
node_pools = [
{
name = "workers"
size = 3
network_cidr = "192.168.100.0/24"
network_gateway = "192.168.100.1"
network_offset = 20
vms_state = "started"
vms_agent = 1
vms_memory = 4096
vms_sockets = 1
vms_cores = 2
vms_vcpus = 2
vms_boot = "order=scsi0"
vms_scsihw = "virtio-scsi-single"
vms_disk_size = 20
vms_template = "ubuntu-noble"
}
]
# Proxmox configuration
proxmox_host = "my-proxmox-host.example.com"
proxmox_node = "pve-node1"
proxmox_api_url = var.proxmox_api_url
proxmox_user = var.proxmox_user
proxmox_password = var.proxmox_password
# Network configuration
nameserver = "8.8.8.8"
search_domain = ""
storage_disk = "local-lvm"
network_bridge = "vmbr0"
network_model = "virtio"
# SSH configuration
ssh_user = "ubuntu"
ssh_private_key_path = "~/.ssh/id_rsa"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
}
# Variables
variable "proxmox_api_url" {
description = "Proxmox API URL"
type = string
default = "https://my-proxmox-host:8006/api2/json"
}
variable "proxmox_user" {
description = "Proxmox username"
type = string
default = "terraform@pam"
sensitive = true
}
variable "proxmox_password" {
description = "Proxmox password"
type = string
sensitive = true
}
variable "tenant_kubeconfig_path" {
description = "Path to tenant cluster kubeconfig"
type = string
default = "~/.kube/config"
}
# Outputs
output "deployment_summary" {
description = "Deployment summary"
value = module.proxmox_kamaji_node_pools.deployment_summary
}
output "vm_details" {
description = "VM details"
value = module.proxmox_kamaji_node_pools.vm_details
}

40
examples/vcloud/README.md Normal file

@@ -0,0 +1,40 @@
# vCloud Example
Example configuration for deploying Kamaji node pools on VMware Cloud Director using vApps and VMs.
## Usage
1. **Copy and customize**:
```bash
cp example.tf main.tf
# Edit main.tf with your configuration
```
2. **Set vCloud credentials**:
```bash
export TF_VAR_vcd_username="your-username"
export TF_VAR_vcd_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration
Update the following in `main.tf`:
- `tenant_cluster_name` - Your Kamaji tenant cluster name
- `kubeconfig_path` - Path to your tenant cluster kubeconfig
- `vcd_url`, `vcd_org`, `vcd_vdc` - Your vCloud Director configuration
- `vcd_catalog`, `vcd_network` - Catalog and network settings
- VM template and network configuration
- Node pool specifications (memory, CPU, disk size, etc.)
## Requirements
- Terraform >= 1.0
- VMware Cloud Director provider >= 3.0
- vCloud Director access with appropriate permissions
- VM template with cloud-init support

133
examples/vcloud/example.tf Normal file

@@ -0,0 +1,133 @@
# Example: VMware Cloud Director Provider Usage
# This example shows how to use the vCloud provider wrapper
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
vcd = {
source = "vmware/vcd"
version = ">= 3.0"
}
}
}
# Configure the VMware Cloud Director provider
provider "vcd" {
user = var.vcd_username
password = var.vcd_password
url = var.vcd_url
org = var.vcd_org_name
vdc = var.vcd_vdc_name
allow_unverified_ssl = var.vcd_allow_insecure
logging = var.vcd_logging
}
# Configure the Kubernetes provider
provider "kubernetes" {
config_path = var.tenant_kubeconfig_path
}
# Use the vCloud provider module
module "vcloud_kamaji_node_pools" {
source = "../../providers/vcloud"
# Cluster configuration
tenant_cluster_name = "my-vcloud-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
yaki_url = "https://goyaki.clastix.io"
# Node pools configuration
node_pools = [
{
name = "workers"
size = 3
node_cpus = 2
node_cpu_cores = 2
node_memory = 4096
node_disk_size = 50
node_disk_storage_profile = "Standard"
network_name = "MyNetwork"
network_adapter_type = "VMXNET3"
ip_allocation_mode = "DHCP"
template_name = "ubuntu-24.04-template"
}
]
# VMware Cloud Director configuration
vcd_url = var.vcd_url
vcd_username = var.vcd_username
vcd_password = var.vcd_password
vcd_org_name = "MyOrganization"
vcd_vdc_name = "MyVDC"
vcd_catalog_org_name = "MyOrganization"
vcd_catalog_name = "Templates"
vcd_allow_insecure = false
vcd_logging = false
# SSH configuration
ssh_user = "ubuntu"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
}
# Variables
variable "vcd_username" {
description = "VMware Cloud Director username"
type = string
sensitive = true
}
variable "vcd_password" {
description = "VMware Cloud Director password"
type = string
sensitive = true
}
variable "vcd_url" {
description = "VMware Cloud Director URL"
type = string
}
variable "vcd_org_name" {
description = "VMware Cloud Director organization name"
type = string
default = "MyOrganization"
}
variable "vcd_vdc_name" {
description = "VMware Cloud Director VDC name"
type = string
default = "MyVDC"
}
variable "vcd_allow_insecure" {
description = "Allow unverified SSL certificates"
type = bool
default = false
}
variable "vcd_logging" {
description = "Enable debug logging"
type = bool
default = false
}
variable "tenant_kubeconfig_path" {
description = "Path to tenant cluster kubeconfig"
type = string
default = "~/.kube/config"
}
# Outputs
output "deployment_summary" {
description = "Deployment summary"
value = module.vcloud_kamaji_node_pools.node_pool_creation_summary
}
output "vapp_details" {
description = "vApp details"
value = module.vcloud_kamaji_node_pools.vapp_details
}


@@ -0,0 +1,40 @@
# vSphere Example
Example configuration for deploying Kamaji node pools on VMware vSphere using virtual machines.
## Usage
1. **Copy and customize**:
```bash
cp example.tf main.tf
# Edit main.tf with your configuration
```
2. **Set vSphere credentials**:
```bash
export TF_VAR_vsphere_username="your-username"
export TF_VAR_vsphere_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration
Update the following in `main.tf`:
- `tenant_cluster_name` - Your Kamaji tenant cluster name
- `kubeconfig_path` - Path to your tenant cluster kubeconfig
- `vsphere_server`, `vsphere_datacenter`, `vsphere_cluster` - Your vSphere configuration
- `vsphere_datastore`, `vsphere_network` - Storage and network settings
- VM template and network configuration
- Node pool specifications (memory, CPU, disk size, etc.)
## Requirements
- Terraform >= 1.0
- VMware vSphere provider >= 2.0
- vCenter/ESXi access with appropriate permissions
- VM template with cloud-init support

117
examples/vsphere/example.tf Normal file

@@ -0,0 +1,117 @@
# Example: vSphere Provider Usage
# This example shows how to use the vSphere provider wrapper
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
vsphere = {
source = "vmware/vsphere"
version = "~> 2.13.0"
}
}
}
# Configure the vSphere provider
provider "vsphere" {
user = var.vsphere_username
password = var.vsphere_password
vsphere_server = var.vsphere_server
allow_unverified_ssl = var.vsphere_allow_unverified_ssl
}
# Configure the Kubernetes provider
provider "kubernetes" {
config_path = var.tenant_kubeconfig_path
}
# Use the vSphere provider module
module "vsphere_kamaji_node_pools" {
source = "../../providers/vsphere"
# Cluster configuration
tenant_cluster_name = "my-vsphere-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
yaki_url = "https://goyaki.clastix.io"
# Node pools configuration
node_pools = [
{
name = "workers"
size = 3
node_memory = 4096
node_cores = 2
node_disk_size = 50
node_guest = "ubuntu64Guest"
network_cidr = "192.168.100.0/24"
network_gateway = "192.168.100.1"
network_offset = 10
}
]
# vSphere configuration
vsphere_server = var.vsphere_server
vsphere_username = var.vsphere_username
vsphere_password = var.vsphere_password
vsphere_datacenter = "Datacenter"
vsphere_compute_cluster = "Cluster"
vsphere_datastore = "datastore1"
vsphere_network = "VM Network"
vsphere_content_library = "Templates"
vsphere_content_library_item = "ubuntu-24.04-template"
vsphere_resource_pool = "Resources"
vsphere_root_folder = "Kubernetes"
vsphere_allow_unverified_ssl = true
vsphere_plus_license = false
# Network configuration
dns_resolvers = ["8.8.8.8", "8.8.4.4"]
# SSH configuration
ssh_user = "ubuntu"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
}
# Variables
variable "vsphere_username" {
description = "vSphere username"
type = string
sensitive = true
}
variable "vsphere_password" {
description = "vSphere password"
type = string
sensitive = true
}
variable "vsphere_server" {
description = "vSphere server"
type = string
}
variable "vsphere_allow_unverified_ssl" {
description = "Allow unverified SSL certificates"
type = bool
default = false
}
variable "tenant_kubeconfig_path" {
description = "Path to tenant cluster kubeconfig"
type = string
default = "~/.kube/config"
}
# Outputs
output "deployment_summary" {
description = "Deployment summary"
value = module.vsphere_kamaji_node_pools.deployment_summary
}
output "node_details" {
description = "Node details"
value = module.vsphere_kamaji_node_pools.vm_details
}

16
license.txt Normal file

@@ -0,0 +1,16 @@
/*
Copyright 2025 Clastix Labs.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/


@@ -0,0 +1,69 @@
# AWS Node Pool Module
Creates AWS Auto Scaling Groups for Kubernetes worker nodes in Kamaji tenant clusters.
## Usage
```hcl
module "aws_node_pool" {
source = "../../modules/aws-node-pool"
# Cluster configuration
tenant_cluster_name = "my-cluster"
pool_name = "workers"
pool_size = 3
# Instance configuration
instance_type = "t3a.large"
ami_id = "ami-06147ccec7237575f"
node_disk_size = 50
# AWS configuration
aws_region = "us-west-2"
aws_zones = ["us-west-2a", "us-west-2b"]
aws_vpc_name = ["my-vpc"]
# SSH configuration
ssh_public_key_path = "~/.ssh/id_rsa.pub"
# Bootstrap command
runcmd = "kubeadm join cluster-api:6443 --token abc123.xyz789"
}
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `pool_name` | `string` | Required | Node pool name |
| `pool_size` | `number` | `3` | Number of instances |
| `pool_min_size` | `number` | `1` | Minimum instances |
| `pool_max_size` | `number` | `10` | Maximum instances |
| `instance_type` | `string` | `"t3a.medium"` | EC2 instance type |
| `ami_id` | `string` | Required | AMI ID for instances |
| `node_disk_size` | `number` | `20` | EBS volume size (GB) |
| `node_disk_type` | `string` | `"gp3"` | EBS volume type |
| `aws_region` | `string` | Required | AWS region |
| `aws_zones` | `list(string)` | Required | Availability zones |
| `aws_vpc_name` | `list(string)` | Required | VPC name filter |
| `public` | `bool` | `true` | Use public subnets |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `runcmd` | `string` | Required | Bootstrap command |
| `tags` | `map(string)` | `{}` | Additional tags |
## Outputs
- `autoscaling_group_details` - Auto Scaling Group information
- `launch_template_details` - Launch template configuration
- `security_group_details` - Security group information
- `instance_details` - Instance configuration details
- `deployment_summary` - Human-readable deployment summary
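When calling the module directly, these structured outputs can be re-exported from the root configuration; a minimal sketch using the `autoscaling_group_details` attributes defined in this module's `outputs.tf` (the output name `workers_asg` is only an example):
```hcl
output "workers_asg" {
  description = "Name and capacity bounds of the workers Auto Scaling Group"
  value = {
    name     = module.aws_node_pool.autoscaling_group_details.name
    min_size = module.aws_node_pool.autoscaling_group_details.min_size
    max_size = module.aws_node_pool.autoscaling_group_details.max_size
  }
}
```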
## Requirements
- Terraform >= 1.0
- AWS Provider >= 5.0
- Existing VPC with subnets
- Ubuntu AMI with cloud-init support


@@ -0,0 +1,44 @@
# =============================================================================
# DATA SOURCES
# =============================================================================
data "aws_vpc" "tenant" {
filter {
name = "tag:Name"
values = var.aws_vpc_name
}
}
data "aws_subnets" "tenant_subnets" {
filter {
name = "vpc-id"
values = [data.aws_vpc.tenant.id]
}
filter {
name = "availability-zone"
values = var.aws_zones
}
}
# =============================================================================
# CLOUD-INIT CONFIGURATION
# =============================================================================
data "cloudinit_config" "node_cloud_init" {
gzip = true
base64_encode = true
part {
filename = "cloud-config.yaml"
content_type = "text/cloud-config"
content = templatefile("${path.module}/../templates/cloud-init/userdata.yml.tpl", {
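# Empty hostname: the shared cloud-init template only emits a hostname stanza when this value is non-empty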
hostname = ""
runcmd = var.runcmd
ssh_user = var.ssh_user
ssh_public_key = file(pathexpand(var.ssh_public_key_path))
})
}
}


@@ -0,0 +1,227 @@
# =============================================================================
# TERRAFORM CONFIGURATION
# =============================================================================
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
cloudinit = {
source = "hashicorp/cloudinit"
}
}
}
# =============================================================================
# IAM CONFIGURATION
# =============================================================================
# IAM Role and Policy for Node Pool
resource "aws_iam_policy" "node_policy" {
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
path = "/"
description = "Policy for role ${var.tenant_cluster_name}-${var.pool_name}"
policy = file("${path.module}/../templates/policies/aws-node-policy.json.tpl")
}
resource "aws_iam_role" "node_role" {
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "node-attach" {
name = "node-attachment-${var.tenant_cluster_name}-${var.pool_name}"
roles = [aws_iam_role.node_role.name]
policy_arn = aws_iam_policy.node_policy.arn
}
resource "aws_iam_instance_profile" "node_profile" {
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
role = aws_iam_role.node_role.name
}
# =============================================================================
# SECURITY GROUP CONFIGURATION
# =============================================================================
# Security Group for Kubernetes Nodes
resource "aws_security_group" "kubernetes" {
vpc_id = data.aws_vpc.tenant.id
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
tags = merge(
{
"Name" = "${var.tenant_cluster_name}-${var.pool_name}"
},
var.tags,
)
lifecycle {
create_before_destroy = true
ignore_changes = [
description,
]
}
}
# Allow outgoing connectivity
resource "aws_security_group_rule" "allow_all_outbound_from_kubernetes" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.kubernetes.id
}
# Allow the security group members to talk with each other without restrictions
resource "aws_security_group_rule" "allow_cluster_crosstalk" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
source_security_group_id = aws_security_group.kubernetes.id
security_group_id = aws_security_group.kubernetes.id
}
# Allow SSH access from your laptop
resource "aws_security_group_rule" "allow_ssh_inbound" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Change this to your IP for better security
security_group_id = aws_security_group.kubernetes.id
}
# =============================================================================
# SSH KEY PAIR
# =============================================================================
# SSH Key Pair for Node Pool
resource "aws_key_pair" "keypair" {
key_name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
public_key = file(pathexpand(var.ssh_public_key_path))
}
# =============================================================================
# LAUNCH TEMPLATE
# =============================================================================
# Launch Template for Node Pool
resource "aws_launch_template" "nodes" {
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
image_id = var.ami_id
instance_type = var.instance_type
key_name = aws_key_pair.keypair.key_name
iam_instance_profile {
name = aws_iam_instance_profile.node_profile.name
}
network_interfaces {
associate_public_ip_address = var.public
security_groups = [aws_security_group.kubernetes.id]
delete_on_termination = true
}
user_data = data.cloudinit_config.node_cloud_init.rendered
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_type = var.node_disk_type
volume_size = var.node_disk_size
delete_on_termination = true
}
}
tag_specifications {
resource_type = "instance"
tags = merge(
{
"Name" = "${var.tenant_cluster_name}-${var.pool_name}"
},
var.tags,
)
}
lifecycle {
create_before_destroy = true
}
}
# =============================================================================
# AUTO SCALING GROUP
# =============================================================================
resource "aws_autoscaling_group" "nodes" {
vpc_zone_identifier = data.aws_subnets.tenant_subnets.ids
name_prefix = "${var.tenant_cluster_name}-${var.pool_name}-"
max_size = var.pool_max_size
min_size = var.pool_min_size
desired_capacity = var.pool_size
launch_template {
id = aws_launch_template.nodes.id
version = "$Latest"
}
dynamic "instance_refresh" {
for_each = var.enable_instance_refresh ? [1] : []
content {
strategy = "Rolling"
preferences {
min_healthy_percentage = var.instance_refresh_min_healthy_percentage
instance_warmup = var.instance_refresh_warmup
}
triggers = ["tag"]
}
}
lifecycle {
ignore_changes = [desired_capacity]
}
tag {
key = "Name"
value = "${var.tenant_cluster_name}-${var.pool_name}"
propagate_at_launch = true
}
dynamic "tag" {
for_each = var.tags
content {
key = tag.key
value = tag.value
propagate_at_launch = true
}
}
depends_on = [
aws_launch_template.nodes,
aws_security_group.kubernetes,
aws_security_group_rule.allow_all_outbound_from_kubernetes,
aws_security_group_rule.allow_cluster_crosstalk,
aws_security_group_rule.allow_ssh_inbound
]
}


@@ -0,0 +1,28 @@
# =============================================================================
# AUTO SCALING GROUP
# =============================================================================
output "autoscaling_group_details" {
description = "Auto Scaling Group details"
value = {
name = aws_autoscaling_group.nodes.name
arn = aws_autoscaling_group.nodes.arn
min_size = aws_autoscaling_group.nodes.min_size
max_size = aws_autoscaling_group.nodes.max_size
desired_capacity = aws_autoscaling_group.nodes.desired_capacity
}
}
# =============================================================================
# LAUNCH TEMPLATE
# =============================================================================
output "launch_template_details" {
description = "Launch Template details"
value = {
id = aws_launch_template.nodes.id
name = aws_launch_template.nodes.name
ami_id = aws_launch_template.nodes.image_id
instance_type = aws_launch_template.nodes.instance_type
}
}


@@ -0,0 +1,144 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
# Name of the tenant cluster
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
default = "charlie"
}
# =============================================================================
# POOL CONFIGURATION
# =============================================================================
variable "runcmd" {
description = "Command to run on the node at first boot time"
type = string
default = "echo 'Hello, World!'"
}
variable "pool_name" {
description = "Name of the node pool"
type = string
default = "default"
}
variable "pool_size" {
description = "The size of the node pool"
type = number
default = 3
}
variable "pool_min_size" {
description = "The minimum size of the node pool"
type = number
default = 1
}
variable "pool_max_size" {
description = "The maximum size of the node pool"
type = number
default = 9
}
# =============================================================================
# AWS CONFIGURATION
# =============================================================================
variable "aws_region" {
description = "Region where resources are created"
default = "eu-south-1"
}
variable "aws_zones" {
type = list(string)
description = "AWS AZs where worker nodes should be created"
default = ["eu-south-1a", "eu-south-1b", "eu-south-1c"]
}
variable "instance_type" {
description = "Type of instance for workers"
default = "t3a.medium"
}
variable "ami_id" {
description = "AMI ID to use for the instances."
type = string
}
variable "aws_vpc_name" {
description = "The name of the AWS VPC to filter"
type = list(string)
default = ["kamaji"]
}
variable "public" {
description = "Whether to associate a public IP address with instances"
type = bool
default = true
}
variable "tags" {
description = "Tags used for AWS resources"
type = map(string)
default = {
"ManagedBy" = "Clastix"
"CreatedBy" = "Terraform"
}
}
# =============================================================================
# NODE CONFIGURATION
# =============================================================================
variable "node_disk_size" {
description = "Disk size for each node in GB"
type = number
default = 20
}
variable "node_disk_type" {
description = "EBS volume type for each node (gp2, gp3, io1, io2)"
type = string
default = "gp3"
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for the nodes"
type = string
default = "ubuntu"
}
variable "ssh_public_key_path" {
description = "Path to the SSH public key"
type = string
default = "~/.ssh/id_rsa.pub"
}
# =============================================================================
# INSTANCE REFRESH CONFIGURATION
# =============================================================================
variable "enable_instance_refresh" {
description = "Enable automatic instance refresh when launch template changes"
type = bool
default = true
}
variable "instance_refresh_min_healthy_percentage" {
description = "Minimum percentage of instances that must remain healthy during instance refresh"
type = number
default = 50
}
variable "instance_refresh_warmup" {
description = "Number of seconds until a newly launched instance is configured and ready to use"
type = number
default = 300
}


@@ -0,0 +1,35 @@
# Bootstrap Token Module
Generates Kubernetes bootstrap tokens for joining worker nodes to Kamaji tenant clusters.
## Usage
```hcl
module "bootstrap_token" {
source = "../../modules/bootstrap-token"
kubeconfig_path = "~/.kube/tenant-cluster.kubeconfig"
yaki_url = "https://goyaki.clastix.io" # optional
}
```
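The `join_cmd` output can then be passed to a node pool module as its bootstrap `runcmd`; a minimal sketch reusing the AWS node pool module from this repository:
```hcl
module "aws_node_pool" {
  source = "../../modules/aws-node-pool"
  # ...cluster, AWS and SSH settings as documented in the aws-node-pool README...

  # Nodes run the generated yaki join command at first boot
  runcmd = module.bootstrap_token.join_cmd
}
```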
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `kubeconfig_path` | `string` | `"~/.kube/config"` | Path to kubeconfig file |
| `yaki_url` | `string` | `"https://goyaki.clastix.io"` | YAKI bootstrap script URL |
## Outputs
- `join_cmd` - Complete join command for nodes
- `token_id` - Bootstrap token ID
- `token_secret` - Bootstrap token secret
- `ca_cert_hash` - CA certificate hash
- `cluster_endpoint` - Kubernetes API server endpoint
- `yaki_url` - YAKI bootstrap script URL
- `kubeconfig_path` - Path to kubeconfig file
- `token_ttl` - Token time-to-live
- `token_usage` - Token usage description
## Requirements


@@ -0,0 +1,93 @@
# =============================================================================
# DATA SOURCES
# =============================================================================
# Read the kubeconfig file from the specified path
data "local_file" "tenant_kubeconfig" {
filename = var.kubeconfig_path
}
# Extract the current Kubernetes server version
data "kubernetes_server_version" "current" {}
# Extract the API server endpoint from the cluster-info ConfigMap
data "kubernetes_config_map" "cluster_info" {
metadata {
name = "cluster-info"
namespace = "kube-public"
}
}
# =============================================================================
# RANDOM TOKEN GENERATION
# =============================================================================
# Generate a random token ID
resource "random_string" "token_id" {
length = 6
upper = false
special = false
}
# Generate a random token secret
resource "random_string" "token_secret" {
length = 16
upper = false
special = false
}
# =============================================================================
# KUBERNETES BOOTSTRAP TOKEN
# =============================================================================
# Create the bootstrap token secret in the Kubernetes cluster
resource "kubernetes_secret" "bootstrap_token" {
metadata {
name = "bootstrap-token-${random_string.token_id.result}"
namespace = "kube-system"
}
data = {
"token-id" = random_string.token_id.result
"token-secret" = random_string.token_secret.result
"usage-bootstrap-authentication" = "true"
"usage-bootstrap-signing" = "true"
"auth-extra-groups" = "system:bootstrappers:kubeadm:default-node-token"
"expiration" = timeadd(timestamp(), "1h")
}
type = "bootstrap.kubernetes.io/token"
# Ensure the token ID and secret are generated before creating the secret
depends_on = [
random_string.token_id,
random_string.token_secret
]
# Ensure the secret is recreated if it already exists
lifecycle {
create_before_destroy = true
}
}
# =============================================================================
# JOIN COMMAND PREPARATION
# =============================================================================
# Prepare the join command for bootstrapping nodes
locals {
# Decode the kubeconfig data from the cluster-info ConfigMap
kubeconfig = yamldecode(data.kubernetes_config_map.cluster_info.data["kubeconfig"])
# Extract the join URL from the kubeconfig
join_url = replace(local.kubeconfig.clusters[0].cluster.server, "https://", "")
# Combine the token ID and secret to form the join token
join_token = "${random_string.token_id.result}.${random_string.token_secret.result}"
# Format the Kubernetes version
kubernetes_version = format("v%s", data.kubernetes_server_version.current.version)
# Construct the join command for bootstrapping nodes
join_cmd = "wget -O- ${var.yaki_url} | JOIN_URL=${local.join_url} JOIN_TOKEN=${local.join_token} KUBERNETES_VERSION=${local.kubernetes_version} bash -s join"
}


@@ -0,0 +1,9 @@
# =============================================================================
# JOIN COMMAND
# =============================================================================
output "join_cmd" {
description = "Complete join command for bootstrapping nodes"
value = local.join_cmd
sensitive = true
}


@@ -0,0 +1,19 @@
# =============================================================================
# KUBERNETES CONFIGURATION
# =============================================================================
variable "kubeconfig_path" {
description = "Path to the kubeconfig file"
type = string
default = "~/.kube/config"
}
# =============================================================================
# BOOTSTRAP CONFIGURATION
# =============================================================================
variable "yaki_url" {
description = "URL to the YAKI bootstrap script"
type = string
default = "https://goyaki.clastix.io"
}

31
modules/common/outputs.tf Normal file

@@ -0,0 +1,31 @@
# Standard node pool summary output
output "node_pool_summary" {
description = "Summary of node pool creation"
value = {
tenant_cluster_name = var.tenant_cluster_config.name
kubeconfig_path = var.tenant_cluster_config.kubeconfig_path
pool_name = var.node_pool_config.name
pool_size = var.node_pool_config.size
min_size = var.node_pool_config.min_size
max_size = var.node_pool_config.max_size
ssh_user = var.ssh_config.user
}
}
# Standard success message
output "success_message" {
description = "Success message for node pool creation"
value = <<-EOT
Kamaji Node Pool Successfully Created!
Tenant Cluster: ${var.tenant_cluster_config.name}
Pool Name: ${var.node_pool_config.name}
Pool Size: ${var.node_pool_config.size}
Kubeconfig: ${var.tenant_cluster_config.kubeconfig_path}
To check node status:
kubectl --kubeconfig ${var.tenant_cluster_config.kubeconfig_path} get nodes
EOT
}


@@ -0,0 +1,53 @@
# Tenant Cluster Configuration
variable "tenant_cluster_config" {
description = "Tenant cluster configuration"
type = object({
name = string
kubeconfig_path = string
})
validation {
condition = length(var.tenant_cluster_config.name) > 0
error_message = "Tenant cluster name cannot be empty."
}
}
# SSH Configuration
variable "ssh_config" {
description = "SSH configuration for node access"
type = object({
user = string
public_key_path = string
private_key_path = optional(string, "")
})
validation {
condition = length(var.ssh_config.user) > 0
error_message = "SSH user cannot be empty."
}
}
# Node Pool Configuration
variable "node_pool_config" {
description = "Node pool configuration"
type = object({
name = string
size = number
min_size = optional(number, 1)
max_size = optional(number, 10)
})
validation {
condition = var.node_pool_config.size >= var.node_pool_config.min_size
error_message = "Pool size must be greater than or equal to min_size."
}
validation {
condition = var.node_pool_config.size <= var.node_pool_config.max_size
error_message = "Pool size must be less than or equal to max_size."
}
}
# Bootstrap Configuration
variable "bootstrap_config" {
description = "Bootstrap configuration"
type = object({
yaki_url = optional(string, "https://goyaki.clastix.io")
})
}


@@ -0,0 +1,88 @@
# Proxmox Node Pool Module
Creates virtual machines on Proxmox VE for Kubernetes worker nodes in Kamaji tenant clusters.
## Usage
```hcl
module "proxmox_node_pool" {
source = "../../modules/proxmox-node-pool"
# Cluster configuration
tenant_cluster_name = "my-cluster"
pool_name = "workers"
pool_size = 3
# Network configuration
network_cidr = "10.10.10.0/24"
network_gateway = "10.10.10.1"
network_bridge = "vmbr0"
search_domain = "example.com"
# VM configuration
vms_template = "ubuntu-24.04-template"
vms_memory = 2048
vms_cores = 2
vms_disk_size = 20
# Proxmox configuration
proxmox_host = "proxmox.example.com"
proxmox_node = "pve"
proxmox_api_url = "https://proxmox.example.com:8006/api2/json"
proxmox_user = "terraform@pve"
proxmox_password = var.proxmox_password
# SSH configuration
ssh_private_key_path = "~/.ssh/id_rsa"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
# Bootstrap command
runcmd = "kubeadm join cluster-api:6443 --token abc123.xyz789"
}
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `pool_name` | `string` | `"default"` | Node pool name |
| `pool_size` | `number` | `3` | Number of VMs |
| `network_cidr` | `string` | `"10.10.10.0/24"` | Network CIDR |
| `network_gateway` | `string` | `"10.10.10.1"` | Network gateway |
| `network_offset` | `number` | `10` | IP address offset |
| `network_bridge` | `string` | Required | Proxmox network bridge |
| `network_model` | `string` | `"virtio"` | Network interface model |
| `nameserver` | `string` | `"8.8.8.8"` | DNS resolver |
| `search_domain` | `string` | Required | DNS search domain |
| `vms_template` | `string` | Required | VM template name |
| `vms_state` | `string` | `"started"` | VM state |
| `vms_agent` | `number` | `1` | QEMU Guest Agent |
| `vms_sockets` | `number` | `1` | CPU sockets |
| `vms_cores` | `number` | `2` | CPU cores per socket |
| `vms_vcpus` | `number` | `0` | vCPUs (0 = auto) |
| `vms_memory` | `number` | `1024` | Memory (MB) |
| `vms_boot` | `string` | `"order=scsi0"` | Boot order |
| `vms_scsihw` | `string` | `"virtio-scsi-single"` | SCSI controller |
| `storage_disk` | `string` | `"local"` | Storage location |
| `vms_disk_size` | `number` | `16` | Disk size (GB) |
| `proxmox_host` | `string` | Required | Proxmox hostname/IP |
| `proxmox_node` | `string` | Required | Proxmox node name |
| `proxmox_api_url` | `string` | Required | Proxmox API URL |
| `proxmox_user` | `string` | Required | Proxmox user |
| `proxmox_password` | `string` | Required | Proxmox password |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_private_key_path` | `string` | `"~/.ssh/id_rsa"` | SSH private key path |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `runcmd` | `string` | `"echo 'Hello, World!'"` | Bootstrap command |
## Outputs
- `vm_details` - VM information (name, IP, memory, CPU, state)
## Requirements
- Terraform >= 1.0
- Proxmox provider (Telmate/proxmox) >= 3.0.1-rc6
- SSH access to Proxmox host
- VM template with cloud-init support


@@ -0,0 +1,6 @@
# =============================================================================
# DATA SOURCES
# =============================================================================
# No data sources required for this module
# All configuration is passed via variables


@@ -0,0 +1,144 @@
# =============================================================================
# TERRAFORM CONFIGURATION
# =============================================================================
terraform {
required_providers {
# https://github.com/telmate/terraform-provider-proxmox
proxmox = {
source = "Telmate/proxmox"
version = "3.0.1-rc6"
}
}
}
# =============================================================================
# LOCALS
# =============================================================================
locals {
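# One string index per node ("0", "1", ...): each index becomes a VM and its matching cloud-init snippet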
nodes = toset([for n in range(var.pool_size) : format("%s", n)])
template_path = "${path.module}/../templates/cloud-init/userdata.yml.tpl"
}
# =============================================================================
# CLOUD-INIT FILE GENERATION
# =============================================================================
resource "local_file" "cloud_init_user_data_file" {
for_each = local.nodes
content = templatefile(local.template_path, {
hostname = "${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
ssh_user = var.ssh_user
ssh_public_key = file(pathexpand(var.ssh_public_key_path))
runcmd = var.runcmd
})
filename = "${path.module}/files/${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}.cfg"
}
# =============================================================================
# CLOUD-INIT FILE TRANSFER
# =============================================================================
resource "null_resource" "cloud_init_config" {
for_each = local.nodes
connection {
type = "ssh"
user = "root"
private_key = file(pathexpand(var.ssh_private_key_path))
host = var.proxmox_host
timeout = "30s"
}
# Ensure snippets directory exists
provisioner "remote-exec" {
inline = [
"mkdir -p /var/lib/vz/snippets",
"echo 'Snippets directory ready'"
]
on_failure = fail
}
# Transfer cloud-init file
provisioner "file" {
source = local_file.cloud_init_user_data_file[each.key].filename
destination = "/var/lib/vz/snippets/${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}.yml"
}
depends_on = [local_file.cloud_init_user_data_file]
}
# =============================================================================
# PROXMOX RESOURCES
# =============================================================================
resource "proxmox_pool" "server_pool" {
poolid = "${var.tenant_cluster_name}-${var.pool_name}-pool"
}
resource "proxmox_vm_qemu" "node" {
for_each = local.nodes
# Basic Configuration
name = "${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
target_node = var.proxmox_node
pool = proxmox_pool.server_pool.poolid
clone = var.vms_template
agent = var.vms_agent
vm_state = var.vms_state
onboot = false
# CPU Configuration
sockets = var.vms_sockets
cores = var.vms_cores
vcpus = var.vms_vcpus
# Memory Configuration
memory = var.vms_memory
# Boot Configuration
boot = var.vms_boot
scsihw = var.vms_scsihw
# Network Configuration
network {
id = 0
bridge = var.network_bridge
model = var.network_model
}
# Disk Configuration
disks {
scsi {
scsi0 {
disk {
storage = var.storage_disk
size = var.vms_disk_size
}
}
}
ide {
ide1 {
cloudinit {
storage = var.storage_disk
}
}
}
}
# Cloud-Init Configuration
ciupgrade = false
cicustom = "user=local:snippets/${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}.yml"
searchdomain = var.search_domain
nameserver = var.nameserver
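# Static IP per node: cidrhost() picks host (index + network_offset) within network_cidr, keeping its prefix length and the configured gateway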
ipconfig0 = "ip=${cidrhost(var.network_cidr, tonumber(each.key) + var.network_offset)}/${split("/", var.network_cidr)[1]},gw=${var.network_gateway}"
skip_ipv6 = true
depends_on = [
proxmox_pool.server_pool,
null_resource.cloud_init_config
]
}


@@ -0,0 +1,17 @@
# =============================================================================
# VM DETAILS
# =============================================================================
output "vm_details" {
description = "Virtual machine details"
value = [
for node in proxmox_vm_qemu.node : {
name = node.name
ip_address = node.default_ipv4_address
memory_mb = node.memory
cpu_cores = node.cores
cpu_sockets = node.sockets
state = node.vm_state
}
]
}


@@ -0,0 +1,195 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
variable "pool_name" {
description = "Name of the node pool"
type = string
default = "default"
}
variable "pool_size" {
description = "Number of nodes in the pool"
type = number
default = 3
}
variable "runcmd" {
description = "Command to run on nodes at first boot"
type = string
default = "echo 'Hello, World!'"
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for node access"
type = string
default = "ubuntu"
}
variable "ssh_private_key_path" {
description = "Path to SSH private key"
type = string
default = "~/.ssh/id_rsa"
}
variable "ssh_public_key_path" {
description = "Path to SSH public key"
type = string
default = "~/.ssh/id_rsa.pub"
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
variable "network_cidr" {
description = "CIDR block for the network"
type = string
default = "10.10.10.0/24"
}
variable "network_gateway" {
description = "Network gateway address"
type = string
default = "10.10.10.1"
}
variable "network_offset" {
description = "IP address offset for nodes"
type = number
default = 10
}
variable "network_bridge" {
description = "Proxmox network bridge"
type = string
}
variable "network_model" {
description = "Network interface model"
type = string
default = "virtio"
}
variable "nameserver" {
description = "DNS resolver for nodes"
type = string
default = "8.8.8.8"
}
variable "search_domain" {
description = "DNS search domain"
type = string
}
# =============================================================================
# VM CONFIGURATION
# =============================================================================
variable "vms_template" {
description = "VM template name"
type = string
}
variable "vms_state" {
description = "Desired VM state"
type = string
default = "started"
}
variable "vms_agent" {
description = "Enable QEMU Guest Agent (1=enabled, 0=disabled)"
type = number
default = 1
}
# CPU Configuration
variable "vms_sockets" {
description = "Number of CPU sockets"
type = number
default = 1
}
variable "vms_cores" {
description = "CPU cores per socket"
type = number
default = 2
}
variable "vms_vcpus" {
description = "Number of vCPUs (0 = auto: sockets * cores)"
type = number
default = 0
}
# Memory Configuration
variable "vms_memory" {
description = "Memory allocation in MB"
type = number
default = 1024
}
# Boot Configuration
variable "vms_boot" {
description = "Boot order (must match template OS disk)"
type = string
default = "order=scsi0"
}
variable "vms_scsihw" {
description = "SCSI controller type"
type = string
default = "virtio-scsi-single"
}
# Disk Configuration
variable "storage_disk" {
description = "Storage location for VM disks"
type = string
default = "local"
}
variable "vms_disk_size" {
description = "VM disk size in GB"
type = number
default = 16
}
# =============================================================================
# PROXMOX SERVER CONFIGURATION
# =============================================================================
variable "proxmox_host" {
description = "Proxmox server hostname/IP"
type = string
}
variable "proxmox_node" {
description = "Target Proxmox node"
type = string
}
variable "proxmox_api_url" {
description = "Proxmox API endpoint URL"
type = string
}
variable "proxmox_user" {
description = "Proxmox authentication user"
type = string
}
variable "proxmox_password" {
description = "Proxmox user password"
type = string
sensitive = true
}


@@ -0,0 +1,28 @@
#cloud-config
%{ if hostname != "" }
# Set the hostname for this node
hostname: ${hostname}
%{ endif }
users:
  - name: ${ssh_user}
    groups: sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh-authorized-keys:
      - ${ssh_public_key}
packages:
  - socat
  - conntrack
ntp:
  enabled: true
  servers:
    - 0.pool.ntp.org
    - 1.pool.ntp.org
    - 2.pool.ntp.org
runcmd:
  - ${runcmd}


@@ -0,0 +1,19 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:Describe*",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}
]
}


@@ -0,0 +1,75 @@
# vCloud Node Pool Module
Creates vApps and VMs on VMware Cloud Director for Kubernetes worker nodes in Kamaji tenant clusters.
## Usage
```hcl
module "vcloud_node_pool" {
source = "../../modules/vcloud-node-pool"
# Cluster configuration
tenant_cluster_name = "my-cluster"
pool_name = "workers"
pool_size = 3
# VM configuration
vm_template = "ubuntu-24.04-template"
vm_memory = 4096
vm_cpu = 2
vm_disk_size = 20
# vCloud configuration
vcd_org = "my-org"
vcd_vdc = "my-vdc"
vcd_catalog = "my-catalog"
vcd_network = "my-network"
# Network configuration
network_cidr = "192.168.1.0/24"
network_gateway = "192.168.1.1"
network_offset = 100
# SSH configuration
ssh_public_key_path = "~/.ssh/id_rsa.pub"
# Bootstrap command
runcmd = "kubeadm join cluster-api:6443 --token abc123.xyz789"
}
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `pool_name` | `string` | `"default"` | Node pool name |
| `pool_size` | `number` | `3` | Number of VMs |
| `vm_template` | `string` | Required | VM template name |
| `vm_memory` | `number` | `2048` | Memory (MB) |
| `vm_cpu` | `number` | `2` | CPU cores |
| `vm_disk_size` | `number` | `20` | Disk size (GB) |
| `vcd_org` | `string` | Required | vCloud organization |
| `vcd_vdc` | `string` | Required | Virtual datacenter |
| `vcd_catalog` | `string` | Required | vCloud catalog |
| `vcd_network` | `string` | Required | vCloud network |
| `network_cidr` | `string` | `"192.168.1.0/24"` | Network CIDR |
| `network_gateway` | `string` | `"192.168.1.1"` | Network gateway |
| `network_offset` | `number` | `10` | IP address offset |
| `nameserver` | `string` | `"8.8.8.8"` | DNS resolver |
| `search_domain` | `string` | `""` | DNS search domain |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `runcmd` | `string` | `"echo 'Hello, World!'"` | Bootstrap command |
## Outputs
- `vm_details` - VM information (name, IP, memory, CPU, state)
- `vapp_details` - vApp information
## Requirements
- Terraform >= 1.0
- VMware Cloud Director provider >= 3.0
- vCloud Director access with appropriate permissions
- VM template with cloud-init support


@@ -0,0 +1,13 @@
# =============================================================================
# DATA SOURCES
# =============================================================================
data "vcd_catalog" "catalog" {
name = var.vcd_catalog_name
org = var.vcd_catalog_org_name
}
data "vcd_catalog_vapp_template" "vapp_template" {
catalog_id = data.vcd_catalog.catalog.id
name = var.vapp_template_name
}

View File

@@ -0,0 +1,110 @@
# =============================================================================
# LOCALS
# =============================================================================
locals {
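  # Zero-based node indices ("0", "1", ...) used as for_each keys,
  # e.g. pool_size = 3 yields ["0", "1", "2"].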
nodes = toset([for n in range(var.pool_size) : format("%s", n)])
}
# =============================================================================
# VAPP CREATION
# =============================================================================
resource "vcd_vapp" "node_pool" {
name = "${var.tenant_cluster_name}-${var.pool_name}-pool"
description = "vApp for ${var.tenant_cluster_name} cluster and ${var.pool_name} node pool"
metadata_entry {
key = "provisioner"
value = "yaki"
type = "MetadataStringValue"
user_access = "READWRITE"
is_system = false
}
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
# Connect the dedicated routed network to vApp
resource "vcd_vapp_org_network" "network" {
org = var.vcd_org_name
vdc = var.vcd_vdc_name
vapp_name = vcd_vapp.node_pool.name
org_network_name = var.vapp_network_name
reboot_vapp_on_removal = true
depends_on = [vcd_vapp.node_pool]
}
# =============================================================================
# VIRTUAL MACHINES
# =============================================================================
resource "vcd_vapp_vm" "node_pool_vm" {
for_each = local.nodes
# Metadata
metadata_entry {
key = "provisioner"
value = "yaki"
type = "MetadataStringValue"
user_access = "READWRITE"
is_system = false
}
# Lifecycle management
lifecycle {
ignore_changes = [
guest_properties.user-data,
vapp_template_id,
disk
]
}
# Basic VM configuration
vapp_name = vcd_vapp.node_pool.name
name = "${vcd_vapp.node_pool.name}-node-${format("%02s", each.key)}"
computer_name = "${vcd_vapp.node_pool.name}-node-${format("%02s", each.key)}"
# Cloud-init configuration
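  # user-data is rendered from the shared cloud-init template and injected,
  # base64-encoded, through the vApp guest properties; cloud-init reads it at first boot.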
guest_properties = {
hostname = "${vcd_vapp.node_pool.name}-node-${format("%02s", each.key)}"
user-data = base64encode(templatefile("${path.module}/../templates/cloud-init/userdata.yml.tpl", {
hostname = "${vcd_vapp.node_pool.name}-node-${format("%02s", each.key)}",
runcmd = var.runcmd,
ssh_user = var.ssh_user,
ssh_public_key = file(pathexpand(var.ssh_public_key_path))
}))
}
# Template configuration
vapp_template_id = data.vcd_catalog_vapp_template.vapp_template.id
# Resource allocation
memory = var.node_memory
cpus = var.node_cpus
cpu_cores = var.node_cpu_cores
cpu_hot_add_enabled = false
memory_hot_add_enabled = false
# Network configuration
network {
type = "org"
name = var.vapp_network_name
adapter_type = var.vapp_network_adapter_type
ip_allocation_mode = var.vapp_ip_allocation_mode
is_primary = true
}
# Disk configuration
override_template_disk {
bus_type = "paravirtual"
size_in_mb = var.node_disk_size
bus_number = 0
unit_number = 0
storage_profile = var.node_disk_storage_profile
}
depends_on = [vcd_vapp.node_pool, vcd_vapp_org_network.network]
}

View File

@@ -0,0 +1,32 @@
# =============================================================================
# VAPP DETAILS
# =============================================================================
output "vapp_details" {
description = "vApp information"
value = {
name = vcd_vapp.node_pool.name
id = vcd_vapp.node_pool.id
}
}
# =============================================================================
# VM DETAILS
# =============================================================================
output "virtual_machines" {
description = "Virtual machine details"
value = {
for vm_key, vm in vcd_vapp_vm.node_pool_vm : vm_key => {
name = vm.name
computer_name = vm.computer_name
memory = vm.memory
cpus = vm.cpus
cpu_cores = vm.cpu_cores
network = length(vm.network) > 0 ? {
ip = tolist(vm.network)[0].ip
mac = tolist(vm.network)[0].mac
} : null
}
}
}

View File

@@ -0,0 +1,149 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
default = "charlie"
}
# =============================================================================
# NODE POOL CONFIGURATION
# =============================================================================
variable "pool_name" {
description = "Name of the node pool"
type = string
default = "default"
}
variable "pool_size" {
description = "The size of the node pool"
type = number
default = 3
}
variable "runcmd" {
description = "Command to run on the node at first boot time"
type = string
default = "echo 'Hello, World!'"
}
# =============================================================================
# NODE CONFIGURATION
# =============================================================================
variable "node_memory" {
description = "Memory for each node in MB"
type = number
default = 4096
}
variable "node_cpus" {
description = "Number of CPUs for each node"
type = number
default = 2
}
variable "node_cpu_cores" {
description = "Number of CPU cores for each node"
type = number
default = 1
}
variable "node_disk_size" {
description = "Disk size for each node in MB"
type = number
default = 12800
}
variable "node_disk_storage_profile" {
description = "Storage profile for the node disks"
type = string
default = "example-storage-profile"
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
variable "vapp_network_name" {
description = "Organization network name in VMware Cloud Director"
type = string
default = "example-org-network-name"
}
variable "vapp_network_adapter_type" {
description = "Adapter type for the vApp network"
type = string
default = "VMXNET3"
}
variable "vapp_ip_allocation_mode" {
description = "IP allocation mode for the vApp"
type = string
default = "POOL"
}
# =============================================================================
# TEMPLATE CONFIGURATION
# =============================================================================
variable "vapp_template_name" {
description = "Template name in VMware Cloud Director"
type = string
default = "example-template-name"
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for the nodes"
type = string
default = "ubuntu"
}
variable "ssh_private_key_path" {
description = "Path to the SSH private key"
type = string
default = "~/.ssh/id_rsa"
}
variable "ssh_public_key_path" {
description = "Path to the SSH public key"
type = string
default = "~/.ssh/id_rsa.pub"
}
# =============================================================================
# VMWARE CLOUD DIRECTOR CONFIGURATION
# =============================================================================
variable "vcd_org_name" {
description = "Organization name in VMware Cloud Director"
type = string
}
variable "vcd_vdc_name" {
description = "Virtual Data Center name in VMware Cloud Director"
type = string
}
variable "vcd_catalog_org_name" {
description = "Organization name for the vCloud catalog in VMware Cloud Director"
type = string
}
variable "vcd_catalog_name" {
description = "Catalog name in VMware Cloud Director"
type = string
}
variable "vcd_logging" {
description = "Enable logging for VMware Cloud Director provider"
type = bool
default = false
}

View File

@@ -0,0 +1,14 @@
# =============================================================================
# TERRAFORM CONFIGURATION
# =============================================================================
terraform {
required_version = ">= 1.0"
required_providers {
vcd = {
source = "vmware/vcd"
version = ">= 3.0"
}
}
}

View File

@@ -0,0 +1,76 @@
# vSphere Node Pool Module
Creates virtual machines on VMware vSphere for Kubernetes worker nodes in Kamaji tenant clusters.
## Usage
```hcl
module "vsphere_node_pool" {
source = "../../modules/vsphere-node-pool"
# Cluster configuration
tenant_cluster_name = "my-cluster"
pool_name = "workers"
pool_size = 3
# VM configuration
vm_template = "ubuntu-24.04-template"
vm_memory = 4096
vm_cpu = 2
vm_disk_size = 20
# vSphere configuration
vsphere_datacenter = "Datacenter"
vsphere_cluster = "Cluster"
vsphere_datastore = "datastore1"
vsphere_network = "VM Network"
vsphere_folder = "terraform-vms"
# Network configuration
network_cidr = "192.168.1.0/24"
network_gateway = "192.168.1.1"
network_offset = 100
# SSH configuration
ssh_public_key_path = "~/.ssh/id_rsa.pub"
# Bootstrap command
runcmd = "kubeadm join cluster-api:6443 --token abc123.xyz789"
}
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `pool_name` | `string` | `"default"` | Node pool name |
| `pool_size` | `number` | `3` | Number of VMs |
| `vm_template` | `string` | Required | VM template name |
| `vm_memory` | `number` | `2048` | Memory (MB) |
| `vm_cpu` | `number` | `2` | CPU cores |
| `vm_disk_size` | `number` | `20` | Disk size (GB) |
| `vsphere_datacenter` | `string` | Required | vSphere datacenter |
| `vsphere_cluster` | `string` | Required | vSphere cluster |
| `vsphere_datastore` | `string` | Required | vSphere datastore |
| `vsphere_network` | `string` | Required | vSphere network |
| `vsphere_folder` | `string` | `""` | VM folder |
| `network_cidr` | `string` | `"192.168.1.0/24"` | Network CIDR |
| `network_gateway` | `string` | `"192.168.1.1"` | Network gateway |
| `network_offset` | `number` | `10` | IP address offset |
| `nameserver` | `string` | `"8.8.8.8"` | DNS resolver |
| `search_domain` | `string` | `""` | DNS search domain |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `runcmd` | `string` | `"echo 'Hello, World!'"` | Bootstrap command |
## Outputs
- `vm_details` - VM information (name, IP, memory, CPU, state)
## Requirements
- Terraform >= 1.0
- VMware vSphere provider >= 2.0
- vCenter/ESXi access with appropriate permissions
- VM template with cloud-init support

View File

@@ -0,0 +1,44 @@
# =============================================================================
# DATA SOURCES
# =============================================================================
# Get datacenter information
data "vsphere_datacenter" "dc" {
name = var.vsphere_datacenter
}
# Get datastore for VM storage
data "vsphere_datastore" "datastore" {
name = var.vsphere_datastore
datacenter_id = data.vsphere_datacenter.dc.id
}
# Get network configuration
data "vsphere_network" "network" {
name = var.vsphere_network
datacenter_id = data.vsphere_datacenter.dc.id
}
# Get content library for templates
data "vsphere_content_library" "content_library" {
name = var.vsphere_content_library
}
# Get specific template/OVF from content library
data "vsphere_content_library_item" "item" {
name = var.vsphere_content_library_item
type = "ovf"
library_id = data.vsphere_content_library.content_library.id
}
# Get compute cluster information
data "vsphere_compute_cluster" "compute_cluster" {
name = var.vsphere_compute_cluster
datacenter_id = data.vsphere_datacenter.dc.id
}
# Get resource pool information
data "vsphere_resource_pool" "pool" {
name = var.vsphere_resource_pool
datacenter_id = data.vsphere_datacenter.dc.id
}

View File

@@ -0,0 +1,134 @@
# =============================================================================
# LOCALS
# =============================================================================
locals {
nodes = toset([for n in range(var.pool_size) : format("%s", n)])
}
# =============================================================================
# VSPHERE FOLDER
# =============================================================================
resource "vsphere_folder" "pool_folder" {
path = "${var.vsphere_root_folder}/${var.tenant_cluster_name}-${var.pool_name}-pool"
type = "vm"
datacenter_id = data.vsphere_datacenter.dc.id
lifecycle {
ignore_changes = [
datacenter_id
]
}
}
# =============================================================================
# VIRTUAL MACHINES
# =============================================================================
resource "vsphere_virtual_machine" "node" {
for_each = local.nodes
lifecycle {
ignore_changes = [
resource_pool_id,
clone[0].template_uuid,
vapp[0].properties.user-data,
tags
]
}
name = "${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
resource_pool_id = data.vsphere_resource_pool.pool.id
datastore_id = data.vsphere_datastore.datastore.id
folder = vsphere_folder.pool_folder.path
num_cpus = var.node_cores
memory = var.node_memory
guest_id = var.node_guest
firmware = var.node_firmware
scsi_type = var.node_scsi_type
enable_disk_uuid = true
hardware_version = var.node_hardware_version
network_interface {
network_id = data.vsphere_network.network.id
}
disk {
label = "disk0"
unit_number = 0
size = var.node_disk_size
thin_provisioned = var.node_disk_thin
}
clone {
template_uuid = data.vsphere_content_library_item.item.id
customize {
dns_server_list = var.dns_resolvers
linux_options {
host_name = "${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
domain = "clastix.local" # does not work but is needed by provider
hw_clock_utc = true
time_zone = "Europe/Rome"
}
network_interface {
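        # Static IP: node index plus network_offset within network_cidr,
        # e.g. index 0 with the default offset 10 in 10.10.10.0/24 gives 10.10.10.10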
ipv4_address = cidrhost(var.network_cidr, tonumber(each.key) + var.network_offset)
ipv4_netmask = split("/", var.network_cidr)[1]
}
ipv4_gateway = var.network_gateway
}
}
cdrom {
client_device = true
}
extra_config = {
"disk.enableUUID" = "TRUE"
}
vapp {
properties = {
hostname = "H${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
user-data = base64encode(templatefile("${path.module}/../templates/cloud-init/userdata.yml.tpl", {
hostname = "${var.tenant_cluster_name}-${var.pool_name}-node-${format("%02s", each.key)}"
runcmd = var.runcmd
ssh_user = var.ssh_user
ssh_public_key = file(pathexpand(var.ssh_public_key_path))
}))
}
}
depends_on = [
vsphere_folder.pool_folder,
data.vsphere_datastore.datastore,
data.vsphere_network.network,
data.vsphere_resource_pool.pool,
data.vsphere_content_library_item.item
]
}
# =============================================================================
# ANTI-AFFINITY RULES
# =============================================================================
/*
vSphere DRS requires a vSphere Enterprise Plus license; leave `vsphere_plus_license` disabled if you do not have one.
The anti-affinity rule spreads the pool's virtual machines across different hosts in the cluster, avoiding a single point of failure.
*/
resource "vsphere_compute_cluster_vm_anti_affinity_rule" "node_anti_affinity_rule" {
count = var.vsphere_plus_license ? 1 : 0
name = "node_anti_affinity_rule"
compute_cluster_id = data.vsphere_compute_cluster.compute_cluster.id
virtual_machine_ids = [for key, value in vsphere_virtual_machine.node : value.id]
lifecycle {
ignore_changes = [
compute_cluster_id
]
}
}

View File

@@ -0,0 +1,18 @@
# =============================================================================
# VM DETAILS
# =============================================================================
output "node_details" {
description = "Virtual machine details"
value = {
for key, value in vsphere_virtual_machine.node :
key => {
name = value.name
ip_address = value.default_ip_address
uuid = value.uuid
power_state = value.power_state
cpu_cores = value.num_cpus
memory_mb = value.memory
}
}
}

View File

@@ -0,0 +1,215 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
# Tenant Cluster Configuration
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
# =============================================================================
# POOL CONFIGURATION
# =============================================================================
variable "runcmd" {
description = "Command to run on the node at first boot time"
type = string
default = "echo 'Hello, World!'"
}
variable "pool_name" {
description = "Name of the node pool"
type = string
default = "default"
}
variable "pool_size" {
description = "The size of the node pool"
type = number
default = 3
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
variable "network_cidr" {
description = "The CIDR block for the network"
type = string
default = "10.10.10.0/24"
}
variable "network_gateway" {
description = "The gateway for the network"
type = string
default = "10.10.10.1"
}
variable "network_offset" {
description = "The offset for the network IP addresses"
type = number
default = 10
}
variable "dns_resolvers" {
description = "The list of DNS resolver names for the nodes to use"
type = list(string)
default = ["8.8.8.8", "8.8.4.4"]
}
# =============================================================================
# NODE CONFIGURATION
# =============================================================================
variable "node_scsi_type" {
description = "The type of the SCSI device of the node"
type = string
default = "lsilogic"
}
variable "node_hardware_version" {
description = "The hardware version of the virtual machine"
type = string
default = "19"
}
variable "node_firmware" {
description = "The firmware type to boot the virtual machine"
type = string
default = "bios"
}
variable "node_disk_thin" {
description = "Whether to thin-provision the disks of the node"
type = bool
default = false
}
variable "node_disk_size" {
description = "The size in GiB of the disks of the node"
type = number
default = 16
}
variable "node_cores" {
description = "The number of CPU cores for the node"
type = number
default = 4
}
variable "node_memory" {
description = "The memory assigned to the node (in MB)"
type = number
default = 16384
}
variable "node_interface_name" {
description = "The default route's interface name of the node"
type = string
default = "ens160"
}
variable "node_guest" {
description = "The guest OS of the node"
type = string
default = "ubuntu64Guest"
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "The guest OS user to use to connect via SSH on the nodes"
type = string
default = "clastix"
}
variable "ssh_private_key_path" {
description = "The path to the private SSH key to use to provision on the nodes"
type = string
default = "~/.ssh/id_rsa"
}
variable "ssh_public_key_path" {
description = "The path to the public SSH key to use to provision on the nodes"
type = string
default = "~/.ssh/id_rsa.pub"
}
# =============================================================================
# VSPHERE CONFIGURATION
# =============================================================================
variable "vsphere_server" {
description = "The vSphere server address"
type = string
default = "vsphere-server.example.com:443"
}
variable "vsphere_username" {
description = "The username for vSphere"
type = string
sensitive = true
}
variable "vsphere_password" {
description = "The password for vSphere"
type = string
sensitive = true
}
variable "vsphere_datacenter" {
description = "The vSphere datacenter name"
type = string
default = "DatacenterName"
}
variable "vsphere_compute_cluster" {
description = "The vSphere compute cluster name"
type = string
default = "ComputeClusterName"
}
variable "vsphere_datastore" {
description = "The vSphere datastore name"
type = string
default = "DatastoreName"
}
variable "vsphere_content_library" {
description = "The vSphere content library name"
type = string
default = "ContentLibraryName"
}
variable "vsphere_content_library_item" {
description = "The vSphere content library item name"
type = string
default = "ContentLibraryItemName"
}
variable "vsphere_resource_pool" {
description = "The vSphere resource pool name"
type = string
default = "ResourcePoolName"
}
variable "vsphere_root_folder" {
description = "The root folder where to place node pools"
type = string
default = ""
}
variable "vsphere_network" {
description = "The vSphere network name"
type = string
default = "NetworkName"
}
variable "vsphere_plus_license" {
description = "Set on/off based on your vSphere enterprise license"
type = bool
default = false
}

View File

@@ -0,0 +1,12 @@
# =============================================================================
# TERRAFORM CONFIGURATION
# =============================================================================
terraform {
required_providers {
vsphere = {
source = "vmware/vsphere"
version = "~> 2.0"
}
}
}

107
providers/aws/README.md Normal file
View File

@@ -0,0 +1,107 @@
# AWS Provider
Creates AWS Auto Scaling Groups for Kamaji node pools with automatic bootstrap token generation.
## Usage
1. **Configure variables**:
```bash
cp main.auto.tfvars.sample main.auto.tfvars
# Edit main.auto.tfvars with your settings
```
2. **Set AWS credentials**:
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration Example
```hcl
# main.auto.tfvars
tenant_cluster_name = "my-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
aws_region = "us-east-1"
aws_zones = ["us-east-1a", "us-east-1b"]
node_pools = [
{
name = "workers"
size = 3
instance_type = "t3a.medium"
ami_id = "ami-0c02fb55956c7d316"
node_disk_size = 20
min_size = 1
max_size = 5
}
]
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `tenant_kubeconfig_path` | `string` | `"~/.kube/config"` | Kubeconfig path |
| `yaki_url` | `string` | `"https://goyaki.clastix.io"` | YAKI bootstrap URL |
| `node_pools` | `list(object)` | Required | Node pool configurations |
| `aws_region` | `string` | `"eu-south-1"` | AWS region |
| `aws_zones` | `list(string)` | `["eu-south-1a", "eu-south-1b", "eu-south-1c"]` | Availability zones |
| `aws_vpc_name` | `list(string)` | `["kamaji"]` | VPC name filter |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `tags` | `map(string)` | `{}` | Additional tags |
### Node Pool Configuration
Each node pool supports the following fields (see the sketch after this list):
- `name` - Pool name (required)
- `size` - Number of instances (required)
- `instance_type` - EC2 instance type (default: "t3a.medium")
- `ami_id` - AMI ID (required)
- `node_disk_size` - Disk size in GB (default: 20)
- `min_size` - Minimum instances (default: 1)
- `max_size` - Maximum instances (default: 9)
- `disk_type` - EBS volume type (default: "gp3")
- `public` - Use public subnets (default: true)
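A minimal sketch of a `node_pools` value with two pools; the AMI IDs are placeholders, and the second pool relies on the optional defaults listed above:
```hcl
node_pools = [
  {
    name           = "system"
    size           = 2
    instance_type  = "t3a.medium"
    ami_id         = "ami-xxxxxxxxxxxxxxxxx" # placeholder, see "Finding AMI IDs" below
    node_disk_size = 20
    min_size       = 2
    max_size       = 4
    disk_type      = "gp3"
    public         = false
  },
  {
    # min_size, max_size, disk_type and public fall back to 1, 9, "gp3" and true
    name           = "workers"
    size           = 3
    instance_type  = "t3a.large"
    ami_id         = "ami-xxxxxxxxxxxxxxxxx" # placeholder
    node_disk_size = 40
  }
]
```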
## Outputs
- `deployment_summary` - Human-readable deployment summary
- `node_pools` - Per-pool Auto Scaling Group and launch template details
- `cluster_info` - Cluster configuration
## Requirements
- Terraform >= 1.0
- AWS CLI configured with appropriate permissions
- Existing VPC with subnets
- Valid kubeconfig for Kamaji tenant cluster
## Finding AMI IDs
```bash
# Ubuntu 24.04 LTS (Noble)
aws ec2 describe-images --region us-east-1 \
--owners 099720109477 \
--filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*" \
--query 'Images|sort_by(@,&CreationDate)[-1].ImageId' \
--output text
# Ubuntu 22.04 LTS (Jammy)
aws ec2 describe-images --region us-east-1 \
--owners 099720109477 \
--filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-jammy-22.04-amd64-server-*" \
--query 'Images|sort_by(@,&CreationDate)[-1].ImageId' \
--output text
```

9
providers/aws/backend.tf Normal file
View File

@@ -0,0 +1,9 @@
# =============================================================================
# TERRAFORM BACKEND
# =============================================================================
terraform {
backend "local" {
path = "tfstate/terraform.tfstate"
}
}

View File

@@ -0,0 +1,43 @@
# AWS Configuration
aws_region = "" # AWS region (e.g., "us-east-1", "eu-west-1", "eu-south-1")
aws_zones = [] # List of availability zones (e.g., ["us-east-1a", "us-east-1b"])
aws_vpc_name = [] # VPC name filter (e.g., ["my-vpc"])
# SSH Configuration
ssh_user = "" # SSH username (e.g., "ubuntu", "ec2-user")
ssh_public_key_path = "" # Path to SSH public key (e.g., "~/.ssh/id_rsa.pub")
# Tenant Cluster Configuration
tenant_cluster_name = "" # Name of the tenant cluster
tenant_kubeconfig_path = "" # Path to kubeconfig file (e.g., "~/.kube/config")
# Node Pool Configuration
node_pools = [
{
name = "" # Name of the node pool (e.g., "default", "workers")
size = 0 # Number of nodes in the pool
node_disk_size = 0 # Disk size for each node (in GB)
instance_type = "" # AWS instance type (e.g., "t3a.medium", "m5.large")
ami_id = "" # AMI ID for the instances
min_size = 0 # Minimum number of nodes
max_size = 0 # Maximum number of nodes
disk_type = "" # EBS volume type (gp2, gp3, io1, io2)
public = false # Whether to assign public IP addresses
},
# Add more node pools here as needed.
]
# Example: Find Ubuntu 24.04 LTS AMI ID
# aws ec2 describe-images --region <your-region> \
# --owners 099720109477 \
# --filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*" \
# --query 'Images|sort_by(@,&CreationDate)[-1].ImageId' \
# --output text
# Tags for AWS resources
tags = {
"ManagedBy" = "" # Who manages these resources (e.g., "Terraform", "Clastix")
"CreatedBy" = "" # What created these resources (e.g., "Terraform")
"Environment" = "" # Environment name (e.g., "dev", "staging", "prod")
"Project" = "" # Project name (e.g., "Kamaji", "MyProject")
}

70
providers/aws/main.tf Normal file
View File

@@ -0,0 +1,70 @@
# =============================================================================
# PROVIDERS
# =============================================================================
# Configure the Kubernetes provider
provider "kubernetes" {
# Path to the kubeconfig file for accessing the tenant cluster
config_path = var.tenant_kubeconfig_path
}
# Configure the AWS Provider
provider "aws" {
region = var.aws_region
}
# =============================================================================
# BOOTSTRAP TOKEN
# =============================================================================
# Call the shared bootstrap-token module to generate the join command
module "bootstrap_token" {
source = "../../modules/bootstrap-token" # Updated to use shared module
kubeconfig_path = var.tenant_kubeconfig_path # Pass the kubeconfig path to the module
yaki_url = var.yaki_url # Pass the YAKI URL to the module
}
# =============================================================================
# NODE POOLS
# =============================================================================
module "aws_node_pools" {
source = "../../modules/aws-node-pool" # Updated path to the aws-node-pool module
# Iterate over the list of node pools and call the module for each pool
for_each = { for pool in var.node_pools : pool.name => pool }
# Tenant cluster configuration
tenant_cluster_name = var.tenant_cluster_name
# Pool configuration
pool_name = each.value.name
pool_size = each.value.size
pool_min_size = each.value.min_size
pool_max_size = each.value.max_size
# Node configuration
node_disk_size = each.value.node_disk_size
node_disk_type = each.value.disk_type
# AWS configuration
aws_region = var.aws_region
aws_zones = var.aws_zones
aws_vpc_name = var.aws_vpc_name
instance_type = each.value.instance_type
ami_id = each.value.ami_id
public = each.value.public
tags = var.tags
# SSH configuration
ssh_user = var.ssh_user
ssh_public_key_path = var.ssh_public_key_path
# Join command for bootstrapping nodes
runcmd = module.bootstrap_token.join_cmd
# Ensure the aws-node-pool module depends on the bootstrap-token module
depends_on = [
module.bootstrap_token
]
}

56
providers/aws/outputs.tf Normal file
View File

@@ -0,0 +1,56 @@
# =============================================================================
# DEPLOYMENT SUMMARY
# =============================================================================
output "deployment_summary" {
description = "Summary of the node pool deployment"
value = <<-EOT
✅ Kamaji Node Pools Deployed Successfully
Cluster: ${var.tenant_cluster_name}
Region: ${var.aws_region}
Zones: ${join(", ", var.aws_zones)}
Node Pools:
${join("\n", [for pool_name, pool in module.aws_node_pools : format(" • %s: %d nodes (%s)", pool_name, pool.autoscaling_group_details.desired_capacity, pool.launch_template_details.instance_type)])}
Next Steps:
kubectl --kubeconfig ${var.tenant_kubeconfig_path} get nodes
EOT
}
# =============================================================================
# NODE DETAILS
# =============================================================================
output "node_pools" {
description = "Node pool details"
value = {
for pool_name, pool in module.aws_node_pools : pool_name => {
name = pool.autoscaling_group_details.name
desired_size = pool.autoscaling_group_details.desired_capacity
min_size = pool.autoscaling_group_details.min_size
max_size = pool.autoscaling_group_details.max_size
instance_type = pool.launch_template_details.instance_type
ami_id = pool.launch_template_details.ami_id
}
}
}
# =============================================================================
# CLUSTER INFO
# =============================================================================
output "cluster_info" {
description = "Cluster configuration"
value = {
name = var.tenant_cluster_name
kubeconfig = var.tenant_kubeconfig_path
region = var.aws_region
zones = var.aws_zones
total_pools = length(var.node_pools)
total_nodes = sum([for pool in module.aws_node_pools : pool.autoscaling_group_details.desired_capacity])
}
}

90
providers/aws/vars.tf Normal file
View File

@@ -0,0 +1,90 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
variable "tenant_kubeconfig_path" {
description = "Path to the kubeconfig file for the tenant cluster"
type = string
default = "~/.kube/config"
}
# =============================================================================
# BOOTSTRAP CONFIGURATION
# =============================================================================
variable "yaki_url" {
description = "URL to the YAKI script for node bootstrapping"
type = string
default = "https://goyaki.clastix.io"
}
# =============================================================================
# NODE POOL CONFIGURATION
# =============================================================================
variable "node_pools" {
description = "List of AWS node pools with their configurations"
type = list(object({
name = string
size = number
node_disk_size = number
instance_type = string
ami_id = string
min_size = optional(number, 1)
max_size = optional(number, 9)
disk_type = optional(string, "gp3")
public = optional(bool, true)
}))
}
# =============================================================================
# AWS CONFIGURATION
# =============================================================================
variable "aws_region" {
description = "AWS region where resources are created"
type = string
default = "eu-south-1"
}
variable "aws_zones" {
description = "AWS availability zones for worker nodes"
type = list(string)
default = ["eu-south-1a", "eu-south-1b", "eu-south-1c"]
}
variable "aws_vpc_name" {
description = "Name filter for the AWS VPC"
type = list(string)
default = ["kamaji"]
}
variable "tags" {
description = "Tags applied to AWS resources"
type = map(string)
default = {
"ManagedBy" = "Clastix"
"CreatedBy" = "Terraform"
}
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for node access"
type = string
default = "ubuntu"
}
variable "ssh_public_key_path" {
description = "Path to the SSH public key"
type = string
default = "~/.ssh/id_rsa.pub"
}

24
providers/aws/versions.tf Normal file
View File

@@ -0,0 +1,24 @@
# =============================================================================
# TERRAFORM VERSIONS
# =============================================================================
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
local = {
source = "hashicorp/local"
version = "~> 2.0"
}
}
}

View File

@@ -0,0 +1,82 @@
# Proxmox Provider
Creates Proxmox VE virtual machines for Kamaji node pools with automatic bootstrap token generation.
## Usage
1. **Configure variables**:
```bash
cp main.auto.tfvars.sample main.auto.tfvars
# Edit main.auto.tfvars with your settings
```
2. **Set Proxmox credentials**:
```bash
export TF_VAR_proxmox_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration Example
```hcl
# main.auto.tfvars
tenant_cluster_name = "my-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
proxmox_host = "my-proxmox-host.example.com"
proxmox_node = "pve-node1"
proxmox_api_url = "https://my-proxmox-host:8006/api2/json"
proxmox_user = "terraform@pve"
node_pools = [
  {
    name            = "workers"
    size            = 3
    network_cidr    = "10.10.10.0/24"
    network_gateway = "10.10.10.1"
    network_offset  = 10
    vms_state       = "started"
    vms_agent       = 1
    vms_memory      = 2048
    vms_sockets     = 1
    vms_cores       = 2
    vms_vcpus       = 2
    vms_boot        = "order=scsi0"
    vms_scsihw      = "virtio-scsi-single"
    vms_disk_size   = 20
    vms_template    = "ubuntu-24.04-template"
  }
]
network_bridge = "vmbr0"
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `tenant_kubeconfig_path` | `string` | `"~/.kube/config"` | Kubeconfig path |
| `yaki_url` | `string` | `"https://goyaki.clastix.io"` | YAKI bootstrap URL |
| `node_pools` | `list(object)` | Required | Node pool configurations |
| `proxmox_host` | `string` | Required | Proxmox hostname/IP |
| `proxmox_node` | `string` | Required | Proxmox node name |
| `proxmox_api_url` | `string` | Required | Proxmox API URL |
| `proxmox_user` | `string` | Required | Proxmox user |
| `proxmox_password` | `string` | Required | Proxmox password |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_private_key_path` | `string` | `"~/.ssh/id_rsa"` | SSH private key path |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
## Outputs
- `deployment_summary` - Human-readable deployment summary
- `node_pools` - Per-pool VM details
- `cluster_info` - Cluster configuration
## Requirements
- Terraform >= 1.0
- Proxmox provider (Telmate/proxmox) >= 3.0.1-rc6
- SSH access to Proxmox host
- VM template with cloud-init support

View File

@@ -0,0 +1,19 @@
# =============================================================================
# TERRAFORM BACKEND
# =============================================================================
terraform {
backend "local" {
path = "tfstate/terraform.tfstate"
}
}
# Alternative: Remote backend
# terraform {
# backend "remote" {
# organization = "organization"
# workspaces {
# name = "demo"
# }
# }
# }

View File

@@ -0,0 +1,49 @@
# Name of the tenant cluster
tenant_cluster_name = "tenant0"
# kubeconfig file path
tenant_kubeconfig_path = "/home/clastix/.kube/tenant0.kubeconfig"
# YAKI URL for node bootstrapping
yaki_url = "https://goyaki.clastix.io"
# List of node pools
node_pools = [
{
name = "your-node-pool-name"
size = 3
network_cidr = "192.168.0.0/24"
network_gateway = "192.168.0.1"
network_offset = 10
vms_state = "started"
vms_agent = 1
vms_memory = 2048
vms_sockets = 1
vms_cores = 4
vms_vcpus = 4
vms_boot = "order=scsi0"
vms_scsihw = "virtio-scsi-single"
vms_disk_size = 16
vms_template = "ubuntu-template"
},
]
# Cloud-init user
ssh_user = "your-ssh-user"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
# The DNS resolver name
nameserver = "8.8.8.8"
# Disk storage in which to place the VM
storage_disk = "your-storage-disk"
# Network bridge assigned to the VM
network_bridge = "your-network-bridge"
network_model = "virtio"
# Proxmox configuration
proxmox_host = "your-proxmox-host"
proxmox_node = "your-proxmox-node"
proxmox_api_url = "https://your-proxmox-api-url:8006/api2/json"

79
providers/proxmox/main.tf Normal file
View File

@@ -0,0 +1,79 @@
# =============================================================================
# PROVIDERS
# =============================================================================
provider "kubernetes" {
config_path = var.tenant_kubeconfig_path
}
provider "proxmox" {
pm_api_url = var.proxmox_api_url
pm_user = var.proxmox_user
pm_password = var.proxmox_password
pm_parallel = 1
pm_tls_insecure = true
pm_log_enable = false
pm_timeout = 600
}
# =============================================================================
# BOOTSTRAP TOKEN
# =============================================================================
module "bootstrap_token" {
source = "../../modules/bootstrap-token"
kubeconfig_path = var.tenant_kubeconfig_path
yaki_url = var.yaki_url
}
# =============================================================================
# NODE POOLS
# =============================================================================
module "proxmox_node_pools" {
source = "../../modules/proxmox-node-pool"
for_each = { for pool in var.node_pools : pool.name => pool }
# Cluster Configuration
tenant_cluster_name = var.tenant_cluster_name
pool_name = each.value.name
pool_size = each.value.size
runcmd = module.bootstrap_token.join_cmd
# Network Configuration
network_cidr = each.value.network_cidr
network_gateway = each.value.network_gateway
network_offset = each.value.network_offset
network_bridge = var.network_bridge
network_model = var.network_model
nameserver = var.nameserver
search_domain = var.search_domain
# VM Configuration
vms_state = each.value.vms_state
vms_agent = each.value.vms_agent
vms_memory = each.value.vms_memory
vms_sockets = each.value.vms_sockets
vms_cores = each.value.vms_cores
vms_vcpus = each.value.vms_vcpus
vms_boot = each.value.vms_boot
vms_scsihw = each.value.vms_scsihw
vms_disk_size = each.value.vms_disk_size
vms_template = each.value.vms_template
# Proxmox Configuration
proxmox_host = var.proxmox_host
proxmox_node = var.proxmox_node
proxmox_api_url = var.proxmox_api_url
proxmox_user = var.proxmox_user
proxmox_password = var.proxmox_password
storage_disk = var.storage_disk
# SSH Configuration
ssh_user = var.ssh_user
ssh_private_key_path = var.ssh_private_key_path
ssh_public_key_path = var.ssh_public_key_path
depends_on = [module.bootstrap_token]
}

View File

@@ -0,0 +1,62 @@
# =============================================================================
# DEPLOYMENT SUMMARY
# =============================================================================
output "deployment_summary" {
description = "Summary of the node pool deployment"
value = <<-EOT
✅ Kamaji Node Pools Deployed Successfully
Cluster: ${var.tenant_cluster_name}
Host: ${var.proxmox_host}
Node: ${var.proxmox_node}
Node Pools:
${join("\n", [for pool_name, pool in module.proxmox_node_pools : format(" • %s: %d nodes (%d MB RAM)", pool_name, length(pool.vm_details), length(pool.vm_details) > 0 ? pool.vm_details[0].memory_mb : 0)])}
VMs Created:
${join("\n", [for pool_name, pool in module.proxmox_node_pools : join("\n", [for vm in pool.vm_details : format(" • %s: %s", vm.name, vm.ip_address)])])}
Next Steps:
kubectl --kubeconfig ${var.tenant_kubeconfig_path} get nodes
EOT
}
# =============================================================================
# NODE DETAILS
# =============================================================================
output "node_pools" {
description = "Node pool details"
value = {
for pool_name, pool in module.proxmox_node_pools : pool_name => {
vms = [
for vm in pool.vm_details : {
name = vm.name
ip_address = vm.ip_address
memory_mb = vm.memory_mb
cpu_cores = vm.cpu_cores
state = vm.state
}
]
}
}
}
# =============================================================================
# CLUSTER INFO
# =============================================================================
output "cluster_info" {
description = "Cluster configuration"
value = {
name = var.tenant_cluster_name
kubeconfig = var.tenant_kubeconfig_path
proxmox_host = var.proxmox_host
proxmox_node = var.proxmox_node
total_pools = length(var.node_pools)
total_nodes = sum([for pool in var.node_pools : pool.size])
}
}

131
providers/proxmox/vars.tf Normal file
View File

@@ -0,0 +1,131 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
variable "tenant_kubeconfig_path" {
description = "Path to the tenant cluster kubeconfig file"
type = string
default = "~/.kube/config"
}
variable "yaki_url" {
description = "URL to the YAKI script for node bootstrapping"
type = string
default = "https://goyaki.clastix.io"
}
# =============================================================================
# NODE POOL CONFIGURATION
# =============================================================================
variable "node_pools" {
description = "List of node pools with their configurations"
type = list(object({
name = string
size = number
network_cidr = string
network_gateway = string
network_offset = number
vms_state = string
vms_agent = number
vms_memory = number
vms_sockets = number
vms_cores = number
vms_vcpus = number
vms_boot = string
vms_scsihw = string
vms_disk_size = number
vms_template = string
}))
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for cloud-init"
type = string
default = "clastix"
}
variable "ssh_public_key_path" {
description = "Path to SSH public key"
type = string
default = "~/.ssh/id_rsa.pub"
}
variable "ssh_private_key_path" {
description = "Path to SSH private key"
type = string
default = "~/.ssh/id_rsa"
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
variable "nameserver" {
description = "DNS resolver for the nodes"
type = string
default = "8.8.8.8"
}
variable "search_domain" {
description = "DNS search domain for the nodes"
type = string
default = ""
}
variable "network_bridge" {
description = "Network bridge for VMs"
type = string
default = "vmbr0"
}
variable "network_model" {
description = "Network model for VMs"
type = string
default = "virtio"
}
# =============================================================================
# PROXMOX CONFIGURATION
# =============================================================================
variable "proxmox_host" {
description = "Proxmox host"
type = string
}
variable "proxmox_node" {
description = "Proxmox target node"
type = string
}
variable "proxmox_api_url" {
description = "Proxmox API URL"
type = string
}
variable "proxmox_user" {
description = "Proxmox user"
type = string
}
variable "proxmox_password" {
description = "Proxmox password"
type = string
sensitive = true
}
variable "storage_disk" {
description = "Storage disk for VMs"
type = string
default = "local"
}

View File

@@ -0,0 +1,13 @@
# =============================================================================
# TERRAFORM VERSIONS
# =============================================================================
terraform {
required_providers {
# https://github.com/telmate/terraform-provider-proxmox
proxmox = {
source = "Telmate/proxmox"
version = "3.0.1-rc6"
}
}
}

View File

@@ -0,0 +1,83 @@
# vCloud Provider
Creates VMware Cloud Director vApps and VMs for Kamaji node pools with automatic bootstrap token generation.
## Usage
1. **Configure variables**:
```bash
cp main.auto.tfvars.sample main.auto.tfvars
# Edit main.auto.tfvars with your settings
```
2. **Set vCloud credentials**:
```bash
export TF_VAR_vcd_username="your-username"
export TF_VAR_vcd_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration Example
```hcl
# main.auto.tfvars
tenant_cluster_name = "my-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
vcd_url = "https://vcd.example.com/api"
vcd_org = "my-org"
vcd_vdc = "my-vdc"
node_pools = [
{
name = "workers"
size = 3
vm_template = "ubuntu-24.04-template"
vm_memory = 4096
vm_cpu = 2
vm_disk_size = 20
network_cidr = "192.168.1.0/24"
network_gateway = "192.168.1.1"
vcd_catalog = "my-catalog"
vcd_network = "my-network"
}
]
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `tenant_kubeconfig_path` | `string` | `"~/.kube/config"` | Kubeconfig path |
| `yaki_url` | `string` | `"https://goyaki.clastix.io"` | YAKI bootstrap URL |
| `node_pools` | `list(object)` | Required | Node pool configurations |
| `vcd_url` | `string` | Required | vCloud Director URL |
| `vcd_username` | `string` | Required | vCloud username |
| `vcd_password` | `string` | Required | vCloud password |
| `vcd_org_name` | `string` | Required | vCloud organization |
| `vcd_vdc_name` | `string` | Required | Virtual datacenter |
| `vcd_catalog_org_name` | `string` | Required | Catalog organization |
| `vcd_catalog_name` | `string` | Required | vCloud catalog containing templates |
| `vcd_allow_insecure` | `bool` | `false` | Allow unverified SSL certificates |
| `vcd_logging` | `bool` | `false` | Enable provider debug logging |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
## Outputs
- `deployment_summary` - Human-readable deployment summary
- `node_pools` - Per-pool vApp and VM details
- `cluster_info` - Cluster configuration
## Requirements
- Terraform >= 1.0
- VMware Cloud Director provider >= 3.0
- vCloud Director access with appropriate permissions
- VM template with cloud-init support

View File

@@ -0,0 +1,9 @@
# =============================================================================
# TERRAFORM BACKEND
# =============================================================================
terraform {
backend "local" {
path = "tfstate/terraform.tfstate"
}
}

View File

@@ -0,0 +1,39 @@
# This file contains the configuration for the VMware Cloud Director Kamaji tenant node pool
# Name of the tenant cluster
tenant_cluster_name = "your-tenant-cluster-name"
# kubeconfig path for accessing the tenant cluster
tenant_kubeconfig_path = "~/.kube/config"
# Pool configuration
node_pools = [
{
name = "workers" # Name of the node pool
size = 3 # Number of nodes in the pool
node_cpus = 2 # Number of CPUs for each node
node_cpu_cores = 2 # Number of CPU cores for each node
node_memory = 4096 # Memory for each node (in MB)
node_disk_size = 50 # Disk size for each node (in GB)
node_disk_storage_profile = "Standard" # Storage profile for node disks
network_name = "MyNetwork" # Network name for the node pool
network_adapter_type = "VMXNET3" # Network adapter type (VMXNET3/E1000)
ip_allocation_mode = "DHCP" # IP allocation mode (DHCP/STATIC)
template_name = "ubuntu-24.04-template" # vApp template name
},
# additional pools here
]
# SSH configuration
ssh_user = "ubuntu"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
# VMware Cloud Director infrastructure configuration
vcd_url = "https://vcloud.example.com/api"
vcd_org_name = "MyOrganization"
vcd_vdc_name = "MyVDC"
vcd_catalog_org_name = "MyOrganization"
vcd_catalog_name = "Templates"
vcd_allow_insecure = false # Set to true only for development with self-signed certificates
vcd_logging = false # Set to true for debugging

84
providers/vcloud/main.tf Normal file
View File

@@ -0,0 +1,84 @@
# =============================================================================
# PROVIDERS
# =============================================================================
# Configure the Kubernetes provider
provider "kubernetes" {
# Path to the kubeconfig file for accessing the tenant cluster
config_path = var.tenant_kubeconfig_path
}
# Configure the VMware Cloud Director Provider
provider "vcd" {
user = var.vcd_username
password = var.vcd_password
url = var.vcd_url
org = var.vcd_org_name
vdc = var.vcd_vdc_name
allow_unverified_ssl = var.vcd_allow_insecure
logging = var.vcd_logging
}
# =============================================================================
# BOOTSTRAP TOKEN
# =============================================================================
# Call the shared bootstrap-token module to generate the join command
module "bootstrap_token" {
source = "../../modules/bootstrap-token" # Updated to use shared module
kubeconfig_path = var.tenant_kubeconfig_path # Pass the kubeconfig path to the module
yaki_url = var.yaki_url # Pass the YAKI URL to the module
}
# =============================================================================
# NODE POOLS
# =============================================================================
module "vcloud_node_pools" {
source = "../../modules/vcloud-node-pool"
# Iterate over the list of node pools and call the module for each pool
for_each = { for pool in var.node_pools : pool.name => pool }
tenant_cluster_name = var.tenant_cluster_name # Name of the tenant cluster
# Pool configuration
pool_name = each.value.name
pool_size = each.value.size
# Node configuration
node_memory = each.value.node_memory
node_cpus = each.value.node_cpus
node_cpu_cores = each.value.node_cpu_cores
node_disk_size = each.value.node_disk_size
node_disk_storage_profile = each.value.node_disk_storage_profile
# Network configuration
vapp_network_name = each.value.network_name
vapp_network_adapter_type = each.value.network_adapter_type
vapp_ip_allocation_mode = each.value.ip_allocation_mode
# Template configuration
vapp_template_name = each.value.template_name
# VMware Cloud Director configuration
vcd_vdc_name = var.vcd_vdc_name
vcd_org_name = var.vcd_org_name
vcd_catalog_org_name = var.vcd_catalog_org_name
vcd_catalog_name = var.vcd_catalog_name
vcd_logging = var.vcd_logging
# SSH configuration
ssh_user = var.ssh_user
ssh_private_key_path = var.ssh_private_key_path
ssh_public_key_path = var.ssh_public_key_path
# Join command for bootstrapping nodes
runcmd = module.bootstrap_token.join_cmd
# Ensure the vcloud-node-pool module depends on the bootstrap-token module
depends_on = [
module.bootstrap_token
]
}

View File

@@ -0,0 +1,68 @@
# =============================================================================
# DEPLOYMENT SUMMARY
# =============================================================================
output "deployment_summary" {
description = "Summary of the node pool deployment"
value = <<-EOT
✅ Kamaji Node Pools Deployed Successfully
Cluster: ${var.tenant_cluster_name}
Organization: ${var.vcd_org_name}
VDC: ${var.vcd_vdc_name}
Node Pools:
${join("\n", [for pool_name, pool in module.vcloud_node_pools : format(" • %s: %d nodes (%d MB RAM)", pool_name, length(pool.virtual_machines), length(pool.virtual_machines) > 0 ? values(pool.virtual_machines)[0].memory : 0)])}
vApps Created:
${join("\n", [for pool_name, pool in module.vcloud_node_pools : format(" • %s: %s", pool_name, pool.vapp_details.name)])}
VMs Created:
${join("\n", [for pool_name, pool in module.vcloud_node_pools : join("\n", [for vm in values(pool.virtual_machines) : format(" • %s", vm.computer_name)])])}
Next Steps:
kubectl --kubeconfig ${var.tenant_kubeconfig_path} get nodes
EOT
}
# =============================================================================
# NODE DETAILS
# =============================================================================
output "node_pools" {
description = "Node pool details"
value = {
for pool_name, pool in module.vcloud_node_pools : pool_name => {
vapp_name = pool.vapp_details.name
vapp_id = pool.vapp_details.id
vms = [
for vm in values(pool.virtual_machines) : {
name = vm.name
computer_name = vm.computer_name
memory = vm.memory
cpus = vm.cpus
cpu_cores = vm.cpu_cores
network_ip = vm.network != null ? vm.network.ip : null
}
]
}
}
}
# =============================================================================
# CLUSTER INFO
# =============================================================================
output "cluster_info" {
description = "Cluster configuration"
value = {
name = var.tenant_cluster_name
kubeconfig = var.tenant_kubeconfig_path
vcd_org_name = var.vcd_org_name
vcd_vdc_name = var.vcd_vdc_name
total_pools = length(var.node_pools)
total_nodes = sum([for pool in var.node_pools : pool.size])
}
}

116
providers/vcloud/vars.tf Normal file
View File

@@ -0,0 +1,116 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
variable "tenant_kubeconfig_path" {
description = "Path to the kubeconfig file for the tenant cluster"
type = string
default = "~/.kube/config"
}
variable "yaki_url" {
description = "URL to the YAKI script for node bootstrapping"
type = string
default = "https://goyaki.clastix.io"
}
# =============================================================================
# NODE POOL CONFIGURATION
# =============================================================================
variable "node_pools" {
description = "List of vApp node pools with their configurations"
type = list(object({
name = string # Name of the node pool
size = number # Number of nodes in the pool
node_disk_size = number # Disk size for each node (in GB)
node_disk_storage_profile = string # Storage profile for node disks
node_cpus = number # Number of CPUs for each node
node_cpu_cores = number # Number of CPU cores for each node
node_memory = number # Memory for each node (in MB)
network_name = string # Network name for the node pool
network_adapter_type = string # Network adapter type
ip_allocation_mode = string # IP allocation mode (DHCP/Static)
template_name = string # vApp template name
}))
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for cloud-init and node access"
type = string
default = "ubuntu"
}
variable "ssh_public_key_path" {
description = "Path to the SSH public key for node access"
type = string
default = "~/.ssh/id_rsa.pub"
}
variable "ssh_private_key_path" {
description = "Path to the SSH private key for node provisioning"
type = string
default = "~/.ssh/id_rsa"
}
# =============================================================================
# VMWARE CLOUD DIRECTOR CONFIGURATION
# =============================================================================
variable "vcd_url" {
description = "VMware Cloud Director API endpoint URL"
type = string
}
variable "vcd_username" {
description = "VMware Cloud Director username for authentication"
type = string
sensitive = true
}
variable "vcd_password" {
description = "VMware Cloud Director password for authentication"
type = string
sensitive = true
}
variable "vcd_org_name" {
description = "VMware Cloud Director organization name"
type = string
}
variable "vcd_vdc_name" {
description = "VMware Cloud Director virtual data center name"
type = string
}
variable "vcd_catalog_org_name" {
description = "Organization name for the vCloud catalog access"
type = string
}
variable "vcd_catalog_name" {
description = "VMware Cloud Director catalog name containing templates"
type = string
}
variable "vcd_allow_insecure" {
description = "Allow unverified SSL certificates (not recommended for production)"
type = bool
default = false
}
variable "vcd_logging" {
description = "Enable debug logging for VMware Cloud Director provider"
type = bool
default = false
}
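# A minimal connection block in main.auto.tfvars might look like this
# (illustrative values; the endpoint, org, VDC, and catalog names are assumptions):
#
# vcd_url              = "https://vcd.example.com/api"
# vcd_org_name         = "my-org"
# vcd_vdc_name         = "my-vdc"
# vcd_catalog_org_name = "my-org"
# vcd_catalog_name     = "kamaji-templates"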

View File

@@ -0,0 +1,24 @@
# =============================================================================
# TERRAFORM VERSIONS
# =============================================================================
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
vcd = {
source = "vmware/vcd"
version = "~> 3.14.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
local = {
source = "hashicorp/local"
version = "~> 2.0"
}
}
}

View File

@@ -0,0 +1,85 @@
# vSphere Provider
Creates VMware vSphere virtual machines for Kamaji node pools with automatic bootstrap token generation.
## Usage
1. **Configure variables**:
```bash
cp main.auto.tfvars.sample main.auto.tfvars
# Edit main.auto.tfvars with your settings
```
2. **Set vSphere credentials**:
```bash
export TF_VAR_vsphere_username="your-username"
export TF_VAR_vsphere_password="your-password"
```
3. **Deploy**:
```bash
terraform init
terraform apply
```
## Configuration Example
```hcl
# main.auto.tfvars
tenant_cluster_name    = "my-cluster"
tenant_kubeconfig_path = "~/.kube/my-cluster.kubeconfig"
vsphere_server               = "vcenter.example.com"
vsphere_datacenter           = "Datacenter"
vsphere_compute_cluster      = "Cluster"
vsphere_datastore            = "datastore1"
vsphere_network              = "VM Network"
vsphere_content_library      = "templates"
vsphere_content_library_item = "ubuntu-24.04-template"
node_pools = [
  {
    name            = "workers"
    size            = 3
    node_cores      = 2
    node_memory     = 4096
    node_disk_size  = 20
    node_guest      = "ubuntu64Guest"
    network_cidr    = "192.168.1.0/24"
    network_gateway = "192.168.1.1"
    network_offset  = 160
  }
]
```
## Variables
| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| `tenant_cluster_name` | `string` | Required | Tenant cluster name |
| `tenant_kubeconfig_path` | `string` | `"~/.kube/config"` | Kubeconfig path |
| `yaki_url` | `string` | `"https://goyaki.clastix.io"` | YAKI bootstrap URL |
| `node_pools` | `list(object)` | Required | Node pool configurations |
| `dns_resolvers` | `list(string)` | `["8.8.8.8", "8.8.4.4"]` | DNS resolvers for the nodes |
| `vsphere_server` | `string` | Required | vCenter server address |
| `vsphere_username` | `string` | Required | vSphere username |
| `vsphere_password` | `string` | Required | vSphere password |
| `vsphere_allow_unverified_ssl` | `bool` | `false` | Allow unverified SSL certificates |
| `vsphere_datacenter` | `string` | Required | vSphere datacenter |
| `vsphere_compute_cluster` | `string` | Required | vSphere compute cluster |
| `vsphere_datastore` | `string` | Required | vSphere datastore |
| `vsphere_network` | `string` | Required | vSphere network |
| `vsphere_content_library` | `string` | Required | Content library containing VM templates |
| `vsphere_content_library_item` | `string` | Required | Content library item (VM template) |
| `vsphere_resource_pool` | `string` | `"Resources"` | Resource pool for VM placement |
| `vsphere_root_folder` | `string` | `"Kubernetes"` | Root folder for node pool folders |
| `vsphere_plus_license` | `bool` | `false` | Enable Enterprise Plus features (DRS anti-affinity) |
| `ssh_user` | `string` | `"ubuntu"` | SSH user |
| `ssh_public_key_path` | `string` | `"~/.ssh/id_rsa.pub"` | SSH public key path |
| `ssh_private_key_path` | `string` | `"~/.ssh/id_rsa"` | SSH private key path |
## Outputs
- `deployment_summary` - Human-readable deployment summary
- `node_pools` - Per-pool VM details (name, IP, UUID, resources)
- `cluster_info` - Cluster configuration summary
## Requirements
- Terraform >= 1.0
- VMware vSphere provider >= 2.0
- vCenter/ESXi access with appropriate permissions
- VM template with cloud-init support
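## Scaling
Node pools on vSphere are scaled manually: change the `size` of a pool in `main.auto.tfvars` and run `terraform apply` again; the additional VMs should be created and join the tenant cluster through the bootstrap-token module. A minimal sketch, assuming the `workers` pool from the example above is grown from 3 to 5 nodes:
```hcl
node_pools = [
  {
    name            = "workers"
    size            = 5   # previously 3
    node_cores      = 2
    node_memory     = 4096
    node_disk_size  = 20
    node_guest      = "ubuntu64Guest"
    network_cidr    = "192.168.1.0/24"
    network_gateway = "192.168.1.1"
    network_offset  = 160
  }
]
```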

View File

@@ -0,0 +1,9 @@
# =============================================================================
# TERRAFORM BACKEND
# =============================================================================
terraform {
backend "local" {
path = "tfstate/terraform.tfstate"
}
}
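# The local backend keeps state next to the configuration. For shared or team use,
# a remote backend can be swapped in instead; a minimal sketch, assuming an S3
# bucket you already manage (bucket, key, and region below are placeholders):
#
# terraform {
#   backend "s3" {
#     bucket = "my-terraform-state"
#     key    = "kamaji/node-pools/terraform.tfstate"
#     region = "eu-west-1"
#   }
# }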

View File

@@ -0,0 +1,45 @@
# This file contains the configuration for the vSphere Kamaji tenant node pool
# Name of the tenant cluster
tenant_cluster_name = "your-tenant-cluster-name"
# kubeconfig path for accessing the tenant cluster
tenant_kubeconfig_path = "~/.kube/config"
# Pool configuration
node_pools = [
{
name = "your-node-pool-name" # Name of the node pool
size = 3 # Number of nodes in the pool
node_disk_size = 24 # Disk size for each node (in GB)
node_cores = 4 # Number of CPU cores for each node
node_memory = 8192 # Memory for each node (in MB)
node_guest = "ubuntu64Guest" # Guest OS type for each node
network_cidr = "10.9.63.0/24" # Network CIDR for the node pool
network_gateway = "10.9.63.1" # Network gateway for the node pool
network_offset = 160 # Network offset for the node pool
},
# additional pools here
]
# DNS resolvers
dns_resolvers = ["8.8.8.8", "8.8.4.4"]
# SSH configuration
ssh_user = "ubuntu"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
# vSphere Enterprise Plus license (enables DRS anti-affinity rules)
vsphere_plus_license = true
# vSphere infrastructure configuration
vsphere_server = "your-vsphere-server"
vsphere_allow_unverified_ssl = true # Set to false for production with valid certificates
vsphere_datacenter = "your-vsphere-datacenter"
vsphere_compute_cluster = "your-vsphere-compute-cluster"
vsphere_datastore = "your-vsphere-datastore"
vsphere_resource_pool = "your-vsphere-resource-pool"
vsphere_network = "your-vsphere-network"
vsphere_content_library = "your-vsphere-content-library"
vsphere_content_library_item = "your-vsphere-content-library-item"

85
providers/vsphere/main.tf Normal file
View File

@@ -0,0 +1,85 @@
# =============================================================================
# PROVIDERS
# =============================================================================
# Configure the Kubernetes provider
provider "kubernetes" {
# Path to the kubeconfig file for accessing the tenant cluster
config_path = var.tenant_kubeconfig_path
}
# Configure the vSphere provider
provider "vsphere" {
user = var.vsphere_username
password = var.vsphere_password
vsphere_server = var.vsphere_server
allow_unverified_ssl = var.vsphere_allow_unverified_ssl
}
# =============================================================================
# BOOTSTRAP TOKEN
# =============================================================================
# Call the shared bootstrap-token module to generate the join command
module "bootstrap_token" {
source = "../../modules/bootstrap-token"
kubeconfig_path = var.tenant_kubeconfig_path
yaki_url = var.yaki_url
}
# =============================================================================
# NODE POOLS
# =============================================================================
# Iterate over the list of node pools and call the vsphere-node-pool module for each pool
module "vsphere_node_pools" {
source = "../../modules/vsphere-node-pool"
# One module instance per pool, keyed by pool name
for_each = { for pool in var.node_pools : pool.name => pool }
# Tenant Cluster Configuration
tenant_cluster_name = var.tenant_cluster_name
# Pool Configuration
pool_name = each.value.name
pool_size = each.value.size
# Node Configuration
node_cores = each.value.node_cores
node_memory = each.value.node_memory
node_guest = each.value.node_guest
node_disk_size = each.value.node_disk_size
# Network Configuration
network_cidr = each.value.network_cidr
network_gateway = each.value.network_gateway
network_offset = each.value.network_offset
dns_resolvers = var.dns_resolvers
# SSH Configuration
ssh_user = var.ssh_user
ssh_public_key_path = var.ssh_public_key_path
ssh_private_key_path = var.ssh_private_key_path
# vSphere Configuration
vsphere_plus_license = var.vsphere_plus_license
vsphere_username = var.vsphere_username
vsphere_password = var.vsphere_password
vsphere_server = var.vsphere_server
vsphere_datacenter = var.vsphere_datacenter
vsphere_compute_cluster = var.vsphere_compute_cluster
vsphere_datastore = var.vsphere_datastore
vsphere_resource_pool = var.vsphere_resource_pool
vsphere_network = var.vsphere_network
vsphere_content_library = var.vsphere_content_library
vsphere_content_library_item = var.vsphere_content_library_item
vsphere_root_folder = var.vsphere_root_folder
# Bootstrap Configuration
runcmd = module.bootstrap_token.join_cmd
# Dependencies
depends_on = [module.bootstrap_token]
}

View File

@@ -0,0 +1,63 @@
# =============================================================================
# DEPLOYMENT SUMMARY
# =============================================================================
output "deployment_summary" {
description = "Summary of the node pool deployment"
value = <<-EOT
✅ Kamaji Node Pools Deployed Successfully
Cluster: ${var.tenant_cluster_name}
vCenter: ${var.vsphere_server}
Datacenter: ${var.vsphere_datacenter}
Node Pools:
${join("\n", [for pool_name, pool in module.vsphere_node_pools : format(" • %s: %d nodes (%d MB RAM)", pool_name, length(pool.node_details), length(pool.node_details) > 0 ? values(pool.node_details)[0].memory_mb : 0)])}
VMs Created:
${join("\n", [for pool_name, pool in module.vsphere_node_pools : join("\n", [for vm in values(pool.node_details) : format(" • %s: %s", vm.name, vm.ip_address)])])}
Next Steps:
kubectl --kubeconfig ${var.tenant_kubeconfig_path} get nodes
EOT
}
# =============================================================================
# NODE DETAILS
# =============================================================================
output "node_pools" {
description = "Node pool details"
value = {
for pool_name, pool in module.vsphere_node_pools : pool_name => {
vms = [
for vm in values(pool.node_details) : {
name = vm.name
ip_address = vm.ip_address
uuid = vm.uuid
memory_mb = vm.memory_mb
cpu_cores = vm.cpu_cores
power_state = vm.power_state
}
]
}
}
}
# =============================================================================
# CLUSTER INFO
# =============================================================================
output "cluster_info" {
description = "Cluster configuration"
value = {
name = var.tenant_cluster_name
kubeconfig = var.tenant_kubeconfig_path
vsphere_server = var.vsphere_server
vsphere_datacenter = var.vsphere_datacenter
total_pools = length(var.node_pools)
total_nodes = sum([for pool in var.node_pools : pool.size])
}
}

146
providers/vsphere/vars.tf Normal file
View File

@@ -0,0 +1,146 @@
# =============================================================================
# CLUSTER CONFIGURATION
# =============================================================================
variable "tenant_cluster_name" {
description = "Name of the tenant cluster"
type = string
}
variable "tenant_kubeconfig_path" {
description = "Path to the kubeconfig file for the tenant cluster"
type = string
default = "~/.kube/config"
}
variable "yaki_url" {
description = "URL to the YAKI script for node bootstrapping"
type = string
default = "https://goyaki.clastix.io"
}
# =============================================================================
# NODE POOL CONFIGURATION
# =============================================================================
variable "node_pools" {
description = "List of vSphere node pools with their configurations"
type = list(object({
name = string # Name of the node pool
size = number # Number of nodes in the pool
node_disk_size = number # Disk size for each node (in GB)
node_cores = number # Number of CPU cores for each node
node_memory = number # Memory for each node (in MB)
node_guest = string # Guest OS type for each node
network_cidr = string # Network CIDR for the node pool
network_gateway = string # Network gateway for the node pool
network_offset = number # Network offset for the node pool
}))
}
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
variable "dns_resolvers" {
description = "List of DNS resolver addresses for the nodes"
type = list(string)
default = ["8.8.8.8", "8.8.4.4"]
}
# =============================================================================
# SSH CONFIGURATION
# =============================================================================
variable "ssh_user" {
description = "SSH user for cloud-init and node access"
type = string
default = "ubuntu"
}
variable "ssh_public_key_path" {
description = "Path to the SSH public key for node access"
type = string
default = "~/.ssh/id_rsa.pub"
}
variable "ssh_private_key_path" {
description = "Path to the SSH private key for node provisioning"
type = string
default = "~/.ssh/id_rsa"
}
# =============================================================================
# VSPHERE CONFIGURATION
# =============================================================================
variable "vsphere_server" {
description = "vSphere server address (vCenter or ESXi host)"
type = string
}
variable "vsphere_username" {
description = "vSphere username for authentication"
type = string
sensitive = true
}
variable "vsphere_password" {
description = "vSphere password for authentication"
type = string
sensitive = true
}
variable "vsphere_allow_unverified_ssl" {
description = "Allow unverified SSL certificates (not recommended for production)"
type = bool
default = false
}
variable "vsphere_datacenter" {
description = "vSphere datacenter name where resources will be created"
type = string
}
variable "vsphere_compute_cluster" {
description = "vSphere compute cluster name for VM placement"
type = string
}
variable "vsphere_datastore" {
description = "vSphere datastore name for VM storage"
type = string
}
variable "vsphere_network" {
description = "vSphere network name for VM network connectivity"
type = string
}
variable "vsphere_content_library" {
description = "vSphere content library name containing VM templates"
type = string
}
variable "vsphere_content_library_item" {
description = "vSphere content library item name (VM template)"
type = string
}
variable "vsphere_resource_pool" {
description = "vSphere resource pool name for VM resource allocation"
type = string
default = "Resources"
}
variable "vsphere_root_folder" {
description = "Root folder path where node pool folders will be created"
type = string
default = "Kubernetes"
}
variable "vsphere_plus_license" {
description = "Enable vSphere Enterprise Plus features (DRS anti-affinity rules)"
type = bool
default = false
}

View File

@@ -0,0 +1,24 @@
# =============================================================================
# TERRAFORM CONFIGURATION
# =============================================================================
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.35.0"
}
vsphere = {
source = "vmware/vsphere"
version = "~> 2.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
local = {
source = "hashicorp/local"
version = "~> 2.0"
}
}
}