53 Commits
v0.7.0 ... main

Author SHA1 Message Date
github-actions[bot]
66c8f6e1d1 chore: release v0.13.0
2026-01-27 14:45:59 +07:00
Serge Logvinov
ffec772a85 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-27 14:39:49 +07:00
Serge Logvinov
88fad844c7 fix: service account name
Redefine the default service account name using environment variables.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-27 14:25:15 +07:00
Serge Logvinov
344118960d docs: privacy policy
Add privacy policy to readme file.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-05 11:45:41 +07:00
Serge Logvinov
704aacce5a feat: force label update
This feature allows VMs to be migrated within the cluster and automatically updates the topology labels to reflect the new location.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-05 11:23:01 +07:00
Serge Logvinov
ba7a61181a fix(chart): role binding
Fix service account name reference in role bindings

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-05 09:47:29 +07:00
Serge Logvinov
96e3332893 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-04 13:27:49 +07:00
Serge Logvinov
5fded7f1c8 docs: extra permission
Update installation instructions and add terraform example.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-04 08:12:49 +07:00
github-actions[bot]
db3781fd15 chore: release v0.12.3
Release v0.12.3

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-03 11:03:37 +07:00
Serge Logvinov
8923f5d852 fix: reduce api calls
And bump a new release.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-03 11:00:25 +07:00
Serge Logvinov
34d39261b2 refactor: optimization
This helps reduce unnecessary API calls.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2026-01-03 10:49:39 +07:00
Serge Logvinov
62d0bb89e2 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-12-23 17:58:46 +07:00
Serge Logvinov
71174a0105 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-12-02 09:59:02 +07:00
github-actions[bot]
4384e5146f chore: release v0.12.2
Release v0.12.2

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-14 10:11:44 +07:00
Serge Logvinov
66d2e70230 fix: ha-groups
Proxmox 9 uses HA rules instead of HA groups.
Do not treat it as an error if the HA group (used in Proxmox 8) cannot be retrieved.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-14 10:07:43 +07:00
github-actions[bot]
1356bd871f chore: release v0.12.1
Release v0.12.1

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 05:12:38 +07:00
Serge Logvinov
3983d5ba10 fix: helm chart release
Regenerate helm chart version

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 05:09:54 +07:00
Serge Logvinov
63418b0117 fix: release please
Build release manually.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 05:05:15 +07:00
github-actions[bot]
c9f619ff96 chore: release v0.12.0
2025-11-12 05:02:10 +07:00
Serge Logvinov
fced446f46 fix: release please
Get version from file hack/release-please-manifest.json

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 04:58:35 +07:00
Serge Logvinov
a33ea6ead7 feat: add release-please
Make releases with release-please

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 04:51:26 +07:00
Serge Logvinov
706faa8d08 feat: enhance ha-group handling
Add the group.topology.proxmox.sinextra.dev/ label to improve support for node selector and affinity rules.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-12 04:41:46 +07:00
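In practice the new group label can drive scheduling directly. A minimal node-affinity sketch, assuming the key is built from the group.topology.proxmox.sinextra.dev/ prefix (the exact key suffix below is illustrative, not taken from the source):

```yaml
# Illustrative only: the real key suffix depends on your HA group names.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: group.topology.proxmox.sinextra.dev/ha-web   # hypothetical key
              operator: Exists
```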
Serge Logvinov
0a31716c17 fix: handle inaccessible nodes
Enhanced instance existence checks to handle inaccessible Proxmox nodes.
Improved test cases for instance existence and metadata retrieval.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-11 19:17:08 +07:00
Serge Logvinov
dac1775cf2 fix(chart): provider value typo
Fix a typo in values.yaml related to the provider feature option.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-10 13:29:15 +07:00
Serge Logvinov
01e3ce854c chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-11-10 13:01:34 +07:00
rojanDinc
d2181a88f6 fix: log error when instance metadata retrieval fails
Added error logging in the InstanceMetadata function to capture failures
when retrieving instance information, enhancing debugging capabilities.

Also includes:
- Added error check for metadata retrieval
- Added unit tests for error handling
- Updated to use errors package for error equality

Signed-off-by: rojanDinc <rojand94@gmail.com>
Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-10-24 05:40:29 +07:00
Serge Logvinov
0bc8801146 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-10-02 09:22:45 +07:00
Serge Logvinov
0cf1a40802 refactor: change proxmox api go module
New Proxmox API modules:
* luthermonson/go-proxmox
* sergelogvinov/go-proxmox

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-09 20:09:01 +07:00
Serge Logvinov
0cfad86361 docs: proxmox ha-groups
Update documentation about using Proxmox HA group as a zone label.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-08 19:12:34 +07:00
Serge Logvinov
c8be20eb8d chore: release v0.11.0
Release v0.11.0

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-08 14:17:05 +07:00
Serge Logvinov
27c3e627c4 feat: use proxmox ha-group as zone name
This feature enables live migration without changing any Kubernetes labels.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-08 14:09:19 +07:00
Serge Logvinov
229be1432a feat: add extra labels
Add labels:
* topology.proxmox.sinextra.dev/node
* topology.proxmox.sinextra.dev/region

These labels represent the default topology labels.
They make it possible to use different topologies on the Proxmox side.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-08 11:47:00 +07:00
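These default topology labels can then be used in a pod spec; a short sketch (the node and region values are illustrative):

```yaml
# Pin a workload to a specific Proxmox region/node via the new labels.
nodeSelector:
  topology.proxmox.sinextra.dev/region: cluster-1   # illustrative value
  topology.proxmox.sinextra.dev/node: pve-node-1    # illustrative value
```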
Serge Logvinov
b77455af4d refactor: instance metadata
Store all important information in instanceInfo struct.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-08 10:34:45 +07:00
Serge Logvinov
2066aa885e chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-09-02 16:59:25 +07:00
3deep5me
8ef4bcea69 feat: add config options token_id_file & token_secret_file
Adds additional config options to read proxmox-cluster credentials from separate files.

Signed-off-by: 3deep5me <manuel.karim5@gmail.com>
2025-08-31 19:28:09 +07:00
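A sketch of how the new options might look in the cloud config, assuming the `*_file` keys sit alongside the existing `token_id`/`token_secret` fields (paths and values are illustrative):

```yaml
clusters:
  - url: https://cluster.example.com:8006/api2/json
    insecure: false
    # Read credentials from mounted files (e.g. a Kubernetes Secret volume)
    # instead of embedding them inline:
    token_id_file: /etc/proxmox/token-id
    token_secret_file: /etc/proxmox/token-secret
    region: cluster-1
```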
Daniel J. Holmes (jaitaiwan)
144b1c74e6 feat: add named errors to cloud config
Changes errors created by cloud config to be standardized so that any
other packages relying on the cloud config can check if the error is of
the same "type".

Signed-off-by: Daniel J. Holmes (jaitaiwan) <dan@jaitaiwan.dev>
2025-08-02 13:05:00 +07:00
Serge Logvinov
1ce4ade1c6 chore: release v0.10.0
Release v0.10.0

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-08-01 14:07:38 +07:00
Daniel J. Holmes (jaitaiwan)
e1b8e9b419 feat: add new network addressing features
Changes:
- Increase test coverage of config
- Add networking feature config
- Add ability to find node ip addresses via qemu and specify ips that
  should be treated as ExternalIPAddresses

Signed-off-by: Daniel J. Holmes (jaitaiwan) <dan@jaitaiwan.dev>
Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-07-25 12:35:52 +07:00
Serge Logvinov
a8183c8df4 refactor: split cloud config module
We will split the cloud configuration into two parts:
  the original cloud controller configuration and a separate function for working with multiple Proxmox clusters.

Signed-off-by: Daniel J. Holmes (jaitaiwan) <dan@jaitaiwan.dev>
Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-07-20 16:04:19 +07:00
Serge Logvinov
60f953d1da chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-07-20 13:26:17 +07:00
Serge Logvinov
2ebbf7a9d5 fix: makefile conformance stage
Add make conformance command.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-06-13 17:06:11 +07:00
Daniel J. Holmes (jaitaiwan)
628e7d6500 chore: clearer error message
Error now clearly indicates the reasoning for the error message.
Previously the error message suggested a kubelet flag was not set even
when it may have been.

Signed-off-by: Daniel J. Holmes (jaitaiwan) <dan@jaitaiwan.dev>
2025-06-13 17:02:19 +07:00
Serge Logvinov
7aba46727d chore: release v0.9.0
Release v0.9.0

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-06-05 11:47:30 +07:00
Serge Logvinov
e664b24029 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-06-05 11:40:38 +07:00
Serge Logvinov
efb753c9de fix: cluster vm list
Fix the output to show the current number of VMs.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-05-06 14:32:19 +07:00
Serge Logvinov
5a645a25c3 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-05-05 21:48:48 +07:00
Serge Logvinov
2e35df2db0 chore: release v0.8.0
Release v0.8.0

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-04-12 15:07:35 +07:00
Serge Logvinov
646d77633f feat(chart): extra envs values
Add extraEnvs option in helm chart.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-04-12 14:59:25 +07:00
Serge Logvinov
19e1f44996 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-04-12 14:33:05 +07:00
Serge Logvinov
0f0374c2eb feat: custom instance type
Now we can set a custom instance type using the smbios1[sku] argument.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-02-13 18:55:52 +02:00
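On the Proxmox side the SKU field can be set with `qm`; a hedged sketch (VM ID 100 and the SKU string are illustrative, and the value is base64-encoded here via the `base64=1` flag, which Proxmox supports for smbios1 fields):

```shell
# Build the smbios1 argument; the CCM would then report the decoded SKU as the
# node's instance type (illustrative workflow, not the CCM's own code).
SKU=$(printf '%s' '2vcpu-4gb' | base64)
echo "qm set 100 --smbios1 sku=${SKU},base64=1"
# prints: qm set 100 --smbios1 sku=MnZjcHUtNGdi,base64=1
```

Run the printed command on the PVE host (or via the API); exact field handling may differ between Proxmox versions.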
Serge Logvinov
3a34fb960a fix: find node by name
We will find the node by name more precisely.
Check the UUID and VM name to determine the VM ID.

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-02-13 16:59:03 +02:00
Serge Logvinov
8a2f51844c chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-02-03 10:24:44 +02:00
Serge Logvinov
ca452ad040 chore: bump deps
Updated dependencies

Signed-off-by: Serge Logvinov <serge.logvinov@sinextra.dev>
2025-01-20 14:43:40 +02:00
64 changed files with 4064 additions and 1555 deletions


@@ -9,10 +9,9 @@ policies:
body:
required: true
dco: true
gpg: false
spellcheck:
locale: US
-maximumOfOneCommit: false
+maximumOfOneCommit: true
conventional:
types:
- build


@@ -9,3 +9,8 @@ assignees: ""
## Feature Request
### Description
### Community Note
* Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request


@@ -4,7 +4,7 @@
## Note to the Contributor
We encourage contributors to go through a proposal process to discuss major changes.
-Before your PR is allowed to run through CI, the maintainers of Talos CCM will first have to approve the PR.
+Before your PR is allowed to run through CI, the maintainers of Proxmox CCM will first have to approve the PR.
-->
## What? (description)


@@ -23,14 +23,14 @@ jobs:
id-token: write
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
with:
ref: main
- name: Unshallow
run: git fetch --prune --unshallow
- name: Install Cosign
-uses: sigstore/cosign-installer@v3.7.0
+uses: sigstore/cosign-installer@v4.0.0
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:


@@ -20,18 +20,18 @@ jobs:
contents: read
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
- name: Set up go
timeout-minutes: 5
-uses: actions/setup-go@v5
+uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
- name: Lint
-uses: golangci/golangci-lint-action@v6
+uses: golangci/golangci-lint-action@v9
with:
-version: v1.62.2
+version: v2.8.0
args: --timeout=5m --config=.golangci.yml
- name: Unit
run: make unit


@@ -14,13 +14,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
- name: Unshallow
run: git fetch --prune --unshallow
- name: Install chart-testing tools
id: lint
-uses: helm/chart-testing-action@v2.6.1
+uses: helm/chart-testing-action@v2.8.0
- name: Run helm chart linter
run: ct --config hack/ct.yml lint


@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
@@ -21,3 +21,5 @@ jobs:
- name: Conform action
uses: talos-systems/conform@v0.1.0-alpha.30
with:
token: ${{ secrets.GITHUB_TOKEN }}


@@ -18,7 +18,7 @@ jobs:
id-token: write
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -27,7 +27,7 @@ jobs:
with:
version: v3.13.3
- name: Install Cosign
-uses: sigstore/cosign-installer@v3.7.0
+uses: sigstore/cosign-installer@v4.0.0
- name: Github registry login
uses: docker/login-action@v3

.github/workflows/release-please.yml (new file, 22 lines)

@@ -0,0 +1,22 @@
name: Release please
on:
workflow_dispatch: {}
push:
branches:
- main
jobs:
release-please:
runs-on: ubuntu-24.04
permissions:
contents: write
pull-requests: write
steps:
- name: Create release PR
id: release
uses: googleapis/release-please-action@v4
with:
config-file: hack/release-please-config.json
manifest-file: hack/release-please-manifest.json


@@ -15,18 +15,18 @@ jobs:
packages: write
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
- name: Unshallow
run: git fetch --prune --unshallow
- name: Release version
shell: bash
id: release
-run: |
-echo "TAG=v${GITHUB_HEAD_REF:8}" >> "$GITHUB_ENV"
+if: startsWith(github.head_ref, 'release-please')
+run: jq -r '"TAG=v"+.[]' hack/release-please-manifest.json >> "$GITHUB_ENV"
- name: Helm docs
uses: gabe565/setup-helm-docs-action@v1
with:
version: v1.11.3
- name: Generate
run: make docs
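The jq invocation in this workflow derives the tag from the release-please manifest. A sketch of the expression with an illustrative manifest body (the real file lives at hack/release-please-manifest.json):

```shell
# release-please manifests map a package path to its current version;
# '.[]' iterates the object's values, '+' concatenates strings.
printf '{".": "0.13.0"}' | jq -r '"TAG=v"+.[]'
# prints: TAG=v0.13.0
```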


@@ -1,6 +1,7 @@
name: Release
on:
+workflow_dispatch: {}
push:
tags:
- 'v*'
@@ -16,12 +17,12 @@ jobs:
id-token: write
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v6
- name: Unshallow
run: git fetch --prune --unshallow
- name: Install Cosign
-uses: sigstore/cosign-installer@v3.7.0
+uses: sigstore/cosign-installer@v4.0.0
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:


@@ -12,7 +12,7 @@ jobs:
issues: write
pull-requests: write
steps:
-- uses: actions/stale@v9
+- uses: actions/stale@v10
with:
stale-issue-message: This issue is stale because it has been open 180 days with no activity. Remove stale label or comment or this will be closed in 14 days.
close-issue-message: This issue was closed because it has been stalled for 14 days with no activity.


@@ -1,113 +1,22 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
version: "2"
run:
# default concurrency is a available CPU number
# concurrency: 4
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
# list of build tags, all linters use it. Default is empty list.
build-tags:
- integration
- integration_api
- integration_cli
- integration_k8s
- integration_provision
# output configuration options
issues-exit-code: 1
tests: true
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
formats:
- format: line-number
text:
path: stdout
print-issued-lines: true
print-linter-name: true
uniq-by-line: true
sort-results: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: true
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: true
govet: {}
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 30
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 3
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 200
# tab width in spaces. Default to 1.
tab-width: 1
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
nolintlint:
allow-unused: false
# allow-leading-space: false
allow-no-explanation: []
require-explanation: false
require-specific: true
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
gci:
sections:
- standard # Captures all standard packages if they do not match another section.
- default # Contains all imports that could not be matched to another section type.
- prefix(github.com/sergelogvinov) # Groups all imports with the specified Prefix.
- prefix(k8s.io) # Groups all imports with the specified Prefix.
cyclop:
# the maximal code complexity to report
max-complexity: 30
gomoddirectives:
replace-local: true
replace-allow-list: []
retract-allow-no-explanation: false
exclude-forbidden: true
print-linter-name: true
print-issued-lines: true
colors: false
linters:
enable-all: true
default: all
disable:
- depguard
- errorlint
@@ -122,63 +31,117 @@ linters:
- godox
- godot
- gosec
- mnd
- ireturn # we return interfaces
- inamedparam
- ireturn
- maintidx
- mnd
- musttag
- nakedret
- nestif
- nilnil # we return "nil, nil"
- nonamedreturns
- nilnil
- nolintlint
- nonamedreturns
- paralleltest
- promlinter # https://github.com/golangci/golangci-lint/issues/2222
- tagliatelle # we have many different conventions
- tagalign # too annoying
- perfsprint
- promlinter
- protogetter
- recvcheck
- tagalign
- tagliatelle
- testifylint
- testpackage
- thelper
- typecheck
- varnamelen # too annoying
- varnamelen
- wrapcheck
- perfsprint
- exportloopref
- wsl
# temporarily disabled linters
- copyloopvar
- intrange
# abandoned linters for which golangci shows the warning that the repo is archived by the owner
- perfsprint
disable-all: false
fast: false
- noinlineerr
settings:
importas:
alias:
- pkg: github.com/sergelogvinov/proxmox-cloud-controller/manager/metrics
alias: metrics
- pkg: github.com/sergelogvinov/proxmox-cloud-controller/proxmoxpool
alias: proxmoxpool
- pkg: github.com/sergelogvinov/proxmox-cloud-controller/proxmox
alias: proxmox
- pkg: github.com/sergelogvinov/proxmox-cloud-controller/provider
alias: provider
- pkg: github.com/sergelogvinov/proxmox-cloud-controller/config
alias: providerconfig
wsl_v5:
allow-first-in-block: true
allow-whole-block: false
branch-max-lines: 2
disable:
- err
cyclop:
max-complexity: 30
dupl:
threshold: 150
errcheck:
check-type-assertions: false
check-blank: true
exclude-functions:
- fmt.Fprintln
- fmt.Fprintf
- fmt.Fprint
goconst:
min-len: 3
min-occurrences: 3
gocyclo:
min-complexity: 30
gomoddirectives:
replace-local: true
replace-allow-list: []
retract-allow-no-explanation: false
exclude-forbidden: true
lll:
line-length: 200
tab-width: 1
misspell:
locale: US
nolintlint:
require-explanation: false
require-specific: true
allow-unused: false
prealloc:
simple: true
range-loops: true
for-loops: false
staticcheck:
checks:
[
"all",
"-ST1000",
"-ST1003",
"-ST1016",
"-ST1020",
"-ST1021",
"-ST1022",
"-QF1001",
"-QF1008",
]
unused:
local-variables-are-used: false
issues:
# List of regexps of issue texts to exclude, empty list by default.
# But independently from this option we use default exclude patterns,
# it can be disabled by `exclude-use-default: false`. To list all
# excluded by default patterns execute `golangci-lint run --help`
exclude:
- package comment should be of the form "Package services ..." # revive
- ^ST1000 # ST1000: at least one file in a package should have a package comment (stylecheck)
exclude-rules: []
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
# Default value for this option is true.
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
# Show only new issues: if there are unstaged changes or untracked files,
# only those changes are analyzed, else only changes in HEAD~ are analyzed.
# It's a super-useful option for integration of golangci-lint into existing
# large codebase. It's not practical to fix all existing issues at the moment
# of integration: much better don't allow issues in new code.
# Default is false.
uniq-by-line: true
new: false
formatters:
enable:
- gci
- gofmt
- gofumpt
- goimports
settings:
gci:
sections:
- standard # Captures all standard packages if they do not match another section.
- default # Contains all imports that could not be matched to another section type.
- prefix(github.com/sergelogvinov) # Groups all imports with the specified Prefix.
- prefix(k8s.io) # Groups all imports with the specified Prefix.


@@ -1,6 +1,139 @@
<a name="v0.11.0"></a>
## [0.13.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.12.3...v0.13.0) (2026-01-27)
### Features
* force label update ([704aacc](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/704aacce5a776257201bb1037e909339062b2151))
### Bug Fixes
* **chart:** role binding ([ba7a611](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/ba7a61181add80a838e3d010feab06a304ef98f9))
* service account name ([88fad84](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/88fad844c72271c40dccd67b46dead69fb4f603c))
## [0.12.3](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.12.2...v0.12.3) (2026-01-03)
### Bug Fixes
* reduce api calls ([8923f5d](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/8923f5d852c8e376ac7081953158f597a7e6b930))
## [0.12.2](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.12.1...v0.12.2) (2025-11-14)
### Bug Fixes
* ha-groups ([66d2e70](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/66d2e7023010f517e422a3b56519fb9600afe9dd))
## [0.12.1](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.12.0...v0.12.1) (2025-11-11)
### Bug Fixes
* helm chart release ([3983d5b](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/3983d5ba102afcaa6ec0ad91fdc350c0b2b0e4d3))
* release please ([63418b0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/63418b011763fed9620196430bbb9791308bdc30))
## [0.12.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.11.0...v0.12.0) (2025-11-11)
### Features
* add release-please ([a33ea6e](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/a33ea6ead7ea03fc0e2addd2ff74afb5a87936bb))
* enhance ha-group handling ([706faa8](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/706faa8d088bb0467770d364b374f060398e9b25))
### Bug Fixes
* **chart:** provider value typo ([dac1775](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/dac1775cf2abcf2e8fb2b597a9672bd1c63d26a7))
* handle inaccessible nodes ([0a31716](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/0a31716c17dd601fbe36025186de86e2d47e82cd))
* log error when instance metadata retrieval fails ([d2181a8](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/d2181a88f6b905544b6a2c9bd4e70e0bbf1da690))
* release please ([fced446](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/commit/fced446f46cec5c0d8091ec918d4f4a2c1e6ad0e))
## [v0.11.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.10.0...v0.11.0) (2025-09-08)
Welcome to the v0.11.0 release of Kubernetes cloud controller manager for Proxmox!
### Features
- use proxmox ha-group as zone name
- add extra labels
- add config options token_id_file & token_secret_file
- add named errors to cloud config
### Changelog
* 27c3e62 feat: use proxmox ha-group as zone name
* 229be14 feat: add extra labels
* b77455a refactor: instance metadata
* 2066aa8 chore: bump deps
* 8ef4bce feat: add config options token_id_file & token_secret_file
* 144b1c7 feat: add named errors to cloud config
<a name="v0.10.0"></a>
## [v0.10.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.9.0...v0.10.0) (2025-08-01)
Welcome to the v0.10.0 release of Kubernetes cloud controller manager for Proxmox!
### Bug Fixes
- makefile conformance stage
### Features
- add new network addressing features
### Changelog
* 1ce4ade chore: release v0.10.0
* e1b8e9b feat: add new network addressing features
* a8183c8 refactor: split cloud config module
* 60f953d chore: bump deps
* 2ebbf7a fix: makefile conformance stage
* 628e7d6 chore: clearer error message
<a name="v0.9.0"></a>
## [v0.9.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.8.0...v0.9.0) (2025-06-05)
Welcome to the v0.9.0 release of Kubernetes cloud controller manager for Proxmox!
### Bug Fixes
- cluster vm list
### Changelog
* 7aba467 chore: release v0.9.0
* e664b24 chore: bump deps
* efb753c fix: cluster vm list
* 5a645a2 chore: bump deps
<a name="v0.8.0"></a>
## [v0.8.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.7.0...v0.8.0) (2025-04-12)
Welcome to the v0.8.0 release of Kubernetes cloud controller manager for Proxmox!
### Bug Fixes
- find node by name
### Features
- custom instance type
- **chart:** extra envs values
### Changelog
* 2e35df2 chore: release v0.8.0
* 646d776 feat(chart): extra envs values
* 19e1f44 chore: bump deps
* 0f0374c feat: custom instance type
* 3a34fb9 fix: find node by name
* 8a2f518 chore: bump deps
* ca452ad chore: bump deps
<a name="v0.7.0"></a>
-## [v0.7.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.6.0...v0.7.0) (2025-01-02)
+## [v0.7.0](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/compare/v0.6.0...v0.7.0) (2025-01-08)
Welcome to the v0.7.0 release of Kubernetes cloud controller manager for Proxmox!
@@ -10,6 +143,7 @@ Welcome to the v0.7.0 release of Kubernetes cloud controller manager for Proxmox
### Changelog
* bb868bc chore: release v0.7.0
* 956a30a feat: enable support for capmox This makes ccm compatible with cluster api and cluster api provider proxmox (capmox)
<a name="v0.6.0"></a>

CODE_OF_CONDUCT.md (new file, 76 lines)

@@ -0,0 +1,76 @@
## Code of Conduct
### Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
### Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
### Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
### Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
### Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [INSERT EMAIL ADDRESS]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
### Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -1,5 +1,15 @@
# Contributing
## Pull Requests
All PRs require a single commit.
Having one commit in a Pull Request is very important for several reasons:
* A single commit per PR keeps the git history clean and readable.
It helps reviewers and future developers understand the change as one atomic unit of work, instead of sifting through many intermediate or redundant commits.
* One commit is easier to cherry-pick into another branch or to track in changelogs.
* Squashing into one meaningful commit ensures the final PR only contains what matters.
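A common way to squash a branch into one commit is `git reset --soft`. The snippet below demonstrates it in a throwaway repository (branch names and commit messages are illustrative); on a real branch you would only run the two commands marked `(*)`:

```shell
# Demo in a temporary repository so it is safe to run anywhere.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
gitc() { git -c user.name=dev -c user.email=dev@example.com "$@"; }

gitc commit -q --allow-empty -m "feat: initial work"
gitc commit -q --allow-empty -m "wip: address review"
gitc commit -q --allow-empty -m "wip: fix typo"

# Squash the last 2 commits into the working tree, then re-commit as one
# signed-off commit (the -s flag adds the DCO sign-off).
gitc reset -q --soft HEAD~2                                   # (*)
gitc commit -q --allow-empty -s -m "feat: one atomic change"  # (*)

git log --oneline
```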
## Developer Certificate of Origin
All commits require a [DCO](https://developercertificate.org/) sign-off.


@@ -1,12 +1,12 @@
# syntax = docker/dockerfile:1.12
# syntax = docker/dockerfile:1.18
########################################
FROM --platform=${BUILDPLATFORM} golang:1.23.4-alpine AS builder
FROM --platform=${BUILDPLATFORM} golang:1.25.6-alpine AS builder
RUN apk update && apk add --no-cache make
ENV GO111MODULE=on
WORKDIR /src
COPY go.mod go.sum /src
COPY ["go.mod", "go.sum", "/src/"]
RUN go mod download && go mod verify
COPY . .
@@ -22,7 +22,7 @@ LABEL org.opencontainers.image.source="https://github.com/sergelogvinov/proxmox-
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.description="Proxmox VE CCM for Kubernetes"
COPY --from=gcr.io/distroless/static-debian12:nonroot . .
COPY --from=gcr.io/distroless/static-debian13:nonroot . .
ARG TARGETARCH
COPY --from=builder /src/bin/proxmox-cloud-controller-manager-${TARGETARCH} /bin/proxmox-cloud-controller-manager


@@ -40,8 +40,8 @@ To build this project, you must have the following installed:
- git
- make
- golang 1.20+
- golangci-lint
- golang 1.24+
- golangci-lint 2.2.0+
endef
@@ -77,10 +77,31 @@ run: build ## Run
lint: ## Lint Code
golangci-lint run --config .golangci.yml
.PHONY: lint-fix
lint-fix: ## Fix Lint Issues
golangci-lint run --fix --config .golangci.yml
.PHONY: unit
unit: ## Unit Tests
go test -tags=unit $(shell go list ./...) $(TESTARGS)
.PHONY: test
test: lint unit ## Run all tests
.PHONY: licenses
licenses:
go-licenses check ./... --disallowed_types=forbidden,restricted,reciprocal,unknown
.PHONY: conformance
conformance: ## Conformance
docker run --rm -it -v $(PWD):/src -w /src ghcr.io/siderolabs/conform:v0.1.0-alpha.30 enforce
############
.PHONY: labels
labels:
@kubectl get nodes -o json | jq '.items[].metadata.labels'
############
.PHONY: helm-unit


@@ -50,6 +50,13 @@ metadata:
topology.kubernetes.io/region: cluster-1
# Proxmox hypervisor host machine name
topology.kubernetes.io/zone: pve-node-1
# Proxmox specific labels
topology.proxmox.sinextra.dev/region: cluster-1
topology.proxmox.sinextra.dev/zone: pve-node-1
# HA group labels - the same idea as node-role
group.topology.proxmox.sinextra.dev/${HAGroup}: ""
name: worker-1
spec:
...
@@ -88,6 +95,15 @@ See [FAQ](docs/faq.md) for answers to common questions.
Contributions are welcomed and appreciated!
See [Contributing](CONTRIBUTING.md) for our guidelines.
If this project is useful to you, please consider starring the [repository](https://github.com/sergelogvinov/proxmox-cloud-controller-manager).
## Privacy Policy
This project does not collect or send any metrics or telemetry data.
You can build the images yourself and store them in your private registry; see the [Makefile](Makefile) for details.
To provide feedback or report an issue, please use the [GitHub Issues](https://github.com/sergelogvinov/proxmox-cloud-controller-manager/issues).
## License
Licensed under the Apache License, Version 2.0 (the "License");
@@ -101,3 +117,7 @@ distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
`Proxmox®` is a registered trademark of [Proxmox Server Solutions GmbH](https://www.proxmox.com/en/about/company).


@@ -16,9 +16,9 @@ maintainers:
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.11
version: 0.2.25
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v0.7.0
appVersion: v0.12.3


@@ -1,6 +1,6 @@
# proxmox-cloud-controller-manager
![Version: 0.2.11](https://img.shields.io/badge/Version-0.2.11-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.7.0](https://img.shields.io/badge/AppVersion-v0.7.0-informational?style=flat-square)
![Version: 0.2.23](https://img.shields.io/badge/Version-0.2.23-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.12.3](https://img.shields.io/badge/AppVersion-v0.12.3-informational?style=flat-square)
Cloud Controller Manager plugin for Proxmox
@@ -30,7 +30,7 @@ You need to set `--cloud-provider=external` in the kubelet argument for all node
```shell
# Create role CCM
pveum role add CCM -privs "VM.Audit"
pveum role add CCM -privs "VM.Audit Sys.Audit"
# Create user and grant permissions
pveum user add kubernetes@pve
pveum aclmod / -user kubernetes@pve -role CCM
@@ -68,6 +68,56 @@ tolerations:
effect: NoSchedule
```
## Example for credentials from separate Secrets
```yaml
# helm-values.yaml
config:
clusters:
- url: https://cluster-api-1.example.com:8006/api2/json
insecure: false
token_id_file: /run/secrets/cluster-1/token_id
token_secret_file: /run/secrets/cluster-1/token_secret
region: cluster-1
- url: https://cluster-api-2.example.com:8006/api2/json
insecure: false
token_id_file: /run/secrets/cluster-2/token_id
token_secret_file: /run/secrets/cluster-2/token_secret
region: cluster-2
extraVolumes:
- name: credentials-cluster-1
secret:
secretName: proxmox-credentials-cluster-1
- name: credentials-cluster-2
secret:
secretName: proxmox-credentials-cluster-2
extraVolumeMounts:
- name: credentials-cluster-1
readOnly: true
mountPath: "/run/secrets/cluster-1"
- name: credentials-cluster-2
readOnly: true
mountPath: "/run/secrets/cluster-2"
```
```yaml
# secrets-proxmox-clusters.yaml
apiVersion: v1
kind: Secret
metadata:
name: proxmox-credentials-cluster-1
stringData:
token_id: kubernetes@pve!csi
token_secret: key1
---
apiVersion: v1
kind: Secret
metadata:
name: proxmox-credentials-cluster-2
stringData:
token_id: kubernetes@pve!csi
token_secret: key2
```
Deploy chart:
```shell
@@ -86,12 +136,13 @@ helm upgrade -i --namespace=kube-system -f proxmox-ccm.yaml \
| imagePullSecrets | list | `[]` | |
| nameOverride | string | `""` | |
| fullnameOverride | string | `""` | |
| extraEnvs | list | `[]` | Any extra environment variables for proxmox-cloud-controller-manager |
| extraArgs | list | `[]` | Any extra arguments for proxmox-cloud-controller-manager |
| enabledControllers | list | `["cloud-node","cloud-node-lifecycle"]` | List of controllers that should be enabled. Use '*' to enable all controllers. Only the `cloud-node` and `cloud-node-lifecycle` controllers are supported. |
| logVerbosityLevel | int | `2` | Log verbosity level. See https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md for description of individual verbosity levels. |
| existingConfigSecret | string | `nil` | Name of an existing Secret that stores the Proxmox cluster config. |
| existingConfigSecretKey | string | `"config.yaml"` | Key in that Secret that holds the Proxmox cluster config. |
| config | object | `{"clusters":[],"features":{"provider":"default"}}` | Proxmox cluster config. |
| config | object | `{"clusters":[],"features":{"provider":"default"}}` | Proxmox cluster config. refs: https://github.com/sergelogvinov/proxmox-cloud-controller-manager/blob/main/docs/config.md |
| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | Pods Service Account. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ |
| priorityClassName | string | `"system-cluster-critical"` | CCM pods' priorityClassName. |
| initContainers | list | `[]` | Add additional init containers to the CCM pods. ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ |


@@ -28,7 +28,7 @@ You need to set `--cloud-provider=external` in the kubelet argument for all node
```shell
# Create role CCM
pveum role add CCM -privs "VM.Audit"
pveum role add CCM -privs "VM.Audit Sys.Audit"
# Create user and grant permissions
pveum user add kubernetes@pve
pveum aclmod / -user kubernetes@pve -role CCM
@@ -66,6 +66,56 @@ tolerations:
effect: NoSchedule
```
## Example for credentials from separate Secrets
```yaml
# helm-values.yaml
config:
clusters:
- url: https://cluster-api-1.example.com:8006/api2/json
insecure: false
token_id_file: /run/secrets/cluster-1/token_id
token_secret_file: /run/secrets/cluster-1/token_secret
region: cluster-1
- url: https://cluster-api-2.example.com:8006/api2/json
insecure: false
token_id_file: /run/secrets/cluster-2/token_id
token_secret_file: /run/secrets/cluster-2/token_secret
region: cluster-2
extraVolumes:
- name: credentials-cluster-1
secret:
secretName: proxmox-credentials-cluster-1
- name: credentials-cluster-2
secret:
secretName: proxmox-credentials-cluster-2
extraVolumeMounts:
- name: credentials-cluster-1
readOnly: true
mountPath: "/run/secrets/cluster-1"
- name: credentials-cluster-2
readOnly: true
mountPath: "/run/secrets/cluster-2"
```
```yaml
# secrets-proxmox-clusters.yaml
apiVersion: v1
kind: Secret
metadata:
name: proxmox-credentials-cluster-1
stringData:
token_id: kubernetes@pve!csi
token_secret: key1
---
apiVersion: v1
kind: Secret
metadata:
name: proxmox-credentials-cluster-2
stringData:
token_id: kubernetes@pve!csi
token_secret: key2
```
Deploy chart:
```shell


@@ -7,12 +7,16 @@ affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/control-plane
operator: Exists
- matchExpressions:
- key: node-role.kubernetes.io/control-plane
operator: Exists
logVerbosityLevel: 4
extraEnvs:
- name: KUBERNETES_SERVICE_HOST
value: 127.0.0.1
enabledControllers:
- cloud-node
- cloud-node-lifecycle


@@ -71,6 +71,14 @@ spec:
{{- with .Values.extraArgs }}
{{- toYaml . | nindent 12 }}
{{- end }}
env:
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
{{- with .Values.extraEnvs }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: metrics
containerPort: 10258


@@ -8,7 +8,7 @@ roleRef:
name: system:{{ include "proxmox-cloud-controller-manager.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "proxmox-cloud-controller-manager.fullname" . }}
name: {{ include "proxmox-cloud-controller-manager.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
@@ -22,5 +22,5 @@ roleRef:
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: {{ include "proxmox-cloud-controller-manager.fullname" . }}
name: {{ include "proxmox-cloud-controller-manager.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -16,8 +16,15 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# -- Any extra environment variables for proxmox-cloud-controller-manager
extraEnvs:
[]
# - name: KUBERNETES_SERVICE_HOST
# value: 127.0.0.1
# -- Any extra arguments for proxmox-cloud-controller-manager
extraArgs: []
extraArgs:
[]
# - --cluster-name=kubernetes
# -- List of controllers that should be enabled.
@@ -39,10 +46,11 @@ existingConfigSecret: ~
existingConfigSecretKey: config.yaml
# -- Proxmox cluster config.
# refs: https://github.com/sergelogvinov/proxmox-cloud-controller-manager/blob/main/docs/config.md
config:
features:
# specify provider: proxmox if you are using capmox (cluster api provider for proxmox)
provider: 'default'
# Provider value can be "default" or "capmox"
provider: "default"
clusters: []
# - url: https://cluster-api-1.example.com:8006/api2/json
# insecure: false
@@ -66,7 +74,8 @@ priorityClassName: system-cluster-critical
# -- Add additional init containers to the CCM pods.
# ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
initContainers: []
initContainers:
[]
# - name: loadbalancer
# restartPolicy: Always
# image: ghcr.io/sergelogvinov/haproxy:2.8.3-alpine3.18
@@ -89,7 +98,8 @@ initContainers: []
# -- hostAliases Deployment pod host aliases
# ref: https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
hostAliases: []
hostAliases:
[]
# - ip: 127.0.0.1
# hostnames:
# - proxmox.domain.com
@@ -113,7 +123,7 @@ securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
- ALL
seccompProfile:
type: RuntimeDefault
@@ -145,7 +155,8 @@ updateStrategy:
# -- Node labels for data pods assignment.
# ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
nodeSelector:
{}
# node-role.kubernetes.io/control-plane: ""
# -- Tolerations for data pods assignment.

docs/config.md Normal file

@@ -0,0 +1,64 @@
# Cloud controller manager configuration file
This file is used to configure the Proxmox CCM.
```yaml
features:
# Provider type
provider: default|capmox
# Network mode
network: default|qemu|auto
# Enable or disable the IPv6 support
ipv6_support_disabled: true|false
# External IP address CIDRs list, comma-separated
# Use `!` to exclude a CIDR
external_ip_cidrs: '192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112,!fd00:1234:5678::/64'
# IP addresses sort order, comma-separated
# The IPs that do not match the CIDRs will be kept in the order they
# were detected.
ip_sort_order: '192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112'
# Enable use of Proxmox HA group as a zone label
ha_group: true|false
clusters:
# List of Proxmox clusters
- url: https://cluster-api-1.example.com:8006/api2/json
# Skip the certificate verification, if needed
insecure: false
# Proxmox api token
token_id: "kubernetes-csi@pve!csi"
token_secret: "secret"
# (optional) Proxmox API token read from separate files (see the Helm README.md)
# token_id_file: /run/secrets/region-1/token_id
# token_secret_file: /run/secrets/region-1/token_secret
# Region name, which is the cluster name
region: Region-1
# Add more clusters if needed
- url: https://cluster-api-2.example.com:8006/api2/json
insecure: false
token_id: "kubernetes-csi@pve!csi"
token_secret: "secret"
region: Region-2
```
## Cluster list
You can define multiple clusters in the `clusters` section.
* `url` - The URL of the Proxmox cluster API.
* `insecure` - Set to `true` to skip TLS certificate verification.
* `token_id` - The Proxmox API token ID.
* `token_secret` - The Proxmox API token secret value.
* `region` - The name of the region, which is also used as `topology.kubernetes.io/region` label.
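To illustrate how the inline and file-based credential options relate, here is a minimal sketch (the `resolve_credentials` helper is hypothetical, not the CCM's actual code; in this sketch the `*_file` variants take precedence so secrets can be mounted from volumes):

```python
def resolve_credentials(cluster: dict) -> tuple[str, str]:
    """Return (token_id, token_secret) for one cluster entry.

    Hypothetical sketch: prefers token_id_file/token_secret_file when
    present, otherwise falls back to the inline token_id/token_secret.
    """
    def load(key: str) -> str:
        path = cluster.get(key + "_file")
        if path:
            with open(path) as f:
                return f.read().strip()
        return cluster.get(key, "")

    return load("token_id"), load("token_secret")

# Inline credentials, as in the example config above
print(resolve_credentials({"token_id": "kubernetes-csi@pve!csi",
                           "token_secret": "secret"}))
```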
## Feature flags
* `provider` - Set the provider type. The default is `default`, which uses provider-id format `proxmox://<region>/<vm-id>`. The `capmox` value is used for working with the Cluster API for Proxmox (CAPMox), which uses provider-id format `proxmox://<SystemUUID>`.
* `network` - Defines how network addresses are handled by the CCM. The default value is `default`, which uses the kubelet argument `--node-ip` to assign IPs to the node resource. The `qemu` mode uses the QEMU agent API to retrieve network addresses from the virtual machine, while `auto` attempts to detect the best mode automatically.
* `ipv6_support_disabled` - Set to `true` to ignore any IPv6 addresses. The default is `false`.
* `external_ip_cidrs` - A comma-separated list of external IP address CIDRs. You can use `!` to exclude a CIDR from the list. This is useful for defining which IPs should be treated as external; excluded CIDRs are dropped from the node addresses entirely.
* `ip_sort_order` - A comma-separated list defining the order in which IP addresses should be sorted. The IPs that do not match the CIDRs will be kept in the order they were detected.
* `ha_group` - Set to `true` to enable the use of Proxmox HA group as a zone label. The default is `false`.
For more information about the network modes, see the [Networking documentation](networking.md).
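As an illustration of how `external_ip_cidrs` matching could work, here is a minimal sketch using Python's `ipaddress` module (the `classify` helper is hypothetical, not the CCM's actual implementation):

```python
import ipaddress

def classify(ip: str, cidrs: str) -> bool:
    """Return True if ip matches an include CIDR and no `!`-excluded CIDR.

    Illustrative sketch only; the CCM's real matching logic may differ.
    """
    addr = ipaddress.ip_address(ip)
    include, exclude = [], []
    for c in cidrs.split(","):
        c = c.strip()
        if c.startswith("!"):
            exclude.append(ipaddress.ip_network(c[1:], strict=False))
        else:
            include.append(ipaddress.ip_network(c, strict=False))
    # Membership across IP versions is simply False, so mixed lists are safe.
    if any(addr in net for net in exclude):
        return False
    return any(addr in net for net in include)

cidrs = "192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112,!fd00:1234:5678::/64"
print(classify("192.168.1.10", cidrs))       # True: inside 192.168.0.0/16
print(classify("10.0.0.5", cidrs))           # False: matches no include CIDR
print(classify("fd00:1234:5678::1", cidrs))  # False: explicitly excluded
```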


@@ -5,10 +5,10 @@ kind: ServiceAccount
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
---
@@ -18,10 +18,10 @@ kind: ClusterRole
metadata:
name: system:proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
@@ -106,10 +106,10 @@ kind: DaemonSet
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
spec:
@@ -149,7 +149,7 @@ spec:
- ALL
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.7.0"
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.12.3"
imagePullPolicy: IfNotPresent
args:
- --v=2


@@ -5,10 +5,10 @@ kind: ServiceAccount
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
---
@@ -18,10 +18,10 @@ kind: ClusterRole
metadata:
name: system:proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
@@ -106,10 +106,10 @@ kind: Deployment
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
spec:
@@ -148,7 +148,7 @@ spec:
- ALL
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.7.0"
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.12.3"
imagePullPolicy: IfNotPresent
args:
- --v=4


@@ -5,10 +5,10 @@ kind: ServiceAccount
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
---
@@ -18,10 +18,10 @@ kind: ClusterRole
metadata:
name: system:proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
@@ -106,10 +106,10 @@ kind: Deployment
metadata:
name: proxmox-cloud-controller-manager
labels:
helm.sh/chart: proxmox-cloud-controller-manager-0.2.11
helm.sh/chart: proxmox-cloud-controller-manager-0.2.23
app.kubernetes.io/name: proxmox-cloud-controller-manager
app.kubernetes.io/instance: proxmox-cloud-controller-manager
app.kubernetes.io/version: "v0.7.0"
app.kubernetes.io/version: "v0.12.3"
app.kubernetes.io/managed-by: Helm
namespace: kube-system
spec:
@@ -148,7 +148,7 @@ spec:
- ALL
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.7.0"
image: "ghcr.io/sergelogvinov/proxmox-cloud-controller-manager:v0.12.3"
imagePullPolicy: Always
args:
- --v=4


@@ -48,13 +48,54 @@ Official [documentation](https://pve.proxmox.com/wiki/User_Management)
```shell
# Create role CCM
pveum role add CCM -privs "VM.Audit"
pveum role add CCM -privs "VM.Audit VM.GuestAgent.Audit Sys.Audit"
# Create user and grant permissions
pveum user add kubernetes@pve
pveum aclmod / -user kubernetes@pve -role CCM
pveum user token add kubernetes@pve ccm -privsep 0
```
Or through terraform:
```hcl
# Plugin: bpg/proxmox
resource "proxmox_virtual_environment_role" "ccm" {
role_id = "CCM"
privileges = [
"Sys.Audit",
"VM.Audit",
"VM.GuestAgent.Audit",
]
}
resource "proxmox_virtual_environment_user" "kubernetes" {
acl {
path = "/"
propagate = true
role_id = proxmox_virtual_environment_role.ccm.role_id
}
comment = "Kubernetes"
user_id = "kubernetes@pve"
}
resource "proxmox_virtual_environment_user_token" "ccm" {
comment = "Kubernetes CCM"
token_name = "ccm"
user_id = proxmox_virtual_environment_user.kubernetes.user_id
}
resource "proxmox_virtual_environment_acl" "ccm" {
token_id = proxmox_virtual_environment_user_token.ccm.id
role_id = proxmox_virtual_environment_role.ccm.role_id
path = "/"
propagate = true
}
```
## Deploy CCM
Create the proxmox credentials config file:
@@ -71,6 +112,8 @@ clusters:
region: cluster-1
```
See [configuration documentation](config.md) for more details.
### Method 1: kubectl
Upload it to the kubernetes:

docs/networking.md Normal file

@@ -0,0 +1,69 @@
# Networking
## Node Addressing modes
There are three node addressing modes that Proxmox CCM supports:
- Default mode (the only mode available until v0.9.0)
- Auto mode (available from vX.X.X)
- QEMU-only Mode
In Default mode, Proxmox CCM expects each node to be provided with its private IP address via the `--node-ip` kubelet flag. Default mode
*does not* set the external IP of the node.
In Auto mode, Proxmox CCM uses both host-networking access (if available) and the QEMU guest agent API (if available) to determine the available IP addresses. By default, Auto mode sets only the internal IP addresses of the node, but it can be configured to treat addresses matching given CIDRs as external and to sort all IP addresses according to a sort-order CIDR list.
> [!NOTE]
> All modes, including Default Mode, will use any IPs provided via the `alpha.kubernetes.io/provided-node-ip` annotation, unless they are part of the ignored CIDRs list (non-default modes only).
### Default Mode
In Default Mode, Proxmox CCM assumes that the private IP of the node is set using the kubelet argument `--node-ip`. Setting this flag adds the `alpha.kubernetes.io/provided-node-ip` annotation to the node, which is then used to populate the node's `status.addresses` field.
In this mode there is no validation of the IP address.
### Auto Mode
In Auto mode, Proxmox CCM uses access to the QEMU guest agent API (if available) to get a list of interfaces and IP addresses, as well as any IP addresses provided via `--node-ip`. Depending on the configuration, it then sets all detected addresses as internal and marks any addresses that match the configured external CIDRs as external.
Enabling auto mode is done by setting the network feature mode to `auto`:
```yaml
features:
network:
mode: auto
```
### QEMU-only Mode
In QEMU Mode, Proxmox CCM uses the QEMU guest agent API to retrieve a list of IP addresses and set them as Node Addresses. Any node addresses provided via the `alpha.kubernetes.io/provided-node-ip` node annotation will also be available.
Enabling qemu-only mode is done by setting the network feature mode to `qemu`:
```yaml
features:
network:
mode: qemu
```
## Example configuration
The following example configuration marks IP addresses from 192.168.0.1 - 192.168.255.254 and 2001:0db8:85a3:0000:0000:8a2e:0370:0000 - 2001:0db8:85a3:0000:0000:8a2e:0370:ffff as "external" addresses, while all IPs from the 10.0.0.0/8 subnet are ignored.
To use any mode other than default specify the following configuration:
```yaml
features:
network:
mode: auto
external_ip_cidrs: '192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112,!10.0.0.0/8'
```
Further configuration options are available as well. You can disable IPv6 support entirely and provide a sort order for IP addresses (addresses that match no CIDR keep whatever order they were detected in):
```yaml
features:
network:
mode: auto
ipv6_support_disabled: true
ip_sort_order: '192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112'
```
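The sort-order behavior described above can be sketched as follows (the `sort_ips` helper is hypothetical; the CCM's real implementation may differ in detail):

```python
import ipaddress

def sort_ips(ips, order_cidrs):
    """Sort IPs by the index of the first CIDR they fall into.

    IPs that match no CIDR keep their original (detection) order and are
    placed at the end. Illustrative sketch only.
    """
    nets = [ipaddress.ip_network(c.strip(), strict=False)
            for c in order_cidrs.split(",")]

    def rank(item):
        idx, ip = item
        addr = ipaddress.ip_address(ip)
        for i, net in enumerate(nets):
            if addr in net:  # cross-version membership is simply False
                return (i, idx)
        return (len(nets), idx)  # non-matching IPs stay in detected order

    return [ip for _, ip in sorted(enumerate(ips), key=rank)]

ips = ["10.0.0.5", "2001:db8:85a3::8a2e:370:1", "192.168.1.10"]
# 192.168.* first, then the 2001:db8 address, then 10.0.0.5 at the end
print(sort_ips(ips, "192.168.0.0/16,2001:db8:85a3::8a2e:370:7334/112"))
```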


@@ -1,12 +1,20 @@
# Make release
## Change release version
```shell
git checkout -b release-0.0.2
git tag v0.0.2
git commit --allow-empty -m "chore: release 2.0.0" -m "Release-As: 2.0.0"
```
## Update helm chart and documentation
```shell
git branch -D release-please--branches--main
git checkout release-please--branches--main
export `jq -r '"TAG=v"+.[]' hack/release-please-manifest.json`
make helm-unit docs
make release-update
git add .
git commit
git commit -s --amend
```

go.mod

@@ -1,111 +1,124 @@
module github.com/sergelogvinov/proxmox-cloud-controller-manager
go 1.23.4
go 1.25.6
// replace github.com/sergelogvinov/go-proxmox => ../proxmox/go-proxmox
// replace github.com/luthermonson/go-proxmox => github.com/sergelogvinov/go-proxmox-luthermonson v0.0.0-20251223032417-72ddd47a4a37
require (
github.com/Telmate/proxmox-api-go v0.0.0-20241127232213-af1f4e86b570
github.com/jarcoal/httpmock v1.3.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.10.0
github.com/jarcoal/httpmock v1.4.1
github.com/luthermonson/go-proxmox v0.3.2
github.com/pkg/errors v0.9.1
github.com/samber/lo v1.52.0
github.com/sergelogvinov/go-proxmox v0.1.0
github.com/spf13/pflag v1.0.10
github.com/stretchr/testify v1.11.1
go.uber.org/multierr v1.11.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.32.0
k8s.io/apimachinery v0.32.0
k8s.io/client-go v0.32.0
k8s.io/cloud-provider v0.32.0
k8s.io/component-base v0.32.0
k8s.io/api v0.35.0
k8s.io/apimachinery v0.35.0
k8s.io/client-go v0.35.0
k8s.io/cloud-provider v0.35.0
k8s.io/component-base v0.35.0
k8s.io/klog/v2 v2.130.1
)
require (
cel.dev/expr v0.18.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
cel.dev/expr v0.25.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/buger/goterm v1.0.4 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/coreos/go-systemd/v22 v22.6.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.1 // indirect
github.com/diskfs/go-diskfs v1.7.0 // indirect
github.com/djherbis/times v1.6.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonpointer v0.22.4 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/go-openapi/swag/jsonname v0.25.4 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/cel-go v0.22.0 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.26.1 // indirect
github.com/google/gnostic-models v0.7.1 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jinzhu/copier v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/magefile/mage v1.15.0 // indirect
github.com/mailru/easyjson v0.9.1 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.59.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/cobra v1.8.1 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.5 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/spf13/cobra v1.10.2 // indirect
github.com/stoewer/go-strcase v1.3.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.etcd.io/etcd/api/v3 v3.5.17 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.17 // indirect
go.etcd.io/etcd/client/v3 v3.5.17 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.opentelemetry.io/otel v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
go.opentelemetry.io/otel/metric v1.28.0 // indirect
go.opentelemetry.io/otel/sdk v1.28.0 // indirect
go.opentelemetry.io/otel/trace v1.28.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.24.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.8.0 // indirect
google.golang.org/genproto v0.0.0-20240814211410-ddb44dafa142 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240826202546-f6391c0de4c7 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240826202546-f6391c0de4c7 // indirect
google.golang.org/grpc v1.65.0 // indirect
google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
go.etcd.io/etcd/api/v3 v3.6.7 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.6.7 // indirect
go.etcd.io/etcd/client/v3 v3.6.7 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.47.0 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/net v0.49.0 // indirect
golang.org/x/oauth2 v0.34.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/text v0.33.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect
google.golang.org/grpc v1.78.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
k8s.io/apiserver v0.32.0 // indirect
k8s.io/component-helpers v0.32.0 // indirect
k8s.io/controller-manager v0.32.0 // indirect
k8s.io/kms v0.32.0 // indirect
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 // indirect
k8s.io/utils v0.0.0-20241210054802-24370beab758 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.1 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
k8s.io/apiserver v0.35.0 // indirect
k8s.io/component-helpers v0.35.0 // indirect
k8s.io/controller-manager v0.35.0 // indirect
k8s.io/kms v0.35.0 // indirect
k8s.io/kube-openapi v0.0.0-20251125145642-4e65d59e963e // indirect
k8s.io/utils v0.0.0-20260108192941-914a6e750570 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.34.0 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.1 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)

go.sum

@@ -1,273 +1,319 @@
cel.dev/expr v0.18.0 h1:CJ6drgk+Hf96lkLikr4rFf19WrU0BOWEihyZnI2TAzo=
cel.dev/expr v0.18.0/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=
cel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/Telmate/proxmox-api-go v0.0.0-20241127232213-af1f4e86b570 h1:Qln/bkARmiTMLgpQasFHo3NfeQ90dSALjeH41exbSV4=
github.com/Telmate/proxmox-api-go v0.0.0-20241127232213-af1f4e86b570/go.mod h1:Gu6n6vEn1hlyFUkjrvU+X1fdgaSXLoM9HKYYJqy1fsY=
github.com/anchore/go-lzo v0.1.0 h1:NgAacnzqPeGH49Ky19QKLBZEuFRqtTG9cdaucc3Vncs=
github.com/anchore/go-lzo v0.1.0/go.mod h1:3kLx0bve2oN1iDwgM1U5zGku1Tfbdb0No5qp1eL1fIk=
github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/buger/goterm v1.0.4 h1:Z9YvGmOih81P0FbVtEYTFF6YsSgxSUKEhf/f9bTMXbY=
github.com/buger/goterm v1.0.4/go.mod h1:HiFWV3xnkolgrBV3mY8m0X0Pumt4zg4QhbdOzQtB8tE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/coreos/go-systemd/v22 v22.6.0 h1:aGVa/v8B7hpb0TKl0MWoAavPDmHvobFe5R5zn0bCJWo=
github.com/coreos/go-systemd/v22 v22.6.0/go.mod h1:iG+pp635Fo7ZmV/j14KUcmEyWF+0X7Lua8rrTWzYgWU=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/diskfs/go-diskfs v1.7.0 h1:vonWmt5CMowXwUc79jWyGrf2DIMeoOjkLlMnQYGVOs8=
github.com/diskfs/go-diskfs v1.7.0/go.mod h1:LhQyXqOugWFRahYUSw47NyZJPezFzB9UELwhpszLP/k=
github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c=
github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU=
github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/elliotwutingfeng/asciiset v0.0.0-20230602022725-51bbb787efab h1:h1UgjJdAAhj+uPL68n7XASS6bU+07ZX1WJvVS2eyoeY=
github.com/elliotwutingfeng/asciiset v0.0.0-20230602022725-51bbb787efab/go.mod h1:GLo/8fDswSAniFG+BFIaiSPcK610jyzgEhWYPQwuQdw=
github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonpointer v0.22.4 h1:dZtK82WlNpVLDW2jlA1YCiVJFVqkED1MegOUy9kR5T4=
github.com/go-openapi/jsonpointer v0.22.4/go.mod h1:elX9+UgznpFhgBuaMQ7iu4lvvX1nvNsesQ3oxmYTw80=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-openapi/swag v0.23.1 h1:lpsStH0n2ittzTnbaSloVZLuB5+fvSY/+hnagBjSNZU=
github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDqQnDHMef0=
github.com/go-openapi/swag/jsonname v0.25.4 h1:bZH0+MsS03MbnwBXYhuTttMOqk+5KcQ9869Vye1bNHI=
github.com/go-openapi/swag/jsonname v0.25.4/go.mod h1:GPVEk9CWVhNvWhZgrnvRA6utbAltopbKwDu8mXNUMag=
github.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6Ub6wls=
github.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/cel-go v0.22.0 h1:b3FJZxpiv1vTMo2/5RDUqAHPxkT8mmMfJIrq1llbf7g=
github.com/google/cel-go v0.22.0/go.mod h1:BuznPXXfQDpXKWQ9sPW3TzlAJN5zzFe+i9tIs0yC4s8=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/cel-go v0.26.1 h1:iPbVVEdkhTX++hpe3lzSk7D3G3QSYqLGoHOcEio+UXQ=
github.com/google/cel-go v0.26.1/go.mod h1:A9O8OU9rdvrK5MQyrqfIxo1a0u4g3sF8KB6PUIaryMM=
github.com/google/gnostic-models v0.7.1 h1:SisTfuFKJSKM5CPZkffwi6coztzzeYUhc3v4yxLWH8c=
github.com/google/gnostic-models v0.7.1/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1 h1:qnpSQwGEnkcRpTqNOIR6bJbR0gAorgP9CSALpRcKoAA=
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1/go.mod h1:lXGCsh6c22WGtjr+qGHj1otzZpV/1kwTMAqkwZsnWRU=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.0 h1:FbSCl+KggFl+Ocym490i/EyXF4lPgLoUtcSWquBM0Rs=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.0/go.mod h1:qOchhhIlmRcqk/O9uCo/puJlyo07YINaIqdZfZG3Jkc=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92BcuyuQ/YW4NSIpoGtfXNho=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 h1:kEISI/Gx67NzH3nJxAmY/dGac80kKZgZt134u7Y/k1s=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4/go.mod h1:6Nz966r3vQYCqIzWsuEl9d7cf7mRhtDmm++sOxlnfxI=
github.com/h2non/gock v1.2.0 h1:K6ol8rfrRkUOefooBC8elXoaNGYkpp7y2qcxGG6BzUE=
github.com/h2non/gock v1.2.0/go.mod h1:tNhoxHYW2W42cYkYb1WqzdbYIieALC99kpYr7rH/BQk=
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 h1:2VTzZjLZBgl62/EtslCrtky5vbi9dd7HrQPQIx6wqiw=
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jarcoal/httpmock v1.3.1 h1:iUx3whfZWVf3jT01hQTO/Eo5sAYtB2/rqaUuOtpInww=
github.com/jarcoal/httpmock v1.3.1/go.mod h1:3yb8rc4BI7TCBhFY8ng0gjuLKJNquuDNiPaZjnENuYg=
github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4=
github.com/jonboulle/clockwork v0.4.0/go.mod h1:xgRqUGwRcjKCO1vbZUEtSLrqKoPSsUpK7fnezOII0kc=
github.com/jarcoal/httpmock v1.4.1 h1:0Ju+VCFuARfFlhVXFc2HxlcQkfB+Xq12/EotHko+x2A=
github.com/jarcoal/httpmock v1.4.1/go.mod h1:ftW1xULwo+j0R0JJkJIIi7UKigZUXCLLanykgjwBXL0=
github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jonboulle/clockwork v0.5.0 h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I=
github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/maxatome/go-testdeep v1.12.0 h1:Ql7Go8Tg0C1D/uMMX59LAoYK7LffeJQ6X2T04nTH68g=
github.com/maxatome/go-testdeep v1.12.0/go.mod h1:lPZc/HAcJMP92l7yI6TRz1aZN5URwUBUAfUNvrclaNM=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/luthermonson/go-proxmox v0.3.2 h1:/zUg6FCl9cAABx0xU3OIgtDtClY0gVXxOCsrceDNylc=
github.com/luthermonson/go-proxmox v0.3.2/go.mod h1:oyFgg2WwTEIF0rP6ppjiixOHa5ebK1p8OaRiFhvICBQ=
github.com/magefile/mage v1.15.0 h1:BvGheCMAsG3bWUDbZ8AyXXpCNwU9u5CB6sM+HNb9HYg=
github.com/magefile/mage v1.15.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
github.com/mailru/easyjson v0.9.1/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/maxatome/go-testdeep v1.14.0 h1:rRlLv1+kI8eOI3OaBXZwb3O7xY3exRzdW5QyX48g9wI=
github.com/maxatome/go-testdeep v1.14.0/go.mod h1:lPZc/HAcJMP92l7yI6TRz1aZN5URwUBUAfUNvrclaNM=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pierrec/lz4/v4 v4.1.17 h1:kV4Ip+/hUBC+8T6+2EgburRtkE9ef4nbY3f4dFhGjMc=
github.com/pierrec/lz4/v4 v4.1.17/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.59.1 h1:LXb1quJHWm1P6wq/U824uxYi4Sg0oGvNeUm1z5dJoX0=
github.com/prometheus/common v0.59.1/go.mod h1:GpWM7dewqmVYcd7SmRaiWVe9SSqjf0UrwnYnpEZNuT0=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sergelogvinov/go-proxmox v0.1.0 h1:6S858CmCuC61x9SrfiuvKUanz2AJR+sdFHSZ+wI/GG8=
github.com/sergelogvinov/go-proxmox v0.1.0/go.mod h1:3v8baTO3uoOuFKEWhYVjrh6ptEUQiAH/eHOYR06nDcU=
github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af h1:Sp5TG9f7K39yfB+If0vjp97vuT74F72r8hfRpP8jLU0=
github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stoewer/go-strcase v1.3.0 h1:g0eASXYtp+yvN9fK8sH94oCIk0fau9uV1/ZdJ0AVEzs=
github.com/stoewer/go-strcase v1.3.0/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stoewer/go-strcase v1.3.1 h1:iS0MdW+kVTxgMoE1LAZyMiYJFKlOzLooE4MxjirtkAs=
github.com/stoewer/go-strcase v1.3.1/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/ulikunitz/xz v0.5.11 h1:kpFauv27b6ynzBNT/Xy+1k+fK4WswhN/6PN5WhFAGw8=
github.com/ulikunitz/xz v0.5.11/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510 h1:S2dVYn90KE98chqDkyE9Z4N61UnQd+KOfgp5Iu53llk=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.11 h1:yGEzV1wPz2yVCLsD8ZAiGHhHVlczyC9d1rP43/VCRJ0=
go.etcd.io/bbolt v1.3.11/go.mod h1:dksAq7YMXoljX0xu6VF5DMZGbhYYoLUalEiSySYAS4I=
go.etcd.io/etcd/api/v3 v3.5.17 h1:cQB8eb8bxwuxOilBpMJAEo8fAONyrdXTHUNcMd8yT1w=
go.etcd.io/etcd/api/v3 v3.5.17/go.mod h1:d1hvkRuXkts6PmaYk2Vrgqbv7H4ADfAKhyJqHNLJCB4=
go.etcd.io/etcd/client/pkg/v3 v3.5.17 h1:XxnDXAWq2pnxqx76ljWwiQ9jylbpC4rvkAeRVOUKKVw=
go.etcd.io/etcd/client/pkg/v3 v3.5.17/go.mod h1:4DqK1TKacp/86nJk4FLQqo6Mn2vvQFBmruW3pP14H/w=
go.etcd.io/etcd/client/v2 v2.305.16 h1:kQrn9o5czVNaukf2A2At43cE9ZtWauOtf9vRZuiKXow=
go.etcd.io/etcd/client/v2 v2.305.16/go.mod h1:h9YxWCzcdvZENbfzBTFCnoNumr2ax3F19sKMqHFmXHE=
go.etcd.io/etcd/client/v3 v3.5.17 h1:o48sINNeWz5+pjy/Z0+HKpj/xSnBkuVhVvXkjEXbqZY=
go.etcd.io/etcd/client/v3 v3.5.17/go.mod h1:j2d4eXTHWkT2ClBgnnEPm/Wuu7jsqku41v9DZ3OtjQo=
go.etcd.io/etcd/pkg/v3 v3.5.16 h1:cnavs5WSPWeK4TYwPYfmcr3Joz9BH+TZ6qoUtz6/+mc=
go.etcd.io/etcd/pkg/v3 v3.5.16/go.mod h1:+lutCZHG5MBBFI/U4eYT5yL7sJfnexsoM20Y0t2uNuY=
go.etcd.io/etcd/raft/v3 v3.5.16 h1:zBXA3ZUpYs1AwiLGPafYAKKl/CORn/uaxYDwlNwndAk=
go.etcd.io/etcd/raft/v3 v3.5.16/go.mod h1:P4UP14AxofMJ/54boWilabqqWoW9eLodl6I5GdGzazI=
go.etcd.io/etcd/server/v3 v3.5.16 h1:d0/SAdJ3vVsZvF8IFVb1k8zqMZ+heGcNfft71ul9GWE=
go.etcd.io/etcd/server/v3 v3.5.16/go.mod h1:ynhyZZpdDp1Gq49jkUg5mfkDWZwXnn3eIqCqtJnrD/s=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 h1:9G6E0TXzGFVfTnawRzrPl83iHOAV7L8NJiR8RSGYV1g=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0/go.mod h1:azvtTADFQJA8mX80jIH/akaE7h+dbm/sVuaHqN13w74=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg=
go.opentelemetry.io/otel v1.28.0 h1:/SqNcYk+idO0CxKEUOtKQClMK/MimZihKYMruSMViUo=
go.opentelemetry.io/otel v1.28.0/go.mod h1:q68ijF8Fc8CnMHKyzqL6akLO46ePnjkgfIMIjUIX9z4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 h1:R3X6ZXmNPRR8ul6i3WgFURCHzaXjHdm0karRG/+dj3s=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0/go.mod h1:QWFXnDavXWwMx2EEcZsf3yxgEKAqsxQ+Syjp+seyInw=
go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6bOeuA5Q=
go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s=
go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE=
go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg=
go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g=
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.etcd.io/bbolt v1.4.3 h1:dEadXpI6G79deX5prL3QRNP6JB8UxVkqo4UPnHaNXJo=
go.etcd.io/bbolt v1.4.3/go.mod h1:tKQlpPaYCVFctUIgFKFnAlvbmB3tpy1vkTnDWohtc0E=
go.etcd.io/etcd/api/v3 v3.6.7 h1:7BNJ2gQmc3DNM+9cRkv7KkGQDayElg8x3X+tFDYS+E0=
go.etcd.io/etcd/api/v3 v3.6.7/go.mod h1:xJ81TLj9hxrYYEDmXTeKURMeY3qEDN24hqe+q7KhbnI=
go.etcd.io/etcd/client/pkg/v3 v3.6.7 h1:vvzgyozz46q+TyeGBuFzVuI53/yd133CHceNb/AhBVs=
go.etcd.io/etcd/client/pkg/v3 v3.6.7/go.mod h1:2IVulJ3FZ/czIGl9T4lMF1uxzrhRahLqe+hSgy+Kh7Q=
go.etcd.io/etcd/client/v3 v3.6.7 h1:9WqA5RpIBtdMxAy1ukXLAdtg2pAxNqW5NUoO2wQrE6U=
go.etcd.io/etcd/client/v3 v3.6.7/go.mod h1:2XfROY56AXnUqGsvl+6k29wrwsSbEh1lAouQB1vHpeE=
go.etcd.io/etcd/pkg/v3 v3.6.5 h1:byxWB4AqIKI4SBmquZUG1WGtvMfMaorXFoCcFbVeoxM=
go.etcd.io/etcd/pkg/v3 v3.6.5/go.mod h1:uqrXrzmMIJDEy5j00bCqhVLzR5jEJIwDp5wTlLwPGOU=
go.etcd.io/etcd/server/v3 v3.6.5 h1:4RbUb1Bd4y1WkBHmuF+cZII83JNQMuNXzyjwigQ06y0=
go.etcd.io/etcd/server/v3 v3.6.5/go.mod h1:PLuhyVXz8WWRhzXDsl3A3zv/+aK9e4A9lpQkqawIaH0=
go.etcd.io/raft/v3 v3.6.0 h1:5NtvbDVYpnfZWcIHgGRk9DyzkBIXOi8j+DDp1IcnUWQ=
go.etcd.io/raft/v3 v3.6.0/go.mod h1:nLvLevg6+xrVtHUmVaTcTz603gQPHfh7kUAwV6YpfGo=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0 h1:YH4g8lQroajqUwWbq/tr2QX1JFmEXaDLgG+ew9bLMWo=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0/go.mod h1:fvPi2qXDqFs8M4B4fmJhE92TyQs9Ydjlg3RvfUp+NbQ=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67 h1:1UoZQm6f0P/ZO0w1Ri+f+ifG/gXhegadRdwBIXEFWDo=
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67/go.mod h1:qj5a5QZpwLU2NLQudwIN5koi3beDhSAlJwa67PuM98c=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210331175145-43e1dd70ce54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.28.0 h1:WuB6qZ4RPCQo5aP3WdKZS7i595EdWqWR8vqJTlwTVK8=
golang.org/x/tools v0.28.0/go.mod h1:dcIOrVd3mfQKTgrDVQHqCPMWy6lnhfhtX3hLXYVLfRw=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto v0.0.0-20240814211410-ddb44dafa142 h1:oLiyxGgE+rt22duwci1+TG7bg2/L1LQsXwfjPlmuJA0=
google.golang.org/genproto v0.0.0-20240814211410-ddb44dafa142/go.mod h1:G11eXq53iI5Q+kyNOmCvnzBaxEA2Q/Ik5Tj7nqBE8j4=
google.golang.org/genproto/googleapis/api v0.0.0-20240826202546-f6391c0de4c7 h1:YcyjlL1PRr2Q17/I0dPk2JmYS5CDXfcdb2Z3YRioEbw=
google.golang.org/genproto/googleapis/api v0.0.0-20240826202546-f6391c0de4c7/go.mod h1:OCdP9MfskevB/rbYvHTsXTtKC+3bHWajPdoKgjcYkfo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240826202546-f6391c0de4c7 h1:2035KHhUv+EpyB+hWgJnaWKJOdX1E95w2S8Rr4uWKTs=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240826202546-f6391c0de4c7/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc=
google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b h1:uA40e2M6fYRBf0+8uN5mLlqUtV192iiksiICIBkYJ1E=
google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:Xa7le7qx2vmqB/SzWUBa7KdMjpdpAHlh5QCSnjessQk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
@@ -275,35 +321,37 @@ gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYs
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.32.0 h1:OL9JpbvAU5ny9ga2fb24X8H6xQlVp+aJMFlgtQjR9CE=
k8s.io/api v0.32.0/go.mod h1:4LEwHZEf6Q/cG96F3dqR965sYOfmPM7rq81BLgsE0p0=
k8s.io/apimachinery v0.32.0 h1:cFSE7N3rmEEtv4ei5X6DaJPHHX0C+upp+v5lVPiEwpg=
k8s.io/apimachinery v0.32.0/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/apiserver v0.32.0 h1:VJ89ZvQZ8p1sLeiWdRJpRD6oLozNZD2+qVSLi+ft5Qs=
k8s.io/apiserver v0.32.0/go.mod h1:HFh+dM1/BE/Hm4bS4nTXHVfN6Z6tFIZPi649n83b4Ag=
k8s.io/client-go v0.32.0 h1:DimtMcnN/JIKZcrSrstiwvvZvLjG0aSxy8PxN8IChp8=
k8s.io/client-go v0.32.0/go.mod h1:boDWvdM1Drk4NJj/VddSLnx59X3OPgwrOo0vGbtq9+8=
k8s.io/cloud-provider v0.32.0 h1:QXYJGmwME2q2rprymbmw2GroMChQYc/MWN6l/I4Kgp8=
k8s.io/cloud-provider v0.32.0/go.mod h1:cz3gVodkhgwi2ugj/JUPglIruLSdDaThxawuDyCHfr8=
k8s.io/component-base v0.32.0 h1:d6cWHZkCiiep41ObYQS6IcgzOUQUNpywm39KVYaUqzU=
k8s.io/component-base v0.32.0/go.mod h1:JLG2W5TUxUu5uDyKiH2R/7NnxJo1HlPoRIIbVLkK5eM=
k8s.io/component-helpers v0.32.0 h1:pQEEBmRt3pDJJX98cQvZshDgJFeKRM4YtYkMmfOlczw=
k8s.io/component-helpers v0.32.0/go.mod h1:9RuClQatbClcokXOcDWSzFKQm1huIf0FzQlPRpizlMc=
k8s.io/controller-manager v0.32.0 h1:tpQl1rvH4huFB6Avl1nhowZHtZoCNWqn6OYdZPl7Ybc=
k8s.io/controller-manager v0.32.0/go.mod h1:JRuYnYCkKj3NgBTy+KNQKIUm/lJRoDAvGbfdEmk9LhY=
k8s.io/api v0.35.0 h1:iBAU5LTyBI9vw3L5glmat1njFK34srdLmktWwLTprlY=
k8s.io/api v0.35.0/go.mod h1:AQ0SNTzm4ZAczM03QH42c7l3bih1TbAXYo0DkF8ktnA=
k8s.io/apimachinery v0.35.0 h1:Z2L3IHvPVv/MJ7xRxHEtk6GoJElaAqDCCU0S6ncYok8=
k8s.io/apimachinery v0.35.0/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
k8s.io/apiserver v0.35.0 h1:CUGo5o+7hW9GcAEF3x3usT3fX4f9r8xmgQeCBDaOgX4=
k8s.io/apiserver v0.35.0/go.mod h1:QUy1U4+PrzbJaM3XGu2tQ7U9A4udRRo5cyxkFX0GEds=
k8s.io/client-go v0.35.0 h1:IAW0ifFbfQQwQmga0UdoH0yvdqrbwMdq9vIFEhRpxBE=
k8s.io/client-go v0.35.0/go.mod h1:q2E5AAyqcbeLGPdoRB+Nxe3KYTfPce1Dnu1myQdqz9o=
k8s.io/cloud-provider v0.35.0 h1:syiBCQbKh2gho/S1BkIl006Dc44pV8eAtGZmv5NMe7M=
k8s.io/cloud-provider v0.35.0/go.mod h1:7grN+/Nt5Hf7tnSGPT3aErt4K7aQpygyCrGpbrQbzNc=
k8s.io/component-base v0.35.0 h1:+yBrOhzri2S1BVqyVSvcM3PtPyx5GUxCK2tinZz1G94=
k8s.io/component-base v0.35.0/go.mod h1:85SCX4UCa6SCFt6p3IKAPej7jSnF3L8EbfSyMZayJR0=
k8s.io/component-helpers v0.35.0 h1:wcXv7HJRksgVjM4VlXJ1CNFBpyDHruRI99RrBtrJceA=
k8s.io/component-helpers v0.35.0/go.mod h1:ahX0m/LTYmu7fL3W8zYiIwnQ/5gT28Ex4o2pymF63Co=
k8s.io/controller-manager v0.35.0 h1:KteodmfVIRzfZ3RDaxhnHb72rswBxEngvdL9vuZOA9A=
k8s.io/controller-manager v0.35.0/go.mod h1:1bVuPNUG6/dpWpevsJpXioS0E0SJnZ7I/Wqc9Awyzm4=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kms v0.32.0 h1:jwOfunHIrcdYl5FRcA+uUKKtg6qiqoPCwmS2T3XTYL4=
k8s.io/kms v0.32.0/go.mod h1:Bk2evz/Yvk0oVrvm4MvZbgq8BD34Ksxs2SRHn4/UiOM=
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 h1:hcha5B1kVACrLujCKLbr8XWMxCxzQx42DY8QKYJrDLg=
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7/go.mod h1:GewRfANuJ70iYzvn+i4lezLDAFzvjxZYK1gn1lWcfas=
k8s.io/utils v0.0.0-20241210054802-24370beab758 h1:sdbE21q2nlQtFh65saZY+rRM6x6aJJI8IUa1AmH/qa0=
k8s.io/utils v0.0.0-20241210054802-24370beab758/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.1 h1:uOuSLOMBWkJH0TWa9X6l+mj5nZdm6Ay6Bli8HL8rNfk=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.1/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 h1:nbCitCK2hfnhyiKo6uf2HxUPTCodY6Qaf85SbDIaMBk=
sigs.k8s.io/structured-merge-diff/v4 v4.5.0/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
k8s.io/kms v0.35.0 h1:/x87FED2kDSo66csKtcYCEHsxF/DBlNl7LfJ1fVQs1o=
k8s.io/kms v0.35.0/go.mod h1:VT+4ekZAdrZDMgShK37vvlyHUVhwI9t/9tvh0AyCWmQ=
k8s.io/kube-openapi v0.0.0-20251125145642-4e65d59e963e h1:iW9ChlU0cU16w8MpVYjXk12dqQ4BPFBEgif+ap7/hqQ=
k8s.io/kube-openapi v0.0.0-20251125145642-4e65d59e963e/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/utils v0.0.0-20260108192941-914a6e750570 h1:JT4W8lsdrGENg9W+YwwdLJxklIuKWdRm+BC+xt33FOY=
k8s.io/utils v0.0.0-20260108192941-914a6e750570/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.34.0 h1:hSfpvjjTQXQY2Fol2CS0QHMNs/WI1MOSGzCm1KhM5ec=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.34.0/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1 h1:JrhdFMqOd/+3ByqlP2I45kTOZmTRLBUm5pvRjeheg7E=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=

@@ -0,0 +1,35 @@
{
  "$schema": "https://raw.githubusercontent.com/googleapis/release-please/main/schemas/config.json",
  "pull-request-header": ":robot: I have created a release",
  "pull-request-title-pattern": "chore: release v${version}",
  "group-pull-request-title-pattern": "chore: release v${version}",
  "packages": {
    ".": {
      "changelog-path": "CHANGELOG.md",
      "release-type": "go",
      "skip-github-release": false,
      "bump-minor-pre-major": true,
      "include-v-in-tag": true,
      "draft": false,
      "draft-pull-request": true,
      "prerelease": false,
      "changelog-sections": [
        {
          "type": "feat",
          "section": "Features",
          "hidden": false
        },
        {
          "type": "fix",
          "section": "Bug Fixes",
          "hidden": false
        },
        {
          "type": "*",
          "section": "Changelog",
          "hidden": false
        }
      ]
    }
  }
}

@@ -0,0 +1,3 @@
{
  ".": "0.13.0"
}

@@ -1,175 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package cluster implements the multi-cloud provider interface for Proxmox.
package cluster

import (
	"crypto/tls"
	"encoding/base64"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"strings"

	pxapi "github.com/Telmate/proxmox-api-go/proxmox"
)

// Cluster is a Proxmox client.
type Cluster struct {
	config  *ClustersConfig
	proxmox map[string]*pxapi.Client
}

// NewCluster creates a new Proxmox cluster client.
func NewCluster(config *ClustersConfig, hclient *http.Client) (*Cluster, error) {
	clusters := len(config.Clusters)
	if clusters > 0 {
		proxmox := make(map[string]*pxapi.Client, clusters)

		for _, cfg := range config.Clusters {
			tlsconf := &tls.Config{InsecureSkipVerify: true}
			if !cfg.Insecure {
				tlsconf = nil
			}

			client, err := pxapi.NewClient(cfg.URL, hclient, os.Getenv("PM_HTTP_HEADERS"), tlsconf, "", 600)
			if err != nil {
				return nil, err
			}

			if cfg.Username != "" && cfg.Password != "" {
				if err := client.Login(cfg.Username, cfg.Password, ""); err != nil {
					return nil, err
				}
			} else {
				client.SetAPIToken(cfg.TokenID, cfg.TokenSecret)
			}

			proxmox[cfg.Region] = client
		}

		return &Cluster{
			config:  config,
			proxmox: proxmox,
		}, nil
	}

	return nil, fmt.Errorf("no Proxmox clusters found")
}

// CheckClusters checks if the Proxmox connection is working.
func (c *Cluster) CheckClusters() error {
	for region, client := range c.proxmox {
		if _, err := client.GetVersion(); err != nil {
			return fmt.Errorf("failed to initialized proxmox client in region %s, error: %v", region, err)
		}
	}

	return nil
}

// GetProxmoxCluster returns a Proxmox cluster client in a given region.
func (c *Cluster) GetProxmoxCluster(region string) (*pxapi.Client, error) {
	if c.proxmox[region] != nil {
		return c.proxmox[region], nil
	}

	return nil, fmt.Errorf("proxmox cluster %s not found", region)
}

// FindVMByName finds a VM by name across all Proxmox clusters.
func (c *Cluster) FindVMByName(name string) (*pxapi.VmRef, string, error) {
	for region, px := range c.proxmox {
		vmr, err := px.GetVmRefByName(name)
		if err != nil {
			if strings.Contains(err.Error(), "not found") {
				continue
			}

			return nil, "", err
		}

		return vmr, region, nil
	}

	return nil, "", fmt.Errorf("vm '%s' not found", name)
}

// FindVMByUUID finds a VM by SMBIOS UUID across all Proxmox clusters.
func (c *Cluster) FindVMByUUID(uuid string) (*pxapi.VmRef, string, error) {
	for region, px := range c.proxmox {
		vms, err := px.GetResourceList("vm")
		if err != nil {
			return nil, "", fmt.Errorf("failed to get resources: %v", err)
		}

		for vmii := range vms {
			vm, ok := vms[vmii].(map[string]interface{})
			if !ok {
				return nil, "", fmt.Errorf("failed to cast response to map, vm: %v", vm)
			}

			if vm["type"].(string) != "qemu" { //nolint:errcheck
				continue
			}

			vmr := pxapi.NewVmRef(int(vm["vmid"].(float64))) //nolint:errcheck
			vmr.SetNode(vm["node"].(string))                 //nolint:errcheck
			vmr.SetVmType("qemu")

			config, err := px.GetVmConfig(vmr)
			if err != nil {
				return nil, "", err
			}

			if config["smbios1"] != nil {
				if c.getUUID(config["smbios1"].(string)) == uuid { //nolint:errcheck
					return vmr, region, nil
				}
			}
		}
	}

	return nil, "", fmt.Errorf("vm with uuid '%s' not found", uuid)
}

// getUUID extracts the (possibly base64-encoded) uuid field from a Proxmox
// smbios1 settings string.
func (c *Cluster) getUUID(smbios string) string {
	for _, l := range strings.Split(smbios, ",") {
		if l == "" || l == "base64=1" {
			continue
		}

		parsedParameter, err := url.ParseQuery(l)
		if err != nil {
			return ""
		}

		for k, v := range parsedParameter {
			if k == "uuid" {
				decodedString, err := base64.StdEncoding.DecodeString(v[0])
				if err != nil {
					decodedString = []byte(v[0])
				}

				return string(decodedString)
			}
		}
	}

	return ""
}

@@ -1,220 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cluster_test

import (
	"fmt"
	"net/http"
	"strings"
	"testing"

	"github.com/jarcoal/httpmock"
	"github.com/stretchr/testify/assert"

	"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/cluster"
)

func newClusterEnv() (*cluster.ClustersConfig, error) {
	cfg, err := cluster.ReadCloudConfig(strings.NewReader(`
clusters:
  - url: https://127.0.0.1:8006/api2/json
    insecure: false
    token_id: "user!token-id"
    token_secret: "secret"
    region: cluster-1
  - url: https://127.0.0.2:8006/api2/json
    insecure: false
    token_id: "user!token-id"
    token_secret: "secret"
    region: cluster-2
`))

	return &cfg, err
}

func TestNewClient(t *testing.T) {
	cfg, err := newClusterEnv()
	assert.Nil(t, err)
	assert.NotNil(t, cfg)

	client, err := cluster.NewCluster(&cluster.ClustersConfig{}, nil)
	assert.NotNil(t, err)
	assert.Nil(t, client)

	client, err = cluster.NewCluster(cfg, nil)
	assert.Nil(t, err)
	assert.NotNil(t, client)
}

func TestCheckClusters(t *testing.T) {
	cfg, err := newClusterEnv()
	assert.Nil(t, err)
	assert.NotNil(t, cfg)

	client, err := cluster.NewCluster(cfg, nil)
	assert.Nil(t, err)
	assert.NotNil(t, client)

	pxapi, err := client.GetProxmoxCluster("test")
	assert.NotNil(t, err)
	assert.Nil(t, pxapi)
	assert.Equal(t, "proxmox cluster test not found", err.Error())

	pxapi, err = client.GetProxmoxCluster("cluster-1")
	assert.Nil(t, err)
	assert.NotNil(t, pxapi)

	err = client.CheckClusters()
	assert.NotNil(t, err)
	assert.Contains(t, err.Error(), "failed to initialized proxmox client in region")
}

func TestFindVMByNameNonExist(t *testing.T) {
	cfg, err := newClusterEnv()
	assert.Nil(t, err)
	assert.NotNil(t, cfg)

	httpmock.Activate()
	defer httpmock.DeactivateAndReset()

	httpmock.RegisterResponder("GET", "https://127.0.0.1:8006/api2/json/cluster/resources",
		func(_ *http.Request) (*http.Response, error) {
			return httpmock.NewJsonResponse(200, map[string]interface{}{
				"data": []interface{}{
					map[string]interface{}{
						"node": "node-1",
						"type": "qemu",
						"vmid": 100,
						"name": "test1-vm",
					},
				},
			})
		},
	)

	httpmock.RegisterResponder("GET", "https://127.0.0.2:8006/api2/json/cluster/resources",
		func(_ *http.Request) (*http.Response, error) {
			return httpmock.NewJsonResponse(200, map[string]interface{}{
				"data": []interface{}{
					map[string]interface{}{
						"node": "node-2",
						"type": "qemu",
						"vmid": 100,
						"name": "test2-vm",
					},
				},
			})
		},
	)

	client, err := cluster.NewCluster(cfg, &http.Client{})
	assert.Nil(t, err)
	assert.NotNil(t, client)

	vmr, cluster, err := client.FindVMByName("non-existing-vm")
	assert.NotNil(t, err)
	assert.Equal(t, "", cluster)
	assert.Nil(t, vmr)
	assert.Contains(t, err.Error(), "vm 'non-existing-vm' not found")
}

func TestFindVMByNameExist(t *testing.T) {
	cfg, err := newClusterEnv()
	assert.Nil(t, err)
	assert.NotNil(t, cfg)

	httpmock.Activate()
	defer httpmock.DeactivateAndReset()

	httpmock.RegisterResponder("GET", "https://127.0.0.1:8006/api2/json/cluster/resources",
		httpmock.NewJsonResponderOrPanic(200, map[string]interface{}{
			"data": []interface{}{
				map[string]interface{}{
					"node": "node-1",
					"type": "qemu",
					"vmid": 100,
					"name": "test1-vm",
				},
			},
		}),
	)

	httpmock.RegisterResponder("GET", "https://127.0.0.2:8006/api2/json/cluster/resources",
		func(_ *http.Request) (*http.Response, error) {
			return httpmock.NewJsonResponse(200, map[string]interface{}{
				"data": []interface{}{
					map[string]interface{}{
						"node": "node-2",
						"type": "qemu",
						"vmid": 100,
						"name": "test2-vm",
					},
				},
			})
		},
	)

	client, err := cluster.NewCluster(cfg, &http.Client{})
	assert.Nil(t, err)
	assert.NotNil(t, client)

	tests := []struct {
		msg             string
		vmName          string
		expectedError   error
		expectedVMID    int
		expectedCluster string
	}{
		{
			msg:           "vm not found",
			vmName:        "non-existing-vm",
			expectedError: fmt.Errorf("vm 'non-existing-vm' not found"),
		},
		{
			msg:             "Test1-VM",
			vmName:          "test1-vm",
			expectedVMID:    100,
			expectedCluster: "cluster-1",
		},
		{
			msg:             "Test2-VM",
			vmName:          "test2-vm",
			expectedVMID:    100,
			expectedCluster: "cluster-2",
		},
	}

	for _, testCase := range tests {
		testCase := testCase

		t.Run(fmt.Sprint(testCase.msg), func(t *testing.T) {
			vmr, cluster, err := client.FindVMByName(testCase.vmName)

			if testCase.expectedError == nil {
				assert.Nil(t, err)
				assert.NotNil(t, vmr)
				assert.Equal(t, testCase.expectedVMID, vmr.VmId())
				assert.Equal(t, testCase.expectedCluster, cluster)
			} else {
				assert.NotNil(t, err)
				assert.Equal(t, "", cluster)
				assert.Nil(t, vmr)
				assert.Contains(t, err.Error(), "vm 'non-existing-vm' not found")
			}
		})
	}
}

@@ -1,98 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cluster

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	yaml "gopkg.in/yaml.v3"
)

// Provider specifies the provider. Can be 'default' or 'capmox'.
type Provider string

// ProviderDefault is the default provider.
const ProviderDefault Provider = "default"

// ProviderCapmox is the Provider for capmox.
const ProviderCapmox Provider = "capmox"

// ClustersConfig is the Proxmox multi-cluster cloud config.
type ClustersConfig struct {
	Features struct {
		Provider Provider `yaml:"provider,omitempty"`
	} `yaml:"features,omitempty"`

	Clusters []struct {
		URL         string `yaml:"url"`
		Insecure    bool   `yaml:"insecure,omitempty"`
		TokenID     string `yaml:"token_id,omitempty"`
		TokenSecret string `yaml:"token_secret,omitempty"`
		Username    string `yaml:"username,omitempty"`
		Password    string `yaml:"password,omitempty"`
		Region      string `yaml:"region,omitempty"`
	} `yaml:"clusters,omitempty"`
}

// ReadCloudConfig reads cloud config from a reader.
func ReadCloudConfig(config io.Reader) (ClustersConfig, error) {
	cfg := ClustersConfig{}

	if config != nil {
		if err := yaml.NewDecoder(config).Decode(&cfg); err != nil {
			return ClustersConfig{}, err
		}
	}

	for idx, c := range cfg.Clusters {
		if c.Username != "" && c.Password != "" {
			if c.TokenID != "" || c.TokenSecret != "" {
				return ClustersConfig{}, fmt.Errorf("cluster #%d: token_id and token_secret are not allowed when username and password are set", idx+1)
			}
		} else if c.TokenID == "" || c.TokenSecret == "" {
			return ClustersConfig{}, fmt.Errorf("cluster #%d: either username and password or token_id and token_secret are required", idx+1)
		}

		if c.Region == "" {
			return ClustersConfig{}, fmt.Errorf("cluster #%d: region is required", idx+1)
		}

		if c.URL == "" || !strings.HasPrefix(c.URL, "http") {
			return ClustersConfig{}, fmt.Errorf("cluster #%d: url is required", idx+1)
		}
	}

	if cfg.Features.Provider == "" {
		cfg.Features.Provider = ProviderDefault
	}

	return cfg, nil
}

// ReadCloudConfigFromFile reads cloud config from a file.
func ReadCloudConfigFromFile(file string) (ClustersConfig, error) {
	f, err := os.Open(filepath.Clean(file))
	if err != nil {
		return ClustersConfig{}, fmt.Errorf("error reading %s: %v", file, err)
	}
	defer f.Close() // nolint: errcheck

	return ReadCloudConfig(f)
}


@@ -1,129 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster_test
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/cluster"
)
func TestReadCloudConfig(t *testing.T) {
cfg, err := cluster.ReadCloudConfig(nil)
assert.Nil(t, err)
assert.NotNil(t, cfg)
// Empty config
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
clusters:
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
// Wrong config
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
clusters:
test: false
`))
assert.NotNil(t, err)
assert.NotNil(t, cfg)
// Incomplete config
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
clusters:
- url: abcd
region: cluster-1
`))
assert.NotNil(t, err)
assert.NotNil(t, cfg)
// Valid config with one cluster
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
// Valid config with one cluster (username/password), implicit default provider
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, cluster.ProviderDefault, cfg.Features.Provider)
// Valid config with one cluster (username/password), explicit provider default
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
features:
provider: 'default'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, cluster.ProviderDefault, cfg.Features.Provider)
// Valid config with one cluster (username/password), explicit provider capmox
cfg, err = cluster.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, cluster.ProviderCapmox, cfg.Features.Provider)
}
func TestReadCloudConfigFromFile(t *testing.T) {
cfg, err := cluster.ReadCloudConfigFromFile("testdata/cloud-config.yaml")
assert.NotNil(t, err)
assert.EqualError(t, err, "error reading testdata/cloud-config.yaml: open testdata/cloud-config.yaml: no such file or directory")
assert.NotNil(t, cfg)
cfg, err = cluster.ReadCloudConfigFromFile("../../hack/proxmox-config.yaml")
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 2, len(cfg.Clusters))
}

pkg/config/config.go

@@ -0,0 +1,158 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package config is the configuration for the cloud provider.
package config
import (
"errors"
"fmt"
"io"
"os"
"path/filepath"
"slices"
"strings"
yaml "gopkg.in/yaml.v3"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmoxpool"
)
// Provider specifies the provider. Can be 'default' or 'capmox'
type Provider string
// ProviderDefault is the default provider
const ProviderDefault Provider = "default"
// ProviderCapmox is the Provider for capmox
const ProviderCapmox Provider = "capmox"
// NetworkMode specifies the network mode.
type NetworkMode string
const (
// NetworkModeDefault 'default' mode uses the IPs provided to the kubelet via the --node-ip flag.
NetworkModeDefault NetworkMode = "default"
// NetworkModeOnlyQemu 'qemu' mode tries to determine the IP addresses via the QEMU agent.
NetworkModeOnlyQemu NetworkMode = "qemu"
// NetworkModeAuto 'auto' attempts to use a combination of the above modes.
NetworkModeAuto NetworkMode = "auto"
)
// ValidNetworkModes is a list of valid network modes.
var ValidNetworkModes = []NetworkMode{NetworkModeDefault, NetworkModeOnlyQemu, NetworkModeAuto}
// NetworkOpts specifies the network options for the cloud provider.
type NetworkOpts struct {
ExternalIPCIDRS string `yaml:"external_ip_cidrs,omitempty"`
IPv6SupportDisabled bool `yaml:"ipv6_support_disabled,omitempty"`
IPSortOrder string `yaml:"ip_sort_order,omitempty"`
Mode NetworkMode `yaml:"mode,omitempty"`
}
// ClustersFeatures specifies the features for the cloud provider.
type ClustersFeatures struct {
// HAGroup specifies if the provider should use HA groups to determine node zone.
// If enabled, the provider will use the HA group name as the zone name.
// If disabled, the provider will use the node's cluster name as the zone name.
// Default is false.
HAGroup bool `yaml:"ha_group,omitempty"`
// Provider specifies the provider to use. Can be 'default' or 'capmox'.
// Default is 'default'.
Provider Provider `yaml:"provider,omitempty"`
// Network specifies the network options for the cloud provider.
Network NetworkOpts `yaml:"network,omitempty"`
// ForceUpdateLabels specifies if the provider should force update topology labels
// topology.kubernetes.io/region and topology.kubernetes.io/zone on nodes when
// a VM is migrated to a different zone within the Proxmox cluster.
// Default is false.
ForceUpdateLabels bool `yaml:"force_update_labels,omitempty"`
}
// ClustersConfig is proxmox multi-cluster cloud config.
type ClustersConfig struct {
Features ClustersFeatures `yaml:"features,omitempty"`
Clusters []*proxmoxpool.ProxmoxCluster `yaml:"clusters,omitempty"`
}
// Errors for Reading Cloud Config
var (
ErrMissingPVERegion = errors.New("missing PVE region in cloud config")
ErrMissingPVEAPIURL = errors.New("missing PVE API URL in cloud config")
ErrAuthCredentialsMissing = errors.New("user, token or file credentials are required")
ErrInvalidAuthCredentials = errors.New("must specify one of user, token or file credentials, not multiple")
ErrInvalidCloudConfig = errors.New("invalid cloud config")
ErrInvalidNetworkMode = fmt.Errorf("invalid network mode, valid modes are %v", ValidNetworkModes)
)
// ReadCloudConfig reads cloud config from a reader.
func ReadCloudConfig(config io.Reader) (ClustersConfig, error) {
cfg := ClustersConfig{}
if config != nil {
if err := yaml.NewDecoder(config).Decode(&cfg); err != nil {
return ClustersConfig{}, errors.Join(ErrInvalidCloudConfig, err)
}
}
for idx, c := range cfg.Clusters {
hasTokenAuth := c.TokenID != "" || c.TokenSecret != ""
hasTokenFileAuth := c.TokenIDFile != "" || c.TokenSecretFile != ""
hasUserAuth := c.Username != "" && c.Password != ""
if (hasTokenAuth && hasUserAuth) || (hasTokenFileAuth && hasUserAuth) || (hasTokenAuth && hasTokenFileAuth) {
return ClustersConfig{}, fmt.Errorf("cluster #%d: %w", idx+1, ErrInvalidAuthCredentials)
}
if !hasTokenAuth && !hasTokenFileAuth && !hasUserAuth {
return ClustersConfig{}, fmt.Errorf("cluster #%d: %w", idx+1, ErrAuthCredentialsMissing)
}
if c.Region == "" {
return ClustersConfig{}, fmt.Errorf("cluster #%d: %w", idx+1, ErrMissingPVERegion)
}
if c.URL == "" || !strings.HasPrefix(c.URL, "http") {
return ClustersConfig{}, fmt.Errorf("cluster #%d: %w", idx+1, ErrMissingPVEAPIURL)
}
}
if cfg.Features.Provider == "" {
cfg.Features.Provider = ProviderDefault
}
if cfg.Features.Network.Mode == "" {
cfg.Features.Network.Mode = NetworkModeDefault
}
// Ensure the configured network mode is one of the valid modes
if !slices.Contains(ValidNetworkModes, cfg.Features.Network.Mode) {
return ClustersConfig{}, ErrInvalidNetworkMode
}
return cfg, nil
}
// ReadCloudConfigFromFile reads cloud config from a file.
func ReadCloudConfigFromFile(file string) (ClustersConfig, error) {
f, err := os.Open(filepath.Clean(file))
if err != nil {
return ClustersConfig{}, fmt.Errorf("error reading %s: %v", file, err)
}
defer f.Close() // nolint: errcheck
return ReadCloudConfig(f)
}
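The credential checks in ReadCloudConfig accept exactly one authentication source per cluster: an inline token, token files, or username/password. A standalone sketch of that rule (the struct and error messages here are illustrative, not the package's actual types):

```go
package main

import "fmt"

// creds mirrors the three credential sources the cloud config accepts.
// Field names are illustrative, not the provider's actual struct.
type creds struct {
	TokenID, TokenSecret         string
	TokenIDFile, TokenSecretFile string
	Username, Password           string
}

// validate enforces the same rule as ReadCloudConfig: exactly one of
// token, token-file, or username/password credentials must be set.
func validate(c creds) error {
	hasToken := c.TokenID != "" || c.TokenSecret != ""
	hasTokenFile := c.TokenIDFile != "" || c.TokenSecretFile != ""
	hasUser := c.Username != "" && c.Password != ""

	set := 0
	for _, ok := range []bool{hasToken, hasTokenFile, hasUser} {
		if ok {
			set++
		}
	}
	switch {
	case set == 0:
		return fmt.Errorf("user, token or file credentials are required")
	case set > 1:
		return fmt.Errorf("must specify one of user, token or file credentials, not multiple")
	}
	return nil
}

func main() {
	fmt.Println(validate(creds{TokenID: "user!token-id", TokenSecret: "s"}) == nil)  // true
	fmt.Println(validate(creds{}) == nil)                                            // false
	fmt.Println(validate(creds{TokenID: "id", Username: "u", Password: "p"}) == nil) // false
}
```

Counting the enabled sources is equivalent to the pairwise checks in ReadCloudConfig, and stays correct if a fourth credential source is ever added.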

pkg/config/config_test.go

@@ -0,0 +1,233 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config_test
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
providerconfig "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/config"
)
func TestReadCloudConfig(t *testing.T) {
cfg, err := providerconfig.ReadCloudConfig(nil)
assert.Nil(t, err)
assert.NotNil(t, cfg)
// Empty config
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
// Wrong config
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
test: false
`))
assert.NotNil(t, err)
assert.ErrorIs(t, err, providerconfig.ErrInvalidCloudConfig)
assert.NotNil(t, cfg)
// Incomplete config
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
- url: abcd
region: cluster-1
`))
assert.NotNil(t, err)
assert.ErrorIs(t, err, providerconfig.ErrAuthCredentialsMissing)
assert.NotNil(t, cfg)
// Valid config with one cluster and secret_file
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false
token_id_file: "/etc/proxmox-secrets/cluster1/token_id"
token_secret_file: "/etc/proxmox-secrets/cluster1/token_secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, "/etc/proxmox-secrets/cluster1/token_id", cfg.Clusters[0].TokenIDFile)
// Valid config with one cluster
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, "user!token-id", cfg.Clusters[0].TokenID)
// Valid config with one cluster (username/password), implicit default provider
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, providerconfig.ProviderDefault, cfg.Features.Provider)
// Valid config with one cluster (username/password), explicit provider default
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'default'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, providerconfig.ProviderDefault, cfg.Features.Provider)
// Valid config with one cluster (username/password), explicit provider capmox
cfg, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
region: cluster-1
`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 1, len(cfg.Clusters))
assert.Equal(t, providerconfig.ProviderCapmox, cfg.Features.Provider)
// Errors when token_id/token_secret are set with token_id_file/token_secret_file
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: https://example.com
insecure: false
token_id_file: "/etc/proxmox-secrets/cluster1/token_id"
token_secret_file: "/etc/proxmox-secrets/cluster1/token_secret"
token_id: "ha"
token_secret: "secret"
region: cluster-1
`))
assert.NotNil(t, err)
// Errors when username/password are set with token_id/token_secret
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
token_id: "ha"
token_secret: "secret"
region: cluster-1
`))
assert.NotNil(t, err)
// Errors when no region
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: https://example.com
insecure: false
username: "user@pam"
password: "secret"
`))
assert.NotNil(t, err)
assert.ErrorIs(t, err, providerconfig.ErrMissingPVERegion)
// Errors when empty url
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: ""
region: test
insecure: false
username: "user@pam"
password: "secret"
`))
assert.NotNil(t, err)
assert.ErrorIs(t, err, providerconfig.ErrMissingPVEAPIURL)
// Errors when invalid url protocol
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
provider: 'capmox'
clusters:
- url: quic://example.com
insecure: false
region: test
username: "user@pam"
password: "secret"
`))
assert.NotNil(t, err)
assert.ErrorIs(t, err, providerconfig.ErrMissingPVEAPIURL)
}
func TestNetworkConfig(t *testing.T) {
// Empty config results in default network mode
cfg, err := providerconfig.ReadCloudConfig(strings.NewReader(`---`))
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, providerconfig.NetworkModeDefault, cfg.Features.Network.Mode)
// Invalid network mode value results in error
_, err = providerconfig.ReadCloudConfig(strings.NewReader(`
features:
network:
mode: 'invalid-mode'
`))
assert.NotNil(t, err)
}
func TestReadCloudConfigFromFile(t *testing.T) {
cfg, err := providerconfig.ReadCloudConfigFromFile("testdata/cloud-config.yaml")
assert.NotNil(t, err)
assert.EqualError(t, err, "error reading testdata/cloud-config.yaml: open testdata/cloud-config.yaml: no such file or directory")
assert.NotNil(t, cfg)
cfg, err = providerconfig.ReadCloudConfigFromFile("../../hack/proxmox-config.yaml")
assert.Nil(t, err)
assert.NotNil(t, cfg)
assert.Equal(t, 2, len(cfg.Clusters))
}


@@ -44,7 +44,7 @@ func (mc *MetricContext) ObserveRequest(err error) error {
}
func registerAPIMetrics() *CSIMetrics {
metrics := &CSIMetrics{
m := &CSIMetrics{
Duration: metrics.NewHistogramVec(
&metrics.HistogramOpts{
Name: "proxmox_api_request_duration_seconds",
@@ -59,9 +59,9 @@ func registerAPIMetrics() *CSIMetrics {
}
legacyregistry.MustRegister(
metrics.Duration,
metrics.Errors,
m.Duration,
m.Errors,
)
return metrics
return m
}


@@ -22,8 +22,6 @@ import (
"regexp"
"strconv"
"strings"
pxapi "github.com/Telmate/proxmox-api-go/proxmox"
)
const (
@@ -33,9 +31,9 @@ const (
var providerIDRegexp = regexp.MustCompile(`^` + ProviderName + `://([^/]*)/([^/]+)$`)
// GetProviderID returns the magic providerID for kubernetes node.
func GetProviderID(region string, vmr *pxapi.VmRef) string {
return fmt.Sprintf("%s://%s/%d", ProviderName, region, vmr.VmId())
// GetProviderIDFromID returns the magic providerID for kubernetes node.
func GetProviderIDFromID(region string, vmID int) string {
return fmt.Sprintf("%s://%s/%d", ProviderName, region, vmID)
}
// GetProviderIDFromUUID returns the magic providerID for kubernetes node.
@@ -63,20 +61,20 @@ func GetVMID(providerID string) (int, error) {
}
// ParseProviderID returns the VmRef and region from the providerID.
func ParseProviderID(providerID string) (*pxapi.VmRef, string, error) {
func ParseProviderID(providerID string) (int, string, error) {
if !strings.HasPrefix(providerID, ProviderName) {
return nil, "", fmt.Errorf("foreign providerID or empty \"%s\"", providerID)
return 0, "", fmt.Errorf("foreign providerID or empty \"%s\"", providerID)
}
matches := providerIDRegexp.FindStringSubmatch(providerID)
if len(matches) != 3 {
return nil, "", fmt.Errorf("providerID \"%s\" didn't match expected format \"%s://region/InstanceID\"", providerID, ProviderName)
return 0, "", fmt.Errorf("providerID \"%s\" didn't match expected format \"%s://region/InstanceID\"", providerID, ProviderName)
}
vmID, err := strconv.Atoi(matches[2])
if err != nil {
return nil, "", fmt.Errorf("InstanceID have to be a number, but got \"%s\"", matches[2])
return 0, "", fmt.Errorf("InstanceID have to be a number, but got \"%s\"", matches[2])
}
return pxapi.NewVmRef(vmID), matches[1], nil
return vmID, matches[1], nil
}
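The providerID round-trip after this change can be exercised standalone. In this sketch, providerName = "proxmox" is an assumption standing in for the package's ProviderName constant:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// providerName stands in for the package's ProviderName constant;
// the value "proxmox" is an assumption here.
const providerName = "proxmox"

var providerIDRe = regexp.MustCompile(`^` + providerName + `://([^/]*)/([^/]+)$`)

// makeProviderID mirrors GetProviderIDFromID: <provider>://<region>/<vmID>.
func makeProviderID(region string, vmID int) string {
	return fmt.Sprintf("%s://%s/%d", providerName, region, vmID)
}

// parseProviderID mirrors the new ParseProviderID signature: it returns the
// plain VM ID and region instead of a *pxapi.VmRef.
func parseProviderID(providerID string) (int, string, error) {
	matches := providerIDRe.FindStringSubmatch(providerID)
	if len(matches) != 3 {
		return 0, "", fmt.Errorf("providerID %q didn't match expected format %s://region/InstanceID", providerID, providerName)
	}
	vmID, err := strconv.Atoi(matches[2])
	if err != nil {
		return 0, "", fmt.Errorf("InstanceID has to be a number, but got %q", matches[2])
	}
	return vmID, matches[1], nil
}

func main() {
	id := makeProviderID("cluster-1", 100)
	fmt.Println(id) // proxmox://cluster-1/100
	vmID, region, _ := parseProviderID(id)
	fmt.Println(vmID, region) // 100 cluster-1
}
```

Returning the bare int is what lets the refactor drop the pxapi import from this file: callers construct a VmRef only where they actually need one.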


@@ -20,13 +20,12 @@ import (
"fmt"
"testing"
pxapi "github.com/Telmate/proxmox-api-go/proxmox"
"github.com/stretchr/testify/assert"
provider "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/provider"
)
func TestGetProviderID(t *testing.T) {
func TestGetProviderIDFromID(t *testing.T) {
t.Parallel()
tests := []struct {
@@ -50,12 +49,10 @@ func TestGetProviderID(t *testing.T) {
}
for _, testCase := range tests {
testCase := testCase
t.Run(fmt.Sprint(testCase.msg), func(t *testing.T) {
t.Parallel()
providerID := provider.GetProviderID(testCase.region, pxapi.NewVmRef(testCase.vmID))
providerID := provider.GetProviderIDFromID(testCase.region, testCase.vmID)
assert.Equal(t, testCase.expectedProviderID, providerID)
})
@@ -109,8 +106,6 @@ func TestGetVmID(t *testing.T) {
}
for _, testCase := range tests {
testCase := testCase
t.Run(fmt.Sprint(testCase.msg), func(t *testing.T) {
t.Parallel()
@@ -118,7 +113,7 @@ func TestGetVmID(t *testing.T) {
if testCase.expectedError != nil {
assert.NotNil(t, err)
assert.Equal(t, err.Error(), testCase.expectedError.Error())
assert.EqualError(t, err, testCase.expectedError.Error())
} else {
assert.Equal(t, testCase.expectedvmID, VMID)
}
@@ -173,8 +168,6 @@ func TestParseProviderID(t *testing.T) {
}
for _, testCase := range tests {
testCase := testCase
t.Run(fmt.Sprint(testCase.msg), func(t *testing.T) {
t.Parallel()
@@ -182,10 +175,9 @@ func TestParseProviderID(t *testing.T) {
if testCase.expectedError != nil {
assert.NotNil(t, err)
assert.Equal(t, err.Error(), testCase.expectedError.Error())
assert.EqualError(t, err, testCase.expectedError.Error())
} else {
assert.NotNil(t, vmr)
assert.Equal(t, testCase.expectedvmID, vmr.VmId())
assert.Equal(t, testCase.expectedvmID, vmr)
assert.Equal(t, testCase.expectedRegion, region)
}
})

pkg/proxmox/annotation.go

@@ -0,0 +1,22 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox
const (
// AnnotationProxmoxInstanceID is the annotation used to store the Proxmox node virtual machine ID.
AnnotationProxmoxInstanceID = Group + "/instance-id"
)


@@ -20,9 +20,11 @@ package proxmox
import (
"context"
"io"
"os"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/cluster"
ccmConfig "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/config"
provider "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/provider"
pxpool "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmoxpool"
clientkubernetes "k8s.io/client-go/kubernetes"
cloudprovider "k8s.io/cloud-provider"
@@ -35,20 +37,30 @@ const (
// ServiceAccountName is the service account name used in kube-system namespace.
ServiceAccountName = provider.ProviderName + "-cloud-controller-manager"
// ServiceAccountNameEnv is the environment variable for the service account name.
ServiceAccountNameEnv = "SERVICE_ACCOUNT"
// Group name
Group = "proxmox.sinextra.dev"
)
type cloud struct {
client *cluster.Cluster
kclient clientkubernetes.Interface
client *client
instancesV2 cloudprovider.InstancesV2
ctx context.Context //nolint:containedctx
stop func()
}
type client struct {
pxpool *pxpool.ProxmoxPool
kclient clientkubernetes.Interface
}
func init() {
cloudprovider.RegisterCloudProvider(provider.ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
cfg, err := cluster.ReadCloudConfig(config)
cfg, err := ccmConfig.ReadCloudConfig(config)
if err != nil {
klog.ErrorS(err, "failed to read config")
@@ -59,17 +71,27 @@ func init() {
})
}
func newCloud(config *cluster.ClustersConfig) (cloudprovider.Interface, error) {
client, err := cluster.NewCluster(config, nil)
func newCloud(config *ccmConfig.ClustersConfig) (cloudprovider.Interface, error) {
ctx, cancel := context.WithCancel(context.Background())
px, err := pxpool.NewProxmoxPool(config.Clusters)
if err != nil {
cancel()
return nil, err
}
instancesInterface := newInstances(client, config.Features.Provider)
client := &client{
pxpool: px,
}
instancesInterface := newInstances(client, config.Features)
return &cloud{
client: client,
instancesV2: instancesInterface,
ctx: ctx,
stop: cancel,
}, nil
}
@@ -77,15 +99,16 @@ func newCloud(config *cluster.ClustersConfig) (cloudprovider.Interface, error) {
// to perform housekeeping or run custom controllers specific to the cloud provider.
// Any tasks started here should be cleaned up when the stop channel closes.
func (c *cloud) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {
c.kclient = clientBuilder.ClientOrDie(ServiceAccountName)
serviceAccountName := os.Getenv(ServiceAccountNameEnv)
if serviceAccountName == "" {
serviceAccountName = ServiceAccountName
}
c.client.kclient = clientBuilder.ClientOrDie(serviceAccountName)
klog.InfoS("clientset initialized")
ctx, cancel := context.WithCancel(context.Background())
c.ctx = ctx
c.stop = cancel
err := c.client.CheckClusters()
err := c.client.pxpool.CheckClusters(c.ctx)
if err != nil {
klog.ErrorS(err, "failed to check proxmox cluster")
}
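The Initialize change above resolves the service account name from the SERVICE_ACCOUNT environment variable, falling back to the compiled-in default. A minimal sketch of the lookup, with the environment value passed in so it stays a pure function (the default string is illustrative; the real one is provider.ProviderName + "-cloud-controller-manager"):

```go
package main

import (
	"fmt"
	"os"
)

// serviceAccountName mirrors the lookup in Initialize: a non-empty
// SERVICE_ACCOUNT environment value overrides the compiled-in default.
func serviceAccountName(envValue, defaultName string) string {
	if envValue != "" {
		return envValue
	}
	return defaultName
}

func main() {
	name := serviceAccountName(os.Getenv("SERVICE_ACCOUNT"), "proxmox-cloud-controller-manager")
	fmt.Println(name)
}
```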


@@ -22,19 +22,20 @@ import (
"github.com/stretchr/testify/assert"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/cluster"
ccmConfig "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/config"
provider "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/provider"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmoxpool"
)
func TestNewCloudError(t *testing.T) {
cloud, err := newCloud(&cluster.ClustersConfig{})
cloud, err := newCloud(&ccmConfig.ClustersConfig{})
assert.NotNil(t, err)
assert.Nil(t, cloud)
assert.EqualError(t, err, "no Proxmox clusters found")
assert.Equal(t, proxmoxpool.ErrClustersNotFound, err)
}
func TestCloud(t *testing.T) {
cfg, err := cluster.ReadCloudConfig(strings.NewReader(`
cfg, err := ccmConfig.ReadCloudConfig(strings.NewReader(`
clusters:
- url: https://example.com
insecure: false

pkg/proxmox/errors.go

@@ -0,0 +1,22 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox
import "github.com/pkg/errors"
// ErrKubeletExternalProvider is returned when a kubelet node does not have --cloud-provider=external argument
var ErrKubeletExternalProvider = errors.New("node does not have --cloud-provider=external argument")


@@ -0,0 +1,287 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox
import (
"bytes"
"context"
"fmt"
"net"
"slices"
"sort"
"strings"
"github.com/luthermonson/go-proxmox"
providerconfig "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/config"
metrics "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/metrics"
v1 "k8s.io/api/core/v1"
cloudproviderapi "k8s.io/cloud-provider/api"
"k8s.io/klog/v2"
)
const (
noSortPriority = 0
)
func (i *instances) addresses(ctx context.Context, node *v1.Node, info *instanceInfo) []v1.NodeAddress {
var (
providedIP string
ok bool
)
if providedIP, ok = node.ObjectMeta.Annotations[cloudproviderapi.AnnotationAlphaProvidedIPAddr]; !ok {
klog.ErrorS(ErrKubeletExternalProvider, fmt.Sprintf(
"instances.InstanceMetadata() called: annotation %s missing from node. Was kubelet started without --cloud-provider=external or --node-ip?",
cloudproviderapi.AnnotationAlphaProvidedIPAddr),
"node", klog.KRef("", node.Name))
}
// providedIP is supposed to be a single IP, but some kubelets might set a comma-separated list of IPs.
providedAddresses := []string{}
if providedIP != "" {
providedAddresses = strings.Split(providedIP, ",")
}
addresses := []v1.NodeAddress{
{Type: v1.NodeHostName, Address: node.Name},
}
for _, address := range providedAddresses {
if address = strings.TrimSpace(address); address != "" {
parsedAddress := net.ParseIP(address)
if parsedAddress != nil {
addresses = append(addresses, v1.NodeAddress{
Type: v1.NodeInternalIP,
Address: parsedAddress.String(),
})
} else {
klog.Warningf("Ignoring invalid provided address '%s' for node %s", address, node.Name)
}
}
}
if i.networkOpts.Mode == providerconfig.NetworkModeDefault {
klog.V(4).InfoS("instances.addresses() returning provided IPs", "node", klog.KObj(node))
return addresses
}
if i.networkOpts.Mode == providerconfig.NetworkModeOnlyQemu || i.networkOpts.Mode == providerconfig.NetworkModeAuto {
newAddresses, err := i.retrieveQemuAddresses(ctx, info)
if err != nil {
klog.ErrorS(err, "Failed to retrieve host addresses")
}
addToNodeAddresses(&addresses, newAddresses...)
}
// Remove addresses that match the ignored CIDRs
if len(i.networkOpts.IgnoredCIDRs) > 0 {
var removableAddresses []v1.NodeAddress
for _, addr := range addresses {
ip := net.ParseIP(addr.Address)
if ip != nil && isAddressInCIDRList(i.networkOpts.IgnoredCIDRs, ip) {
removableAddresses = append(removableAddresses, addr)
}
}
removeFromNodeAddresses(&addresses, removableAddresses...)
}
sortNodeAddresses(addresses, i.networkOpts.SortOrder)
klog.V(4).InfoS("instances.addresses() returning addresses", "addresses", addresses, "node", klog.KObj(node))
return addresses
}
// retrieveQemuAddresses retrieves the addresses from the QEMU agent
func (i *instances) retrieveQemuAddresses(ctx context.Context, info *instanceInfo) ([]v1.NodeAddress, error) {
var addresses []v1.NodeAddress
nics, err := i.getInstanceNics(ctx, info)
if err != nil {
return nil, err
}
for _, nic := range nics {
if slices.Contains([]string{"lo", "cilium_net", "cilium_host"}, nic.Name) ||
strings.HasPrefix(nic.Name, "dummy") {
continue
}
for _, ip := range nic.IPAddresses {
i.processIP(ctx, &addresses, ip.IPAddress)
}
}
return addresses, nil
}
func (i *instances) processIP(_ context.Context, addresses *[]v1.NodeAddress, addr string) {
ip := net.ParseIP(addr)
if ip == nil || ip.IsLoopback() {
return
}
if ip.To4() == nil {
if i.networkOpts.IPv6SupportDisabled {
klog.V(4).InfoS("Skipping IPv6 address due to IPv6 support being disabled", "address", ip.String())
return
}
if ip.IsPrivate() || ip.IsLinkLocalUnicast() {
return
}
}
addressType := v1.NodeInternalIP
if len(i.networkOpts.ExternalCIDRs) != 0 && isAddressInCIDRList(i.networkOpts.ExternalCIDRs, ip) {
addressType = v1.NodeExternalIP
}
*addresses = append(*addresses, v1.NodeAddress{
Type: addressType,
Address: ip.String(),
})
}
func (i *instances) getInstanceNics(ctx context.Context, info *instanceInfo) ([]*proxmox.AgentNetworkIface, error) {
result := make([]*proxmox.AgentNetworkIface, 0)
px, err := i.c.pxpool.GetProxmoxCluster(info.Region)
if err != nil {
return result, err
}
vm, err := px.GetVMConfig(ctx, info.ID)
if err != nil {
return nil, err
}
mc := metrics.NewMetricContext("getVmInfo")
nicset, err := vm.AgentGetNetworkIFaces(ctx)
if mc.ObserveRequest(err) != nil {
return result, err
}
klog.V(4).InfoS("getInstanceNics() retrieved IP set", "nicset", nicset)
return nicset, nil
}
// getSortPriority returns the priority as int of an address.
//
// The priority depends on the index of the CIDR in the list the address is matching,
// where the first item of the list has higher priority than the last.
//
// If the address does not match any CIDR or is not an IP address the function returns noSortPriority.
func getSortPriority(list []*net.IPNet, address string) int {
parsedAddress := net.ParseIP(address)
if parsedAddress == nil {
return noSortPriority
}
for i, cidr := range list {
if cidr.Contains(parsedAddress) {
return len(list) - i
}
}
return noSortPriority
}
// sortNodeAddresses sorts node addresses based on comma separated list of CIDRs represented by addressSortOrder.
//
// The function only sorts addresses which match the CIDR and leaves the other addresses in the same order they are in.
// Essentially, it will also group the addresses matching a CIDR together and sort them ascending in this group,
// whereas the inter-group sorting depends on the priority.
//
// The priority depends on the order of the item in addressSortOrder, where the first item has higher priority than the last.
func sortNodeAddresses(addresses []v1.NodeAddress, addressSortOrder []*net.IPNet) {
sort.SliceStable(addresses, func(i int, j int) bool {
addressLeft := addresses[i]
addressRight := addresses[j]
priorityLeft := getSortPriority(addressSortOrder, addressLeft.Address)
priorityRight := getSortPriority(addressSortOrder, addressRight.Address)
// addresses with noSortPriority keep their original relative order; equal non-zero priorities fall back to byte-wise IP comparison
if priorityLeft > noSortPriority && priorityLeft == priorityRight {
return bytes.Compare(net.ParseIP(addressLeft.Address), net.ParseIP(addressRight.Address)) < 0
}
return priorityLeft > priorityRight
})
}
// addToNodeAddresses appends the NodeAddresses to the passed-by-pointer slice,
// only if they do not already exist
func addToNodeAddresses(addresses *[]v1.NodeAddress, addAddresses ...v1.NodeAddress) {
for _, add := range addAddresses {
exists := false
for _, existing := range *addresses {
if existing.Address == add.Address && existing.Type == add.Type {
exists = true
break
}
}
if !exists {
*addresses = append(*addresses, add)
}
}
}
// removeFromNodeAddresses removes the NodeAddresses from the passed-by-pointer
// slice if they already exist.
func removeFromNodeAddresses(addresses *[]v1.NodeAddress, removeAddresses ...v1.NodeAddress) {
var indexesToRemove []int
for _, remove := range removeAddresses {
for i := len(*addresses) - 1; i >= 0; i-- {
existing := (*addresses)[i]
if existing.Address == remove.Address && (existing.Type == remove.Type || remove.Type == "") {
indexesToRemove = append(indexesToRemove, i)
}
}
}
// Delete from the highest index down so earlier indexes remain valid.
sort.Sort(sort.Reverse(sort.IntSlice(indexesToRemove)))
for _, i := range indexesToRemove {
if i < len(*addresses) {
*addresses = append((*addresses)[:i], (*addresses)[i+1:]...)
}
}
}
// isAddressInCIDRList checks if the given address is contained in any of the CIDRs in the list.
func isAddressInCIDRList(cidrs []*net.IPNet, address net.IP) bool {
for _, cidr := range cidrs {
if cidr.Contains(address) {
return true
}
}
return false
}


@@ -18,36 +18,87 @@ package proxmox
import (
"context"
"errors"
"fmt"
"net"
"regexp"
"strconv"
"strings"
pxapi "github.com/Telmate/proxmox-api-go/proxmox"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/cluster"
goproxmox "github.com/sergelogvinov/go-proxmox"
providerconfig "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/config"
metrics "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/metrics"
provider "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/provider"
"github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmoxpool"
v1 "k8s.io/api/core/v1"
cloudprovider "k8s.io/cloud-provider"
cloudproviderapi "k8s.io/cloud-provider/api"
"k8s.io/klog/v2"
)
type instances struct {
c *cluster.Cluster
provider cluster.Provider
type instanceNetops struct {
ExternalCIDRs []*net.IPNet
SortOrder []*net.IPNet
IgnoredCIDRs []*net.IPNet
Mode providerconfig.NetworkMode
IPv6SupportDisabled bool
}
func newInstances(client *cluster.Cluster, provider cluster.Provider) *instances {
type instanceInfo struct {
ID int
UUID string
Name string
Type string
Node string
Region string
Zone string
}
type instances struct {
c *client
zoneAsHAGroup bool
provider providerconfig.Provider
networkOpts instanceNetops
updateLabels bool
}
var instanceTypeNameRegexp = regexp.MustCompile(`(^[a-zA-Z0-9_.-]+)$`)
func newInstances(client *client, features providerconfig.ClustersFeatures) *instances {
externalIPCIDRs := ParseCIDRList(features.Network.ExternalIPCIDRS)
if len(features.Network.ExternalIPCIDRS) > 0 && len(externalIPCIDRs) == 0 {
klog.Warningf("Failed to parse external CIDRs: %v", features.Network.ExternalIPCIDRS)
}
sortOrderCIDRs, ignoredCIDRs, err := ParseCIDRRuleset(features.Network.IPSortOrder)
if err != nil {
klog.Errorf("Failed to parse sort order CIDRs: %v", err)
}
if len(features.Network.IPSortOrder) > 0 && (len(sortOrderCIDRs)+len(ignoredCIDRs)) == 0 {
klog.Warningf("Failed to parse sort order CIDRs: %v", features.Network.IPSortOrder)
}
netOps := instanceNetops{
ExternalCIDRs: externalIPCIDRs,
SortOrder: sortOrderCIDRs,
IgnoredCIDRs: ignoredCIDRs,
Mode: features.Network.Mode,
IPv6SupportDisabled: features.Network.IPv6SupportDisabled,
}
return &instances{
c: client,
provider: provider,
c: client,
zoneAsHAGroup: features.HAGroup,
provider: features.Provider,
networkOpts: netOps,
updateLabels: features.ForceUpdateLabels,
}
}
// InstanceExists returns true if the instance for the given node exists according to the cloud provider.
// Use the node.name or node.spec.providerID field to find the node in the cloud provider.
func (i *instances) InstanceExists(_ context.Context, node *v1.Node) (bool, error) {
func (i *instances) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
klog.V(4).InfoS("instances.InstanceExists() called", "node", klog.KRef("", node.Name))
if node.Spec.ProviderID == "" {
@@ -63,13 +114,19 @@ func (i *instances) InstanceExists(_ context.Context, node *v1.Node) (bool, erro
}
mc := metrics.NewMetricContext("getVmInfo")
if _, _, err := i.getInstance(node); mc.ObserveRequest(err) != nil {
if err == cloudprovider.InstanceNotFound {
if _, err := i.getInstanceInfo(ctx, node); mc.ObserveRequest(err) != nil {
if errors.Is(err, cloudprovider.InstanceNotFound) {
klog.V(4).InfoS("instances.InstanceExists() instance not found", "node", klog.KObj(node), "providerID", node.Spec.ProviderID)
return false, nil
}
if errors.Is(err, proxmoxpool.ErrNodeInaccessible) {
klog.V(4).InfoS("instances.InstanceExists() proxmox node inaccessible, cannot define instance status", "node", klog.KObj(node), "providerID", node.Spec.ProviderID)
return true, nil
}
return false, err
}
@@ -78,7 +135,7 @@ func (i *instances) InstanceExists(_ context.Context, node *v1.Node) (bool, erro
// InstanceShutdown returns true if the instance is shutdown according to the cloud provider.
// Use the node.name or node.spec.providerID field to find the node in the cloud provider.
func (i *instances) InstanceShutdown(_ context.Context, node *v1.Node) (bool, error) {
func (i *instances) InstanceShutdown(ctx context.Context, node *v1.Node) (bool, error) {
klog.V(4).InfoS("instances.InstanceShutdown() called", "node", klog.KRef("", node.Name))
if node.Spec.ProviderID == "" {
@@ -93,14 +150,21 @@ func (i *instances) InstanceShutdown(_ context.Context, node *v1.Node) (bool, er
return false, nil
}
vmr, region, err := provider.ParseProviderID(node.Spec.ProviderID)
vmID, region, err := provider.ParseProviderID(node.Spec.ProviderID)
if err != nil {
klog.ErrorS(err, "instances.InstanceShutdown() failed to parse providerID", "providerID", node.Spec.ProviderID)
if i.provider == providerconfig.ProviderDefault {
klog.ErrorS(err, "instances.InstanceShutdown() failed to parse providerID", "providerID", node.Spec.ProviderID)
}
return false, nil
vmID, region, err = i.parseProviderIDFromNode(node)
if err != nil {
klog.ErrorS(err, "instances.InstanceShutdown() failed to parse providerID from node", "node", klog.KObj(node))
return false, nil
}
}
px, err := i.c.GetProxmoxCluster(region)
px, err := i.c.pxpool.GetProxmoxCluster(region)
if err != nil {
klog.ErrorS(err, "instances.InstanceShutdown() failed to get Proxmox cluster", "region", region)
@@ -109,12 +173,12 @@ func (i *instances) InstanceShutdown(_ context.Context, node *v1.Node) (bool, er
mc := metrics.NewMetricContext("getVmState")
vmState, err := px.GetVmState(vmr)
vm, err := px.GetVMByID(ctx, uint64(vmID))
if mc.ObserveRequest(err) != nil {
return false, err
}
if vmState["status"].(string) == "stopped" { //nolint:errcheck
if vm.Status == "stopped" {
return true, nil
}
@@ -124,141 +188,225 @@ func (i *instances) InstanceShutdown(_ context.Context, node *v1.Node) (bool, er
// InstanceMetadata returns the instance's metadata. The values returned in InstanceMetadata are
// translated into specific fields in the Node object on registration.
// Use the node.name or node.spec.providerID field to find the node in the cloud provider.
func (i *instances) InstanceMetadata(_ context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
func (i *instances) InstanceMetadata(ctx context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
klog.V(4).InfoS("instances.InstanceMetadata() called", "node", klog.KRef("", node.Name))
if providedIP, ok := node.ObjectMeta.Annotations[cloudproviderapi.AnnotationAlphaProvidedIPAddr]; ok {
var (
vmRef *pxapi.VmRef
region string
err error
)
var (
info *instanceInfo
err error
)
providerID := node.Spec.ProviderID
if providerID == "" {
uuid := node.Status.NodeInfo.SystemUUID
providerID := node.Spec.ProviderID
if providerID != "" && !strings.HasPrefix(providerID, provider.ProviderName) {
klog.V(4).InfoS("instances.InstanceMetadata() omitting unmanaged node", "node", klog.KObj(node), "providerID", providerID)
klog.V(4).InfoS("instances.InstanceMetadata() empty providerID, trying find node", "node", klog.KObj(node), "uuid", uuid)
return &cloudprovider.InstanceMetadata{}, nil
}
mc := metrics.NewMetricContext("findVmByName")
mc := metrics.NewMetricContext("getInstanceInfo")
vmRef, region, err = i.c.FindVMByName(node.Name)
if mc.ObserveRequest(err) != nil {
mc := metrics.NewMetricContext("findVmByUUID")
info, err = i.getInstanceInfo(ctx, node)
if mc.ObserveRequest(err) != nil {
klog.ErrorS(err, "instances.InstanceMetadata() failed to get instance info", "node", klog.KObj(node))
vmRef, region, err = i.c.FindVMByUUID(uuid)
if mc.ObserveRequest(err) != nil {
return nil, fmt.Errorf("instances.InstanceMetadata() - failed to find instance by name/uuid %s: %v, skipped", node.Name, err)
}
}
if i.provider == cluster.ProviderCapmox {
providerID = provider.GetProviderIDFromUUID(uuid)
} else {
providerID = provider.GetProviderID(region, vmRef)
}
} else if !strings.HasPrefix(node.Spec.ProviderID, provider.ProviderName) {
klog.V(4).InfoS("instances.InstanceMetadata() omitting unmanaged node", "node", klog.KObj(node), "providerID", node.Spec.ProviderID)
if errors.Is(err, cloudprovider.InstanceNotFound) {
klog.V(4).InfoS("instances.InstanceMetadata() instance not found", "node", klog.KObj(node), "providerID", providerID)
return &cloudprovider.InstanceMetadata{}, nil
}
if vmRef == nil {
mc := metrics.NewMetricContext("getVmInfo")
if errors.Is(err, proxmoxpool.ErrNodeInaccessible) {
klog.V(4).InfoS("instances.InstanceMetadata() proxmox node inaccessible, cannot get instance metadata", "node", klog.KObj(node), "providerID", providerID)
vmRef, region, err = i.getInstance(node)
return &cloudprovider.InstanceMetadata{}, nil
}
return nil, err
}
annotations := map[string]string{}
labels := map[string]string{
LabelTopologyRegion: info.Region,
LabelTopologyZone: info.Zone,
}
if providerID == "" {
if i.provider == providerconfig.ProviderCapmox {
providerID = provider.GetProviderIDFromUUID(info.UUID)
annotations[AnnotationProxmoxInstanceID] = fmt.Sprintf("%d", info.ID)
} else {
providerID = provider.GetProviderIDFromID(info.Region, info.ID)
}
}
metadata := &cloudprovider.InstanceMetadata{
ProviderID: providerID,
NodeAddresses: i.addresses(ctx, node, info),
InstanceType: info.Type,
Zone: info.Zone,
Region: info.Region,
AdditionalLabels: labels,
}
haGroups, err := i.c.pxpool.GetNodeHAGroups(ctx, info.Region, info.Node)
if err != nil {
if !errors.Is(err, proxmoxpool.ErrHAGroupNotFound) {
klog.ErrorS(err, "instances.InstanceMetadata() failed to get HA group for the node", "node", klog.KRef("", node.Name), "region", info.Region)
}
}
for _, g := range haGroups {
labels[LabelTopologyHAGroupPrefix+g] = ""
}
if i.zoneAsHAGroup {
if len(haGroups) == 0 {
err := errors.New("cannot set zone as HA-Group")
klog.ErrorS(err, "instances.InstanceMetadata() no HA groups found for the node", "node", klog.KRef("", node.Name))
return nil, err
}
metadata.Zone = haGroups[0]
labels[LabelTopologyZone] = haGroups[0]
}
if !hasUninitializedTaint(node) {
if i.updateLabels {
labels[v1.LabelTopologyZone] = metadata.Zone
labels[v1.LabelFailureDomainBetaZone] = metadata.Zone
labels[v1.LabelTopologyRegion] = metadata.Region
labels[v1.LabelFailureDomainBetaRegion] = metadata.Region
}
if len(labels) > 0 {
if err := syncNodeLabels(i.c, node, labels); err != nil {
klog.ErrorS(err, "error updating labels for the node", "node", klog.KRef("", node.Name))
}
}
}
if len(annotations) > 0 {
if err := syncNodeAnnotations(ctx, i.c.kclient, node, annotations); err != nil {
klog.ErrorS(err, "error updating annotations for the node", "node", klog.KRef("", node.Name))
}
}
klog.V(5).InfoS("instances.InstanceMetadata()", "info", info, "metadata", metadata)
return metadata, nil
}
func (i *instances) getInstanceInfo(ctx context.Context, node *v1.Node) (*instanceInfo, error) {
klog.V(4).InfoS("instances.getInstanceInfo() called", "node", klog.KRef("", node.Name), "provider", i.provider)
var (
vmID int
region string
err error
)
providerID := node.Spec.ProviderID
vmID, region, err = provider.ParseProviderID(providerID)
if err != nil {
if i.provider == providerconfig.ProviderDefault {
klog.ErrorS(err, "instances.getInstanceInfo() failed to parse providerID", "node", klog.KObj(node), "providerID", providerID)
}
vmID, region, err = i.parseProviderIDFromNode(node)
if err != nil {
klog.ErrorS(err, "instances.getInstanceInfo() failed to parse providerID from node", "node", klog.KObj(node))
}
}
if vmID == 0 || region == "" {
klog.V(4).InfoS("instances.getInstanceInfo() trying to find node in cluster", "node", klog.KObj(node), "providerID", providerID)
mc := metrics.NewMetricContext("findVmByNode")
vmID, region, err = i.c.pxpool.FindVMByNode(ctx, node)
if mc.ObserveRequest(err) != nil {
mc := metrics.NewMetricContext("findVmByUUID")
vmID, region, err = i.c.pxpool.FindVMByUUID(ctx, node.Status.NodeInfo.SystemUUID)
if mc.ObserveRequest(err) != nil {
if errors.Is(err, proxmoxpool.ErrInstanceNotFound) {
return nil, cloudprovider.InstanceNotFound
}
return nil, err
}
}
addresses := []v1.NodeAddress{}
for _, ip := range strings.Split(providedIP, ",") {
addresses = append(addresses, v1.NodeAddress{Type: v1.NodeInternalIP, Address: ip})
}
addresses = append(addresses, v1.NodeAddress{Type: v1.NodeHostName, Address: node.Name})
instanceType, err := i.getInstanceType(vmRef, region)
if err != nil {
instanceType = vmRef.GetVmType()
}
return &cloudprovider.InstanceMetadata{
ProviderID: providerID,
NodeAddresses: addresses,
InstanceType: instanceType,
Zone: vmRef.Node(),
Region: region,
}, nil
}
klog.InfoS("instances.InstanceMetadata() does the kubelet on this node run with --cloud-provider=external?", "node", klog.KRef("", node.Name))
return &cloudprovider.InstanceMetadata{}, nil
}
func (i *instances) getInstance(node *v1.Node) (*pxapi.VmRef, string, error) {
if i.provider == cluster.ProviderCapmox {
uuid := node.Status.NodeInfo.SystemUUID
vmRef, region, err := i.c.FindVMByUUID(uuid)
if err != nil {
return nil, "", fmt.Errorf("instances.getInstance() error: %v", err)
}
return vmRef, region, nil
}
vm, region, err := provider.ParseProviderID(node.Spec.ProviderID)
px, err := i.c.pxpool.GetProxmoxCluster(region)
if err != nil {
return nil, "", fmt.Errorf("instances.getInstance() error: %v", err)
return nil, err
}
px, err := i.c.GetProxmoxCluster(region)
if err != nil {
return nil, "", fmt.Errorf("instances.getInstance() error: %v", err)
}
mc := metrics.NewMetricContext("getVMConfig")
mc := metrics.NewMetricContext("getVmInfo")
vmInfo, err := px.GetVmInfo(vm)
vm, err := px.GetVMConfig(ctx, vmID)
if mc.ObserveRequest(err) != nil {
if strings.Contains(err.Error(), "not found") {
return nil, "", cloudprovider.InstanceNotFound
return nil, cloudprovider.InstanceNotFound
}
return nil, "", err
if errors.Is(err, goproxmox.ErrVirtualMachineUnreachable) {
return nil, proxmoxpool.ErrNodeInaccessible
}
return nil, err
}
if vmInfo["name"] != nil && vmInfo["name"].(string) != node.Name { //nolint:errcheck
return nil, "", fmt.Errorf("instances.getInstance() vm.name(%s) != node.name(%s)", vmInfo["name"].(string), node.Name) //nolint:errcheck
info := &instanceInfo{
ID: vmID,
UUID: goproxmox.GetVMUUID(vm),
Name: vm.Name,
Node: vm.Node,
Region: region,
Zone: vm.Node,
}
klog.V(5).Infof("instances.getInstance() vmInfo %+v", vmInfo)
if info.UUID != node.Status.NodeInfo.SystemUUID {
klog.Errorf("instances.getInstanceInfo() node %s does not match SystemUUID=%s", info.Name, node.Status.NodeInfo.SystemUUID)
return vm, region, nil
return nil, cloudprovider.InstanceNotFound
}
if !strings.HasPrefix(info.Name, node.Name) {
klog.Errorf("instances.getInstanceInfo() node %s does not match VM name=%s", node.Name, info.Name)
return nil, cloudprovider.InstanceNotFound
}
info.Type = goproxmox.GetVMSKU(vm)
if !instanceTypeNameRegexp.MatchString(info.Type) {
info.Type = fmt.Sprintf("%dVCPU-%dGB", vm.CPUs, vm.MaxMem/1024/1024/1024)
}
return info, nil
}
func (i *instances) getInstanceType(vmRef *pxapi.VmRef, region string) (string, error) {
px, err := i.c.GetProxmoxCluster(region)
if err != nil {
return "", err
func (i *instances) parseProviderIDFromNode(node *v1.Node) (vmID int, region string, err error) {
if node.Annotations[AnnotationProxmoxInstanceID] != "" {
region = node.Labels[LabelTopologyRegion]
if region == "" {
region = node.Labels[v1.LabelTopologyRegion]
}
vmID, err = strconv.Atoi(node.Annotations[AnnotationProxmoxInstanceID])
if err != nil {
return 0, "", fmt.Errorf("instances.parseProviderIDFromNode() parse annotation error: %v", err)
}
if _, err := i.c.pxpool.GetProxmoxCluster(region); err != nil {
return 0, "", fmt.Errorf("instances.parseProviderIDFromNode() get cluster error: %v", err)
}
return vmID, region, nil
}
mc := metrics.NewMetricContext("getVmInfo")
vmInfo, err := px.GetVmInfo(vmRef)
if mc.ObserveRequest(err) != nil {
return "", err
}
if vmInfo["maxcpu"] == nil || vmInfo["maxmem"] == nil {
return "", fmt.Errorf("instances.getInstanceType() failed to get instance type")
}
return fmt.Sprintf("%.0fVCPU-%.0fGB",
vmInfo["maxcpu"].(float64), //nolint:errcheck
vmInfo["maxmem"].(float64)/1024/1024/1024), nil //nolint:errcheck
return 0, "", fmt.Errorf("instances.parseProviderIDFromNode() no %s annotation found", AnnotationProxmoxInstanceID)
}

File diff suppressed because it is too large

pkg/proxmox/labels.go Normal file

@@ -0,0 +1,28 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox
const (
// LabelTopologyRegion is the label used to store the Proxmox region name.
LabelTopologyRegion = "topology." + Group + "/region"
// LabelTopologyZone is the label used to store the Proxmox zone name.
LabelTopologyZone = "topology." + Group + "/zone"
// LabelTopologyHAGroupPrefix is the prefix for labels used to store Proxmox HA group information.
LabelTopologyHAGroupPrefix = "group.topology." + Group + "/"
)

pkg/proxmox/utils.go Normal file

@@ -0,0 +1,184 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox
import (
"context"
"encoding/json"
"fmt"
"maps"
"net"
"strings"
"unicode"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/strategicpatch"
clientkubernetes "k8s.io/client-go/kubernetes"
cloudproviderapi "k8s.io/cloud-provider/api"
cloudnodeutil "k8s.io/cloud-provider/node/helpers"
)
// ErrorCIDRConflict is the error message formatting string for CIDR conflicts
const ErrorCIDRConflict = "CIDR %s intersects with ignored CIDR %s"
var uninitializedTaint = &corev1.Taint{
Key: cloudproviderapi.TaintExternalCloudProvider,
Effect: corev1.TaintEffectNoSchedule,
}
// SplitTrim splits a string of values separated by sep rune into a slice of
// strings with trimmed spaces.
func SplitTrim(s string, sep rune) []string {
f := func(c rune) bool {
return unicode.IsSpace(c) || c == sep
}
return strings.FieldsFunc(s, f)
}
// ParseCIDRRuleset parses a comma-separated list of CIDRs and returns two slices of *net.IPNet: the first is the allow list, the second the ignore list (entries prefixed with "!")
func ParseCIDRRuleset(cidrList string) (allowList, ignoreList []*net.IPNet, err error) {
cidrlist := SplitTrim(cidrList, ',')
if len(cidrlist) == 0 {
return []*net.IPNet{}, []*net.IPNet{}, nil
}
for _, item := range cidrlist {
item, isIgnore := strings.CutPrefix(item, "!")
_, cidr, err := net.ParseCIDR(item)
if err != nil {
continue
}
if isIgnore {
ignoreList = append(ignoreList, cidr)
continue
}
allowList = append(allowList, cidr)
}
// Ensure the allow and ignore lists do not intersect
for _, n1 := range allowList {
for _, n2 := range ignoreList {
if checkIPIntersects(n1, n2) {
return nil, nil, fmt.Errorf(ErrorCIDRConflict, n1.String(), n2.String())
}
}
}
return allowList, ignoreList, nil
}
// ParseCIDRList parses a comma separated list of CIDRs and returns a slice of *net.IPNet ignoring errors
func ParseCIDRList(cidrList string) []*net.IPNet {
cidrlist := SplitTrim(cidrList, ',')
if len(cidrlist) == 0 {
return []*net.IPNet{}
}
cidrs := make([]*net.IPNet, 0, len(cidrlist))
for _, item := range cidrlist {
_, cidr, err := net.ParseCIDR(item)
if err != nil {
continue
}
cidrs = append(cidrs, cidr)
}
return cidrs
}
func checkIPIntersects(n1, n2 *net.IPNet) bool {
return n2.Contains(n1.IP) || n1.Contains(n2.IP)
}
func hasUninitializedTaint(node *corev1.Node) bool {
for _, taint := range node.Spec.Taints {
if taint.MatchTaint(uninitializedTaint) {
return true
}
}
return false
}
func syncNodeAnnotations(ctx context.Context, kclient clientkubernetes.Interface, node *corev1.Node, nodeAnnotations map[string]string) error {
nodeAnnotationsOrig := node.ObjectMeta.Annotations
annotationsToUpdate := map[string]string{}
for k, v := range nodeAnnotations {
if r, ok := nodeAnnotationsOrig[k]; !ok || r != v {
annotationsToUpdate[k] = v
}
}
if len(annotationsToUpdate) > 0 {
oldData, err := json.Marshal(node)
if err != nil {
return fmt.Errorf("failed to marshal the existing node %#v: %w", node, err)
}
newNode := node.DeepCopy()
if newNode.Annotations == nil {
newNode.Annotations = make(map[string]string)
}
maps.Copy(newNode.Annotations, annotationsToUpdate)
newData, err := json.Marshal(newNode)
if err != nil {
return fmt.Errorf("failed to marshal the new node %#v: %w", newNode, err)
}
patchBytes, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, &corev1.Node{})
if err != nil {
return fmt.Errorf("failed to create a two-way merge patch: %v", err)
}
if _, err := kclient.CoreV1().Nodes().Patch(ctx, node.Name, types.StrategicMergePatchType, patchBytes, metav1.PatchOptions{}); err != nil {
return fmt.Errorf("failed to patch the node: %v", err)
}
}
return nil
}
func syncNodeLabels(c *client, node *corev1.Node, nodeLabels map[string]string) error {
nodeLabelsOrig := node.ObjectMeta.Labels
labelsToUpdate := map[string]string{}
for k, v := range nodeLabels {
if r, ok := nodeLabelsOrig[k]; !ok || r != v {
labelsToUpdate[k] = v
}
}
if len(labelsToUpdate) > 0 {
if !cloudnodeutil.AddOrUpdateLabelsOnNode(c.kclient, labelsToUpdate, node) {
return fmt.Errorf("failed to update labels for node %s", node.Name)
}
}
return nil
}

pkg/proxmox/utils_test.go Normal file

@@ -0,0 +1,94 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmox_test
import (
"fmt"
"net"
"testing"
"github.com/stretchr/testify/assert"
proxmox "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmox"
)
func TestParseCIDRRuleset(t *testing.T) {
t.Parallel()
tests := []struct {
msg string
cidrs string
expectedAllowList []*net.IPNet
expectedIgnoreList []*net.IPNet
expectedError []any
}{
{
msg: "Empty CIDR ruleset",
cidrs: "",
expectedAllowList: []*net.IPNet{},
expectedIgnoreList: []*net.IPNet{},
expectedError: []any{},
},
{
msg: "Conflicting CIDRs",
cidrs: "192.168.0.1/16,!192.168.0.1/24",
expectedAllowList: []*net.IPNet{},
expectedIgnoreList: []*net.IPNet{},
expectedError: []any{"192.168.0.0/16", "192.168.0.0/24"},
},
{
msg: "Ignores invalid CIDRs",
cidrs: "722.887.0.1/16,!588.0.1/24",
expectedAllowList: []*net.IPNet{},
expectedIgnoreList: []*net.IPNet{},
expectedError: []any{},
},
{
msg: "Valid CIDRs with ignore",
cidrs: "192.168.0.1/16,!10.0.0.5/8,144.0.0.7/16,!13.0.0.9/8",
expectedAllowList: []*net.IPNet{mustParseCIDR("192.168.0.0/16"), mustParseCIDR("144.0.0.0/16")},
expectedIgnoreList: []*net.IPNet{mustParseCIDR("10.0.0.0/8"), mustParseCIDR("13.0.0.0/8")},
expectedError: []any{},
},
}
for _, testCase := range tests {
t.Run(testCase.msg, func(t *testing.T) {
t.Parallel()
allowList, ignoreList, err := proxmox.ParseCIDRRuleset(testCase.cidrs)
assert.Equal(t, len(testCase.expectedAllowList), len(allowList), "Allow list length mismatch")
assert.Equal(t, len(testCase.expectedIgnoreList), len(ignoreList), "Ignore list length mismatch")
if len(testCase.expectedError) != 0 {
assert.EqualError(t, err, fmt.Sprintf(proxmox.ErrorCIDRConflict, testCase.expectedError...), "Error mismatch")
} else {
assert.NoError(t, err, "Unexpected error")
}
})
}
}
func mustParseCIDR(cidr string) *net.IPNet {
_, parsedCIDR, err := net.ParseCIDR(cidr)
if err != nil {
panic(fmt.Sprintf("Failed to parse CIDR %s: %v", cidr, err))
}
return parsedCIDR
}

pkg/proxmoxpool/doc.go Normal file

@@ -0,0 +1,17 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmoxpool

pkg/proxmoxpool/errors.go Normal file

@@ -0,0 +1,35 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmoxpool
import "github.com/pkg/errors"
var (
// ErrClustersNotFound is returned when a cluster is not found in the Proxmox
ErrClustersNotFound = errors.New("clusters not found")
// ErrHAGroupNotFound is returned when a ha-group is not found in the Proxmox
ErrHAGroupNotFound = errors.New("ha-group not found")
// ErrRegionNotFound is returned when a region is not found in the Proxmox
ErrRegionNotFound = errors.New("region not found")
// ErrZoneNotFound is returned when a zone is not found in the Proxmox
ErrZoneNotFound = errors.New("zone not found")
// ErrInstanceNotFound is returned when an instance is not found in the Proxmox
ErrInstanceNotFound = errors.New("instance not found")
// ErrNodeInaccessible is returned when a Proxmox node cannot be reached or accessed
ErrNodeInaccessible = errors.New("node is inaccessible")
)

pkg/proxmoxpool/pool.go Normal file

@@ -0,0 +1,336 @@
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package proxmoxpool provides a pool of Proxmox API clients, one per region.
package proxmoxpool
import (
"context"
"crypto/tls"
"errors"
"fmt"
"net/http"
"os"
"slices"
"strings"
proxmox "github.com/luthermonson/go-proxmox"
"go.uber.org/multierr"
goproxmox "github.com/sergelogvinov/go-proxmox"
v1 "k8s.io/api/core/v1"
"k8s.io/klog/v2"
)
// ProxmoxCluster defines a Proxmox cluster configuration.
type ProxmoxCluster struct {
URL string `yaml:"url"`
Insecure bool `yaml:"insecure,omitempty"`
TokenID string `yaml:"token_id,omitempty"`
TokenIDFile string `yaml:"token_id_file,omitempty"`
TokenSecret string `yaml:"token_secret,omitempty"`
TokenSecretFile string `yaml:"token_secret_file,omitempty"`
Username string `yaml:"username,omitempty"`
Password string `yaml:"password,omitempty"`
Region string `yaml:"region,omitempty"`
}
// ProxmoxPool is a Proxmox client pool of proxmox clusters.
type ProxmoxPool struct {
clients map[string]*goproxmox.APIClient
}
// NewProxmoxPool creates a new Proxmox cluster client.
func NewProxmoxPool(config []*ProxmoxCluster, options ...proxmox.Option) (*ProxmoxPool, error) {
if len(config) == 0 {
return nil, ErrClustersNotFound
}
clients := make(map[string]*goproxmox.APIClient, len(config))
for _, cfg := range config {
opts := []proxmox.Option{proxmox.WithUserAgent("ProxmoxCCM/1.0")}
opts = append(opts, options...)
if cfg.Insecure {
httpTr := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
},
}
opts = append(opts, proxmox.WithHTTPClient(&http.Client{Transport: httpTr}))
}
if cfg.TokenID == "" && cfg.TokenIDFile != "" {
var err error
cfg.TokenID, err = readValueFromFile(cfg.TokenIDFile)
if err != nil {
return nil, err
}
}
if cfg.TokenSecret == "" && cfg.TokenSecretFile != "" {
var err error
cfg.TokenSecret, err = readValueFromFile(cfg.TokenSecretFile)
if err != nil {
return nil, err
}
}
if cfg.Username != "" && cfg.Password != "" {
opts = append(opts, proxmox.WithCredentials(&proxmox.Credentials{
Username: cfg.Username,
Password: cfg.Password,
}))
} else if cfg.TokenID != "" && cfg.TokenSecret != "" {
opts = append(opts, proxmox.WithAPIToken(cfg.TokenID, cfg.TokenSecret))
}
pxClient, err := goproxmox.NewAPIClient(cfg.URL, opts...)
if err != nil {
return nil, err
}
clients[cfg.Region] = pxClient
}
return &ProxmoxPool{clients: clients}, nil
}
// GetRegions returns supported regions.
func (c *ProxmoxPool) GetRegions() []string {
regions := make([]string, 0, len(c.clients))
for region := range c.clients {
regions = append(regions, region)
}
return regions
}
// CheckClusters checks if the Proxmox connection is working.
func (c *ProxmoxPool) CheckClusters(ctx context.Context) error {
for region, pxClient := range c.clients {
info, err := pxClient.Version(ctx)
if err != nil {
return fmt.Errorf("failed to initialize proxmox client in region %s: %w", region, err)
}
cluster := (&proxmox.Cluster{}).New(pxClient.Client)
// Check that the account has permission to list VMs
vms, err := cluster.Resources(ctx, "vm")
if err != nil {
return fmt.Errorf("failed to get list of VMs in region %s: %w", region, err)
}
if len(vms) > 0 {
klog.V(4).InfoS("Proxmox cluster information", "region", region, "version", info.Version, "vms", len(vms))
} else {
klog.InfoS("Proxmox cluster has no VMs; check the account permissions if this is unexpected", "region", region)
}
}
return nil
}
// GetProxmoxCluster returns a Proxmox cluster client in a given region.
func (c *ProxmoxPool) GetProxmoxCluster(region string) (*goproxmox.APIClient, error) {
if client, ok := c.clients[region]; ok {
return client, nil
}
return nil, ErrRegionNotFound
}
// GetVMByIDInRegion returns a Proxmox VM by its ID in a given region.
func (c *ProxmoxPool) GetVMByIDInRegion(ctx context.Context, region string, vmid uint64) (*proxmox.ClusterResource, error) {
px, err := c.GetProxmoxCluster(region)
if err != nil {
return nil, err
}
vm, err := px.GetVMByID(ctx, vmid)
if err != nil {
return nil, err
}
return vm, nil
}
// DeleteVMByIDInRegion deletes a Proxmox VM, identified by the given cluster resource, in a region.
func (c *ProxmoxPool) DeleteVMByIDInRegion(ctx context.Context, region string, vm *proxmox.ClusterResource) error {
px, err := c.GetProxmoxCluster(region)
if err != nil {
return err
}
return px.DeleteVMByID(ctx, vm.Node, int(vm.VMID))
}
// GetNodeHAGroups returns the Proxmox HA groups that contain the given node in a region.
func (c *ProxmoxPool) GetNodeHAGroups(ctx context.Context, region string, node string) ([]string, error) {
groups := []string{}
px, err := c.GetProxmoxCluster(region)
if err != nil {
return nil, err
}
haGroups, err := px.GetHAGroupList(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get ha-groups: %w", err)
}
for _, g := range haGroups {
if g.Type != "group" {
continue
}
for n := range strings.SplitSeq(g.Nodes, ",") {
if node == strings.Split(n, ":")[0] {
groups = append(groups, g.Group)
}
}
}
if len(groups) > 0 {
slices.Sort(groups)
return groups, nil
}
return nil, ErrHAGroupNotFound
}
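Proxmox reports HA group membership as a comma-separated list of `node[:priority]` entries; the loop above strips the optional priority before comparing node names. A standalone sketch of that parsing (the `nodeInGroup` helper is hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// nodeInGroup reports whether node appears in a Proxmox HA group
// node list such as "pve-1:2,pve-2" (the ":priority" suffix is optional).
func nodeInGroup(nodes, node string) bool {
	for _, entry := range strings.Split(nodes, ",") {
		if strings.Split(entry, ":")[0] == node {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(nodeInGroup("pve-1:2,pve-2", "pve-1")) // true
	fmt.Println(nodeInGroup("pve-4", "pve-2"))         // false
}
```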
// FindVMByNode finds a VM by its Kubernetes node resource across all Proxmox clusters.
func (c *ProxmoxPool) FindVMByNode(ctx context.Context, node *v1.Node) (vmID int, region string, err error) {
var errs error
for region, px := range c.clients {
vm, err := px.GetVMByFilter(ctx, func(rs *proxmox.ClusterResource) (bool, error) {
if rs.Type != "qemu" {
return false, nil
}
if !strings.HasPrefix(rs.Name, node.Name) {
return false, nil
}
if rs.Status == "unknown" {
errs = multierr.Append(errs, fmt.Errorf("region %s node %s: %w", region, rs.Node, ErrNodeInaccessible))
return false, nil //nolint: nilerr
}
vm, err := px.GetVMConfig(ctx, int(rs.VMID))
if err != nil {
return false, err
}
if goproxmox.GetVMUUID(vm) == node.Status.NodeInfo.SystemUUID {
return true, nil
}
return false, nil
})
if err != nil {
if errors.Is(err, goproxmox.ErrVirtualMachineNotFound) {
continue
}
return 0, "", err
}
if vm.VMID == 0 {
continue
}
return int(vm.VMID), region, nil
}
if errs != nil {
return 0, "", errs
}
return 0, "", ErrInstanceNotFound
}
// FindVMByUUID finds a VM by its SMBIOS UUID across all Proxmox clusters.
func (c *ProxmoxPool) FindVMByUUID(ctx context.Context, uuid string) (vmID int, region string, err error) {
var errs error
for region, px := range c.clients {
vm, err := px.GetVMByFilter(ctx, func(rs *proxmox.ClusterResource) (bool, error) {
if rs.Type != "qemu" {
return false, nil
}
if rs.Status == "unknown" {
errs = multierr.Append(errs, fmt.Errorf("region %s node %s: %w", region, rs.Node, ErrNodeInaccessible))
return false, nil //nolint: nilerr
}
vm, err := px.GetVMConfig(ctx, int(rs.VMID))
if err != nil {
return false, err
}
if goproxmox.GetVMUUID(vm) == uuid {
return true, nil
}
return false, nil
})
if err != nil {
if errors.Is(err, goproxmox.ErrVirtualMachineNotFound) {
continue
}
return 0, "", err
}
return int(vm.VMID), region, nil
}
if errs != nil {
return 0, "", errs
}
return 0, "", ErrInstanceNotFound
}
func readValueFromFile(path string) (string, error) {
if path == "" {
return "", fmt.Errorf("path cannot be empty")
}
content, err := os.ReadFile(path)
if err != nil {
return "", fmt.Errorf("failed to read file '%s': %w", path, err)
}
return strings.TrimSpace(string(content)), nil
}

/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxmoxpool_test
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
pxpool "github.com/sergelogvinov/proxmox-cloud-controller-manager/pkg/proxmoxpool"
)
func newClusterEnv() []*pxpool.ProxmoxCluster {
cfg := []*pxpool.ProxmoxCluster{
{
URL: "https://127.0.0.1:8006/api2/json",
Insecure: false,
TokenID: "user!token-id",
TokenSecret: "secret",
Region: "cluster-1",
},
{
URL: "https://127.0.0.2:8006/api2/json",
Insecure: false,
TokenID: "user!token-id",
TokenSecret: "secret",
Region: "cluster-2",
},
}
return cfg
}
func newClusterEnvWithFiles(tokenIDPath, tokenSecretPath string) []*pxpool.ProxmoxCluster {
cfg := []*pxpool.ProxmoxCluster{
{
URL: "https://127.0.0.1:8006/api2/json",
Insecure: false,
TokenIDFile: tokenIDPath,
TokenSecretFile: tokenSecretPath,
Region: "cluster-1",
},
}
return cfg
}
func TestNewClient(t *testing.T) {
cfg := newClusterEnv()
assert.NotNil(t, cfg)
pxClient, err := pxpool.NewProxmoxPool([]*pxpool.ProxmoxCluster{})
assert.NotNil(t, err)
assert.Nil(t, pxClient)
pxClient, err = pxpool.NewProxmoxPool(cfg)
assert.Nil(t, err)
assert.NotNil(t, pxClient)
}
func TestNewClientWithCredentialsFromFile(t *testing.T) {
tempDir := t.TempDir()
tokenIDFile, err := os.CreateTemp(tempDir, "token_id")
assert.Nil(t, err)
tokenSecretFile, err := os.CreateTemp(tempDir, "token_secret")
assert.Nil(t, err)
_, err = tokenIDFile.WriteString("user!token-id")
assert.Nil(t, err)
_, err = tokenSecretFile.WriteString("secret")
assert.Nil(t, err)
cfg := newClusterEnvWithFiles(tokenIDFile.Name(), tokenSecretFile.Name())
pxClient, err := pxpool.NewProxmoxPool(cfg)
assert.Nil(t, err)
assert.NotNil(t, pxClient)
assert.Equal(t, "user!token-id", cfg[0].TokenID)
assert.Equal(t, "secret", cfg[0].TokenSecret)
}
func TestCheckClusters(t *testing.T) {
cfg := newClusterEnv()
assert.NotNil(t, cfg)
pxClient, err := pxpool.NewProxmoxPool(cfg)
assert.Nil(t, err)
assert.NotNil(t, pxClient)
pxapi, err := pxClient.GetProxmoxCluster("test")
assert.NotNil(t, err)
assert.Nil(t, pxapi)
assert.Equal(t, pxpool.ErrRegionNotFound, err)
pxapi, err = pxClient.GetProxmoxCluster("cluster-1")
assert.Nil(t, err)
assert.NotNil(t, pxapi)
err = pxClient.CheckClusters(t.Context())
assert.NotNil(t, err)
assert.Contains(t, err.Error(), "failed to initialize proxmox client in region")
}

test/cluster/cluster.go
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"net/http"
"github.com/jarcoal/httpmock"
"github.com/luthermonson/go-proxmox"
goproxmox "github.com/sergelogvinov/go-proxmox"
)
// SetupMockResponders sets up the HTTP mock responders for Proxmox API calls.
func SetupMockResponders() {
httpmock.RegisterResponder(http.MethodGet, `=~/version$`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Version{Version: "8.4"},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/cluster/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.NodeStatuses{{Name: "pve-1"}, {Name: "pve-2"}, {Name: "pve-3"}, {Name: "pve-4"}},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/cluster/ha/groups`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []goproxmox.HAGroup{
{Group: "rnd", Type: "group", Nodes: "pve-1,pve-2"},
{Group: "dev", Type: "group", Nodes: "pve-4"},
},
})
})
httpmock.RegisterResponder(http.MethodGet, "https://127.0.0.2:8006/api2/json/cluster/resources",
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.ClusterResources{
&proxmox.ClusterResource{
Node: "pve-3",
Type: "qemu",
VMID: 103,
Name: "cluster-2-node-1",
MaxCPU: 2,
MaxMem: 5 * 1024 * 1024 * 1024,
Status: "stopped",
},
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, "=~/cluster/resources",
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.ClusterResources{
&proxmox.ClusterResource{
Node: "pve-1",
Type: "qemu",
VMID: 100,
Name: "cluster-1-node-1",
MaxCPU: 4,
MaxMem: 10 * 1024 * 1024 * 1024,
Status: "running",
},
&proxmox.ClusterResource{
Node: "pve-2",
Type: "qemu",
VMID: 101,
Name: "cluster-1-node-2",
MaxCPU: 2,
MaxMem: 5 * 1024 * 1024 * 1024,
Status: "running",
},
&proxmox.ClusterResource{
Node: "pve-4",
Type: "qemu",
VMID: 104,
Name: "cluster-1-node-4",
MaxCPU: 2,
MaxMem: 4 * 1024 * 1024 * 1024,
Status: "unknown",
},
&proxmox.ClusterResource{
ID: "storage/smb",
Type: "storage",
PluginType: "cifs",
Node: "pve-1",
Storage: "smb",
Content: "rootdir,images",
Shared: 1,
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/rbd",
Type: "storage",
PluginType: "dir",
Node: "pve-1",
Storage: "rbd",
Content: "images",
Shared: 1,
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/zfs",
Type: "storage",
PluginType: "zfspool",
Node: "pve-1",
Storage: "zfs",
Content: "images",
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/zfs",
Type: "storage",
PluginType: "zfspool",
Node: "pve-2",
Storage: "zfs",
Content: "images",
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/zfs",
Type: "storage",
PluginType: "zfspool",
Node: "pve-4",
Storage: "zfs",
Content: "images",
Status: "unknown",
},
&proxmox.ClusterResource{
ID: "storage/lvm",
Type: "storage",
PluginType: "lvm",
Node: "pve-1",
Storage: "local-lvm",
Content: "images",
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/lvm",
Type: "storage",
PluginType: "lvm",
Node: "pve-2",
Storage: "local-lvm",
Content: "images",
Status: "available",
},
&proxmox.ClusterResource{
ID: "storage/lvm",
Type: "storage",
PluginType: "lvm",
Node: "pve-4",
Storage: "local-lvm",
Content: "images",
Status: "unknown",
},
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-1/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Node{},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-2/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Node{},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-3/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Node{},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-4/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewBytesResponse(595, []byte{}), nil
})
httpmock.RegisterResponder(http.MethodGet, "=~/nodes$",
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.NodeStatus{
{
Node: "pve-1",
Status: "online",
},
{
Node: "pve-2",
Status: "online",
},
{
Node: "pve-3",
Status: "online",
},
{
Node: "pve-4",
Status: "offline",
},
},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/rbd/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Storage{
Type: "dir",
Enabled: 1,
Active: 1,
Shared: 1,
Content: "images",
Total: 100 * 1024 * 1024 * 1024,
Used: 50 * 1024 * 1024 * 1024,
Avail: 50 * 1024 * 1024 * 1024,
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/zfs/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Storage{
Type: "zfspool",
Enabled: 1,
Active: 1,
Content: "images",
Total: 100 * 1024 * 1024 * 1024,
Used: 50 * 1024 * 1024 * 1024,
Avail: 50 * 1024 * 1024 * 1024,
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/local-lvm/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.Storage{
Type: "lvmthin",
Enabled: 1,
Active: 1,
Content: "images",
Total: 100 * 1024 * 1024 * 1024,
Used: 50 * 1024 * 1024 * 1024,
Avail: 50 * 1024 * 1024 * 1024,
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/\S+/status`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(400, map[string]any{
"data": nil,
"message": "Parameter verification failed",
"errors": map[string]string{
"storage": "No such storage.",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/smb/content`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.StorageContent{
{
Format: "raw",
Volid: "smb:9999/vm-9999-volume-smb.raw",
VMID: 9999,
Size: 1024 * 1024 * 1024,
},
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/\S+/storage/rbd/content`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.StorageContent{
{
Format: "raw",
Volid: "rbd:9999/vm-9999-volume-rbd.raw",
VMID: 9999,
Size: 1024 * 1024 * 1024,
},
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-1/qemu$`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.VirtualMachine{
{
VMID: 100,
Status: "running",
Name: "cluster-1-node-1",
Node: "pve-1",
},
},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-2/qemu$`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.VirtualMachine{
{
VMID: 101,
Status: "running",
Name: "cluster-1-node-2",
Node: "pve-2",
},
},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-3/qemu$`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": []proxmox.VirtualMachine{
{
VMID: 103,
Status: "stopped",
Name: "cluster-2-node-1",
Node: "pve-3",
},
},
})
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-4/qemu$`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewBytesResponse(595, []byte{}), nil
})
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-1/qemu/100/status/current`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.VirtualMachine{
VMID: 100,
Name: "cluster-1-node-1",
Node: "pve-1",
CPUs: 4,
MaxMem: 10 * 1024 * 1024 * 1024,
Status: "running",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-1/qemu/100/config`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": map[string]any{
"vmid": 100,
"cores": 4,
"memory": "10240",
"scsi0": "local-lvm:vm-100-disk-0,size=10G",
"scsi1": "local-lvm:vm-9999-pvc-123,backup=0,iothread=1,wwn=0x5056432d49443031",
"smbios1": "uuid=11833f4c-341f-4bd3-aad7-f7abed000000",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-2/qemu/101/status/current`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.VirtualMachine{
VMID: 101,
Name: "cluster-1-node-2",
Node: "pve-2",
CPUs: 2,
MaxMem: 5 * 1024 * 1024 * 1024,
Status: "running",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-2/qemu/101/config`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": map[string]any{
"vmid": 101,
"scsi0": "local-lvm:vm-101-disk-0,size=10G",
"scsi1": "local-lvm:vm-101-disk-1,size=1G",
"scsi3": "local-lvm:vm-101-disk-2,size=1G",
"smbios1": "uuid=11833f4c-341f-4bd3-aad7-f7abed000001",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-3/qemu/103/status/current`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": proxmox.VirtualMachine{
VMID: 103,
Name: "cluster-2-node-1",
Node: "pve-3",
CPUs: 1,
MaxMem: 2 * 1024 * 1024 * 1024,
Status: "running",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-3/qemu/103/config`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": map[string]any{
"vmid": 103,
"smbios1": "uuid=11833f4c-341f-4bd3-aad7-f7abea000000,sku=YzEubWVkaXVt",
},
})
},
)
httpmock.RegisterResponder(http.MethodGet, `=~/nodes/pve-4/qemu/`,
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewBytesResponse(595, []byte{}), nil
},
)
httpmock.RegisterResponder(http.MethodPut, "https://127.0.0.1:8006/api2/json/nodes/pve-1/qemu/100/resize",
func(_ *http.Request) (*http.Response, error) {
return httpmock.NewJsonResponse(200, map[string]any{
"data": "",
})
},
)
task := &proxmox.Task{
UPID: "UPID:pve-1:003B4235:1DF4ABCA:667C1C45:csi:103:root@pam:",
Type: "delete",
User: "root",
Status: "completed",
Node: "pve-1",
IsRunning: false,
}
taskErr := &proxmox.Task{
UPID: "UPID:pve-1:003B4235:1DF4ABCA:667C1C45:csi:104:root@pam:",
Type: "delete",
User: "root",
Status: "stopped",
ExitStatus: "ERROR",
Node: "pve-1",
IsRunning: false,
}
httpmock.RegisterResponder(http.MethodGet, fmt.Sprintf(`=~/nodes/%s/tasks/%s/status`, "pve-1", string(task.UPID)),
httpmock.NewJsonResponderOrPanic(200, map[string]any{"data": task}))
httpmock.RegisterResponder(http.MethodGet, fmt.Sprintf(`=~/nodes/%s/tasks/%s/status`, "pve-1", string(taskErr.UPID)),
httpmock.NewJsonResponderOrPanic(200, map[string]any{"data": taskErr}))
httpmock.RegisterResponder(http.MethodDelete, `=~/nodes/pve-1/storage/local-lvm/content/vm-9999-pvc-123`,
httpmock.NewJsonResponderOrPanic(200, map[string]any{"data": task.UPID}).Times(1))
httpmock.RegisterResponder(http.MethodDelete, `=~/nodes/pve-1/storage/local-lvm/content/vm-9999-pvc-error`,
httpmock.NewJsonResponderOrPanic(200, map[string]any{"data": taskErr.UPID}).Times(1))
}

test/cluster/docs.go
/*
Copyright 2023 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cluster implements the http mock server for testing purposes.
package cluster

features:
provider: default
clusters:
- url: https://127.0.0.1:8006/api2/json
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-1
- url: https://127.0.0.2:8006/api2/json
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-2

features:
provider: capmox
clusters:
- url: https://127.0.0.1:8006/api2/json
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-1
- url: https://127.0.0.2:8006/api2/json
insecure: false
token_id: "user!token-id"
token_secret: "secret"
region: cluster-2