Compare commits

..

23 Commits

Author SHA1 Message Date
Andrei Kvapil
9d83d3eaeb [tests] cleanup state before repeat e2e-apps
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-08-08 09:04:24 +02:00
Andrei Kvapil
8d4a12e14f [ci] Stop using personal domain for CI (#1322)
Migrate away from using a private domain for build infra.

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated container image registry mirror URLs in the cluster
configuration to use a new domain.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 16:59:43 +02:00
Timofei Larkin
771fbc817f [ci] Stop using personal domain for CI
Migrate away from using a private domain for build infra.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-08-07 17:52:48 +03:00
klinch0
bc22b22341 [clickhouse] add clickhouse keeper (#1320)

## What this PR does


### Release note


```release-note
- update ch operator
- add chk
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added configurable parameter to set the number of ClickHouse Keeper
replicas, with a default of 3.
* Replica count for ClickHouse Keeper and related resources can now be
adjusted via configuration.

* **Documentation**
* Updated documentation to describe the new `clickhouseKeeper.replicas`
parameter and its usage.
  * Removed an outdated command from setup instructions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
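A minimal sketch of how the new parameter could be set in a ClickHouse app's values. Field names and defaults are taken from the release notes and the chart diff further below; this is illustrative, not an authoritative reference:

```yaml
# Hypothetical values fragment for the managed ClickHouse app.
clickhouseKeeper:
  enabled: true
  replicas: 3              # new parameter; 3 is the stated default
  resourcesPreset: "micro" # sizing preset used when resources are omitted
  size: "1Gi"              # PVC size for Keeper data
```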
2025-08-07 14:26:11 +03:00
kklinch0
cffff6c49e fix readme
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-08-07 14:24:32 +03:00
klinch0
39adc16015 Merge branch 'main' into clickhouse-add-ch-keeper
Signed-off-by: klinch0 <68821526+klinch0@users.noreply.github.com>
2025-08-07 14:11:22 +03:00
kklinch0
896209a004 [clickhouse] add clickhouse keeper
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-08-07 14:07:05 +03:00
Andrei Kvapil
c6bceff54b [fix] Disable VPA for VPA (#1318)
The earlier PR was erroneously merged without including an amendment to
the existing commits, so now this amendment must be included as a
separate patch. See #1301 for details.


## What this PR does


### Release note


```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated configuration structure by moving the `vpaForVPA` setting to a
top-level key in the default values for Vertical Pod Autoscaler. No
changes to configuration values or functionality.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 12:53:44 +02:00
Timofei Larkin
ff3305f43c [fix] Disable VPA for VPA
The earlier PR was erroneously merged without including an amendment to
the existing commits, so now this amendment must be included as a
separate patch. See #1301 for details.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-08-07 13:37:20 +03:00
Nick Volynkin
58def95f67 Use cozyvalues-gen with packages/apps/tenant (#1314)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* All application parameter documentation was enhanced with explicit
type annotations and structured field descriptions for improved clarity.
* README files now include detailed parameter tables with type columns
and refined default values.
* Helm values.yaml files feature consistent type annotations and
hierarchical field documentation.

* **Schema Enhancements**
* JSON schemas for Postgres, Tenant, Virtual Machine, and Monitoring
apps were comprehensively restructured with explicit types, defaults,
validation patterns, and richer nested configuration options.

* **Chores**
* Switched documentation and schema generation tools to a unified
command (`cozyvalues-gen`) across all relevant Makefiles and CI
workflows for consistency and simplification.

* **Bug Fixes**
* Updated resource specifications in virtual machine tests for improved
accuracy.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 15:05:52 +05:00
Andrei Kvapil
9bc3b636a2 [monitoring] more retries (#1294)

## What this PR does


### Release note


```release-note
[monitoring] more retries
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Increased the timeout for the monitoring component deployment from 5
to 10 minutes.
* Added remediation retry settings, allowing up to 10 retries for both
install and upgrade phases of the monitoring component.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
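The timeout and retry changes described above correspond to Flux HelmRelease remediation settings. A sketch of what they imply, assuming the standard `helm.toolkit.fluxcd.io/v2` API (the repository may pin a different API version or set these fields elsewhere):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: monitoring
spec:
  timeout: 10m        # raised from 5 to 10 minutes
  install:
    remediation:
      retries: 10     # retry failed installs up to 10 times
  upgrade:
    remediation:
      retries: 10     # retry failed upgrades up to 10 times
```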
2025-08-07 11:47:07 +02:00
Andrei Kvapil
895597eecb [test] fix vm tests (#1308)

## What this PR does


### Release note


```release-note
- fix tests for vm
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Simplified the resource specification for virtual machines by removing
empty string assignments for CPU and memory.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 11:46:36 +02:00
Andrei Kvapil
a91e829cc9 Update Flux Operator to 0.27.0 (#1315)
New Flux Operator from this morning

Changelogs:
* 0.25.0
https://github.com/controlplaneio-fluxcd/flux-operator/releases/tag/v0.25.0
* 0.26.0
https://github.com/controlplaneio-fluxcd/flux-operator/releases/tag/v0.26.0
* 0.27.0
https://github.com/controlplaneio-fluxcd/flux-operator/releases/tag/v0.27.0

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Introduced a configurable healthcheck feature for post-install and
post-upgrade verification, including a dedicated healthcheck job and
service account options.
* Added an optional `size` field to cluster configuration, allowing
selection of vertical scaling profiles (`small`, `medium`, `large`).

* **Enhancements**
* Increased default CPU resource limits for the Flux Operator from 1 CPU
to 2 CPUs.
* Improved configuration schemas with explicit typing and validation for
greater clarity and reliability.

* **Documentation**
* Updated documentation to reflect new configuration options, version
numbers, and enhanced resource settings.

* **Bug Fixes**
* Template rendering now omits empty string values in cluster
configuration, resulting in cleaner manifests.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 11:46:07 +02:00
Andrei Kvapil
be31370540 [clickhouse] add clickhouse keeper (#1298)

## What this PR does


### Release note


```release-note
- update ch operator
- add chk
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added support for deploying ClickHouse Keeper for cluster
coordination, with configurable enablement, resource presets, and
storage size.
* Introduced new Kubernetes resources and monitoring for ClickHouse
Keeper, including metrics integration and workload monitoring.
* Enhanced configuration flexibility with new parameters for Keeper in
both values and schema files.

* **Documentation**
* Updated documentation to describe new ClickHouse Keeper parameters and
deployment options.
* Improved Helm chart and CRD documentation for ClickHouse Operator,
including new features, configuration options, and secret integration.

* **Bug Fixes**
* Updated Grafana dashboards for compatibility with latest versions and
improved metric queries.

* **Chores**
  * Incremented chart and operator versions.
  * Updated test scripts to include ClickHouse Keeper scenarios.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 11:45:14 +02:00
Nick Volynkin
b26dc63b01 [apps] Use new OpenAPI schema and README generator for tenants
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-08-07 11:40:22 +03:00
Andrei Kvapil
fafa859660 PoC: new OpenAPI schema generator (#1216)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[cozystack-api] new OpenAPI schema generator
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Enhanced parameter tables and configuration comments across multiple
apps to include explicit data types, structured field descriptions, and
improved clarity in README and values.yaml files.
* Expanded and reorganized documentation for complex objects and nested
parameters, improving usability and precision.

* **Schema Updates**
* Restructured and enriched JSON schemas for Postgres, Virtual Machine,
and Monitoring apps with detailed typing, descriptions, required fields,
validation patterns, and improved consistency.

* **Chores**
* Updated Makefiles to streamline documentation and schema generation
processes, replacing previous tools with a new generator and simplifying
command sequences.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-08-07 09:40:33 +02:00
Kingdon B
6e119ba940 Update Flux Operator to 0.27.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-08-06 13:25:19 -04:00
Andrei Kvapil
754d5a976d [apps] Introduce new OpenAPI schema generator
Use https://github.com/cozystack/cozyvalues-gen for three apps:

- apps/postgres
- apps/virtual-machine
- extra/monitoring

Changes:
- Add type and enum definitions to values.yaml.
- Update READMEs with new information.
- Update values.schema.json with definitions for children objects,
  allowing precise UI customization. Add regexp for specific types
  such as resources: CPU like `500m` and RAM like `4GiB`.
- Remove direct injections with `yq` from Makefiles where they're not
  needed anymore.

Co-authored-by: Nick Volynkin <nick.volynkin@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
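The pattern-based validation mentioned in the commit message can be pictured as a `values.schema.json` fragment like the following. The exact patterns emitted by cozyvalues-gen may differ; the regexes here are illustrative sketches of CPU values like `500m` and memory values like `4Gi`:

```json
{
  "properties": {
    "resources": {
      "type": "object",
      "properties": {
        "cpu": {
          "type": "string",
          "pattern": "^[0-9]+m?$"
        },
        "memory": {
          "type": "string",
          "pattern": "^[0-9]+(\\.[0-9]+)?(Mi|Gi)$"
        }
      }
    }
  }
}
```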
2025-08-06 20:08:06 +03:00
IvanHunters
c4a2bef4c9 [test] fix vm tests
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
(cherry picked from commit 299d006d20)
2025-08-06 17:05:13 +03:00
Andrei Kvapil
cd80a73446 [dashboard] fix diff editor
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-08-05 12:54:47 +02:00
IvanHunters
299d006d20 [test] fix vm tests
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-08-04 23:31:08 +03:00
kklinch0
85063cf624 clickhouse add chk
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-08-04 18:22:43 +03:00
IvanHunters
1c2cc0fa28 [monitoring] more retries
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-08-01 15:54:09 +03:00
91 changed files with 8732 additions and 5211 deletions

View File

@@ -29,6 +29,7 @@ jobs:
- name: Install generate
run: |
curl -sSL https://github.com/cozystack/readme-generator-for-helm/releases/download/v1.0.0/readme-generator-for-helm-linux-amd64.tar.gz | tar -xzvf- -C /usr/local/bin/ readme-generator-for-helm
curl -sSL https://github.com/cozystack/cozyvalues-gen/releases/download/v0.7.0/cozyvalues-gen-linux-amd64.tar.gz | tar -xzvf- -C /usr/local/bin/ cozyvalues-gen
- name: Run pre-commit hooks
run: |

View File

@@ -1,15 +0,0 @@
FROM golang:1.24-alpine as builder
ARG TARGETOS
ARG TARGETARCH
COPY main.go go.mod go.sum /src/
WORKDIR /src
RUN go build -o /token-proxy -ldflags '-extldflags "-static" -w -s' main.go
FROM scratch
COPY --from=builder /token-proxy /token-proxy
ENTRYPOINT ["/token-proxy"]

View File

@@ -1,8 +0,0 @@
args:
- --upstream=http://incloud-web-nginx.incloud-web.svc:8080
- --http-address=0.0.0.0:8000
- --cookie-refresh=1h
- --cookie-name=kc-access
- --cookie-secure=true
- --cookie-secret=$(OAUTH2_PROXY_COOKIE_SECRET)
- --token-check-url=http://incloud-web-nginx.incloud-web.svc:8080/api/clusters/cozydev4/k8s/apis/core.cozystack.io/v1alpha1/tenantnamespaces

View File

@@ -1,8 +0,0 @@
module token-proxy
go 1.24.0
require (
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/gorilla/securecookie v1.1.2
)

View File

@@ -1,6 +0,0 @@
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=

View File

@@ -1,259 +0,0 @@
package main
import (
"encoding/base64"
"encoding/json"
"flag"
"fmt"
"html/template"
"log"
"net/http"
"net/http/httputil"
"net/url"
"os"
"path"
"strings"
"time"
"github.com/golang-jwt/jwt/v5"
"github.com/gorilla/securecookie"
)
/* ----------------------------- flags ------------------------------------ */
var (
upstream, httpAddr, proxyPrefix string
cookieName, cookieSecretB64 string
cookieSecure bool
cookieRefresh time.Duration
tokenCheckURL string
)
func init() {
flag.StringVar(&upstream, "upstream", "", "Upstream URL to proxy to (required)")
flag.StringVar(&httpAddr, "http-address", "0.0.0.0:8000", "Listen address")
flag.StringVar(&proxyPrefix, "proxy-prefix", "/oauth2", "URL prefix for control endpoints")
flag.StringVar(&cookieName, "cookie-name", "_oauth2_proxy_0", "Cookie name")
flag.StringVar(&cookieSecretB64, "cookie-secret", "", "Base64-encoded cookie secret")
flag.BoolVar(&cookieSecure, "cookie-secure", false, "Set Secure flag on cookie")
flag.DurationVar(&cookieRefresh, "cookie-refresh", 0, "Cookie refresh interval (e.g. 1h)")
flag.StringVar(&tokenCheckURL, "token-check-url", "", "URL for external token validation")
}
/* ----------------------------- templates -------------------------------- */
var loginTmpl = template.Must(template.New("login").Parse(`
<!doctype html><html><head><title>Login</title></head>
<body>
<h2>Enter ServiceAccount / OIDC token</h2>
{{if .Err}}<p style="color:red">{{.Err}}</p>{{end}}
<form method="POST" action="{{.Action}}">
<input style="width:420px" name="token" placeholder="Paste token" autofocus/>
<button type="submit">Login</button>
</form>
</body></html>`))
/* ----------------------------- helpers ---------------------------------- */
func decodeJWT(raw string) jwt.MapClaims {
tkn, _ := jwt.Parse(raw, nil)
if c, ok := tkn.Claims.(jwt.MapClaims); ok {
return c
}
return jwt.MapClaims{}
}
func externalTokenCheck(raw string) error {
if tokenCheckURL == "" {
return nil
}
req, _ := http.NewRequest(http.MethodGet, tokenCheckURL, nil)
req.Header.Set("Authorization", "Bearer "+raw)
cli := &http.Client{Timeout: 5 * time.Second}
resp, err := cli.Do(req)
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("status %d", resp.StatusCode)
}
return nil
}
func encodeSession(sc *securecookie.SecureCookie, token string, exp, issued int64) (string, error) {
v := map[string]interface{}{
"access_token": token,
"expires": exp,
"issued": issued,
}
return sc.Encode(cookieName, v)
}
/* ----------------------------- main ------------------------------------- */
func main() {
flag.Parse()
if upstream == "" {
log.Fatal("--upstream is required")
}
upURL, err := url.Parse(upstream)
if err != nil {
log.Fatalf("invalid upstream url: %v", err)
}
if cookieSecretB64 == "" {
cookieSecretB64 = os.Getenv("COOKIE_SECRET")
}
if cookieSecretB64 == "" {
log.Fatal("--cookie-secret or $COOKIE_SECRET is required")
}
secret, err := base64.StdEncoding.DecodeString(cookieSecretB64)
if err != nil {
log.Fatalf("cookie-secret: %v", err)
}
sc := securecookie.New(secret, nil)
// control paths
signIn := path.Join(proxyPrefix, "sign_in")
signOut := path.Join(proxyPrefix, "sign_out")
userInfo := path.Join(proxyPrefix, "userinfo")
proxy := httputil.NewSingleHostReverseProxy(upURL)
/* ------------------------- /sign_in ---------------------------------- */
http.HandleFunc(signIn, func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
_ = loginTmpl.Execute(w, struct {
Action string
Err string
}{Action: signIn})
case http.MethodPost:
token := strings.TrimSpace(r.FormValue("token"))
if token == "" {
_ = loginTmpl.Execute(w, struct {
Action string
Err string
}{Action: signIn, Err: "Token required"})
return
}
if err := externalTokenCheck(token); err != nil {
_ = loginTmpl.Execute(w, struct {
Action string
Err string
}{Action: signIn, Err: "Invalid token"})
return
}
exp := time.Now().Add(24 * time.Hour).Unix()
claims := decodeJWT(token)
if v, ok := claims["exp"].(float64); ok {
exp = int64(v)
}
session, _ := encodeSession(sc, token, exp, time.Now().Unix())
http.SetCookie(w, &http.Cookie{
Name: cookieName,
Value: session,
Path: "/",
Expires: time.Unix(exp, 0),
Secure: cookieSecure,
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
})
http.Redirect(w, r, "/", http.StatusSeeOther)
}
})
/* ------------------------- /sign_out --------------------------------- */
http.HandleFunc(signOut, func(w http.ResponseWriter, r *http.Request) {
http.SetCookie(w, &http.Cookie{
Name: cookieName,
Value: "",
Path: "/",
MaxAge: -1,
Secure: cookieSecure,
HttpOnly: true,
})
http.Redirect(w, r, signIn, http.StatusSeeOther)
})
/* ------------------------- /userinfo --------------------------------- */
http.HandleFunc(userInfo, func(w http.ResponseWriter, r *http.Request) {
c, err := r.Cookie(cookieName)
if err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
var sess map[string]interface{}
if err := sc.Decode(cookieName, c.Value, &sess); err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
token, _ := sess["access_token"].(string)
claims := decodeJWT(token)
out := map[string]interface{}{
"token": token,
"sub": claims["sub"],
"email": claims["email"],
"preferred_username": claims["preferred_username"],
"groups": claims["groups"],
"expires": sess["expires"],
"issued": sess["issued"],
"cookie_refresh_enable": cookieRefresh > 0,
}
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(out)
})
/* ----------------------------- proxy --------------------------------- */
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
c, err := r.Cookie(cookieName)
if err != nil {
http.Redirect(w, r, signIn, http.StatusFound)
return
}
var sess map[string]interface{}
if err := sc.Decode(cookieName, c.Value, &sess); err != nil {
http.Redirect(w, r, signIn, http.StatusFound)
return
}
token, _ := sess["access_token"].(string)
if token == "" {
http.Redirect(w, r, signIn, http.StatusFound)
return
}
// cookie refresh
if cookieRefresh > 0 {
if issued, ok := sess["issued"].(float64); ok {
if time.Since(time.Unix(int64(issued), 0)) > cookieRefresh {
enc, _ := encodeSession(sc, token, int64(sess["expires"].(float64)), time.Now().Unix())
http.SetCookie(w, &http.Cookie{
Name: cookieName,
Value: enc,
Path: "/",
Expires: time.Unix(int64(sess["expires"].(float64)), 0),
Secure: cookieSecure,
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
})
}
}
}
r.Header.Set("Authorization", "Bearer "+token)
proxy.ServeHTTP(w, r)
})
log.Printf("Listening on %s → %s (control prefix %s)", httpAddr, upURL, proxyPrefix)
if err := http.ListenAndServe(httpAddr, nil); err != nil {
log.Fatal(err)
}
}

View File

@@ -3,6 +3,7 @@
@test "Create and Verify Seeweedfs Bucket" {
# Create the bucket resource
name='test'
kubectl -n tenant-test delete buckets.apps.cozystack.io "$name" --ignore-not-found
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Bucket

View File

@@ -2,6 +2,7 @@
@test "Create DB ClickHouse" {
name='test'
kubectl -n tenant-test delete clickhouses.apps.cozystack.io $name --ignore-not-found
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
@@ -27,6 +28,10 @@ spec:
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
clickhouseKeeper:
enabled: true
resourcesPreset: "micro"
size: "1Gi"
resources: {}
resourcesPreset: "nano"
EOF

View File

@@ -2,6 +2,7 @@
@test "Create Kafka" {
name='test'
kubectl -n tenant-test delete kafkas.apps.cozystack.io "$name" --ignore-not-found
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kafka

View File

@@ -2,6 +2,7 @@
@test "Create DB MySQL" {
name='test'
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name --ignore-not-found
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL

View File

@@ -2,6 +2,7 @@
@test "Create DB PostgreSQL" {
name='test'
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name --ignore-not-found
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres

View File

@@ -2,6 +2,7 @@
@test "Create Redis" {
name='test'
kubectl -n tenant-test delete redises.apps.cozystack.io $name --ignore-not-found
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Redis

View File

@@ -4,6 +4,7 @@ run_kubernetes_test() {
local port="$3"
local k8s_version=$(yq "$version_expr" packages/apps/kubernetes/files/versions.yaml)
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io $test_name --ignore-not-found
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes

View File

@@ -2,6 +2,7 @@
@test "Create a Virtual Machine" {
name='test'
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name --ignore-not-found
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
@@ -20,9 +21,7 @@ spec:
storage: 5Gi
storageClass: replicated
gpus: []
resources:
cpu: ""
memory: ""
resources: {}
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test

View File

@@ -1,5 +1,12 @@
#!/usr/bin/env bats
@test "Cleanup" {
name='test'
diskName='test'
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName --ignore-not-found
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name --ignore-not-found
}
@test "Create a VM Disk" {
name='test'
kubectl apply -f - <<EOF

View File

@@ -136,25 +136,25 @@ machine:
mirrors:
docker.io:
endpoints:
- https://dockerio.nexus.lllamnyp.su
- https://dockerio.nexus.aenix.org
cr.fluentbit.io:
endpoints:
- https://fluentbit.nexus.lllamnyp.su
- https://fluentbit.nexus.aenix.org
docker-registry3.mariadb.com:
endpoints:
- https://mariadb.nexus.lllamnyp.su
- https://mariadb.nexus.aenix.org
gcr.io:
endpoints:
- https://gcr.nexus.lllamnyp.su
- https://gcr.nexus.aenix.org
ghcr.io:
endpoints:
- https://ghcr.nexus.lllamnyp.su
- https://ghcr.nexus.aenix.org
quay.io:
endpoints:
- https://quay.nexus.lllamnyp.su
- https://quay.nexus.aenix.org
registry.k8s.io:
endpoints:
- https://k8s.nexus.lllamnyp.su
- https://k8s.nexus.aenix.org
files:
- content: |
[plugins]

View File

@@ -4,6 +4,5 @@
cd packages/core/installer
make image-cozystack REGISTRY=YOUR_CUSTOM_REGISTRY
make apply
kubectl delete pod dashboard-redis-master-0 -n cozy-dashboard
kubectl delete po -l app=source-controller -n cozy-fluxcd
```

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.11.1
version: 0.12.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -7,6 +7,7 @@ include ../../../scripts/package.mk
generate:
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -i -o json --indent 4 '.properties.clickhouseKeeper.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
image:
docker buildx build images/clickhouse-backup \

View File

@@ -53,6 +53,15 @@ For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
### clickhouseKeeper parameters
| Name | Description | Value |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | ------- |
| `clickhouseKeeper.enabled` | Deploy ClickHouse Keeper for cluster coordination | `true` |
| `clickhouseKeeper.size` | Persistent Volume Claim size, available for application data | `1Gi` |
| `clickhouseKeeper.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `clickhouseKeeper.replicas` | Number of keeper replicas | `3` |
## Parameter examples and reference
### resources and resourcesPreset

View File

@@ -0,0 +1,96 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- if .Values.clickhouseKeeper.enabled }}
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
name: "{{ .Release.Name }}-keeper"
annotations:
prometheus.io/port: "7000"
prometheus.io/scrape: "true"
spec:
namespaceDomainPattern: "%s.svc.{{ $clusterDomain }}"
configuration:
clusters:
- name: "cluster1"
layout:
replicasCount: {{ .Values.clickhouseKeeper.replicas }}
settings:
logger/level: "trace"
logger/console: "true"
listen_host: "0.0.0.0"
keeper_server/four_letter_word_white_list: "*"
keeper_server/coordination_settings/raft_logs_level: "information"
prometheus/endpoint: "/metrics"
prometheus/port: "7000"
prometheus/metrics: "true"
prometheus/events: "true"
prometheus/asynchronous_metrics: "true"
prometheus/status_info: "false"
defaults:
templates:
# Templates are specified as default for all clusters
podTemplate: default
dataVolumeClaimTemplate: default
templates:
podTemplates:
- name: default
metadata:
labels:
app: "{{ .Release.Name }}-keeper"
annotations:
prometheus.io/port: "7000"
prometheus.io/scrape: "true"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- "{{ .Release.Name }}-keeper"
topologyKey: "kubernetes.io/hostname"
containers:
- name: clickhouse-keeper
imagePullPolicy: IfNotPresent
image: clickhouse/clickhouse-keeper:24.9.2.42
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.clickhouseKeeper.resourcesPreset .Values.resources $) | nindent 20 }}
securityContext:
fsGroup: 101
volumeClaimTemplates:
- name: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "{{ .Values.clickhouseKeeper.size }}"
---
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
name: {{ .Release.Name }}-keeper
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
app: {{ .Release.Name }}-keeper
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
podMetricsEndpoints:
- port: metrics
path: /metrics
interval: 30s
scheme: http
relabelConfigs:
- action: replace
sourceLabels: [__meta_kubernetes_pod_node_name]
targetLabel: instance
{{- end }}

View File

@@ -91,6 +91,18 @@ spec:
layout:
shardsCount: {{ .Values.shards }}
replicasCount: {{ .Values.replicas }}
{{- if .Values.clickhouseKeeper.enabled }}
zookeeper:
nodes:
{{- $replicas := int .Values.clickhouseKeeper.replicas }}
{{- $release := .Release.Name }}
{{- $namespace := .Release.Namespace }}
{{- $clusterDomain := .Values.clusterDomain }}
{{- range $i := until $replicas }}
- host: "chk-{{ $release }}-keeper-cluster1-0-{{ $i }}.{{ $namespace }}.svc.{{ $clusterDomain }}"
port: 2181
{{- end }}
{{- end }}
templates:
volumeClaimTemplates:
- name: data-volume-template

View File

@@ -23,6 +23,9 @@ rules:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
{{- if .Values.clickhouseKeeper.enabled }}
- {{ .Release.Name }}-keeper
{{- end }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding

View File

@@ -11,3 +11,18 @@ spec:
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}
{{- if .Values.clickhouseKeeper.enabled }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-keeper
spec:
replicas: {{ .Values.clickhouseKeeper.replicas }}
minReplicas: 1
kind: clickhouse
type: clickhouse
selector:
app: {{ $.Release.Name }}-keeper
version: {{ $.Chart.Version }}
{{- end }}

View File

@@ -45,6 +45,42 @@
},
"type": "object"
},
"clickhouseKeeper": {
"properties": {
"enabled": {
"default": true,
"description": "Deploy ClickHouse Keeper for cluster coordination",
"type": "boolean"
},
"replicas": {
"default": 3,
"description": "Number of keeper replicas",
"type": "number"
},
"resourcesPreset": {
"default": "micro",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"default": "1Gi",
"description": "Persistent Volume Claim size, available for application data",
"type": "string"
}
},
"type": "object"
},
"logStorageSize": {
"default": "2Gi",
"description": "Size of Persistent Volume for logs",

View File

@@ -56,3 +56,13 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @section clickhouseKeeper parameters
## @param clickhouseKeeper.enabled Deploy ClickHouse Keeper for cluster coordination
## @param clickhouseKeeper.size Persistent Volume Claim size, available for application data
## @param clickhouseKeeper.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
## @param clickhouseKeeper.replicas Number of keeper replicas
clickhouseKeeper:
enabled: true
size: 1Gi
resourcesPreset: micro
replicas: 3

View File

@@ -3,8 +3,8 @@
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
vpaForVPA: false
vertical-pod-autoscaler:
vpaForVPA: false
recommender:
extraArgs:
container-name-label: container

View File

@@ -1,6 +1,4 @@
include ../../../scripts/package.mk
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
generate:
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md

View File

@@ -66,44 +66,61 @@ See:
### Common parameters
| Name | Description | Value |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `replicas` | Number of Postgres replicas | `2` |
| `resources` | Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `external` | Enable external access from outside the cluster | `false` |
| Name | Description | Type | Value |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------- | --------- | ------- |
| `replicas` | Number of Postgres replicas | `int` | `2` |
| `resources` | Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied. | `*object` | `{}` |
| `resources.cpu` | CPU | `*string` | `null` |
| `resources.memory` | Memory | `*string` | `null` |
| `resourcesPreset`  | Default sizing preset used when `resources` is omitted. Allowed values: `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.  | `string`  | `micro` |
| `size` | Persistent Volume Claim size, available for application data | `string` | `10Gi` |
| `storageClass` | StorageClass used to store the data | `string` | `""` |
| `external` | Enable external access from outside the cluster | `bool` | `false` |
### Application-specific parameters
| Name | Description | Value |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ----- |
| `postgresql.parameters.max_connections` | Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections | `100` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
| `users` | Users configuration | `{}` |
| `databases` | Databases configuration | `{}` |
| Name | Description | Type | Value |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ------------------- | ------- |
| `postgresql` | PostgreSQL server configuration | `object` | `{}` |
| `postgresql.parameters` | PostgreSQL server parameters | `object` | `{}` |
| `postgresql.parameters.max_connections` | Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections | `int` | `100` |
| `quorum` | Quorum configuration for synchronous replication | `object` | `{}` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `int` | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `int` | `0` |
| `users` | Users configuration | `map[string]object` | `{...}` |
| `users[name].password` | Password for the user | `*string` | `null` |
| `users[name].replication` | Whether the user has replication privileges | `*bool` | `null` |
| `databases` | Databases configuration | `map[string]object` | `{...}` |
| `databases[name].roles` | Roles for the database | `*object` | `null` |
| `databases[name].roles.admin` | List of users with admin privileges | `[]string` | `[]` |
| `databases[name].roles.readonly` | List of users with read-only privileges | `[]string` | `[]` |
| `databases[name].extensions` | Extensions enabled for the database | `[]string` | `[]` |
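The `users` and `databases` maps above can be sketched as follows; the user names, database name, and the `hstore` extension are illustrative only:

```yaml
users:
  app1:
    password: strongpassword  # *string, per-user password
  debezium:
    replication: true         # grant replication privileges
databases:
  myapp:
    roles:
      admin:
        - app1                # users with admin privileges on this database
      readonly: []
    extensions:
      - hstore                # extensions enabled for this database
```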
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------------------- | ----------------------------------- |
| `backup.enabled` | Enable regular backups | `false` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * * *` |
| `backup.retentionPolicy` | Retention policy | `30d` |
| `backup.destinationPath` | Path to store the backup (i.e. s3://bucket/path/to/folder) | `s3://bucket/path/to/folder/` |
| `backup.endpointURL` | S3 Endpoint used to upload data to the cloud | `http://minio-gateway-service:9000` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| Name | Description | Type | Value |
| ------------------------ | ---------------------------------------------------------- | -------- | ----------------------------------- |
| `backup` | Backup configuration | `object` | `{}` |
| `backup.enabled` | Enable regular backups | `bool` | `false` |
| `backup.schedule` | Cron schedule for automated backups | `string` | `0 2 * * * *` |
| `backup.retentionPolicy` | Retention policy | `string` | `30d` |
| `backup.destinationPath` | Path to store the backup (i.e. s3://bucket/path/to/folder) | `string` | `s3://bucket/path/to/folder/` |
| `backup.endpointURL` | S3 Endpoint used to upload data to the cloud | `string` | `http://minio-gateway-service:9000` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `string` | `<access key>` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `string` | `<secret key>` |
### Bootstrap (recovery) parameters
| Name | Description | Value |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------- | ------- |
| `bootstrap.enabled` | Restore database cluster from a backup | `false` |
| `bootstrap.recoveryTime` | Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest | `""` |
| `bootstrap.oldName` | Name of database cluster before deleting | `""` |
| Name | Description | Type | Value |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------- | --------- | ------- |
| `bootstrap` | Bootstrap configuration | `object` | `{}` |
| `bootstrap.enabled` | Restore database cluster from a backup | `bool` | `false` |
| `bootstrap.recoveryTime` | Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest | `*string` | `""` |
| `bootstrap.oldName` | Name of database cluster before deleting | `string` | `""` |
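For illustration, a point-in-time recovery from a previous cluster could be configured like this (the old cluster name and timestamp are placeholders):

```yaml
bootstrap:
  enabled: true
  oldName: postgres-old                 # name of the deleted cluster to restore from
  recoveryTime: "2025-08-01T00:00:00Z"  # RFC 3339; leave empty to restore the latest backup
```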
## Parameter examples and reference

View File

@@ -1,140 +1,257 @@
{
"properties": {
"backup": {
"properties": {
"destinationPath": {
"default": "s3://bucket/path/to/folder/",
"description": "Path to store the backup (i.e. s3://bucket/path/to/folder)",
"type": "string"
},
"enabled": {
"default": false,
"description": "Enable regular backups",
"type": "boolean"
},
"endpointURL": {
"default": "http://minio-gateway-service:9000",
"description": "S3 Endpoint used to upload data to the cloud",
"type": "string"
},
"retentionPolicy": {
"default": "30d",
"description": "Retention policy",
"type": "string"
},
"s3AccessKey": {
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu",
"description": "Access key for S3, used for authentication",
"type": "string"
},
"s3SecretKey": {
"default": "ju3eum4dekeich9ahM1te8waeGai0oog",
"description": "Secret key for S3, used for authentication",
"type": "string"
},
"schedule": {
"default": "0 2 * * * *",
"description": "Cron schedule for automated backups",
"type": "string"
}
},
"type": "object"
"title": "Chart Values",
"type": "object",
"properties": {
"backup": {
"description": "Backup configuration",
"type": "object",
"default": {
"destinationPath": "s3://bucket/path/to/folder/",
"enabled": false,
"endpointURL": "http://minio-gateway-service:9000",
"retentionPolicy": "30d",
"s3AccessKey": "\u003caccess key\u003e",
"s3SecretKey": "\u003csecret key\u003e",
"schedule": "0 2 * * * *"
},
"required": [
"destinationPath",
"enabled",
"endpointURL",
"retentionPolicy",
"s3AccessKey",
"s3SecretKey",
"schedule"
],
"properties": {
"destinationPath": {
"description": "Path to store the backup (i.e. s3://bucket/path/to/folder)",
"type": "string",
"default": "s3://bucket/path/to/folder/"
},
"bootstrap": {
"properties": {
"enabled": {
"default": false,
"description": "Restore database cluster from a backup",
"type": "boolean"
},
"oldName": {
"default": "",
"description": "Name of database cluster before deleting",
"type": "string"
},
"recoveryTime": {
"default": "",
"description": "Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest",
"type": "string"
}
},
"type": "object"
"enabled": {
"description": "Enable regular backups",
"type": "boolean",
"default": false
},
"databases": {
"default": {},
"description": "Databases configuration",
"type": "object"
"endpointURL": {
"description": "S3 Endpoint used to upload data to the cloud",
"type": "string",
"default": "http://minio-gateway-service:9000"
},
"external": {
"default": false,
"description": "Enable external access from outside the cluster",
"type": "boolean"
"retentionPolicy": {
"description": "Retention policy",
"type": "string",
"default": "30d"
},
"postgresql": {
"properties": {
"parameters": {
"properties": {
"max_connections": {
"default": 100,
"description": "Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections",
"type": "number"
}
},
"type": "object"
}
},
"type": "object"
"s3AccessKey": {
"description": "Access key for S3, used for authentication",
"type": "string",
"default": "\u003caccess key\u003e"
},
"quorum": {
"properties": {
"maxSyncReplicas": {
"default": 0,
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"type": "number"
},
"minSyncReplicas": {
"default": 0,
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"type": "number"
}
},
"type": "object"
"s3SecretKey": {
"description": "Secret key for S3, used for authentication",
"type": "string",
"default": "\u003csecret key\u003e"
},
"replicas": {
"default": 2,
"description": "Number of Postgres replicas",
"type": "number"
},
"resources": {
"default": {},
"description": "Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "micro",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"default": "10Gi",
"description": "Persistent Volume size",
"type": "string"
},
"storageClass": {
"default": "",
"description": "StorageClass used to store the data",
"type": "string"
"schedule": {
"description": "Cron schedule for automated backups",
"type": "string",
"default": "0 2 * * * *"
}
}
},
"title": "Chart Values",
"type": "object"
}
"bootstrap": {
"description": "Bootstrap configuration",
"type": "object",
"default": {
"enabled": false,
"oldName": "",
"recoveryTime": ""
},
"required": [
"enabled",
"oldName"
],
"properties": {
"enabled": {
"description": "Restore database cluster from a backup",
"type": "boolean",
"default": false
},
"oldName": {
"description": "Name of database cluster before deleting",
"type": "string"
},
"recoveryTime": {
"description": "Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest",
"type": "string"
}
}
},
"databases": {
"description": "Databases configuration",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"properties": {
"extensions": {
"description": "Extensions enabled for the database",
"type": "array",
"items": {
"type": "string"
}
},
"roles": {
"description": "Roles for the database",
"type": "object",
"properties": {
"admin": {
"description": "List of users with admin privileges",
"type": "array",
"items": {
"type": "string"
}
},
"readonly": {
"description": "List of users with read-only privileges",
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
}
},
"external": {
"description": "Enable external access from outside the cluster",
"type": "boolean",
"default": false
},
"postgresql": {
"description": "PostgreSQL server configuration",
"type": "object",
"default": {
"parameters": {
"max_connections": 100
}
},
"required": [
"parameters"
],
"properties": {
"parameters": {
"description": "PostgreSQL server parameters",
"type": "object",
"default": {
"max_connections": 100
},
"required": [
"max_connections"
],
"properties": {
"max_connections": {
"description": "Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections",
"type": "integer",
"default": 100
}
}
}
}
},
"quorum": {
"description": "Quorum configuration for synchronous replication",
"type": "object",
"default": {
"maxSyncReplicas": 0,
"minSyncReplicas": 0
},
"required": [
"maxSyncReplicas",
"minSyncReplicas"
],
"properties": {
"maxSyncReplicas": {
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"type": "integer",
"default": 0
},
"minSyncReplicas": {
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"type": "integer",
"default": 0
}
}
},
"replicas": {
"description": "Number of Postgres replicas",
"type": "integer",
"default": 2
},
"resources": {
"description": "Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "CPU",
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory",
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted. Allowed values: `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.",
"type": "string",
"default": "micro",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"description": "Persistent Volume Claim size, available for application data",
"type": "string",
"default": "10Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "StorageClass used to store the data",
"type": "string"
},
"users": {
"description": "Users configuration",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"properties": {
"password": {
"description": "Password for the user",
"type": "string"
},
"replication": {
"description": "Whether the user has replication privileges",
"type": "boolean"
}
}
}
}
}
}

View File

@@ -1,36 +1,44 @@
## @section Common parameters
##
## @param replicas Number of Postgres replicas
## @param replicas {int} Number of Postgres replicas
replicas: 2
## @param resources Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param resources {*resources} Explicit CPU and memory configuration for each PostgreSQL replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @field resources.cpu {*quantity} CPU
## @field resources.memory {*quantity} Memory
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
## @param resourcesPreset {string enum:"nano,micro,small,medium,large,xlarge,2xlarge"} Default sizing preset used when `resources` is omitted. Allowed values: `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
resourcesPreset: "micro"
## @param size Persistent Volume size
## @param size {quantity} Persistent Volume Claim size, available for application data
size: 10Gi
## @param storageClass StorageClass used to store the data
## @param storageClass {string} StorageClass used to store the data
storageClass: ""
## @param external Enable external access from outside the cluster
## @param external {bool} Enable external access from outside the cluster
external: false
## @section Application-specific parameters
## @param postgresql {postgresql} PostgreSQL server configuration
## @field postgresql.parameters {postgresqlParameters} PostgreSQL server parameters
## @field postgresqlParameters.max_connections {int} Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections
##
## @param postgresql.parameters.max_connections Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections
postgresql:
parameters:
max_connections: 100
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
## Configuration for the quorum-based synchronous replication
## @param quorum {quorum} Quorum configuration for synchronous replication
## @field quorum.minSyncReplicas {int} Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @field quorum.maxSyncReplicas {int} Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @param users [object] Users configuration
## @param users {map[string]user} Users configuration
## @field user.password {*string} Password for the user
## @field user.replication {*bool} Whether the user has replication privileges
##
## Example:
## users:
## user1:
@@ -44,7 +52,12 @@ quorum:
##
users: {}
## @param databases Databases configuration
## @param databases {map[string]database} Databases configuration
## @field database.roles {*databaseRoles} Roles for the database
## @field databaseRoles.admin {[]string} List of users with admin privileges
## @field databaseRoles.readonly {[]string} List of users with read-only privileges
## @field database.extensions {[]string} Extensions enabled for the database
##
## Example:
## databases:
## myapp:
@@ -64,27 +77,29 @@ databases: {}
## @section Backup parameters
## @param backup.enabled Enable regular backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.retentionPolicy Retention policy
## @param backup.destinationPath Path to store the backup (i.e. s3://bucket/path/to/folder)
## @param backup.endpointURL S3 Endpoint used to upload data to the cloud
## @param backup.s3AccessKey Access key for S3, used for authentication
## @param backup.s3SecretKey Secret key for S3, used for authentication
## @param backup {backup} Backup configuration
## @field backup.enabled {bool} Enable regular backups
## @field backup.schedule {string} Cron schedule for automated backups
## @field backup.retentionPolicy {string} Retention policy
## @field backup.destinationPath {string} Path to store the backup (i.e. s3://bucket/path/to/folder)
## @field backup.endpointURL {string} S3 Endpoint used to upload data to the cloud
## @field backup.s3AccessKey {string} Access key for S3, used for authentication
## @field backup.s3SecretKey {string} Secret key for S3, used for authentication
backup:
enabled: false
retentionPolicy: 30d
destinationPath: s3://bucket/path/to/folder/
endpointURL: http://minio-gateway-service:9000
destinationPath: "s3://bucket/path/to/folder/"
endpointURL: "http://minio-gateway-service:9000"
schedule: "0 2 * * * *"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
s3AccessKey: "<access key>"
s3SecretKey: "<secret key>"
## @section Bootstrap (recovery) parameters
## @param bootstrap.enabled Restore database cluster from a backup
## @param bootstrap.recoveryTime Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest
## @param bootstrap.oldName Name of database cluster before deleting
## @param bootstrap {bootstrap} Bootstrap configuration
## @field bootstrap.enabled {bool} Restore database cluster from a backup
## @field bootstrap.recoveryTime {*string} Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest
## @field bootstrap.oldName {string} Name of database cluster before deleting
##
bootstrap:
enabled: false

View File

@@ -1,4 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md

View File

@@ -69,12 +69,13 @@ tenant-u1
### Common parameters
| Name | Description | Value |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------- | ------- |
| `host` | The hostname used to access tenant services (defaults to using the tenant name as a subdomain for it's parent tenant host). | `""` |
| `etcd` | Deploy own Etcd cluster | `false` |
| `monitoring` | Deploy own Monitoring Stack | `false` |
| `ingress` | Deploy own Ingress Controller | `false` |
| `seaweedfs` | Deploy own SeaweedFS | `false` |
| `isolated` | Enforce tenant namespace with network policies | `true` |
| `resourceQuotas` | Define resource quotas for the tenant | `{}` |
| Name | Description | Type | Value |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------- | --------- | ------- |
| `host`           | The hostname used to access tenant services (defaults to using the tenant name as a subdomain for its parent tenant host).  | `*string` | `""`    |
| `etcd` | Deploy own Etcd cluster | `bool` | `false` |
| `monitoring` | Deploy own Monitoring Stack | `bool` | `false` |
| `ingress` | Deploy own Ingress Controller | `bool` | `false` |
| `seaweedfs` | Deploy own SeaweedFS | `bool` | `false` |
| `isolated` | Enforce tenant namespace with network policies, `true` by default | `bool` | `true` |
| `resourceQuotas` | Define resource quotas for the tenant                                                                                        | `map[string]quantity` | `{}`    |
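As a sketch of these parameters (the quota keys shown are assumptions permitted by the `map[string]quantity` type):

```yaml
host: ""          # empty string derives the host from the tenant name and parent host
etcd: true        # run a dedicated etcd cluster for this tenant
monitoring: true  # run a dedicated monitoring stack
ingress: true
seaweedfs: false
isolated: true    # enforce network policies around the tenant namespace
resourceQuotas:
  cpu: "8"
  memory: 16Gi
```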

View File

@@ -17,6 +17,12 @@ spec:
kind: HelmRepository
name: cozystack-extra
namespace: cozy-public
install:
remediation:
retries: 10
upgrade:
remediation:
retries: 10
interval: 1m0s
timeout: 5m0s
timeout: 10m0s
{{- end }}

View File

@@ -1,41 +1,45 @@
{
"properties": {
"etcd": {
"default": false,
"description": "Deploy own Etcd cluster",
"type": "boolean"
},
"host": {
"default": "",
"description": "The hostname used to access tenant services (defaults to using the tenant name as a subdomain for it's parent tenant host).",
"type": "string"
},
"ingress": {
"default": false,
"description": "Deploy own Ingress Controller",
"type": "boolean"
},
"isolated": {
"default": true,
"description": "Enforce tenant namespace with network policies",
"type": "boolean"
},
"monitoring": {
"default": false,
"description": "Deploy own Monitoring Stack",
"type": "boolean"
},
"resourceQuotas": {
"default": {},
"description": "Define resource quotas for the tenant",
"type": "object"
},
"seaweedfs": {
"default": false,
"description": "Deploy own SeaweedFS",
"type": "boolean"
}
"title": "Chart Values",
"type": "object",
"properties": {
"etcd": {
"description": "Deploy own Etcd cluster",
"type": "boolean",
"default": false
},
"title": "Chart Values",
"type": "object"
"host": {
"description": "The hostname used to access tenant services (defaults to using the tenant name as a subdomain for its parent tenant host).",
"type": "string"
},
"ingress": {
"description": "Deploy own Ingress Controller",
"type": "boolean",
"default": false
},
"isolated": {
"description": "Enforce tenant namespace with network policies, `true` by default",
"type": "boolean",
"default": true
},
"monitoring": {
"description": "Deploy own Monitoring Stack",
"type": "boolean",
"default": false
},
"resourceQuotas": {
"description": "Define resource quotas for the tenant",
"type": "object",
"default": {},
"additionalProperties": {
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
},
"seaweedfs": {
"description": "Deploy own SeaweedFS",
"type": "boolean",
"default": false
}
}
}

View File

@@ -1,18 +1,18 @@
## @section Common parameters
## @param host The hostname used to access tenant services (defaults to using the tenant name as a subdomain for it's parent tenant host).
## @param etcd Deploy own Etcd cluster
## @param monitoring Deploy own Monitoring Stack
## @param ingress Deploy own Ingress Controller
## @param seaweedfs Deploy own SeaweedFS
## @param isolated Enforce tenant namespace with network policies
## @param resourceQuotas Define resource quotas for the tenant
## @param host {*string} The hostname used to access tenant services (defaults to using the tenant name as a subdomain for its parent tenant host).
## @param etcd {bool} Deploy own Etcd cluster
## @param monitoring {bool} Deploy own Monitoring Stack
## @param ingress {bool} Deploy own Ingress Controller
## @param seaweedfs {bool} Deploy own SeaweedFS
## @param isolated {bool} Enforce tenant namespace with network policies, `true` by default
host: ""
etcd: false
monitoring: false
ingress: false
seaweedfs: false
isolated: true
## @param resourceQuotas {map[string]quantity} Define resource quotas for the tenant
resourceQuotas: {}
# resourceQuotas:
# cpu: "1"

View File

@@ -15,7 +15,8 @@ clickhouse 0.9.2 632224a3
clickhouse 0.10.0 6358fd7a
clickhouse 0.10.1 4369b031
clickhouse 0.11.0 08cb7c0f
clickhouse 0.11.1 HEAD
clickhouse 0.11.1 0e47e1e8
clickhouse 0.12.0 HEAD
ferretdb 0.1.0 e9716091
ferretdb 0.1.1 91b0499a
ferretdb 0.2.0 6c5cf5bf

View File

@@ -1,12 +1,9 @@
include ../../../scripts/package.mk
generate:
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
yq -o json -i '.properties.gpus.items.type = "object" | .properties.gpus.default = []' values.schema.json
# INSTANCE_TYPES=$$(yq e '.metadata.name' -o=json -r ../../system/kubevirt-instancetypes/templates/instancetypes.yaml | yq 'split(" ") | . + [""]' -o json) \
# && yq -i -o json ".properties.instanceType.enum = $${INSTANCE_TYPES}" values.schema.json
PREFERENCES=$$(yq e '.metadata.name' -o=json -r ../../system/kubevirt-instancetypes/templates/preferences.yaml | yq 'split(" ") | . + [""]' -o json) \
&& yq -i -o json ".properties.instanceProfile.enum = $${PREFERENCES}" values.schema.json
yq -i -o json '.properties.externalPorts.items.type = "integer"' values.schema.json
yq -i -o json '.properties.systemDisk.properties.image.enum = ["ubuntu", "cirros", "alpine", "fedora", "talos"]' values.schema.json
yq -i -o json '.properties.externalMethod.enum = ["PortList", "WholeIP"]' values.schema.json
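
The `yq` one-liners in this Makefile post-process the generated schema in place. A hypothetical Python equivalent of the enum/items injections (the sample schema dict here is a stub, not the real generated file):

```python
import json

# Stub standing in for the schema produced by cozyvalues-gen.
schema = {"properties": {"externalMethod": {"type": "string"}}}

# Equivalent of:
#   yq -i -o json '.properties.externalMethod.enum = ["PortList", "WholeIP"]' values.schema.json
#   yq -i -o json '.properties.externalPorts.items.type = "integer"' values.schema.json
schema["properties"]["externalMethod"]["enum"] = ["PortList", "WholeIP"]
schema["properties"].setdefault("externalPorts", {}).setdefault("items", {})["type"] = "integer"

print(json.dumps(schema, indent=2))
```

In the real Makefile the same mutations are applied to `values.schema.json` on disk with `yq -i`.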

View File

@@ -36,24 +36,28 @@ virtctl ssh <user>@<vm>
### Common parameters
| Name | Description | Value |
| ------------------------- | ---------------------------------------------------------------------------------------------------------- | ------------ |
| `external` | Enable external access from outside the cluster | `false` |
| `externalMethod` | specify method to passthrough the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList` | `PortList` |
| `externalPorts` | Specify ports to forward from outside the cluster | `[]` |
| `running` | Determines if the virtual machine should be running | `true` |
| `instanceType` | Virtual Machine instance type | `u1.medium` |
| `instanceProfile` | Virtual Machine preferences profile | `ubuntu` |
| `systemDisk.image` | The base image for the virtual machine. Allowed values: `ubuntu`, `cirros`, `alpine`, `fedora` and `talos` | `ubuntu` |
| `systemDisk.storage` | The size of the disk allocated for the virtual machine | `5Gi` |
| `systemDisk.storageClass` | StorageClass used to store the data | `replicated` |
| `gpus` | List of GPUs to attach | `[]` |
| `resources.cpu` | The number of CPU cores allocated to the virtual machine | `""` |
| `resources.memory` | The amount of memory allocated to the virtual machine | `""` |
| `resources.sockets` | The number of CPU sockets allocated to the virtual machine (used to define vCPU topology) | `""` |
| `sshKeys` | List of SSH public keys for authentication. Can be a single key or a list of keys. | `[]` |
| `cloudInit` | cloud-init user data config. See cloud-init documentation for more details. | `""` |
| `cloudInitSeed` | A seed string to generate an SMBIOS UUID for the VM. | `""` |
| Name | Description | Type | Value |
| ------------------------- | ----------------------------------------------------------------------------------------------------------- | ---------- | ------------ |
| `external` | Enable external access from outside the cluster | `bool` | `false` |
| `externalMethod`          | Specify method to pass through the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList`  | `string`   | `PortList`   |
| `externalPorts` | Specify ports to forward from outside the cluster | `[]int` | `[22]` |
| `running`                 | Whether the virtual machine should be running                                                                | `bool`     | `true`       |
| `instanceType` | Virtual Machine instance type | `string` | `u1.medium` |
| `instanceProfile` | Virtual Machine preferences profile | `string` | `ubuntu` |
| `systemDisk` | System disk configuration | `object` | `{}` |
| `systemDisk.image` | The base image for the virtual machine. Allowed values: `ubuntu`, `cirros`, `alpine`, `fedora` and `talos` | `string` | `ubuntu` |
| `systemDisk.storage` | The size of the disk allocated for the virtual machine | `string` | `5Gi` |
| `systemDisk.storageClass` | StorageClass used to store the data | `*string` | `replicated` |
| `gpus` | List of GPUs to attach | `[]object` | `[]` |
| `gpus[i].name` | The name of the GPU to attach. This should match the GPU resource name in the cluster. | `string` | `""` |
| `resources` | Resources | `object` | `{}` |
| `resources.cpu` | The number of CPU cores allocated to the virtual machine | `*string` | `null` |
| `resources.sockets` | The number of CPU sockets allocated to the virtual machine (used to define vCPU topology) | `*string` | `null` |
| `resources.memory` | The amount of memory allocated to the virtual machine | `*string` | `null` |
| `sshKeys` | List of SSH public keys for authentication. Can be a single key or a list of keys. | `[]string` | `[]` |
| `cloudInit` | cloud-init user data config. See cloud-init documentation for more details. | `string` | `""` |
| `cloudInitSeed` | A seed string to generate an SMBIOS UUID for the VM. | `string` | `""` |
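
Taken together, the parameters above assemble into a values file like the following minimal sketch (defaults from the table; the SSH key and resource figures are placeholders):

```yaml
external: false
externalMethod: PortList
externalPorts:
  - 22
running: true
instanceType: u1.medium
instanceProfile: ubuntu
systemDisk:
  image: ubuntu
  storage: 5Gi
  storageClass: replicated
gpus: []
resources:
  cpu: "4"
  sockets: "1"
  memory: "8Gi"
sshKeys:
  - ssh-rsa AAAA...  # placeholder key
cloudInit: ""
cloudInitSeed: ""
```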
## U Series

View File

@@ -1,49 +1,60 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"cloudInit": {
"default": "",
"description": "cloud-init user data config. See cloud-init documentation for more details.",
"type": "string"
},
"cloudInitSeed": {
"default": "",
"description": "A seed string to generate an SMBIOS UUID for the VM.",
"type": "string"
},
"external": {
"default": false,
"description": "Enable external access from outside the cluster",
"type": "boolean"
"type": "boolean",
"default": false
},
"externalMethod": {
"default": "PortList",
"description": "specify method to passthrough the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList`",
"description": "Specify method to pass through the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList`",
"type": "string",
"default": "PortList",
"enum": [
"PortList",
"WholeIP"
]
},
"externalPorts": {
"default": [],
"description": "Specify ports to forward from outside the cluster",
"type": "array",
"default": [
22
],
"items": {
"type": "integer"
},
"type": "array"
}
},
"gpus": {
"default": [],
"description": "List of GPUs to attach",
"type": "array",
"default": [],
"items": {
"type": "object"
},
"type": "array"
"type": "object",
"required": [
"name"
],
"properties": {
"name": {
"description": "The name of the GPU to attach. This should match the GPU resource name in the cluster.",
"type": "string"
}
}
}
},
"instanceProfile": {
"default": "ubuntu",
"description": "Virtual Machine preferences profile",
"type": "string",
"default": "ubuntu",
"enum": [
"alpine",
"centos.7",
@@ -91,47 +102,65 @@
]
},
"instanceType": {
"default": "u1.medium",
"description": "Virtual Machine instance type",
"type": "string"
"type": "string",
"default": "u1.medium"
},
"resources": {
"description": "Resources",
"type": "object",
"default": {},
"properties": {
"cpu": {
"default": "",
"description": "The number of CPU cores allocated to the virtual machine",
"type": "string"
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"default": "",
"description": "The amount of memory allocated to the virtual machine",
"type": "string"
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"sockets": {
"default": "",
"description": "The number of CPU sockets allocated to the virtual machine (used to define vCPU topology)",
"type": "string"
"type": "string",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
},
"type": "object"
}
},
"running": {
"default": true,
"description": "Determines if the virtual machine should be running",
"type": "boolean"
"description": "if the virtual machine should be running",
"type": "boolean",
"default": true
},
"sshKeys": {
"default": [],
"description": "List of SSH public keys for authentication. Can be a single key or a list of keys.",
"items": {},
"type": "array"
"type": "array",
"default": [],
"items": {
"type": "string"
}
},
"systemDisk": {
"description": "System disk configuration",
"type": "object",
"default": {
"image": "ubuntu",
"storage": "5Gi",
"storageClass": "replicated"
},
"required": [
"image",
"storage"
],
"properties": {
"image": {
"default": "ubuntu",
"description": "The base image for the virtual machine. Allowed values: `ubuntu`, `cirros`, `alpine`, `fedora` and `talos`",
"type": "string",
"default": "ubuntu",
"enum": [
"ubuntu",
"cirros",
@@ -141,19 +170,16 @@
]
},
"storage": {
"default": "5Gi",
"description": "The size of the disk allocated for the virtual machine",
"type": "string"
"type": "string",
"default": "5Gi"
},
"storageClass": {
"default": "replicated",
"description": "StorageClass used to store the data",
"type": "string"
"type": "string",
"default": "replicated"
}
},
"type": "object"
}
}
},
"title": "Chart Values",
"type": "object"
}
}

View File

@@ -1,46 +1,53 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param externalMethod specify method to passthrough the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList`
## @param externalPorts [array] Specify ports to forward from outside the cluster
##
## @param external {bool} Enable external access from outside the cluster
external: false
externalMethod: PortList
## @param externalMethod {string enum:"PortList,WholeIP"} Specify method to pass through the traffic to the virtual machine. Allowed values: `WholeIP` and `PortList`
externalMethod: "PortList"
## @param externalPorts {[]int} Specify ports to forward from outside the cluster
externalPorts:
- 22
- 22
## @param running Determines if the virtual machine should be running
## @param running {bool} Whether the virtual machine should be running
running: true
## @param instanceType Virtual Machine instance type
## @param instanceProfile Virtual Machine preferences profile
## @param instanceType {string} Virtual Machine instance type
## @param instanceProfile {string} Virtual Machine preferences profile
##
instanceType: "u1.medium"
instanceProfile: ubuntu
## @param systemDisk.image The base image for the virtual machine. Allowed values: `ubuntu`, `cirros`, `alpine`, `fedora` and `talos`
## @param systemDisk.storage The size of the disk allocated for the virtual machine
## @param systemDisk.storageClass StorageClass used to store the data
##
## @param systemDisk {systemDisk} System disk configuration
## @field systemDisk.image {string enum:"ubuntu,cirros,alpine,fedora,talos"} The base image for the virtual machine. Allowed values: `ubuntu`, `cirros`, `alpine`, `fedora` and `talos`
## @field systemDisk.storage {string} The size of the disk allocated for the virtual machine
## @field systemDisk.storageClass {*string} StorageClass used to store the data
##
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
## @param gpus [array] List of GPUs to attach
## @param gpus {[]gpu} List of GPUs to attach
## @field gpu.name {string} The name of the GPU to attach. This should match the GPU resource name in the cluster.
## Example:
## gpus:
## - name: nvidia.com/GA102GL_A10
gpus: []
## @param resources.cpu The number of CPU cores allocated to the virtual machine
## @param resources.memory The amount of memory allocated to the virtual machine
## @param resources.sockets The number of CPU sockets allocated to the virtual machine (used to define vCPU topology)
resources:
cpu: ""
memory: ""
sockets: ""
## @param sshKeys [array] List of SSH public keys for authentication. Can be a single key or a list of keys.
## @param resources {resources} Resources
## @field resources.cpu {*quantity} The number of CPU cores allocated to the virtual machine
## @field resources.sockets {*quantity} The number of CPU sockets allocated to the virtual machine (used to define vCPU topology)
## @field resources.memory {*quantity} The amount of memory allocated to the virtual machine
## Example:
## resources:
## cpu: "4"
## sockets: "1"
## memory: "8Gi"
resources: {}
## @param sshKeys {[]string} List of SSH public keys for authentication. Can be a single key or a list of keys.
## Example:
## sshKeys:
## - ssh-rsa ...
@@ -48,7 +55,7 @@ resources:
##
sshKeys: []
## @param cloudInit cloud-init user data config. See cloud-init documentation for more details.
## @param cloudInit {string} cloud-init user data config. See cloud-init documentation for more details.
## - https://cloudinit.readthedocs.io/en/latest/explanation/format.html
## - https://cloudinit.readthedocs.io/en/latest/reference/examples.html
## Example:
@@ -59,11 +66,11 @@ sshKeys: []
##
cloudInit: ""
## @param cloudInitSeed A seed string to generate an SMBIOS UUID for the VM.
cloudInitSeed: ""
## @param cloudInitSeed {string} A seed string to generate an SMBIOS UUID for the VM.
## Change it to any new value to force a full cloud-init reconfiguration. Change it when you want to apply
## to an existing VM settings that are usually written only once, like new SSH keys or new network configuration.
## An empty value does nothing (and the existing UUID is not reverted). Please note that changing this value
## does not trigger a VM restart. You must perform the restart separately.
## Example:
## cloudInitSeed: "upd1"
cloudInitSeed: ""

View File

@@ -6,11 +6,7 @@ include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
generate:
readme-generator-for-helm -v values.yaml -s values.schema.json.tmp -r README.md
cat values.schema.json.tmp | \
jq '.properties.metricsStorages.items.type = "object" | .properties.logsStorages.items.type = "object"' \
> values.schema.json
rm -f values.schema.json.tmp
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
image:
docker buildx build images/grafana \

View File

@@ -4,22 +4,88 @@
### Common parameters
| Name | Description | Value |
| ----------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------- |
| `host` | The hostname used to access the grafana externally (defaults to 'grafana' subdomain for the tenant host). | `""` |
| `metricsStorages` | Configuration of metrics storage instances | `[]` |
| `logsStorages` | Configuration of logs storage instances | `[]` |
| `alerta.storage` | Persistent Volume size for alerta database | `10Gi` |
| `alerta.storageClassName` | StorageClass used to store the data | `""` |
| `alerta.resources.requests.cpu` | The minimum amount of CPU required for alerta | `100m` |
| `alerta.resources.requests.memory` | The minimum amount of memory required for alerta | `256Mi` |
| `alerta.resources.limits.cpu` | The maximum amount of CPU allowed for alerta | `1` |
| `alerta.resources.limits.memory` | The maximum amount of memory allowed for alerta | `1Gi` |
| `alerta.alerts.telegram.token` | telegram token for your bot | `""` |
| `alerta.alerts.telegram.chatID` | specify multiple ID's separated by comma. Get yours in https://t.me/chatid_echo_bot | `""` |
| `alerta.alerts.telegram.disabledSeverity` | list of severity without alerts, separated comma like: "informational,warning" | `""` |
| `grafana.db.size` | Persistent Volume size for grafana database | `10Gi` |
| `grafana.resources.requests.cpu` | The minimum amount of CPU required for grafana | `100m` |
| `grafana.resources.requests.memory` | The minimum amount of memory required for grafana | `256Mi` |
| `grafana.resources.limits.cpu` | The maximum amount of CPU allowed for grafana | `1` |
| `grafana.resources.limits.memory` | The maximum amount of memory allowed for grafana | `1Gi` |
| Name | Description | Type | Value |
| ------ | --------------------------------------------------------------------------------------------------------- | -------- | ----- |
| `host` | The hostname used to access Grafana externally (defaults to 'grafana' subdomain for the tenant host). | `string` | `""` |
### Metrics storage configuration
| Name | Description | Type | Value |
| ------------------------------------------------ | -------------------------------------------------------------- | ---------- | ------- |
| `metricsStorages` | Configuration of metrics storage instances | `[]object` | `[...]` |
| `metricsStorages[i].name` | Name of the storage instance | `string` | `""` |
| `metricsStorages[i].retentionPeriod` | Retention period for the metrics in the storage instance | `string` | `""` |
| `metricsStorages[i].deduplicationInterval` | Deduplication interval for the metrics in the storage instance | `string` | `""` |
| `metricsStorages[i].storage` | Persistent Volume size for the storage instance | `string` | `""` |
| `metricsStorages[i].storageClassName` | StorageClass used to store the data | `*string` | `null` |
| `metricsStorages[i].vminsert` | Configuration for vminsert component of the storage instance | `*object` | `null` |
| `metricsStorages[i].vminsert.minAllowed` | Requests (minimum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vminsert.minAllowed.cpu` | CPU request (minimum available CPU) | `*string` | `null` |
| `metricsStorages[i].vminsert.minAllowed.memory` | Memory request (minimum available memory) | `*string` | `null` |
| `metricsStorages[i].vminsert.maxAllowed` | Limits (maximum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vminsert.maxAllowed.cpu` | CPU limit (maximum available CPU) | `*string` | `null` |
| `metricsStorages[i].vminsert.maxAllowed.memory` | Memory limit (maximum available memory) | `*string` | `null` |
| `metricsStorages[i].vmselect` | Configuration for vmselect component of the storage instance | `*object` | `null` |
| `metricsStorages[i].vmselect.minAllowed` | Requests (minimum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vmselect.minAllowed.cpu` | CPU request (minimum available CPU) | `*string` | `null` |
| `metricsStorages[i].vmselect.minAllowed.memory` | Memory request (minimum available memory) | `*string` | `null` |
| `metricsStorages[i].vmselect.maxAllowed` | Limits (maximum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vmselect.maxAllowed.cpu` | CPU limit (maximum available CPU) | `*string` | `null` |
| `metricsStorages[i].vmselect.maxAllowed.memory` | Memory limit (maximum available memory) | `*string` | `null` |
| `metricsStorages[i].vmstorage` | Configuration for vmstorage component of the storage instance | `*object` | `null` |
| `metricsStorages[i].vmstorage.minAllowed` | Requests (minimum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vmstorage.minAllowed.cpu` | CPU request (minimum available CPU) | `*string` | `null` |
| `metricsStorages[i].vmstorage.minAllowed.memory` | Memory request (minimum available memory) | `*string` | `null` |
| `metricsStorages[i].vmstorage.maxAllowed` | Limits (maximum allowed/available resources) | `*object` | `null` |
| `metricsStorages[i].vmstorage.maxAllowed.cpu` | CPU limit (maximum available CPU) | `*string` | `null` |
| `metricsStorages[i].vmstorage.maxAllowed.memory` | Memory limit (maximum available memory) | `*string` | `null` |
### Logs storage configuration
| Name | Description | Type | Value |
| ---------------------------------- | ----------------------------------------------------- | ---------- | ------- |
| `logsStorages` | Configuration of logs storage instances | `[]object` | `[...]` |
| `logsStorages[i].name` | Name of the storage instance | `string` | `""` |
| `logsStorages[i].retentionPeriod` | Retention period for the logs in the storage instance | `string` | `""` |
| `logsStorages[i].storage` | Persistent Volume size for the storage instance | `string` | `""` |
| `logsStorages[i].storageClassName` | StorageClass used to store the data | `*string` | `null` |
### Alerta configuration
| Name | Description | Type | Value |
| ----------------------------------------- | ----------------------------------------------------------------------------------- | --------- | ------- |
| `alerta` | Configuration for Alerta service | `object` | `{}` |
| `alerta.storage` | Persistent Volume size for the database | `string` | `10Gi` |
| `alerta.storageClassName` | StorageClass used to store the data | `string` | `""` |
| `alerta.resources` | Resources configuration | `*object` | `null` |
| `alerta.resources.requests` | Requests (minimum allowed/available resources) | `*object` | `null` |
| `alerta.resources.requests.cpu` | CPU request (minimum available CPU) | `*string` | `100m` |
| `alerta.resources.requests.memory` | Memory request (minimum available memory) | `*string` | `256Mi` |
| `alerta.resources.limits` | Limits (maximum allowed/available resources) | `*object` | `null` |
| `alerta.resources.limits.cpu` | CPU limit (maximum available CPU) | `*string` | `1` |
| `alerta.resources.limits.memory` | Memory limit (maximum available memory) | `*string` | `1Gi` |
| `alerta.alerts` | Configuration for alerts | `object` | `{}` |
| `alerta.alerts.telegram` | Configuration for Telegram alerts | `object` | `{}` |
| `alerta.alerts.telegram.token` | Telegram token for your bot | `string` | `""` |
| `alerta.alerts.telegram.chatID` | Specify multiple IDs separated by commas. Get yours at https://t.me/chatid_echo_bot | `string` | `""` |
| `alerta.alerts.telegram.disabledSeverity` | Comma-separated list of severities with alerts disabled, e.g. "informational,warning" | `string` | `""` |
### Grafana configuration
| Name | Description | Type | Value |
| ----------------------------------- | ----------------------------------------- | --------- | ------- |
| `grafana` | Configuration for Grafana | `object` | `{}` |
| `grafana.db` | Database configuration | `object` | `{}` |
| `grafana.db.size` | Persistent Volume size for the database | `string` | `10Gi` |
| `grafana.resources` | Resources configuration | `*object` | `null` |
| `grafana.resources.requests` | Requests (minimum allowed/available resources) | `*object` | `null` |
| `grafana.resources.requests.cpu` | CPU request (minimum available CPU) | `*string` | `100m` |
| `grafana.resources.requests.memory` | Memory request (minimum available memory) | `*string` | `256Mi` |
| `grafana.resources.limits` | Limits (maximum allowed/available resources) | `*object` | `null` |
| `grafana.resources.limits.cpu` | CPU limit (maximum available CPU) | `*string` | `1` |
| `grafana.resources.limits.memory` | Memory limit (maximum available memory) | `*string` | `1Gi` |

View File

@@ -1,152 +1,487 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"alerta": {
"description": "Configuration for Alerta service",
"type": "object",
"default": {
"alerts": {
"telegram": {
"chatID": "",
"disabledSeverity": "",
"token": ""
}
},
"resources": {
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"storage": "10Gi",
"storageClassName": ""
},
"required": [
"alerts",
"storage",
"storageClassName"
],
"properties": {
"alerts": {
"description": "Configuration for alerts",
"type": "object",
"default": {
"telegram": {
"chatID": "",
"disabledSeverity": "",
"token": ""
}
},
"required": [
"telegram"
],
"properties": {
"telegram": {
"description": "Configuration for Telegram alerts",
"type": "object",
"default": {
"chatID": "",
"disabledSeverity": "",
"token": ""
},
"required": [
"chatID",
"disabledSeverity",
"token"
],
"properties": {
"chatID": {
"default": "",
"description": "specify multiple ID's separated by comma. Get yours in https://t.me/chatid_echo_bot",
"description": "Specify multiple ID's separated by comma. Get yours in https://t.me/chatid_echo_bot",
"type": "string"
},
"disabledSeverity": {
"default": "",
"description": "list of severity without alerts, separated comma like: \"informational,warning\"",
"description": "List of severity without alerts, separated by comma like: \"informational,warning\"",
"type": "string"
},
"token": {
"default": "",
"description": "telegram token for your bot",
"description": "Telegram token for your bot",
"type": "string"
}
},
"type": "object"
}
}
},
"type": "object"
}
},
"resources": {
"properties": {
"description": "Resources configuration",
"type": "object",
"default": {
"limits": {
"properties": {
"cpu": {
"default": "1",
"description": "The maximum amount of CPU allowed for alerta",
"type": "string"
},
"memory": {
"default": "1Gi",
"description": "The maximum amount of memory allowed for alerta",
"type": "string"
}
},
"type": "object"
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"properties": {
"cpu": {
"default": "100m",
"description": "The minimum amount of CPU required for alerta",
"type": "string"
},
"memory": {
"default": "256Mi",
"description": "The minimum amount of memory required for alerta",
"type": "string"
}
},
"type": "object"
"cpu": "100m",
"memory": "256Mi"
}
},
"type": "object"
"properties": {
"limits": {
"type": "object",
"default": {
"cpu": "1",
"memory": "1Gi"
},
"properties": {
"cpu": {
"description": "CPU limit (maximum available CPU)",
"type": "string",
"default": "1",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory limit (maximum available memory)",
"type": "string",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"requests": {
"type": "object",
"default": {
"cpu": "100m",
"memory": "256Mi"
},
"properties": {
"cpu": {
"description": "CPU request (minimum available CPU)",
"type": "string",
"default": "100m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory request (minimum available memory)",
"type": "string",
"default": "256Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
}
}
},
"storage": {
"default": "10Gi",
"description": "Persistent Volume size for alerta database",
"type": "string"
"description": "Persistent Volume size for the database",
"type": "string",
"default": "10Gi"
},
"storageClassName": {
"default": "",
"description": "StorageClass used to store the data",
"type": "string"
}
},
"type": "object"
}
},
"grafana": {
"properties": {
"description": "Configuration for Grafana",
"type": "object",
"default": {
"db": {
"properties": {
"size": {
"default": "10Gi",
"description": "Persistent Volume size for grafana database",
"type": "string"
}
},
"type": "object"
"size": "10Gi"
},
"resources": {
"properties": {
"limits": {
"properties": {
"cpu": {
"default": "1",
"description": "The maximum amount of CPU allowed for grafana",
"type": "string"
},
"memory": {
"default": "1Gi",
"description": "The maximum amount of memory allowed for grafana",
"type": "string"
}
},
"type": "object"
},
"requests": {
"properties": {
"cpu": {
"default": "100m",
"description": "The minimum amount of CPU required for grafana",
"type": "string"
},
"memory": {
"default": "256Mi",
"description": "The minimum amount of memory required for grafana",
"type": "string"
}
},
"type": "object"
}
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"type": "object"
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
}
},
"type": "object"
"required": [
"db"
],
"properties": {
"db": {
"description": "Database configuration",
"type": "object",
"default": {
"size": "10Gi"
},
"required": [
"size"
],
"properties": {
"size": {
"description": "Persistent Volume size for the database",
"type": "string",
"default": "10Gi"
}
}
},
"resources": {
"description": "Resources configuration",
"type": "object",
"default": {
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"properties": {
"limits": {
"type": "object",
"default": {
"cpu": "1",
"memory": "1Gi"
},
"properties": {
"cpu": {
"description": "CPU limit (maximum available CPU)",
"type": "string",
"default": "1",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory limit (maximum available memory)",
"type": "string",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"requests": {
"type": "object",
"default": {
"cpu": "100m",
"memory": "256Mi"
},
"properties": {
"cpu": {
"description": "CPU request (minimum available CPU)",
"type": "string",
"default": "100m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory request (minimum available memory)",
"type": "string",
"default": "256Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
}
}
}
}
},
"host": {
"default": "",
"description": "The hostname used to access the grafana externally (defaults to 'grafana' subdomain for the tenant host).",
"type": "string"
},
"logsStorages": {
"default": [],
"description": "Configuration of logs storage instances",
"type": "array",
"default": [
{
"name": "generic",
"retentionPeriod": "1",
"storage": "10Gi",
"storageClassName": "replicated"
}
],
"items": {
"type": "object"
},
"type": "array"
"type": "object",
"required": [
"name",
"retentionPeriod",
"storage"
],
"properties": {
"name": {
"description": "Name of the storage instance",
"type": "string"
},
"retentionPeriod": {
"description": "Retention period for the logs in the storage instance",
"type": "string"
},
"storage": {
"description": "Persistent Volume size for the storage instance",
"type": "string"
},
"storageClassName": {
"description": "StorageClass used to store the data",
"type": "string"
}
}
}
},
"metricsStorages": {
"default": [],
"description": "Configuration of metrics storage instances",
"type": "array",
"default": [
{
"deduplicationInterval": "15s",
"name": "shortterm",
"retentionPeriod": "3d",
"storage": "10Gi",
"storageClassName": ""
},
{
"deduplicationInterval": "5m",
"name": "longterm",
"retentionPeriod": "14d",
"storage": "10Gi",
"storageClassName": ""
}
],
"items": {
"type": "object"
},
"type": "array"
"type": "object",
"required": [
"deduplicationInterval",
"name",
"retentionPeriod",
"storage"
],
"properties": {
"deduplicationInterval": {
"description": "Deduplication interval for the metrics in the storage instance",
"type": "string"
},
"name": {
"description": "Name of the storage instance",
"type": "string"
},
"retentionPeriod": {
"description": "Retention period for the metrics in the storage instance",
"type": "string"
},
"storage": {
"description": "Persistent Volume size for the storage instance",
"type": "string"
},
"storageClassName": {
"description": "StorageClass used to store the data",
"type": "string"
},
"vminsert": {
"description": "Configuration for vminsert component of the storage instance",
"type": "object",
"properties": {
"maxAllowed": {
"description": "Limits (maximum allowed/available resources )",
"type": "object",
"properties": {
"cpu": {
"description": "CPU limit (maximum available CPU)",
"type": "string",
"default": "1",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory limit (maximum available memory)",
"type": "string",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"minAllowed": {
"description": "Requests (minimum allowed/available resources)",
"type": "object",
"properties": {
"cpu": {
"description": "CPU request (minimum available CPU)",
"type": "string",
"default": "100m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory request (minimum available memory)",
"type": "string",
"default": "256Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
}
}
},
"vmselect": {
"description": "Configuration for vmselect component of the storage instance",
"type": "object",
"properties": {
"maxAllowed": {
"description": "Limits (maximum allowed/available resources )",
"type": "object",
"properties": {
"cpu": {
"description": "CPU limit (maximum available CPU)",
"type": "string",
"default": "1",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory limit (maximum available memory)",
"type": "string",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"minAllowed": {
"description": "Requests (minimum allowed/available resources)",
"type": "object",
"properties": {
"cpu": {
"description": "CPU request (minimum available CPU)",
"type": "string",
"default": "100m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory request (minimum available memory)",
"type": "string",
"default": "256Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
}
}
},
"vmstorage": {
"description": "Configuration for vmstorage component of the storage instance",
"type": "object",
"properties": {
"maxAllowed": {
"description": "Limits (maximum allowed/available resources )",
"type": "object",
"properties": {
"cpu": {
"description": "CPU limit (maximum available CPU)",
"type": "string",
"default": "1",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory limit (maximum available memory)",
"type": "string",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
},
"minAllowed": {
"description": "Requests (minimum allowed/available resources)",
"type": "object",
"properties": {
"cpu": {
"description": "CPU request (minimum available CPU)",
"type": "string",
"default": "100m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory request (minimum available memory)",
"type": "string",
"default": "256Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"x-kubernetes-int-or-string": true
}
}
}
}
}
}
}
}
},
"title": "Chart Values",
"type": "object"
}
}
}


@@ -1,10 +1,29 @@
## @section Common parameters
## @param host The hostname used to access the grafana externally (defaults to 'grafana' subdomain for the tenant host).
## @param host {string} The hostname used to access the grafana externally (defaults to 'grafana' subdomain for the tenant host).
host: ""
## @param metricsStorages [array] Configuration of metrics storage instances
##
## @section Metrics storage configuration
## @param metricsStorages {[]metricsStorage} Configuration of metrics storage instances
## @field metricsStorage.name {string} Name of the storage instance
## @field metricsStorage.retentionPeriod {string} Retention period for the metrics in the storage instance
## @field metricsStorage.deduplicationInterval {string} Deduplication interval for the metrics in the storage instance
## @field metricsStorage.storage {string} Persistent Volume size for the storage instance
## @field metricsStorage.storageClassName {*string} StorageClass used to store the data
## @field metricsStorage.vminsert {*vmcomponent} Configuration for vminsert component of the storage instance
## @field metricsStorage.vmselect {*vmcomponent} Configuration for vmselect component of the storage instance
## @field metricsStorage.vmstorage {*vmcomponent} Configuration for vmstorage component of the storage instance
## @field request.cpu {*quantity} CPU request (minimum available CPU)
## @field request.memory {*quantity} Memory request (minimum available memory)
## @field limit.cpu {*quantity} CPU limit (maximum available CPU)
## @field limit.memory {*quantity} Memory limit (maximum available memory)
## @field vmcomponent.minAllowed {*request} Requests (minimum allowed/available resources)
## @field vmcomponent.maxAllowed {*limit} Limits (maximum allowed/available resources )
## @field resources.requests {*request}
## @field resources.limits {*limit}
## Example:
## metricsStorages:
## - name: shortterm
@@ -46,7 +65,13 @@ metricsStorages:
storage: 10Gi
storageClassName: ""
## @param logsStorages [array] Configuration of logs storage instances
## @section Logs storage configuration
## @param logsStorages {[]logsStorage} Configuration of logs storage instances
## @field logsStorage.name {string} Name of the storage instance
## @field logsStorage.retentionPeriod {string} Retention period for the logs in the storage instance
## @field logsStorage.storage {string} Persistent Volume size for the storage instance
## @field logsStorage.storageClassName {*string} StorageClass used to store the data
##
logsStorages:
- name: generic
@@ -54,14 +79,17 @@ logsStorages:
storage: 10Gi
storageClassName: replicated
## Configuration for Alerta
## @param alerta.storage Persistent Volume size for alerta database
## @param alerta.storageClassName StorageClass used to store the data
## @param alerta.resources.requests.cpu The minimum amount of CPU required for alerta
## @param alerta.resources.requests.memory The minimum amount of memory required for alerta
## @param alerta.resources.limits.cpu The maximum amount of CPU allowed for alerta
## @param alerta.resources.limits.memory The maximum amount of memory allowed for alerta
##
## @section Alerta configuration
## @param alerta {alerta} Configuration for Alerta service
## @field alerta.storage {string} Persistent Volume size for the database
## @field alerta.storageClassName {string} StorageClass used to store the data
## @field alerta.resources {*resources} Resources configuration
## @field alerta.alerts {alerts} Configuration for alerts
## @field alerts.telegram {telegramAlerts} Configuration for Telegram alerts
## @field telegramAlerts.token {string} Telegram token for your bot
## @field telegramAlerts.chatID {string} Specify multiple ID's separated by comma. Get yours in https://t.me/chatid_echo_bot
## @field telegramAlerts.disabledSeverity {string} List of severity without alerts, separated by comma like: "informational,warning"
alerta:
storage: 10Gi
storageClassName: ""
@@ -73,9 +101,6 @@ alerta:
cpu: 100m
memory: 256Mi
alerts:
## @param alerta.alerts.telegram.token telegram token for your bot
## @param alerta.alerts.telegram.chatID specify multiple ID's separated by comma. Get yours in https://t.me/chatid_echo_bot
## @param alerta.alerts.telegram.disabledSeverity list of severity without alerts, separated comma like: "informational,warning"
## example:
## telegram:
## token: "7262461387:AAGtwq16iwuVtWtzoN6TUEMpF00fpC9Xz34"
@@ -87,12 +112,14 @@ alerta:
chatID: ""
disabledSeverity: ""
## Configuration for Grafana
## @param grafana.db.size Persistent Volume size for grafana database
## @param grafana.resources.requests.cpu The minimum amount of CPU required for grafana
## @param grafana.resources.requests.memory The minimum amount of memory required for grafana
## @param grafana.resources.limits.cpu The maximum amount of CPU allowed for grafana
## @param grafana.resources.limits.memory The maximum amount of memory allowed for grafana
## @section Grafana configuration
## @param grafana {grafana} Configuration for Grafana
## @field grafana.db {grafanaDB} Database configuration
## @field grafanaDB.size {string} Persistent Volume size for the database
## @field grafana.resources {*resources} Resources configuration
grafana:
db:
size: 10Gi
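Taken together, the fields annotated above can be combined into a values file like the following sketch. All field names come from the schema in this diff; the concrete sizes, class names, and resource figures simply mirror the defaults shown earlier and are placeholders:

```yaml
# Illustrative monitoring values; field names from the schema above,
# concrete values are placeholders mirroring the chart defaults.
metricsStorages:
  - name: shortterm
    retentionPeriod: 3d
    deduplicationInterval: 15s
    storage: 10Gi
    storageClassName: ""
    vminsert:
      minAllowed:        # requests (minimum available resources)
        cpu: 100m
        memory: 256Mi
      maxAllowed:        # limits (maximum available resources)
        cpu: "1"
        memory: 1Gi
logsStorages:
  - name: generic
    retentionPeriod: "1"
    storage: 10Gi
    storageClassName: replicated
alerta:
  storage: 10Gi
  storageClassName: ""
```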


@@ -1,11 +1,12 @@
apiVersion: v2
appVersion: 0.23.4
appVersion: 0.25.2
description: 'Helm chart to deploy [altinity-clickhouse-operator](https://github.com/Altinity/clickhouse-operator). The
ClickHouse Operator creates, configures and manages ClickHouse clusters running
on Kubernetes. For upgrade please install CRDs separately: ```bash kubectl apply
-f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
-f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhousekeeperinstallations.clickhouse-keeper.altinity.com.yaml
```'
home: https://github.com/Altinity/clickhouse-operator
icon: https://logosandtypes.com/wp-content/uploads/2020/12/altinity.svg
@@ -14,4 +15,4 @@ maintainers:
name: altinity
name: altinity-clickhouse-operator
type: application
version: 0.23.4
version: 0.25.2


@@ -1,6 +1,6 @@
# altinity-clickhouse-operator
![Version: 0.23.4](https://img.shields.io/badge/Version-0.23.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.23.4](https://img.shields.io/badge/AppVersion-0.23.4-informational?style=flat-square)
![Version: 0.25.2](https://img.shields.io/badge/Version-0.25.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.25.2](https://img.shields.io/badge/AppVersion-0.25.2-informational?style=flat-square)
Helm chart to deploy [altinity-clickhouse-operator](https://github.com/Altinity/clickhouse-operator).
@@ -8,9 +8,10 @@ The ClickHouse Operator creates, configures and manages ClickHouse clusters runn
For upgrade please install CRDs separately:
```bash
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhousekeeperinstallations.clickhouse-keeper.altinity.com.yaml
```
**Homepage:** <https://github.com/Altinity/clickhouse-operator>
@@ -25,34 +26,38 @@ For upgrade please install CRDs separately:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| additionalResources | list | `[]` | list of additional resources to create (are processed via `tpl` function), useful for create ClickHouse clusters together with clickhouse-operator, look `kubectl explain chi` for details |
| affinity | object | `{}` | affinity for scheduler pod assignment, look `kubectl explain pod.spec.affinity` for details |
| configs | object | check the values.yaml file for the config content, auto-generated from latest operator release | clickhouse-operator configs |
| additionalResources | list | `[]` | list of additional resources to create (processed via `tpl` function), useful for create ClickHouse clusters together with clickhouse-operator. check `kubectl explain chi` for details |
| affinity | object | `{}` | affinity for scheduler pod assignment, check `kubectl explain pod.spec.affinity` for details |
| commonAnnotations | object | `{}` | set of annotations that will be applied to all the resources for the operator |
| commonLabels | object | `{}` | set of labels that will be applied to all the resources for the operator |
| configs | object | check the `values.yaml` file for the config content (auto-generated from latest operator release) | clickhouse operator configs |
| dashboards.additionalLabels | object | `{"grafana_dashboard":""}` | labels to add to a secret with dashboards |
| dashboards.annotations | object | `{}` | annotations to add to a secret with dashboards |
| dashboards.enabled | bool | `false` | provision grafana dashboards as secrets (can be synced by grafana dashboards sidecar https://github.com/grafana/helm-charts/blob/grafana-6.33.1/charts/grafana/values.yaml#L679 ) |
| dashboards.enabled | bool | `false` | provision grafana dashboards as configMaps (can be synced by grafana dashboards sidecar https://github.com/grafana/helm-charts/blob/grafana-8.3.4/charts/grafana/values.yaml#L778 ) |
| dashboards.grafana_folder | string | `"clickhouse"` | |
| fullnameOverride | string | `""` | full name of the chart. |
| imagePullSecrets | list | `[]` | image pull secret for private images in clickhouse-operator pod possible value format [{"name":"your-secret-name"}] look `kubectl explain pod.spec.imagePullSecrets` for details |
| imagePullSecrets | list | `[]` | image pull secret for private images in clickhouse-operator pod possible value format `[{"name":"your-secret-name"}]`, check `kubectl explain pod.spec.imagePullSecrets` for details |
| metrics.containerSecurityContext | object | `{}` | |
| metrics.enabled | bool | `true` | |
| metrics.env | list | `[]` | additional environment variables for the deployment of metrics-exporter containers possible format value [{"name": "SAMPLE", "value": "text"}] |
| metrics.env | list | `[]` | additional environment variables for the deployment of metrics-exporter containers possible format value `[{"name": "SAMPLE", "value": "text"}]` |
| metrics.image.pullPolicy | string | `"IfNotPresent"` | image pull policy |
| metrics.image.repository | string | `"altinity/metrics-exporter"` | image repository |
| metrics.image.tag | string | `""` | image tag (chart's appVersion value will be used if not set) |
| metrics.resources | object | `{}` | custom resource configuration |
| nameOverride | string | `""` | override name of the chart |
| nodeSelector | object | `{}` | node for scheduler pod assignment, look `kubectl explain pod.spec.nodeSelector` for details |
| namespaceOverride | string | `""` | |
| nodeSelector | object | `{}` | node for scheduler pod assignment, check `kubectl explain pod.spec.nodeSelector` for details |
| operator.containerSecurityContext | object | `{}` | |
| operator.env | list | `[]` | additional environment variables for the clickhouse-operator container in deployment possible format value [{"name": "SAMPLE", "value": "text"}] |
| operator.env | list | `[]` | additional environment variables for the clickhouse-operator container in deployment possible format value `[{"name": "SAMPLE", "value": "text"}]` |
| operator.image.pullPolicy | string | `"IfNotPresent"` | image pull policy |
| operator.image.repository | string | `"altinity/clickhouse-operator"` | image repository |
| operator.image.tag | string | `""` | image tag (chart's appVersion value will be used if not set) |
| operator.resources | object | `{}` | custom resource configuration, look `kubectl explain pod.spec.containers.resources` for details |
| podAnnotations | object | `{"clickhouse-operator-metrics/port":"9999","clickhouse-operator-metrics/scrape":"true","prometheus.io/port":"8888","prometheus.io/scrape":"true"}` | annotations to add to the clickhouse-operator pod, look `kubectl explain pod.spec.annotations` for details |
| operator.resources | object | `{}` | custom resource configuration, check `kubectl explain pod.spec.containers.resources` for details |
| podAnnotations | object | check the `values.yaml` file | annotations to add to the clickhouse-operator pod, check `kubectl explain pod.spec.annotations` for details |
| podLabels | object | `{}` | labels to add to the clickhouse-operator pod |
| podSecurityContext | object | `{}` | |
| rbac.create | bool | `true` | specifies whether cluster roles and cluster role bindings should be created |
| rbac.create | bool | `true` | specifies whether rbac resources should be created |
| rbac.namespaceScoped | bool | `false` | specifies whether to create roles and rolebindings at the cluster level or namespace level |
| secret.create | bool | `true` | create a secret with operator credentials |
| secret.password | string | `"clickhouse_operator_password"` | operator credentials password |
| secret.username | string | `"clickhouse_operator"` | operator credentials username |
@@ -60,6 +65,15 @@ For upgrade please install CRDs separately:
| serviceAccount.create | bool | `true` | specifies whether a service account should be created |
| serviceAccount.name | string | `nil` | the name of the service account to use; if not set and create is true, a name is generated using the fullname template |
| serviceMonitor.additionalLabels | object | `{}` | additional labels for service monitor |
| serviceMonitor.enabled | bool | `false` | ServiceMonitor Custom resource is created for a (prometheus-operator)[https://github.com/prometheus-operator/prometheus-operator] |
| tolerations | list | `[]` | tolerations for scheduler pod assignment, look `kubectl explain pod.spec.tolerations` for details |
| serviceMonitor.clickhouseMetrics.interval | string | `"30s"` | |
| serviceMonitor.clickhouseMetrics.metricRelabelings | list | `[]` | |
| serviceMonitor.clickhouseMetrics.relabelings | list | `[]` | |
| serviceMonitor.clickhouseMetrics.scrapeTimeout | string | `""` | |
| serviceMonitor.enabled | bool | `false` | ServiceMonitor Custom resource is created for a [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). Two endpoints are created in the ServiceMonitor: clickhouse-metrics on port 8888 and operator-metrics on port 9999. You can specify interval, scrapeTimeout, relabelings, metricRelabelings for each endpoint below |
| serviceMonitor.operatorMetrics.interval | string | `"30s"` | |
| serviceMonitor.operatorMetrics.metricRelabelings | list | `[]` | |
| serviceMonitor.operatorMetrics.relabelings | list | `[]` | |
| serviceMonitor.operatorMetrics.scrapeTimeout | string | `""` | |
| tolerations | list | `[]` | tolerations for scheduler pod assignment, check `kubectl explain pod.spec.tolerations` for details |
| topologySpreadConstraints | list | `[]` | |
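As a sketch, a minimal override file exercising the new table entries might look like this (a hedged example built only from keys documented above; the chosen values are illustrative, not chart defaults):

```yaml
# Hypothetical values override for the altinity-clickhouse-operator chart;
# keys are taken from the parameter table above, values are illustrative.
serviceMonitor:
  enabled: true
  operatorMetrics:
    interval: 30s        # per-endpoint scrape settings added in this release
  clickhouseMetrics:
    interval: 30s
dashboards:
  enabled: true          # provision dashboards for the grafana sidecar to pick up
rbac:
  create: true
  namespaceScoped: false # cluster-scoped roles, per the table above
```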


@@ -0,0 +1,17 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}


@@ -4,14 +4,14 @@
# SINGULAR=clickhouseinstallation
# PLURAL=clickhouseinstallations
# SHORT=chi
# OPERATOR_VERSION=0.23.4
# OPERATOR_VERSION=0.25.2
#
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clickhouseinstallations.clickhouse.altinity.com
labels:
clickhouse.altinity.com/chop: 0.23.4
clickhouse.altinity.com/chop: 0.25.2
spec:
group: clickhouse.altinity.com
scope: Namespaced
@@ -51,13 +51,12 @@ spec:
jsonPath: .status.taskID
- name: status
type: string
description: CHI status
description: Resource status
jsonPath: .status.status
- name: hosts-unchanged
- name: hosts-completed
type: integer
description: Unchanged hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsUnchanged
description: Completed hosts count
jsonPath: .status.hostsCompleted
- name: hosts-updated
type: integer
description: Updated hosts count
@@ -68,20 +67,11 @@ spec:
description: Added hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsAdded
- name: hosts-completed
type: integer
description: Completed hosts count
jsonPath: .status.hostsCompleted
- name: hosts-deleted
type: integer
description: Hosts deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDeleted
- name: hosts-delete
type: integer
description: Hosts to be deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDelete
- name: endpoint
type: string
description: Client access endpoint
@@ -92,39 +82,51 @@ spec:
description: Age of the resource
# Displayed in all priorities
jsonPath: .metadata.creationTimestamp
- name: suspend
type: string
description: Suspend reconciliation
# Displayed in all priorities
jsonPath: .spec.suspend
subresources:
status: {}
schema:
openAPIV3Schema:
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one or more ClickHouse clusters"
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one or more clusters"
type: object
required:
- spec
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |
APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |
Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
status:
type: object
description: "Current ClickHouseInstallation manifest status, contains many fields like a normalized configuration, clickhouse-operator version, current action and all applied action list, current taskID and all applied taskIDs and other"
description: |
Status contains many fields like a normalized configuration, clickhouse-operator version, current action and all applied action list, current taskID and all applied taskIDs and other
properties:
chop-version:
type: string
description: "ClickHouse operator version"
description: "Operator version"
chop-commit:
type: string
description: "ClickHouse operator git commit SHA"
description: "Operator git commit SHA"
chop-date:
type: string
description: "ClickHouse operator build date"
description: "Operator build date"
chop-ip:
type: string
description: "IP address of the operator's pod which managed this CHI"
description: "IP address of the operator's pod which managed this resource"
clusters:
type: integer
minimum: 0
@@ -222,17 +224,23 @@ spec:
endpoint:
type: string
description: "Endpoint"
endpoints:
type: array
description: "All endpoints"
nullable: true
items:
type: string
generation:
type: integer
minimum: 0
description: "Generation"
normalized:
type: object
description: "Normalized CHI requested"
description: "Normalized resource requested"
x-kubernetes-preserve-unknown-fields: true
normalizedCompleted:
type: object
description: "Normalized CHI completed"
description: "Normalized resource completed"
x-kubernetes-preserve-unknown-fields: true
hostsWithTablesCreated:
type: array
@@ -240,6 +248,12 @@ spec:
nullable: true
items:
type: string
hostsWithReplicaCaughtUp:
type: array
description: "List of hosts with replica caught up"
nullable: true
items:
type: string
usedTemplates:
type: array
description: "List of templates used to build this CHI"
@@ -301,6 +315,13 @@ spec:
enum:
- ""
- "RollingUpdate"
suspend:
!!merge <<: *TypeStringBool
description: |
Suspend reconciliation of resources managed by a ClickHouse Installation.
Works as the following:
- When `suspend` is `true` operator stops reconciling all resources.
- When `suspend` is `false` or not set, operator reconciles all resources.
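The `suspend` semantics described above could be used as in this minimal sketch (an assumed ClickHouseInstallation; the name and cluster layout are placeholders):

```yaml
# Hypothetical CHI illustrating the new `suspend` field described above.
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: demo
spec:
  suspend: "true"   # operator stops reconciling all resources of this installation
  configuration:
    clusters:
      - name: main
        layout:
          shardsCount: 1
          replicasCount: 1
```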
troubleshoot:
!!merge <<: *TypeStringBool
description: |
@@ -412,6 +433,63 @@ spec:
service:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed Service, `Retain` by default"
runtime:
type: object
description: "runtime parameters for clickhouse-operator process which are used during reconcile cycle"
properties:
reconcileShardsThreadsNumber:
type: integer
minimum: 1
maximum: 65535
description: "How many goroutines will be used to reconcile shards of a cluster in parallel, 1 by default"
reconcileShardsMaxConcurrencyPercent:
type: integer
minimum: 0
maximum: 100
description: "The maximum percentage of cluster shards that may be reconciled in parallel, 50 percent by default."
macros:
type: object
description: "macros parameters"
properties:
sections:
type: object
description: "sections behaviour for macros"
properties:
users:
type: object
description: "sections behaviour for macros on users"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
profiles:
type: object
description: "sections behaviour for macros on profiles"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
quotas:
type: object
description: "sections behaviour for macros on quotas"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
settings:
type: object
description: "sections behaviour for macros on settings"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
files:
type: object
description: "sections behaviour for macros on files"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
defaults:
type: object
description: |
@@ -424,7 +502,7 @@ spec:
description: |
define should replicas be specified by FQDN in `<host></host>`.
In case of "no" will use short hostname and clickhouse-server will use kubernetes default suffixes for DNS lookup
"yes" by default
"no" by default
distributedDDL:
type: object
description: |
@@ -474,7 +552,13 @@ spec:
description: "optional, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse log directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
serviceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for one `Service` resource which will created by `clickhouse-operator` which cover all clusters in whole `chi` resource"
description: "optional, template name from chi.spec.templates.serviceTemplates. used for customization of the `Service` resource, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
serviceTemplates:
type: array
description: "optional, template names from chi.spec.templates.serviceTemplates. used for customization of the `Service` resources, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
nullable: true
items:
type: string
clusterServiceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each clickhouse cluster described in `chi.spec.configuration.clusters`"
@@ -486,7 +570,7 @@ spec:
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each replica inside each shard inside each clickhouse cluster described in `chi.spec.configuration.clusters`"
volumeClaimTemplate:
type: string
description: "DEPRECATED! VolumeClaimTemplate is deprecated in favor of DataVolumeClaimTemplate and LogVolumeClaimTemplate"
description: "optional, alias for dataVolumeClaimTemplate, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse data directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
configuration:
type: object
description: "allows configure multiple aspects and behavior for `clickhouse-server` instance and also allows describe multiple `clickhouse-server` clusters inside one `chi` resource"
@@ -521,6 +605,9 @@ spec:
secure:
!!merge <<: *TypeStringBool
description: "if a secure connection to Zookeeper is required"
availabilityZone:
type: string
description: "availability zone for Zookeeper node"
session_timeout_ms:
type: integer
description: "session timeout during connect to Zookeeper"
@@ -540,6 +627,20 @@ spec:
you can configure password hashed, authorization restrictions, database level security row filters etc.
More details: https://clickhouse.tech/docs/en/operations/settings/settings-users/
Your yaml code will convert to XML, see examples https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#specconfigurationusers
any key can contain `valueFrom` with `secretKeyRef`, which allows passing a password from kubernetes secrets
the secret value is passed via `pod.spec.containers.env` and rendered with from_env=XXX in the XML in /etc/clickhouse-server/users.d/chop-generated-users.xml
it is not updated automatically when the `secret` changes; change spec.taskID to trigger a reconcile cycle manually
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
any key with the prefix `k8s_secret_` must have a value in the format namespace/secret/key or secret/key
in this case the value from the secret is written directly into the XML tag when rendering the *-usersd ConfigMap
any key with the prefix `k8s_secret_env` must have a value in the format namespace/secret/key or secret/key
in this case the value from the secret is written into an environment variable and referenced from the XML tag via from_env=XXX
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
# nullable: true
x-kubernetes-preserve-unknown-fields: true
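A hypothetical users fragment showing the `valueFrom`/`secretKeyRef` mechanism described above; the user, secret, and key names are assumptions, and the exact key shapes for the `k8s_secret_` prefixes are documented in the linked overview example:

```yaml
spec:
  configuration:
    users:
      # plain value
      demo/networks/ip: "::/0"
      # password resolved from a kubernetes Secret and injected via from_env=... in the XML
      demo/password:
        valueFrom:
          secretKeyRef:
            name: "clickhouse-credentials"
            key: "demo_password"
      # keys with a `k8s_secret_...` prefix reference "namespace/secret/key" values;
      # see 05-settings-01-overview.yaml upstream for the exact key shapes
```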
profiles:
@@ -566,6 +667,12 @@ spec:
allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in each `Pod` during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/`
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
Your yaml code will convert to XML, see examples https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#specconfigurationsettings
any key can contain `valueFrom` with `secretKeyRef`, which allows passing a password from kubernetes secrets
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
the secret value is passed via `pod.spec.env` and rendered with from_env=XXX in the XML in /etc/clickhouse-server/config.d/chop-generated-settings.xml
it is not updated automatically when the `secret` changes; change spec.taskID to trigger a reconcile cycle manually
# nullable: true
x-kubernetes-preserve-unknown-fields: true
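A sketch of a secret-backed server setting, per the `settings` description above; the setting path and secret names are illustrative only:

```yaml
spec:
  configuration:
    settings:
      max_concurrent_queries: 150
      # value resolved from a Secret; rendered via from_env=... in
      # /etc/clickhouse-server/config.d/chop-generated-settings.xml
      s3/access_key_id:
        valueFrom:
          secretKeyRef:
            name: "s3-credentials"
            key: "access_key_id"
```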
files: &TypeFiles
@@ -575,14 +682,20 @@ spec:
every key in this object is the file name
every value in this object is the file content
you can use `!!binary |` and base64 for binary files, see details here https://yaml.org/type/binary.html
each key could contains prefix like USERS, COMMON, HOST or config.d, users.d, cond.d, wrong prefixes will ignored, subfolders also will ignored
each key could contains prefix like {common}, {users}, {hosts} or config.d, users.d, conf.d, wrong prefixes will be ignored, subfolders also will be ignored
More details: https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-05-files-nested.yaml
any key can contain `valueFrom` with `secretKeyRef`, which allows passing values from kubernetes secrets
secrets are mounted into the pod as a separate volume at /etc/clickhouse-server/secrets.d/
and are updated automatically when the secret is updated
this is useful for passing SSL certificates from cert-manager or a similar tool
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
# nullable: true
x-kubernetes-preserve-unknown-fields: true
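A sketch of the `files` section combining a plain file and a secret-backed one, as described above; file and secret names are assumptions:

```yaml
spec:
  configuration:
    files:
      # plain file, rendered into the config.d ConfigMap
      config.d/log_rotation.xml: |
        <yandex><logger><size>100M</size></logger></yandex>
      # secret-backed file: mounted as a volume under /etc/clickhouse-server/secrets.d/
      # and refreshed automatically when the Secret changes
      server.crt:
        valueFrom:
          secretKeyRef:
            name: "clickhouse-tls"
            key: "tls.crt"
```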
clusters:
type: array
description: |
describes ClickHouse clusters layout and allows change settings on cluster-level, shard-level and replica-level
describes clusters layout and allows change settings on cluster-level, shard-level and replica-level
every cluster is a set of StatefulSet, one StatefulSet contains only one Pod with `clickhouse-server`
all Pods will rendered in <remote_server> part of ClickHouse configs, mounted from ConfigMap as `/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml`
Clusters will use for Distributed table engine, more details: https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/
@@ -595,7 +708,7 @@ spec:
properties:
name:
type: string
description: "cluster name, used to identify set of ClickHouse servers and wide used during generate names of related Kubernetes resources"
description: "cluster name, used to identify set of servers and wide used during generate names of related Kubernetes resources"
minLength: 1
# See namePartClusterMaxLen const
maxLength: 15
@@ -683,6 +796,32 @@ spec:
required:
- name
- key
pdbMaxUnavailable:
type: integer
description: |
Pod eviction is allowed if at most "pdbMaxUnavailable" pods are unavailable after the eviction,
i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable".
minimum: 0
maximum: 65535
reconcile:
type: object
description: "allow tuning reconciling process"
properties:
runtime:
type: object
description: "runtime parameters for clickhouse-operator process which are used during reconcile cycle"
properties:
reconcileShardsThreadsNumber:
type: integer
minimum: 1
maximum: 65535
description: "How many goroutines will be used to reconcile shards of a cluster in parallel, 1 by default"
reconcileShardsMaxConcurrencyPercent:
type: integer
minimum: 0
maximum: 100
description: "The maximum percentage of cluster shards that may be reconciled in parallel, 50 percent by default."
layout:
type: object
description: |
@@ -690,18 +829,24 @@ spec:
allows overriding settings on each shard and replica separately
# nullable: true
properties:
type:
type: string
description: "DEPRECATED - to be removed soon"
shardsCount:
type: integer
description: "how much shards for current ClickHouse cluster will run in Kubernetes, each shard contains shared-nothing part of data and contains set of replicas, cluster contains 1 shard by default"
description: |
how many shards of the current ClickHouse cluster will run in Kubernetes;
each shard contains a shared-nothing part of the data and a set of replicas,
a cluster contains 1 shard by default
replicasCount:
type: integer
description: "how much replicas in each shards for current ClickHouse cluster will run in Kubernetes, each replica is a separate `StatefulSet` which contains only one `Pod` with `clickhouse-server` instance, every shard contains 1 replica by default"
description: |
how many replicas in each shard of the current cluster will run in Kubernetes;
each replica is a separate `StatefulSet` which contains only one `Pod` with a `clickhouse-server` instance,
every shard contains 1 replica by default
shards:
type: array
description: "optional, allows override top-level `chi.spec.configuration`, cluster-level `chi.spec.configuration.clusters` settings for each shard separately, use it only if you fully understand what you do"
description: |
optional, allows overriding top-level `chi.spec.configuration` and cluster-level
`chi.spec.configuration.clusters` settings for each shard separately;
use it only if you fully understand what you do
# nullable: true
items:
type: object
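The `layout` fields described in this hunk can be sketched as a minimal cluster definition; names and counts are illustrative:

```yaml
spec:
  configuration:
    clusters:
      - name: "main"
        layout:
          shardsCount: 2    # 2 shards, each holding its own part of the data
          replicasCount: 2  # 2 replicas per shard, one StatefulSet/Pod each
```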
@@ -1036,7 +1181,7 @@ spec:
description: "template name, could use to link inside top-level `chi.spec.defaults.templates.podTemplate`, cluster-level `chi.spec.configuration.clusters.templates.podTemplate`, shard-level `chi.spec.configuration.clusters.layout.shards.temlates.podTemplate`, replica-level `chi.spec.configuration.clusters.layout.replicas.templates.podTemplate`"
generateName:
type: string
description: "allows define format for generated `Pod` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about aviailable template variables"
description: "allows define format for generated `Pod` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about available template variables"
zone:
type: object
description: "allows define custom zone name and will separate ClickHouse `Pods` between nodes, shortcut for `chi.spec.templates.podTemplates.spec.affinity.podAntiAffinity`"
@@ -1108,7 +1253,9 @@ spec:
maximum: 65535
topologyKey:
type: string
description: "use for inter-pod affinity look to `pod.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey`, More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity"
description: |
use for inter-pod affinity look to `pod.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey`,
more info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
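A sketch of pod spreading via `topologyKey` in a pod template, assuming the operator's `podDistribution` list as the carrier for it; the template name is a placeholder:

```yaml
spec:
  templates:
    podTemplates:
      - name: "pod-per-host"
        podDistribution:
          # keep clickhouse pods on distinct nodes
          - type: "ClickHouseAntiAffinity"
            topologyKey: "kubernetes.io/hostname"
```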
metadata:
type: object
description: |
@@ -1124,7 +1271,8 @@ spec:
x-kubernetes-preserve-unknown-fields: true
volumeClaimTemplates:
type: array
description: "allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else"
description: |
allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else
# nullable: true
items:
type: object
@@ -1177,14 +1325,17 @@ spec:
replica-level `chi.spec.configuration.clusters.layout.replicas.templates.replicaServiceTemplate` or `chi.spec.configuration.clusters.layout.shards.replicas.replicaServiceTemplate`
generateName:
type: string
description: "allows define format for generated `Service` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about aviailable template variables"
description: |
allows define format for generated `Service` name,
look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates
for details about available template variables
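The `generateName` field for Service templates might be used as below; `{chi}` is one of the template variables from the linked document, the rest of the names are assumptions:

```yaml
spec:
  templates:
    serviceTemplates:
      - name: "svc-lb"
        generateName: "clickhouse-{chi}"  # resolves to the chi resource name
        spec:
          type: LoadBalancer
          ports:
            - name: http
              port: 8123
```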
metadata:
# TODO specify ObjectMeta
type: object
description: |
allows pass standard object's metadata from template to Service
Could be used to define Cloud Provider specific metadata which impacts the behavior of the service
More info: https://kubernetes.io/docs/concepts/services-networking/service/
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
# nullable: true
x-kubernetes-preserve-unknown-fields: true
spec:
@@ -1197,7 +1348,9 @@ spec:
x-kubernetes-preserve-unknown-fields: true
useTemplates:
type: array
description: "list of `ClickHouseInstallationTemplate` (chit) resource names which will merge with current `Chi` manifest during render Kubernetes resources to create related ClickHouse clusters"
description: |
list of `ClickHouseInstallationTemplate` (chit) resource names which will be merged with the current `CHI`
manifest while rendering Kubernetes resources to create related ClickHouse clusters
# nullable: true
items:
type: object

View File

@@ -4,14 +4,14 @@
# SINGULAR=clickhouseinstallationtemplate
# PLURAL=clickhouseinstallationtemplates
# SHORT=chit
# OPERATOR_VERSION=0.23.4
# OPERATOR_VERSION=0.25.2
#
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clickhouseinstallationtemplates.clickhouse.altinity.com
labels:
clickhouse.altinity.com/chop: 0.23.4
clickhouse.altinity.com/chop: 0.25.2
spec:
group: clickhouse.altinity.com
scope: Namespaced
@@ -51,13 +51,12 @@ spec:
jsonPath: .status.taskID
- name: status
type: string
description: CHI status
description: Resource status
jsonPath: .status.status
- name: hosts-unchanged
- name: hosts-completed
type: integer
description: Unchanged hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsUnchanged
description: Completed hosts count
jsonPath: .status.hostsCompleted
- name: hosts-updated
type: integer
description: Updated hosts count
@@ -68,20 +67,11 @@ spec:
description: Added hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsAdded
- name: hosts-completed
type: integer
description: Completed hosts count
jsonPath: .status.hostsCompleted
- name: hosts-deleted
type: integer
description: Hosts deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDeleted
- name: hosts-delete
type: integer
description: Hosts to be deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDelete
- name: endpoint
type: string
description: Client access endpoint
@@ -92,39 +82,51 @@ spec:
description: Age of the resource
# Displayed in all priorities
jsonPath: .metadata.creationTimestamp
- name: suspend
type: string
description: Suspend reconciliation
# Displayed in all priorities
jsonPath: .spec.suspend
subresources:
status: {}
schema:
openAPIV3Schema:
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one or more ClickHouse clusters"
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one or more clusters"
type: object
required:
- spec
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |
APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |
Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
status:
type: object
description: "Current ClickHouseInstallation manifest status, contains many fields like a normalized configuration, clickhouse-operator version, current action and all applied action list, current taskID and all applied taskIDs and other"
description: |
Status contains many fields, such as the normalized configuration, clickhouse-operator version, current action and the list of all applied actions, current taskID and all applied taskIDs, and others
properties:
chop-version:
type: string
description: "ClickHouse operator version"
description: "Operator version"
chop-commit:
type: string
description: "ClickHouse operator git commit SHA"
description: "Operator git commit SHA"
chop-date:
type: string
description: "ClickHouse operator build date"
description: "Operator build date"
chop-ip:
type: string
description: "IP address of the operator's pod which managed this CHI"
description: "IP address of the operator's pod which managed this resource"
clusters:
type: integer
minimum: 0
@@ -222,17 +224,23 @@ spec:
endpoint:
type: string
description: "Endpoint"
endpoints:
type: array
description: "All endpoints"
nullable: true
items:
type: string
generation:
type: integer
minimum: 0
description: "Generation"
normalized:
type: object
description: "Normalized CHI requested"
description: "Normalized resource requested"
x-kubernetes-preserve-unknown-fields: true
normalizedCompleted:
type: object
description: "Normalized CHI completed"
description: "Normalized resource completed"
x-kubernetes-preserve-unknown-fields: true
hostsWithTablesCreated:
type: array
@@ -240,6 +248,12 @@ spec:
nullable: true
items:
type: string
hostsWithReplicaCaughtUp:
type: array
description: "List of hosts with replica caught up"
nullable: true
items:
type: string
usedTemplates:
type: array
description: "List of templates used to build this CHI"
@@ -301,6 +315,13 @@ spec:
enum:
- ""
- "RollingUpdate"
suspend:
!!merge <<: *TypeStringBool
description: |
Suspend reconciliation of resources managed by a ClickHouse Installation.
It works as follows:
- When `suspend` is `true`, the operator stops reconciling all resources.
- When `suspend` is `false` or not set, the operator reconciles all resources.
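The new `suspend` field (a string-bool, per the `*TypeStringBool` anchor) can be sketched in a minimal manifest; the resource name is a placeholder:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo"
spec:
  # pause reconciliation; managed resources are left untouched until unset
  suspend: "yes"
```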
troubleshoot:
!!merge <<: *TypeStringBool
description: |
@@ -412,6 +433,63 @@ spec:
service:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed Service, `Retain` by default"
runtime:
type: object
description: "runtime parameters for clickhouse-operator process which are used during reconcile cycle"
properties:
reconcileShardsThreadsNumber:
type: integer
minimum: 1
maximum: 65535
description: "How many goroutines will be used to reconcile shards of a cluster in parallel, 1 by default"
reconcileShardsMaxConcurrencyPercent:
type: integer
minimum: 0
maximum: 100
description: "The maximum percentage of cluster shards that may be reconciled in parallel, 50 percent by default."
macros:
type: object
description: "macros parameters"
properties:
sections:
type: object
description: "sections behaviour for macros"
properties:
users:
type: object
description: "sections behaviour for macros on users"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
profiles:
type: object
description: "sections behaviour for macros on profiles"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
quotas:
type: object
description: "sections behaviour for macros on quotas"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
settings:
type: object
description: "sections behaviour for macros on settings"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
files:
type: object
description: "sections behaviour for macros on files"
properties:
enabled:
!!merge <<: *TypeStringBool
description: "enabled or not"
defaults:
type: object
description: |
@@ -424,7 +502,7 @@ spec:
description: |
define should replicas be specified by FQDN in `<host></host>`.
In case of "no" will use short hostname and clickhouse-server will use kubernetes default suffixes for DNS lookup
"yes" by default
"no" by default
distributedDDL:
type: object
description: |
@@ -474,7 +552,13 @@ spec:
description: "optional, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse log directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
serviceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for one `Service` resource which will created by `clickhouse-operator` which cover all clusters in whole `chi` resource"
description: "optional, template name from chi.spec.templates.serviceTemplates. used for customization of the `Service` resource, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
serviceTemplates:
type: array
description: "optional, template names from chi.spec.templates.serviceTemplates. used for customization of the `Service` resources, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
nullable: true
items:
type: string
clusterServiceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each clickhouse cluster described in `chi.spec.configuration.clusters`"
@@ -486,7 +570,7 @@ spec:
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each replica inside each shard inside each clickhouse cluster described in `chi.spec.configuration.clusters`"
volumeClaimTemplate:
type: string
description: "DEPRECATED! VolumeClaimTemplate is deprecated in favor of DataVolumeClaimTemplate and LogVolumeClaimTemplate"
description: "optional, alias for dataVolumeClaimTemplate, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse data directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
configuration:
type: object
description: "allows configure multiple aspects and behavior for `clickhouse-server` instance and also allows describe multiple `clickhouse-server` clusters inside one `chi` resource"
@@ -521,6 +605,9 @@ spec:
secure:
!!merge <<: *TypeStringBool
description: "if a secure connection to Zookeeper is required"
availabilityZone:
type: string
description: "availability zone for Zookeeper node"
session_timeout_ms:
type: integer
description: "session timeout during connect to Zookeeper"
@@ -540,6 +627,20 @@ spec:
you can configure password hashed, authorization restrictions, database level security row filters etc.
More details: https://clickhouse.tech/docs/en/operations/settings/settings-users/
Your yaml code will convert to XML, see examples https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#specconfigurationusers
any key can contain `valueFrom` with `secretKeyRef`, which allows passing a password from kubernetes secrets
the secret value is passed via `pod.spec.containers.env` and rendered with from_env=XXX in the XML in /etc/clickhouse-server/users.d/chop-generated-users.xml
it is not updated automatically when the `secret` changes; change spec.taskID to trigger a reconcile cycle manually
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
any key with the prefix `k8s_secret_` must have a value in the format namespace/secret/key or secret/key
in this case the value from the secret is written directly into the XML tag when rendering the *-usersd ConfigMap
any key with the prefix `k8s_secret_env` must have a value in the format namespace/secret/key or secret/key
in this case the value from the secret is written into an environment variable and referenced from the XML tag via from_env=XXX
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
# nullable: true
x-kubernetes-preserve-unknown-fields: true
profiles:
@@ -566,6 +667,12 @@ spec:
allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in each `Pod` during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/`
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
Your yaml code will convert to XML, see examples https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#specconfigurationsettings
any key can contain `valueFrom` with `secretKeyRef`, which allows passing a password from kubernetes secrets
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
the secret value is passed via `pod.spec.env` and rendered with from_env=XXX in the XML in /etc/clickhouse-server/config.d/chop-generated-settings.xml
it is not updated automatically when the `secret` changes; change spec.taskID to trigger a reconcile cycle manually
# nullable: true
x-kubernetes-preserve-unknown-fields: true
files: &TypeFiles
@@ -575,14 +682,20 @@ spec:
every key in this object is the file name
every value in this object is the file content
you can use `!!binary |` and base64 for binary files, see details here https://yaml.org/type/binary.html
each key could contains prefix like USERS, COMMON, HOST or config.d, users.d, cond.d, wrong prefixes will ignored, subfolders also will ignored
each key could contains prefix like {common}, {users}, {hosts} or config.d, users.d, conf.d, wrong prefixes will be ignored, subfolders also will be ignored
More details: https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-05-files-nested.yaml
any key can contain `valueFrom` with `secretKeyRef`, which allows passing values from kubernetes secrets
secrets are mounted into the pod as a separate volume at /etc/clickhouse-server/secrets.d/
and are updated automatically when the secret is updated
this is useful for passing SSL certificates from cert-manager or a similar tool
look into https://github.com/Altinity/clickhouse-operator/blob/master/docs/chi-examples/05-settings-01-overview.yaml for examples
# nullable: true
x-kubernetes-preserve-unknown-fields: true
clusters:
type: array
description: |
describes ClickHouse clusters layout and allows change settings on cluster-level, shard-level and replica-level
describes clusters layout and allows change settings on cluster-level, shard-level and replica-level
every cluster is a set of StatefulSet, one StatefulSet contains only one Pod with `clickhouse-server`
all Pods will rendered in <remote_server> part of ClickHouse configs, mounted from ConfigMap as `/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml`
Clusters will use for Distributed table engine, more details: https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/
@@ -595,7 +708,7 @@ spec:
properties:
name:
type: string
description: "cluster name, used to identify set of ClickHouse servers and wide used during generate names of related Kubernetes resources"
description: "cluster name, used to identify set of servers and wide used during generate names of related Kubernetes resources"
minLength: 1
# See namePartClusterMaxLen const
maxLength: 15
@@ -683,6 +796,32 @@ spec:
required:
- name
- key
pdbMaxUnavailable:
type: integer
description: |
Pod eviction is allowed if at most "pdbMaxUnavailable" pods are unavailable after the eviction,
i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable".
minimum: 0
maximum: 65535
reconcile:
type: object
description: "allow tuning reconciling process"
properties:
runtime:
type: object
description: "runtime parameters for clickhouse-operator process which are used during reconcile cycle"
properties:
reconcileShardsThreadsNumber:
type: integer
minimum: 1
maximum: 65535
description: "How many goroutines will be used to reconcile shards of a cluster in parallel, 1 by default"
reconcileShardsMaxConcurrencyPercent:
type: integer
minimum: 0
maximum: 100
description: "The maximum percentage of cluster shards that may be reconciled in parallel, 50 percent by default."
layout:
type: object
description: |
@@ -690,18 +829,24 @@ spec:
allows overriding settings on each shard and replica separately
# nullable: true
properties:
type:
type: string
description: "DEPRECATED - to be removed soon"
shardsCount:
type: integer
description: "how much shards for current ClickHouse cluster will run in Kubernetes, each shard contains shared-nothing part of data and contains set of replicas, cluster contains 1 shard by default"
description: |
how many shards of the current ClickHouse cluster will run in Kubernetes;
each shard contains a shared-nothing part of the data and a set of replicas,
a cluster contains 1 shard by default
replicasCount:
type: integer
description: "how much replicas in each shards for current ClickHouse cluster will run in Kubernetes, each replica is a separate `StatefulSet` which contains only one `Pod` with `clickhouse-server` instance, every shard contains 1 replica by default"
description: |
how many replicas in each shard of the current cluster will run in Kubernetes;
each replica is a separate `StatefulSet` which contains only one `Pod` with a `clickhouse-server` instance,
every shard contains 1 replica by default
shards:
type: array
description: "optional, allows override top-level `chi.spec.configuration`, cluster-level `chi.spec.configuration.clusters` settings for each shard separately, use it only if you fully understand what you do"
description: |
optional, allows overriding top-level `chi.spec.configuration` and cluster-level
`chi.spec.configuration.clusters` settings for each shard separately;
use it only if you fully understand what you do
# nullable: true
items:
type: object
@@ -1036,7 +1181,7 @@ spec:
description: "template name, could use to link inside top-level `chi.spec.defaults.templates.podTemplate`, cluster-level `chi.spec.configuration.clusters.templates.podTemplate`, shard-level `chi.spec.configuration.clusters.layout.shards.temlates.podTemplate`, replica-level `chi.spec.configuration.clusters.layout.replicas.templates.podTemplate`"
generateName:
type: string
description: "allows define format for generated `Pod` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about aviailable template variables"
description: "allows define format for generated `Pod` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about available template variables"
zone:
type: object
description: "allows define custom zone name and will separate ClickHouse `Pods` between nodes, shortcut for `chi.spec.templates.podTemplates.spec.affinity.podAntiAffinity`"
@@ -1108,7 +1253,9 @@ spec:
maximum: 65535
topologyKey:
type: string
description: "use for inter-pod affinity look to `pod.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey`, More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity"
description: |
use for inter-pod affinity look to `pod.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey`,
more info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
metadata:
type: object
description: |
@@ -1124,7 +1271,8 @@ spec:
x-kubernetes-preserve-unknown-fields: true
volumeClaimTemplates:
type: array
description: "allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else"
description: |
allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else
# nullable: true
items:
type: object
@@ -1177,14 +1325,17 @@ spec:
replica-level `chi.spec.configuration.clusters.layout.replicas.templates.replicaServiceTemplate` or `chi.spec.configuration.clusters.layout.shards.replicas.replicaServiceTemplate`
generateName:
type: string
description: "allows define format for generated `Service` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about aviailable template variables"
description: |
allows define format for generated `Service` name,
look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates
for details about available template variables
metadata:
# TODO specify ObjectMeta
type: object
description: |
allows pass standard object's metadata from template to Service
Could be used to define Cloud Provider specific metadata which impacts the behavior of the service
More info: https://kubernetes.io/docs/concepts/services-networking/service/
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
# nullable: true
x-kubernetes-preserve-unknown-fields: true
spec:
@@ -1197,7 +1348,9 @@ spec:
x-kubernetes-preserve-unknown-fields: true
useTemplates:
type: array
description: "list of `ClickHouseInstallationTemplate` (chit) resource names which will merge with current `Chi` manifest during render Kubernetes resources to create related ClickHouse clusters"
description: |
list of `ClickHouseInstallationTemplate` (chit) resource names which will merge with current `CHI`
manifest during render Kubernetes resources to create related ClickHouse clusters
# nullable: true
items:
type: object
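The `useTemplates` merge described above can be sketched as a pair of manifests. A minimal illustration (resource names and image tag are invented, not taken from this diff):

```yaml
# Hypothetical ClickHouseInstallationTemplate carrying shared defaults
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallationTemplate"
metadata:
  name: "default-pod-template"
spec:
  templates:
    podTemplates:
      - name: "clickhouse-stable"
        spec:
          containers:
            - name: "clickhouse"
              image: "clickhouse/clickhouse-server:latest"
---
# CHI that merges the template above while rendering Kubernetes resources
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo"
spec:
  useTemplates:
    - name: "default-pod-template"
  configuration:
    clusters:
      - name: "main"
```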


@@ -1,13 +1,13 @@
# Template Parameters:
#
# OPERATOR_VERSION=0.23.4
# OPERATOR_VERSION=0.25.2
#
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clickhousekeeperinstallations.clickhouse-keeper.altinity.com
labels:
clickhouse-keeper.altinity.com/chop: 0.23.4
clickhouse-keeper.altinity.com/chop: 0.25.2
spec:
group: clickhouse-keeper.altinity.com
scope: Namespaced
@@ -22,123 +22,487 @@ spec:
served: true
storage: true
additionalPrinterColumns:
- name: version
type: string
description: Operator version
priority: 1 # show in wide view
jsonPath: .status.chop-version
- name: clusters
type: integer
description: Clusters count
jsonPath: .status.clusters
- name: shards
type: integer
description: Shards count
priority: 1 # show in wide view
jsonPath: .status.shards
- name: hosts
type: integer
description: Hosts count
jsonPath: .status.hosts
- name: taskID
type: string
description: TaskID
priority: 1 # show in wide view
jsonPath: .status.taskID
- name: status
type: string
description: CHK status
description: Resource status
jsonPath: .status.status
- name: replicas
- name: hosts-unchanged
type: integer
description: Replica count
description: Unchanged hosts count
priority: 1 # show in wide view
jsonPath: .status.replicas
jsonPath: .status.hostsUnchanged
- name: hosts-updated
type: integer
description: Updated hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsUpdated
- name: hosts-added
type: integer
description: Added hosts count
priority: 1 # show in wide view
jsonPath: .status.hostsAdded
- name: hosts-completed
type: integer
description: Completed hosts count
jsonPath: .status.hostsCompleted
- name: hosts-deleted
type: integer
description: Hosts deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDeleted
- name: hosts-delete
type: integer
description: Hosts to be deleted count
priority: 1 # show in wide view
jsonPath: .status.hostsDelete
- name: endpoint
type: string
description: Client access endpoint
priority: 1 # show in wide view
jsonPath: .status.endpoint
- name: age
type: date
description: Age of the resource
# Displayed in all priorities
jsonPath: .metadata.creationTimestamp
- name: suspend
type: string
description: Suspend reconciliation
# Displayed in all priorities
jsonPath: .spec.suspend
subresources:
status: {}
schema:
openAPIV3Schema:
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one or more clusters"
type: object
required:
- spec
description: "define a set of Kubernetes resources (StatefulSet, PVC, Service, ConfigMap) which describe behavior one ClickHouse Keeper cluster"
properties:
apiVersion:
type: string
description: |
APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind:
type: string
kind:
description: |
Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
status:
type: object
description: |
Current ClickHouseKeeperInstallation status, contains many fields like overall status, desired replicas and ready replica list with their endpoints
Status contains many fields like a normalized configuration, clickhouse-operator version, current action and all applied action list, current taskID and all applied taskIDs and other
properties:
chop-version:
type: string
description: "ClickHouse operator version"
description: "Operator version"
chop-commit:
type: string
description: "ClickHouse operator git commit SHA"
description: "Operator git commit SHA"
chop-date:
type: string
description: "ClickHouse operator build date"
description: "Operator build date"
chop-ip:
type: string
description: "IP address of the operator's pod which managed this CHI"
description: "IP address of the operator's pod which managed this resource"
clusters:
type: integer
minimum: 0
description: "Clusters count"
shards:
type: integer
minimum: 0
description: "Shards count"
replicas:
type: integer
minimum: 0
description: "Replicas count"
hosts:
type: integer
minimum: 0
description: "Hosts count"
status:
type: string
description: "Status"
replicas:
type: integer
format: int32
description: Replicas is the number of desired replicas in the cluster
readyReplicas:
taskID:
type: string
description: "Current task id"
taskIDsStarted:
type: array
description: ReadyReplicas is the array of endpoints of those ready replicas in the cluster
description: "Started task ids"
nullable: true
items:
type: object
properties:
host:
type: string
description: dns name or ip address for Keeper node
port:
type: integer
minimum: 0
maximum: 65535
description: TCP port which used to connect to Keeper node
secure:
type: string
description: if a secure connection to Keeper is required
type: string
taskIDsCompleted:
type: array
description: "Completed task ids"
nullable: true
items:
type: string
action:
type: string
description: "Action"
actions:
type: array
description: "Actions"
nullable: true
items:
type: string
error:
type: string
description: "Last error"
errors:
type: array
description: "Errors"
nullable: true
items:
type: string
hostsUnchanged:
type: integer
minimum: 0
description: "Unchanged Hosts count"
hostsUpdated:
type: integer
minimum: 0
description: "Updated Hosts count"
hostsAdded:
type: integer
minimum: 0
description: "Added Hosts count"
hostsCompleted:
type: integer
minimum: 0
description: "Completed Hosts count"
hostsDeleted:
type: integer
minimum: 0
description: "Deleted Hosts count"
hostsDelete:
type: integer
minimum: 0
description: "About to delete Hosts count"
pods:
type: array
description: "Pods"
nullable: true
items:
type: string
pod-ips:
type: array
description: "Pod IPs"
nullable: true
items:
type: string
fqdns:
type: array
description: "Pods FQDNs"
nullable: true
items:
type: string
endpoint:
type: string
description: "Endpoint"
endpoints:
type: array
description: "All endpoints"
nullable: true
items:
type: string
generation:
type: integer
minimum: 0
description: "Generation"
normalized:
type: object
description: "Normalized CHK requested"
description: "Normalized resource requested"
x-kubernetes-preserve-unknown-fields: true
normalizedCompleted:
type: object
description: "Normalized CHK completed"
description: "Normalized resource completed"
x-kubernetes-preserve-unknown-fields: true
hostsWithTablesCreated:
type: array
description: "List of hosts with tables created by the operator"
nullable: true
items:
type: string
hostsWithReplicaCaughtUp:
type: array
description: "List of hosts with replica caught up"
nullable: true
items:
type: string
usedTemplates:
type: array
description: "List of templates used to build this CHI"
nullable: true
x-kubernetes-preserve-unknown-fields: true
items:
type: object
x-kubernetes-preserve-unknown-fields: true
spec:
type: object
description: KeeperSpec defines the desired state of a Keeper cluster
# x-kubernetes-preserve-unknown-fields: true
description: |
Specification of the desired behavior of one or more ClickHouse clusters
More info: https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md
properties:
taskID:
type: string
description: |
Allows to define custom taskID for CHI update and watch status of this update execution.
Displayed in all .status.taskID* fields.
By default (if not filled) every update of CHI manifest will generate random taskID
stop: &TypeStringBool
type: string
description: |
Allows to stop all ClickHouse clusters defined in a CHI.
Works as the following:
- When `stop` is `1` operator sets `Replicas: 0` in each StatefulSet. This leads to having all `Pods` and `Service` deleted. All PVCs are kept intact.
- When `stop` is `0` operator sets `Replicas: 1` and `Pod`s and `Service`s will be created again and all retained PVCs will be attached to `Pod`s.
enum:
# List StringBoolXXX constants from model
- ""
- "0"
- "1"
- "False"
- "false"
- "True"
- "true"
- "No"
- "no"
- "Yes"
- "yes"
- "Off"
- "off"
- "On"
- "on"
- "Disable"
- "disable"
- "Enable"
- "enable"
- "Disabled"
- "disabled"
- "Enabled"
- "enabled"
suspend:
!!merge <<: *TypeStringBool
description: |
Suspend reconciliation of resources managed by a ClickHouse Keeper.
Works as the following:
- When `suspend` is `true` operator stops reconciling all resources.
- When `suspend` is `false` or not set, operator reconciles all resources.
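Both `stop` and `suspend` accept the StringBool spellings enumerated above. A minimal CHK fragment (values illustrative):

```yaml
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
  name: "keeper-demo"
spec:
  stop: "yes"      # scale StatefulSets to 0 Pods; PVCs are kept intact
  suspend: "no"    # keep reconciling managed resources
```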
namespaceDomainPattern:
type: string
description: |
Custom domain pattern which will be used for DNS names of `Service` or `Pod`.
Typical use scenario - custom cluster domain in Kubernetes cluster
Example: %s.svc.my.test
replicas:
type: integer
format: int32
reconciling:
type: object
description: "Optional, allows tuning reconciling cycle for ClickhouseInstallation from clickhouse-operator side"
# nullable: true
properties:
policy:
type: string
description: |
DISCUSSED TO BE DEPRECATED
Syntax sugar
Overrides all three 'reconcile.host.wait.{exclude, queries, include}' values from the operator's config
Possible values:
- wait - should wait to exclude host, complete queries and include host back into the cluster
- nowait - should NOT wait to exclude host, complete queries and include host back into the cluster
enum:
- ""
- "wait"
- "nowait"
configMapPropagationTimeout:
type: integer
description: |
Timeout in seconds for `clickhouse-operator` to wait for modified `ConfigMap` to propagate into the `Pod`
More details: https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
minimum: 0
maximum: 3600
cleanup:
type: object
description: "Optional, defines behavior for cleanup Kubernetes resources during reconcile cycle"
# nullable: true
properties:
unknownObjects:
type: object
description: |
Describes what clickhouse-operator should do with found Kubernetes resources which should be managed by clickhouse-operator,
but do not have `ownerReference` to any currently managed `ClickHouseInstallation` resource.
Default behavior is `Delete`
# nullable: true
properties:
statefulSet: &TypeObjectsCleanup
type: string
description: "Behavior policy for unknown StatefulSet, `Delete` by default"
enum:
# List ObjectsCleanupXXX constants from model
- ""
- "Retain"
- "Delete"
pvc:
type: string
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for unknown PVC, `Delete` by default"
configMap:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for unknown ConfigMap, `Delete` by default"
service:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for unknown Service, `Delete` by default"
reconcileFailedObjects:
type: object
description: |
Describes what clickhouse-operator should do with Kubernetes resources which are failed during reconcile.
Default behavior is `Retain`
# nullable: true
properties:
statefulSet:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed StatefulSet, `Retain` by default"
pvc:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed PVC, `Retain` by default"
configMap:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed ConfigMap, `Retain` by default"
service:
!!merge <<: *TypeObjectsCleanup
description: "Behavior policy for failed Service, `Retain` by default"
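The reconciling and cleanup knobs above combine as a hedged `spec` fragment like this (values illustrative):

```yaml
spec:
  reconciling:
    policy: "nowait"                 # overrides reconcile.host.wait.* operator settings
    configMapPropagationTimeout: 90  # seconds to wait for ConfigMap propagation into Pods
    cleanup:
      unknownObjects:
        statefulSet: "Delete"        # `Delete` is the documented default for unknown objects
        pvc: "Retain"
      reconcileFailedObjects:
        statefulSet: "Retain"        # `Retain` is the documented default for failed objects
```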
defaults:
type: object
description: |
Replicas is the expected size of the keeper cluster.
The valid range of size is from 1 to 7.
minimum: 1
maximum: 7
define default behavior for whole ClickHouseInstallation, some behavior can be re-defined on cluster, shard and replica level
More info: https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#specdefaults
# nullable: true
properties:
replicasUseFQDN:
!!merge <<: *TypeStringBool
description: |
define should replicas be specified by FQDN in `<host></host>`.
In case of "no" will use short hostname and clickhouse-server will use kubernetes default suffixes for DNS lookup
"no" by default
distributedDDL:
type: object
description: |
allows change `<yandex><distributed_ddl></distributed_ddl></yandex>` settings
More info: https://clickhouse.tech/docs/en/operations/server-configuration-parameters/settings/#server-settings-distributed_ddl
# nullable: true
properties:
profile:
type: string
description: "Settings from this profile will be used to execute DDL queries"
storageManagement:
type: object
description: default storage management options
properties:
provisioner: &TypePVCProvisioner
type: string
description: "defines `PVC` provisioner - be it StatefulSet or the Operator"
enum:
- ""
- "StatefulSet"
- "Operator"
reclaimPolicy: &TypePVCReclaimPolicy
type: string
description: |
defines behavior of `PVC` deletion.
`Delete` by default, if `Retain` specified then `PVC` will be kept when deleting StatefulSet
enum:
- ""
- "Retain"
- "Delete"
templates: &TypeTemplateNames
type: object
description: "optional, configuration of the templates names which will use for generate Kubernetes resources according to one or more ClickHouse clusters described in current ClickHouseInstallation (chi) resource"
# nullable: true
properties:
hostTemplate:
type: string
description: "optional, template name from chi.spec.templates.hostTemplates, which will apply to configure every `clickhouse-server` instance during render ConfigMap resources which will mount into `Pod`"
podTemplate:
type: string
description: "optional, template name from chi.spec.templates.podTemplates, allows customization each `Pod` resource during render and reconcile each StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
dataVolumeClaimTemplate:
type: string
description: "optional, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse data directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
logVolumeClaimTemplate:
type: string
description: "optional, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse log directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
serviceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates. used for customization of the `Service` resource, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
serviceTemplates:
type: array
description: "optional, template names from chi.spec.templates.serviceTemplates. used for customization of the `Service` resources, created by `clickhouse-operator` to cover all clusters in whole `chi` resource"
nullable: true
items:
type: string
clusterServiceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each clickhouse cluster described in `chi.spec.configuration.clusters`"
shardServiceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each shard inside clickhouse cluster described in `chi.spec.configuration.clusters`"
replicaServiceTemplate:
type: string
description: "optional, template name from chi.spec.templates.serviceTemplates, allows customization for each `Service` resource which will created by `clickhouse-operator` which cover each replica inside each shard inside each clickhouse cluster described in `chi.spec.configuration.clusters`"
volumeClaimTemplate:
type: string
description: "optional, alias for dataVolumeClaimTemplate, template name from chi.spec.templates.volumeClaimTemplates, allows customization each `PVC` which will mount for clickhouse data directory in each `Pod` during render and reconcile every StatefulSet.spec resource described in `chi.spec.configuration.clusters`"
configuration:
type: object
description: "allows configure multiple aspects and behavior for `clickhouse-server` instance and also allows describe multiple `clickhouse-server` clusters inside one `chi` resource"
# nullable: true
properties:
settings:
settings: &TypeSettings
type: object
description: "allows configure multiple aspects and behavior for `clickhouse-keeper` instance"
description: |
allows configure multiple aspects and behavior for `clickhouse-keeper` instance
# nullable: true
x-kubernetes-preserve-unknown-fields: true
files: &TypeFiles
type: object
description: |
allows define content of any setting
# nullable: true
x-kubernetes-preserve-unknown-fields: true
clusters:
type: array
description: |
describes ClickHouseKeeper clusters layout and allows change settings on cluster-level and replica-level
describes clusters layout and allows change settings on cluster-level and replica-level
# nullable: true
items:
type: object
@@ -147,25 +511,178 @@ spec:
properties:
name:
type: string
description: "cluster name, used to identify set of ClickHouseKeeper servers and wide used during generate names of related Kubernetes resources"
description: "cluster name, used to identify set of servers and wide used during generate names of related Kubernetes resources"
minLength: 1
# See namePartClusterMaxLen const
maxLength: 15
pattern: "^[a-zA-Z0-9-]{0,15}$"
settings:
!!merge <<: *TypeSettings
description: |
optional, allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in each `Pod` only in one cluster during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/`
override top-level `chi.spec.configuration.settings`
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
files:
!!merge <<: *TypeFiles
description: |
optional, allows define content of any setting file inside each `Pod` on current cluster during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/` or `/etc/clickhouse-server/conf.d/` or `/etc/clickhouse-server/users.d/`
override top-level `chi.spec.configuration.files`
templates:
!!merge <<: *TypeTemplateNames
description: |
optional, configuration of the templates names which will use for generate Kubernetes resources according to selected cluster
override top-level `chi.spec.configuration.templates`
layout:
type: object
description: |
describe current cluster layout, how many replicas
describe current cluster layout, how many shards in cluster, how many replicas in shard
allows override settings on each shard and replica separately
# nullable: true
properties:
replicasCount:
type: integer
description: "how many replicas in ClickHouseKeeper cluster"
description: |
how many replicas in each shard for current cluster will run in Kubernetes,
each replica is a separate `StatefulSet` which contains only one `Pod` with `clickhouse-server` instance,
every shard contains 1 replica by default
replicas:
type: array
description: "optional, allows override top-level `chi.spec.configuration` and cluster-level `chi.spec.configuration.clusters` configuration for each replica and each shard relates to selected replica, use it only if you fully understand what you do"
# nullable: true
items:
type: object
properties:
name:
type: string
description: "optional, by default replica name is generated, but you can override it and setup custom name"
minLength: 1
# See namePartShardMaxLen const
maxLength: 15
pattern: "^[a-zA-Z0-9-]{0,15}$"
settings:
!!merge <<: *TypeSettings
description: |
optional, allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in `Pod` only in one replica during generate `ConfigMap` which will mount in `/etc/clickhouse-server/conf.d/`
override top-level `chi.spec.configuration.settings`, cluster-level `chi.spec.configuration.clusters.settings` and will ignore if shard-level `chi.spec.configuration.clusters.layout.shards` present
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
files:
!!merge <<: *TypeFiles
description: |
optional, allows define content of any setting file inside each `Pod` only in one replica during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/` or `/etc/clickhouse-server/conf.d/` or `/etc/clickhouse-server/users.d/`
override top-level `chi.spec.configuration.files` and cluster-level `chi.spec.configuration.clusters.files`, will ignore if `chi.spec.configuration.clusters.layout.shards` presents
templates:
!!merge <<: *TypeTemplateNames
description: |
optional, configuration of the templates names which will use for generate Kubernetes resources according to selected replica
override top-level `chi.spec.configuration.templates`, cluster-level `chi.spec.configuration.clusters.templates`
shardsCount:
type: integer
description: "optional, count of shards related to current replica, you can override each shard behavior on low-level `chi.spec.configuration.clusters.layout.replicas.shards`"
minimum: 1
shards:
type: array
description: "optional, list of shards related to current replica, will ignore if `chi.spec.configuration.clusters.layout.shards` presents"
# nullable: true
items:
# Host
type: object
properties:
name:
type: string
description: "optional, by default shard name is generated, but you can override it and setup custom name"
minLength: 1
# See namePartReplicaMaxLen const
maxLength: 15
pattern: "^[a-zA-Z0-9-]{0,15}$"
zkPort:
type: integer
minimum: 1
maximum: 65535
raftPort:
type: integer
minimum: 1
maximum: 65535
settings:
!!merge <<: *TypeSettings
description: |
optional, allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in `Pod` only in one shard related to current replica during generate `ConfigMap` which will mount in `/etc/clickhouse-server/conf.d/`
override top-level `chi.spec.configuration.settings`, cluster-level `chi.spec.configuration.clusters.settings` and replica-level `chi.spec.configuration.clusters.layout.replicas.settings`
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
files:
!!merge <<: *TypeFiles
description: |
optional, allows define content of any setting file inside each `Pod` only in one shard related to current replica during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/` or `/etc/clickhouse-server/conf.d/` or `/etc/clickhouse-server/users.d/`
override top-level `chi.spec.configuration.files` and cluster-level `chi.spec.configuration.clusters.files`, will ignore if `chi.spec.configuration.clusters.layout.shards` presents
templates:
!!merge <<: *TypeTemplateNames
description: |
optional, configuration of the templates names which will use for generate Kubernetes resources according to selected replica
override top-level `chi.spec.configuration.templates`, cluster-level `chi.spec.configuration.clusters.templates`, replica-level `chi.spec.configuration.clusters.layout.replicas.templates`
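Putting the cluster name, layout, and per-replica overrides above together, a minimal `configuration` sketch (names illustrative):

```yaml
spec:
  configuration:
    clusters:
      - name: "quorum"            # <= 15 chars, pattern ^[a-zA-Z0-9-]{0,15}$
        layout:
          replicasCount: 3        # each replica is its own StatefulSet with one Pod
          replicas:
            - name: "replica-0"   # optional override of the generated replica name
```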
templates:
type: object
description: "allows define templates which will use for render Kubernetes resources like StatefulSet, ConfigMap, Service, PVC, by default, clickhouse-operator have own templates, but you can override it"
# nullable: true
properties:
hostTemplates:
type: array
description: "hostTemplate will use during apply to generate `clickhouse-server` config files"
# nullable: true
items:
type: object
#required:
# - name
properties:
name:
description: "template name, could use to link inside top-level `chi.spec.defaults.templates.hostTemplate`, cluster-level `chi.spec.configuration.clusters.templates.hostTemplate`, shard-level `chi.spec.configuration.clusters.layout.shards.templates.hostTemplate`, replica-level `chi.spec.configuration.clusters.layout.replicas.templates.hostTemplate`"
type: string
portDistribution:
type: array
description: "define how will distribute numeric values of named ports in `Pod.spec.containers.ports` and clickhouse-server configs"
# nullable: true
items:
type: object
#required:
# - type
properties:
type:
type: string
description: "type of distribution, when `Unspecified` (default value) then all listen ports on clickhouse-server configuration in all Pods will have the same value, when `ClusterScopeIndex` then ports will increment to offset from base value depends on shard and replica index inside cluster with combination of `chi.spec.templates.podTemplates.spec.HostNetwork` it allows setup ClickHouse cluster inside Kubernetes and provide access via external network bypass Kubernetes internal network"
enum:
# List PortDistributionXXX constants
- ""
- "Unspecified"
- "ClusterScopeIndex"
spec:
# Host
type: object
properties:
name:
type: string
description: "by default, hostname will generate, but this allows define custom name for each `clickhouse-server`"
minLength: 1
# See namePartReplicaMaxLen const
maxLength: 15
pattern: "^[a-zA-Z0-9-]{0,15}$"
zkPort:
type: integer
minimum: 1
maximum: 65535
raftPort:
type: integer
minimum: 1
maximum: 65535
settings:
!!merge <<: *TypeSettings
description: |
optional, allows configure `clickhouse-server` settings inside <yandex>...</yandex> tag in each `Pod` where this template will apply during generate `ConfigMap` which will mount in `/etc/clickhouse-server/conf.d/`
More details: https://clickhouse.tech/docs/en/operations/settings/settings/
files:
!!merge <<: *TypeFiles
description: |
optional, allows define content of any setting file inside each `Pod` where this template will apply during generate `ConfigMap` which will mount in `/etc/clickhouse-server/config.d/` or `/etc/clickhouse-server/conf.d/` or `/etc/clickhouse-server/users.d/`
templates:
!!merge <<: *TypeTemplateNames
description: "be careful, this part of CRD allows override template inside template, don't use it if you don't understand what you do"
podTemplates:
type: array
description: |
@@ -180,6 +697,83 @@ spec:
name:
type: string
description: "template name, could use to link inside top-level `chi.spec.defaults.templates.podTemplate`, cluster-level `chi.spec.configuration.clusters.templates.podTemplate`, shard-level `chi.spec.configuration.clusters.layout.shards.templates.podTemplate`, replica-level `chi.spec.configuration.clusters.layout.replicas.templates.podTemplate`"
generateName:
type: string
description: "allows define format for generated `Pod` name, look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates for details about available template variables"
zone:
type: object
description: "allows define custom zone name and will separate ClickHouse `Pods` between nodes, shortcut for `chi.spec.templates.podTemplates.spec.affinity.podAntiAffinity`"
#required:
# - values
properties:
key:
type: string
description: "optional, if defined, allows select kubernetes nodes by label with `name` equal `key`"
values:
type: array
description: "optional, if defined, allows select kubernetes nodes by label with `value` in `values`"
# nullable: true
items:
type: string
distribution:
type: string
description: "DEPRECATED, shortcut for `chi.spec.templates.podTemplates.spec.affinity.podAntiAffinity`"
enum:
- ""
- "Unspecified"
- "OnePerHost"
podDistribution:
type: array
description: "define ClickHouse Pod distribution policy between Kubernetes Nodes inside Shard, Replica, Namespace, CHI, another ClickHouse cluster"
# nullable: true
items:
type: object
#required:
# - type
properties:
type:
type: string
description: "you can define multiple affinity policy types"
enum:
# List PodDistributionXXX constants
- ""
- "Unspecified"
- "ClickHouseAntiAffinity"
- "ShardAntiAffinity"
- "ReplicaAntiAffinity"
- "AnotherNamespaceAntiAffinity"
- "AnotherClickHouseInstallationAntiAffinity"
- "AnotherClusterAntiAffinity"
- "MaxNumberPerNode"
- "NamespaceAffinity"
- "ClickHouseInstallationAffinity"
- "ClusterAffinity"
- "ShardAffinity"
- "ReplicaAffinity"
- "PreviousTailAffinity"
- "CircularReplication"
scope:
type: string
description: "scope for apply each podDistribution"
enum:
# list PodDistributionScopeXXX constants
- ""
- "Unspecified"
- "Shard"
- "Replica"
- "Cluster"
- "ClickHouseInstallation"
- "Namespace"
number:
type: integer
description: "defines how many ClickHouse Pods can be inside selected scope with selected distribution type"
minimum: 0
maximum: 65535
topologyKey:
type: string
description: |
use for inter-pod affinity look to `pod.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey`,
more info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
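A `podDistribution` entry using the type, scope, and topologyKey fields above might look like this (template name is invented):

```yaml
spec:
  templates:
    podTemplates:
      - name: "one-keeper-per-node"
        podDistribution:
          - type: "ClickHouseAntiAffinity"
            scope: "Cluster"
            topologyKey: "kubernetes.io/hostname"
```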
metadata:
type: object
description: |
@@ -195,7 +789,8 @@ spec:
x-kubernetes-preserve-unknown-fields: true
volumeClaimTemplates:
type: array
description: "allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else"
description: |
allows define template for rendering `PVC` kubernetes resource, which would use inside `Pod` for mount clickhouse `data`, clickhouse `logs` or something else
# nullable: true
items:
type: object
@@ -211,6 +806,8 @@ spec:
cluster-level `chi.spec.configuration.clusters.templates.dataVolumeClaimTemplate` or `chi.spec.configuration.clusters.templates.logVolumeClaimTemplate`,
shard-level `chi.spec.configuration.clusters.layout.shards.templates.dataVolumeClaimTemplate` or `chi.spec.configuration.clusters.layout.shards.templates.logVolumeClaimTemplate`
replica-level `chi.spec.configuration.clusters.layout.replicas.templates.dataVolumeClaimTemplate` or `chi.spec.configuration.clusters.layout.replicas.templates.logVolumeClaimTemplate`
provisioner: *TypePVCProvisioner
reclaimPolicy: *TypePVCReclaimPolicy
metadata:
type: object
description: |
@@ -244,6 +841,12 @@ spec:
cluster-level `chi.spec.configuration.clusters.templates.clusterServiceTemplate`
shard-level `chi.spec.configuration.clusters.layout.shards.templates.shardServiceTemplate`
replica-level `chi.spec.configuration.clusters.layout.replicas.templates.replicaServiceTemplate` or `chi.spec.configuration.clusters.layout.shards.replicas.replicaServiceTemplate`
generateName:
type: string
description: |
allows define format for generated `Service` name,
look to https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md#spectemplatesservicetemplates
for details about available template variables
metadata:
# TODO specify ObjectMeta
type: object
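A serviceTemplate using `generateName` as documented above could be sketched like this (template name and port choices illustrative; `{chi}` is one of the template variables described on the linked page):

```yaml
spec:
  templates:
    serviceTemplates:
      - name: "client-svc"
        generateName: "keeper-{chi}"   # rendered into the Service name
        spec:
          type: "ClusterIP"
          ports:
            - name: "zk"
              port: 2181
```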


@@ -7,7 +7,7 @@ kind: CustomResourceDefinition
metadata:
name: clickhouseoperatorconfigurations.clickhouse.altinity.com
labels:
clickhouse.altinity.com/chop: 0.23.4
clickhouse.altinity.com/chop: 0.25.2
spec:
group: clickhouse.altinity.com
scope: Namespaced
@@ -137,6 +137,7 @@ spec:
items:
type: object
description: "setting: value pairs for configuration restart policy"
x-kubernetes-preserve-unknown-fields: true
access:
type: object
description: "parameters which use for connect to clickhouse from clickhouse-operator deployment"
@@ -181,6 +182,47 @@ spec:
minimum: 1
maximum: 600
description: "Timeout to perform SQL query from the operator to ClickHouse instances. In seconds."
addons:
type: object
description: "Configuration addons specify additional settings"
properties:
rules:
type: array
description: "Array of rule sets applied per matching ClickHouse version"
items:
type: object
properties:
version:
type: string
description: "ClickHouse version expression"
spec:
type: object
description: "spec"
properties:
configuration:
type: object
description: "allows configuring multiple aspects and behaviors of a `clickhouse-server` instance and also allows describing multiple `clickhouse-server` clusters inside one `chi` resource"
properties:
users:
type: object
description: "see same section from CR spec"
x-kubernetes-preserve-unknown-fields: true
profiles:
type: object
description: "see same section from CR spec"
x-kubernetes-preserve-unknown-fields: true
quotas:
type: object
description: "see same section from CR spec"
x-kubernetes-preserve-unknown-fields: true
settings:
type: object
description: "see same section from CR spec"
x-kubernetes-preserve-unknown-fields: true
files:
type: object
description: "see same section from CR spec"
x-kubernetes-preserve-unknown-fields: true
metrics:
type: object
description: "parameters used by clickhouse-operator to connect to ClickHouse and fetch metrics"
@@ -323,6 +365,19 @@ spec:
include:
!!merge <<: *TypeStringBool
description: "Whether the operator during reconcile procedure should wait for a ClickHouse host to be included into a ClickHouse cluster"
replicas:
type: object
description: "Whether the operator during reconcile procedure should wait for replicas to catch up"
properties:
all:
!!merge <<: *TypeStringBool
description: "Whether the operator during reconcile procedure should wait for all replicas to catch up"
new:
!!merge <<: *TypeStringBool
description: "Whether the operator during reconcile procedure should wait for new replicas to catch up"
delay:
type: integer
description: "maximum absolute replication delay for a replica to still be considered not delayed"
annotation:
type: object
description: "defines which metadata.annotations items to include or exclude when rendering StatefulSet, Pod, and PVC resources"
@@ -373,6 +428,40 @@ spec:
- "LabelClusterScopeCycleSize"
- "LabelClusterScopeCycleIndex"
- "LabelClusterScopeCycleOffset"
metrics:
type: object
description: "defines metrics exporter options"
properties:
labels:
type: object
description: "defines metric labels options"
properties:
exclude:
type: array
description: |
When adding labels to a metric, exclude labels whose names appear in the following list
items:
type: string
status:
type: object
description: "defines status options"
properties:
fields:
type: object
description: "defines status fields options"
properties:
action:
!!merge <<: *TypeStringBool
description: "Whether the operator should fill status field 'action'"
actions:
!!merge <<: *TypeStringBool
description: "Whether the operator should fill status field 'actions'"
error:
!!merge <<: *TypeStringBool
description: "Whether the operator should fill status field 'error'"
errors:
!!merge <<: *TypeStringBool
description: "Whether the operator should fill status field 'errors'"
statefulSet:
type: object
description: "define StatefulSet-specific parameters"
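The new options in this hunk map onto the operator configuration roughly as follows; this is a sketch, and all values are illustrative:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseOperatorConfiguration"
metadata:
  name: chop-config        # hypothetical name
spec:
  reconcile:
    host:
      wait:
        replicas:
          all: "no"        # do not block reconcile on every replica catching up
          new: "yes"       # but do wait for newly created replicas
          delay: 10        # max absolute replication delay to still count as caught up
  metrics:
    labels:
      exclude:
        - pod-template-hash   # drop noisy labels from exported metrics
  status:
    fields:
      action: "yes"
      errors: "yes"
```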


@@ -147,8 +147,8 @@
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"query": "SELECT\r\n t,\r\n arrayMap(a -> (a.1, a.2 / runningDifference(t / 1000)), groupArr)\r\nFROM (\r\n SELECT t, groupArray((q, c)) AS groupArr\r\n FROM (\r\n SELECT\r\n (intDiv(toUInt32(event_time), 2) * 2) * 1000 AS t,\r\n normalizeQuery(query) AS q,\r\n count() c\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE $timeFilter\r\n AND( ('$type' = '1,2,3,4' AND type != 'QueryStart') OR ('$type' != '1,2,3,4' AND type IN ($type)))\r\n $conditionalTest(AND query_kind IN ($query_kind), $query_kind)\r\n $conditionalTest(AND initial_user IN ($user), $user)\r\n $conditionalTest(AND query_duration_ms >= $min_duration_ms, $min_duration_ms)\r\n $conditionalTest(AND query_duration_ms <= $max_duration_ms, $max_duration_ms)\r\n AND normalized_query_hash GLOBAL IN (\r\n SELECT normalized_query_hash AS h\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE $timeFilter\r\n AND( ('$type' = '1,2,3,4' AND type != 'QueryStart') OR ('$type' != '1,2,3,4' AND type IN ($type)))\r\n $conditionalTest(AND query_kind IN ($query_kind), $query_kind)\r\n $conditionalTest(AND type IN ($type), $type)\r\n $conditionalTest(AND initial_user IN ($user), $user)\r\n $conditionalTest(AND query_duration_ms >= $min_duration_ms, $min_duration_ms)\r\n $conditionalTest(AND query_duration_ms <= $max_duration_ms, $max_duration_ms)\r\n GROUP BY h\r\n ORDER BY count() DESC\r\n LIMIT $top\r\n SETTINGS skip_unavailable_shards=1\r\n )\r\n GROUP BY t, query\r\n ORDER BY t\r\n )\r\n GROUP BY t\r\n ORDER BY t\r\n) SETTINGS skip_unavailable_shards=1",
"rawQuery": "SELECT\r\n t,\r\n arrayMap(a -> (a.1, a.2 / runningDifference(t / 1000)), groupArr)\r\nFROM (\r\n SELECT t, groupArray((q, c)) AS groupArr\r\n FROM (\r\n SELECT\r\n (intDiv(toUInt32(event_time), 2) * 2) * 1000 AS t,\r\n normalizeQuery(query) AS q,\r\n count() c\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE event_date >= toDate(1694531137) AND event_date <= toDate(1694534737) AND event_time >= toDateTime(1694531137) AND event_time <= toDateTime(1694534737)\r\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\r\n \r\n \r\n \r\n \r\n AND normalized_query_hash GLOBAL IN (\r\n SELECT normalized_query_hash AS h\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE event_date >= toDate(1694531137) AND event_date <= toDate(1694534737) AND event_time >= toDateTime(1694531137) AND event_time <= toDateTime(1694534737)\r\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\r\n \r\n \r\n \r\n \r\n \r\n GROUP BY h\r\n ORDER BY count() DESC\r\n LIMIT 30\r\n SETTINGS skip_unavailable_shards=1\r\n )\r\n GROUP BY t, query\r\n ORDER BY t\r\n )\r\n GROUP BY t\r\n ORDER BY t\r\n) SETTINGS skip_unavailable_shards=1",
"query": "SELECT\r\n t,\r\n arrayMap(a -> (a.1, a.2 / (t/1000 - lagInFrame(t/1000,1,0) OVER ()) ), groupArr)\r\nFROM (\r\n SELECT t, groupArray((q, c)) AS groupArr\r\n FROM (\r\n SELECT\r\n (intDiv(toUInt32(event_time), 2) * 2) * 1000 AS t,\r\n normalizeQuery(query) AS q,\r\n count() c\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE $timeFilter\r\n AND( ('$type' = '1,2,3,4' AND type != 'QueryStart') OR ('$type' != '1,2,3,4' AND type IN ($type)))\r\n $conditionalTest(AND query_kind IN ($query_kind), $query_kind)\r\n $conditionalTest(AND initial_user IN ($user), $user)\r\n $conditionalTest(AND query_duration_ms >= $min_duration_ms, $min_duration_ms)\r\n $conditionalTest(AND query_duration_ms <= $max_duration_ms, $max_duration_ms)\r\n AND normalized_query_hash GLOBAL IN (\r\n SELECT normalized_query_hash AS h\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE $timeFilter\r\n AND( ('$type' = '1,2,3,4' AND type != 'QueryStart') OR ('$type' != '1,2,3,4' AND type IN ($type)))\r\n $conditionalTest(AND query_kind IN ($query_kind), $query_kind)\r\n $conditionalTest(AND type IN ($type), $type)\r\n $conditionalTest(AND initial_user IN ($user), $user)\r\n $conditionalTest(AND query_duration_ms >= $min_duration_ms, $min_duration_ms)\r\n $conditionalTest(AND query_duration_ms <= $max_duration_ms, $max_duration_ms)\r\n GROUP BY h\r\n ORDER BY count() DESC\r\n LIMIT $top\r\n SETTINGS skip_unavailable_shards=1\r\n )\r\n GROUP BY t, query\r\n ORDER BY t\r\n )\r\n GROUP BY t\r\n ORDER BY t\r\n) SETTINGS skip_unavailable_shards=1",
"rawQuery": "SELECT\r\n t,\r\n arrayMap(a -> (a.1, a.2 / (t/1000 - lagInFrame(t/1000,1,0) OVER ()) ), groupArr)\r\nFROM (\r\n SELECT t, groupArray((q, c)) AS groupArr\r\n FROM (\r\n SELECT\r\n (intDiv(toUInt32(event_time), 2) * 2) * 1000 AS t,\r\n normalizeQuery(query) AS q,\r\n count() c\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE event_date >= toDate(1694531137) AND event_date <= toDate(1694534737) AND event_time >= toDateTime(1694531137) AND event_time <= toDateTime(1694534737)\r\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\r\n \r\n \r\n \r\n \r\n AND normalized_query_hash GLOBAL IN (\r\n SELECT normalized_query_hash AS h\r\n FROM cluster('all-sharded',system.query_log)\r\n WHERE event_date >= toDate(1694531137) AND event_date <= toDate(1694534737) AND event_time >= toDateTime(1694531137) AND event_time <= toDateTime(1694534737)\r\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\r\n \r\n \r\n \r\n \r\n \r\n GROUP BY h\r\n ORDER BY count() DESC\r\n LIMIT 30\r\n SETTINGS skip_unavailable_shards=1\r\n )\r\n GROUP BY t, query\r\n ORDER BY t\r\n )\r\n GROUP BY t\r\n ORDER BY t\r\n) SETTINGS skip_unavailable_shards=1",
"refId": "A",
"resultFormat": "time_series",
"round": "0s",
@@ -743,7 +743,7 @@
"interval": "",
"intervalFactor": 2,
"query": "$rate(count() c)\nFROM cluster('all-sharded',system.query_log)\nWHERE $timeFilter\n AND( ('$type' = '1,2,3,4' AND type != 'QueryStart') OR ('$type' != '1,2,3,4' AND type IN ($type)))\n $conditionalTest(AND query_kind IN ($query_kind), $query_kind)\n $conditionalTest(AND initial_user IN ($user), $user)\n $conditionalTest(AND query_duration_ms >= $min_duration_ms,$min_duration_ms)\n $conditionalTest(AND query_duration_ms <= $max_duration_ms,$max_duration_ms)\n",
"rawQuery": "SELECT t, c/runningDifference(t/1000) cRate FROM ( SELECT (intDiv(toUInt32(event_time), 4) * 4) * 1000 AS t, count() c FROM cluster('all-sharded',system.query_log)\nWHERE event_date >= toDate(1694531229) AND event_date <= toDate(1694534829) AND event_time >= toDateTime(1694531229) AND event_time <= toDateTime(1694534829) AND event_date >= toDate(1694531229) AND event_date <= toDate(1694534829) AND event_time >= toDateTime(1694531229) AND event_time <= toDateTime(1694534829)\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\n \n \n \n GROUP BY t ORDER BY t)",
"rawQuery": "SELECT t, c/(t/1000 - lagInFrame(t/1000,1,0) OVER ()) cRate FROM ( SELECT (intDiv(toUInt32(event_time), 4) * 4) * 1000 AS t, count() c FROM cluster('all-sharded',system.query_log)\nWHERE event_date >= toDate(1694531229) AND event_date <= toDate(1694534829) AND event_time >= toDateTime(1694531229) AND event_time <= toDateTime(1694534829) AND event_date >= toDate(1694531229) AND event_date <= toDate(1694534829) AND event_time >= toDateTime(1694531229) AND event_time <= toDateTime(1694534829)\n AND( ('1,2,3,4' = '1,2,3,4' AND type != 'QueryStart') OR ('1,2,3,4' != '1,2,3,4' AND type IN (1,2,3,4)))\n \n \n \n GROUP BY t ORDER BY t)",
"refId": "A",
"resultFormat": "time_series",
"round": "0s",


@@ -1,4 +1,15 @@
{{/* vim: set filetype=go-template: */}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "altinity-clickhouse-operator.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
@@ -40,8 +51,8 @@ helm.sh/chart: {{ include "altinity-clickhouse-operator.chart" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels }}
{{- if .Values.commonLabels }}
{{ toYaml .Values.commonLabels }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
@@ -54,6 +65,17 @@ app.kubernetes.io/name: {{ include "altinity-clickhouse-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Common annotations
*/}}
{{- define "altinity-clickhouse-operator.annotations" -}}
meta.helm.sh/release-name: {{ .Release.Name }}
meta.helm.sh/release-namespace: {{ .Release.Namespace }}
{{- if .Values.commonAnnotations }}
{{ toYaml .Values.commonAnnotations }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
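With the `altinity-clickhouse-operator.namespace` helper above, every templated resource resolves its namespace through one place, so a parent chart can redirect the operator. A hedged usage sketch (namespace and label values are illustrative):

```yaml
# values.yaml of a hypothetical parent chart embedding the operator as a subchart
altinity-clickhouse-operator:
  namespaceOverride: clickhouse-system   # hypothetical target namespace
  commonLabels:
    team: data-platform                  # applied to all operator resources
  commonAnnotations:
    owner: platform@example.com
```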


@@ -0,0 +1,21 @@
{{- if .Values.dashboards.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-dashboards
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels:
{{- include "altinity-clickhouse-operator.labels" . | nindent 4 }}
{{- if .Values.dashboards.additionalLabels }}
{{- toYaml .Values.dashboards.additionalLabels | nindent 4 }}
{{- end }}
annotations:
{{- include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
{{- if .Values.dashboards.annotations }}
{{- toYaml .Values.dashboards.annotations | nindent 4 }}
{{- end }}
data:
{{- range $path, $_ := .Files.Glob "files/*.json" }}
{{ $path | trimPrefix "files/" }}: |- {{ $.Files.Get $path | nindent 4 -}}
{{ end }}
{{- end }}
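Switching the dashboards from a Secret to a ConfigMap fits the common Grafana sidecar pattern, which discovers dashboards by label; enabling it might look like the following (the sidecar label is an assumption, not part of this diff):

```yaml
dashboards:
  enabled: true
  additionalLabels:
    grafana_dashboard: "1"          # label a Grafana sidecar typically watches (assumption)
  annotations:
    grafana_folder: "ClickHouse"    # hypothetical folder annotation
```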


@@ -1,21 +0,0 @@
{{- if .Values.dashboards.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-dashboards
namespace: {{ .Release.Namespace }}
labels:
{{- include "altinity-clickhouse-operator.labels" . | nindent 4 }}
{{- if .Values.dashboards.additionalLabels }}
{{- toYaml .Values.dashboards.additionalLabels | nindent 4 }}
{{- end }}
{{- with .Values.dashboards.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
type: Opaque
data:
{{- range $path, $_ := .Files.Glob "files/*.json" }}
{{ $path | trimPrefix "files/" }}: {{ $.Files.Get $path | b64enc -}}
{{ end }}
{{- end }}


@@ -1,4 +1,4 @@
{{- if .Values.rbac.create -}}
{{- if (and .Values.rbac.create (not .Values.rbac.namespaceScoped)) -}}
# Specifies either
# ClusterRole
# or
@@ -12,7 +12,7 @@ metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
#namespace: kube-system
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
rules:
#
# Core API group


@@ -1,4 +1,4 @@
{{- if .Values.rbac.create -}}
{{- if (and .Values.rbac.create (not .Values.rbac.namespaceScoped)) -}}
# Specifies either
# ClusterRoleBinding between ClusterRole and ServiceAccount.
# or
@@ -11,7 +11,7 @@ metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
#namespace: kube-system
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@@ -19,5 +19,15 @@ roleRef:
subjects:
- kind: ServiceAccount
name: {{ include "altinity-clickhouse-operator.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
# Template Parameters:
#
# NAMESPACE=kube-system
# COMMENT=
# ROLE_KIND=Role
# ROLE_NAME=clickhouse-operator
# ROLE_BINDING_KIND=RoleBinding
# ROLE_BINDING_NAME=clickhouse-operator
#
{{- end }}


@@ -8,6 +8,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-confd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.confdFiles) | nindent 2 }}


@@ -8,6 +8,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-configd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.configdFiles) | nindent 2 }}


@@ -8,6 +8,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.files) | nindent 2 }}


@@ -8,6 +8,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-templatesd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.templatesdFiles) | nindent 2 }}


@@ -8,6 +8,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-usersd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.usersdFiles) | nindent 2 }}


@@ -0,0 +1,14 @@
# Template Parameters:
#
# NAME=etc-keeper-operator-confd-files
# NAMESPACE=kube-system
# COMMENT=
#
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-keeper-confd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.keeperConfdFiles) | nindent 2 }}


@@ -0,0 +1,14 @@
# Template Parameters:
#
# NAME=etc-keeper-operator-configd-files
# NAMESPACE=kube-system
# COMMENT=
#
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-keeper-configd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.keeperConfigdFiles) | nindent 2 }}


@@ -0,0 +1,14 @@
# Template Parameters:
#
# NAME=etc-keeper-operator-templatesd-files
# NAMESPACE=kube-system
# COMMENT=
#
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-keeper-templatesd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.keeperTemplatesdFiles) | nindent 2 }}


@@ -0,0 +1,14 @@
# Template Parameters:
#
# NAME=etc-keeper-operator-usersd-files
# NAMESPACE=kube-system
# COMMENT=
#
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-keeper-usersd-files" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
data: {{ include "altinity-clickhouse-operator.configmap-data" (list . .Values.configs.keeperUsersdFiles) | nindent 2 }}


@@ -2,9 +2,9 @@
#
# NAMESPACE=kube-system
# COMMENT=
# OPERATOR_IMAGE=altinity/clickhouse-operator:0.23.4
# OPERATOR_IMAGE=altinity/clickhouse-operator:0.25.2
# OPERATOR_IMAGE_PULL_POLICY=Always
# METRICS_EXPORTER_IMAGE=altinity/metrics-exporter:0.23.4
# METRICS_EXPORTER_IMAGE=altinity/metrics-exporter:0.25.2
# METRICS_EXPORTER_IMAGE_PULL_POLICY=Always
#
# Setup Deployment for clickhouse-operator
@@ -13,22 +13,27 @@ kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels: {{ include "altinity-clickhouse-operator.selectorLabels" . | nindent 6 }}
template:
metadata:
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 8 }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 8 }}{{ if .Values.podLabels }}{{ toYaml .Values.podLabels | nindent 8 }}{{ end }}
annotations:
{{ toYaml .Values.podAnnotations | nindent 8 }}
{{ if .Values.podAnnotations }}{{ toYaml .Values.podAnnotations | nindent 8 }}{{ end }}
checksum/files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-clickhouse-operator-files.yaml") . | sha256sum }}
checksum/confd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-clickhouse-operator-confd-files.yaml") . | sha256sum }}
checksum/configd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-clickhouse-operator-configd-files.yaml") . | sha256sum }}
checksum/templatesd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-clickhouse-operator-templatesd-files.yaml") . | sha256sum }}
checksum/usersd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-clickhouse-operator-usersd-files.yaml") . | sha256sum }}
checksum/keeper-confd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-keeper-operator-confd-files.yaml") . | sha256sum }}
checksum/keeper-configd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-keeper-operator-configd-files.yaml") . | sha256sum }}
checksum/keeper-templatesd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-keeper-operator-templatesd-files.yaml") . | sha256sum }}
checksum/keeper-usersd-files: {{ include (print $.Template.BasePath "/generated/ConfigMap-etc-keeper-operator-usersd-files.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ include "altinity-clickhouse-operator.serviceAccountName" . }}
volumes:
@@ -47,6 +52,18 @@ spec:
- name: etc-clickhouse-operator-usersd-folder
configMap:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-usersd-files
- name: etc-keeper-operator-confd-folder
configMap:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-keeper-confd-files
- name: etc-keeper-operator-configd-folder
configMap:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-keeper-configd-files
- name: etc-keeper-operator-templatesd-folder
configMap:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-keeper-templatesd-files
- name: etc-keeper-operator-usersd-folder
configMap:
name: {{ include "altinity-clickhouse-operator.fullname" . }}-keeper-usersd-files
containers:
- name: {{ .Chart.Name }}
image: {{ .Values.operator.image.repository }}:{{ include "altinity-clickhouse-operator.operator.tag" . }}
@@ -55,13 +72,21 @@ spec:
- name: etc-clickhouse-operator-folder
mountPath: /etc/clickhouse-operator
- name: etc-clickhouse-operator-confd-folder
mountPath: /etc/clickhouse-operator/conf.d
mountPath: /etc/clickhouse-operator/chi/conf.d
- name: etc-clickhouse-operator-configd-folder
mountPath: /etc/clickhouse-operator/config.d
mountPath: /etc/clickhouse-operator/chi/config.d
- name: etc-clickhouse-operator-templatesd-folder
mountPath: /etc/clickhouse-operator/templates.d
mountPath: /etc/clickhouse-operator/chi/templates.d
- name: etc-clickhouse-operator-usersd-folder
mountPath: /etc/clickhouse-operator/users.d
mountPath: /etc/clickhouse-operator/chi/users.d
- name: etc-keeper-operator-confd-folder
mountPath: /etc/clickhouse-operator/chk/conf.d
- name: etc-keeper-operator-configd-folder
mountPath: /etc/clickhouse-operator/chk/keeper_config.d
- name: etc-keeper-operator-templatesd-folder
mountPath: /etc/clickhouse-operator/chk/templates.d
- name: etc-keeper-operator-usersd-folder
mountPath: /etc/clickhouse-operator/chk/users.d
env:
# Pod-specific
# spec.nodeName: ip-172-20-52-62.ec2.internal
@@ -125,13 +150,21 @@ spec:
- name: etc-clickhouse-operator-folder
mountPath: /etc/clickhouse-operator
- name: etc-clickhouse-operator-confd-folder
mountPath: /etc/clickhouse-operator/conf.d
mountPath: /etc/clickhouse-operator/chi/conf.d
- name: etc-clickhouse-operator-configd-folder
mountPath: /etc/clickhouse-operator/config.d
mountPath: /etc/clickhouse-operator/chi/config.d
- name: etc-clickhouse-operator-templatesd-folder
mountPath: /etc/clickhouse-operator/templates.d
mountPath: /etc/clickhouse-operator/chi/templates.d
- name: etc-clickhouse-operator-usersd-folder
mountPath: /etc/clickhouse-operator/users.d
mountPath: /etc/clickhouse-operator/chi/users.d
- name: etc-keeper-operator-confd-folder
mountPath: /etc/clickhouse-operator/chk/conf.d
- name: etc-keeper-operator-configd-folder
mountPath: /etc/clickhouse-operator/chk/keeper_config.d
- name: etc-keeper-operator-templatesd-folder
mountPath: /etc/clickhouse-operator/chk/templates.d
- name: etc-keeper-operator-usersd-folder
mountPath: /etc/clickhouse-operator/chk/users.d
env:
# Pod-specific
# spec.nodeName: ip-172-20-52-62.ec2.internal
@@ -193,3 +226,4 @@ spec:
affinity: {{ toYaml .Values.affinity | nindent 8 }}
tolerations: {{ toYaml .Values.tolerations | nindent 8 }}
securityContext: {{ toYaml .Values.podSecurityContext | nindent 8 }}
topologySpreadConstraints: {{ toYaml .Values.topologySpreadConstraints | nindent 8 }}
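The deployment now also renders `topologySpreadConstraints` from values; a sketch of what could be supplied (the keys are standard Kubernetes scheduling fields, the label selector is illustrative):

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: altinity-clickhouse-operator
```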


@@ -0,0 +1,211 @@
{{- if (and .Values.rbac.create .Values.rbac.namespaceScoped) -}}
# Specifies either
# ClusterRole
# or
# Role
# to be bound to ServiceAccount.
# ClusterRole is namespace-less and must have unique name
# Role is namespace-bound
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
rules:
#
# Core API group
#
- apiGroups:
- ""
resources:
- configmaps
- services
- persistentvolumeclaims
- secrets
verbs:
- get
- list
- patch
- update
- watch
- create
- delete
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- patch
- update
- watch
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
#
# apps.* resources
#
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- patch
- update
- watch
- create
- delete
- apiGroups:
- apps
resources:
- replicasets
verbs:
- get
- patch
- update
- delete
# The operator deployment personally, identified by name
- apiGroups:
- apps
resources:
- deployments
resourceNames:
- {{ include "altinity-clickhouse-operator.fullname" . }}
verbs:
- get
- patch
- update
- delete
#
# policy.* resources
#
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- patch
- update
- watch
- create
- delete
#
# apiextensions
#
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
# clickhouse - related resources
- apiGroups:
- clickhouse.altinity.com
#
# The operators specific Custom Resources
#
resources:
- clickhouseinstallations
verbs:
- get
- list
- watch
- patch
- update
- delete
- apiGroups:
- clickhouse.altinity.com
resources:
- clickhouseinstallationtemplates
- clickhouseoperatorconfigurations
verbs:
- get
- list
- watch
- apiGroups:
- clickhouse.altinity.com
resources:
- clickhouseinstallations/finalizers
- clickhouseinstallationtemplates/finalizers
- clickhouseoperatorconfigurations/finalizers
verbs:
- update
- apiGroups:
- clickhouse.altinity.com
resources:
- clickhouseinstallations/status
- clickhouseinstallationtemplates/status
- clickhouseoperatorconfigurations/status
verbs:
- get
- update
- patch
- create
- delete
# clickhouse-keeper - related resources
- apiGroups:
- clickhouse-keeper.altinity.com
resources:
- clickhousekeeperinstallations
verbs:
- get
- list
- watch
- patch
- update
- delete
- apiGroups:
- clickhouse-keeper.altinity.com
resources:
- clickhousekeeperinstallations/finalizers
verbs:
- update
- apiGroups:
- clickhouse-keeper.altinity.com
resources:
- clickhousekeeperinstallations/status
verbs:
- get
- update
- patch
- create
- delete
{{- end }}


@@ -0,0 +1,23 @@
{{- if (and .Values.rbac.create .Values.rbac.namespaceScoped) -}}
# Specifies either
# ClusterRoleBinding between ClusterRole and ServiceAccount.
# or
# RoleBinding between Role and ServiceAccount.
# ClusterRoleBinding is namespace-less and must have unique name
# RoleBinding is namespace-bound
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "altinity-clickhouse-operator.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "altinity-clickhouse-operator.serviceAccountName" . }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
{{- end }}
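Together with the Role above, the namespace-scoped mode is selected purely through values; a minimal sketch:

```yaml
rbac:
  create: true
  namespaceScoped: true   # render Role/RoleBinding instead of cluster-wide RBAC
```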


@@ -3,7 +3,7 @@
# Template parameters available:
# NAMESPACE=kube-system
# COMMENT=
# OPERATOR_VERSION=0.23.4
# OPERATOR_VERSION=0.25.2
# CH_USERNAME_SECRET_PLAIN=clickhouse_operator
# CH_PASSWORD_SECRET_PLAIN=clickhouse_operator_password
#
@@ -11,8 +11,9 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ include "altinity-clickhouse-operator.fullname" . }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
type: Opaque
data:
username: {{ .Values.secret.username | b64enc }}


@@ -12,8 +12,9 @@ kind: Service
apiVersion: v1
metadata:
name: {{ printf "%s-metrics" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
spec:
ports:
- port: 8888


@@ -10,9 +10,9 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "altinity-clickhouse-operator.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels: {{ include "altinity-clickhouse-operator.labels" . | nindent 4 }}
annotations: {{ toYaml .Values.serviceAccount.annotations | nindent 4 }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}{{ if .Values.serviceAccount.annotations }}{{ toYaml .Values.serviceAccount.annotations | nindent 4 }}{{ end }}
# Template Parameters:
#

View File

@@ -3,16 +3,45 @@ apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ printf "%s-clickhouse-metrics" (include "altinity-clickhouse-operator.fullname" .) }}
namespace: {{ .Release.Namespace }}
namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
labels:
{{- include "altinity-clickhouse-operator.labels" . | nindent 4 }}
{{- if .Values.serviceMonitor.additionalLabels }}
{{- if .Values.serviceMonitor.additionalLabels }}
{{- toYaml .Values.serviceMonitor.additionalLabels | nindent 4 }}
{{- end }}
{{- end }}
annotations: {{ include "altinity-clickhouse-operator.annotations" . | nindent 4 }}
spec:
endpoints:
- port: clickhouse-metrics # 8888
{{- with .Values.serviceMonitor.clickhouseMetrics.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.clickhouseMetrics.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.clickhouseMetrics.relabelings }}
relabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.serviceMonitor.clickhouseMetrics.metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
- port: operator-metrics # 9999
{{- with .Values.serviceMonitor.operatorMetrics.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.operatorMetrics.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.operatorMetrics.relabelings }}
relabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.serviceMonitor.operatorMetrics.metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "altinity-clickhouse-operator.selectorLabels" . | nindent 6 }}

View File

@@ -1,3 +1,8 @@
namespaceOverride: ""
# commonLabels -- set of labels that will be applied to all the resources for the operator
commonLabels: {}
# commonAnnotations -- set of annotations that will be applied to all the resources for the operator
commonAnnotations: {}
operator:
image:
# operator.image.repository -- image repository
@@ -7,7 +12,7 @@ operator:
# operator.image.pullPolicy -- image pull policy
pullPolicy: IfNotPresent
containerSecurityContext: {}
# operator.resources -- custom resource configuration, look `kubectl explain pod.spec.containers.resources` for details
# operator.resources -- custom resource configuration, check `kubectl explain pod.spec.containers.resources` for details
resources: {}
# limits:
# cpu: 100m
@@ -17,7 +22,7 @@ operator:
# memory: 128Mi
# operator.env -- additional environment variables for the clickhouse-operator container in deployment
# possible format value [{"name": "SAMPLE", "value": "text"}]
# possible format value `[{"name": "SAMPLE", "value": "text"}]`
env: []
metrics:
enabled: true
@@ -39,15 +44,16 @@ metrics:
# memory: 128Mi
# metrics.env -- additional environment variables for the deployment of metrics-exporter containers
# possible format value [{"name": "SAMPLE", "value": "text"}]
# possible format value `[{"name": "SAMPLE", "value": "text"}]`
env: []
# imagePullSecrets -- image pull secret for private images in clickhouse-operator pod
# possible value format [{"name":"your-secret-name"}]
# look `kubectl explain pod.spec.imagePullSecrets` for details
# possible value format `[{"name":"your-secret-name"}]`,
# check `kubectl explain pod.spec.imagePullSecrets` for details
imagePullSecrets: []
# podLabels -- labels to add to the clickhouse-operator pod
podLabels: {}
# podAnnotations -- annotations to add to the clickhouse-operator pod, look `kubectl explain pod.spec.annotations` for details
# podAnnotations -- annotations to add to the clickhouse-operator pod, check `kubectl explain pod.spec.annotations` for details
# @default -- check the `values.yaml` file
podAnnotations:
prometheus.io/port: '8888'
prometheus.io/scrape: 'true'
@@ -65,8 +71,10 @@ serviceAccount:
# serviceAccount.name -- the name of the service account to use; if not set and create is true, a name is generated using the fullname template
name:
rbac:
# rbac.create -- specifies whether cluster roles and cluster role bindings should be created
# rbac.create -- specifies whether rbac resources should be created
create: true
# rbac.namespaceScoped -- specifies whether to create roles and rolebindings at the cluster level or namespace level
namespaceScoped: false
secret:
# secret.create -- create a secret with operator credentials
create: true
@@ -74,21 +82,42 @@ secret:
username: clickhouse_operator
# secret.password -- operator credentials password
password: clickhouse_operator_password
# nodeSelector -- node for scheduler pod assignment, look `kubectl explain pod.spec.nodeSelector` for details
# nodeSelector -- node for scheduler pod assignment, check `kubectl explain pod.spec.nodeSelector` for details
nodeSelector: {}
# tolerations -- tolerations for scheduler pod assignment, look `kubectl explain pod.spec.tolerations` for details
# tolerations -- tolerations for scheduler pod assignment, check `kubectl explain pod.spec.tolerations` for details
tolerations: []
# affinity -- affinity for scheduler pod assignment, look `kubectl explain pod.spec.affinity` for details
# affinity -- affinity for scheduler pod assignment, check `kubectl explain pod.spec.affinity` for details
affinity: {}
# podSecurityContext - operator deployment SecurityContext, look `kubectl explain pod.spec.securityContext` for details
# podSecurityContext - operator deployment SecurityContext, check `kubectl explain pod.spec.securityContext` for details
podSecurityContext: {}
# topologySpreadConstraints - topologySpreadConstraints affinity for scheduler pod assignment, check `kubectl explain pod.spec.topologySpreadConstraints` for details
topologySpreadConstraints: []
serviceMonitor:
# serviceMonitor.enabled -- ServiceMonitor Custom resource is created for a (prometheus-operator)[https://github.com/prometheus-operator/prometheus-operator]
# serviceMonitor.enabled -- ServiceMonitor Custom resource is created for a [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator)
# The ServiceMonitor will have two endpoints: clickhouse-metrics on port 8888 and operator-metrics on port 9999. You can specify interval, scrapeTimeout, relabelings, and metricRelabelings for each endpoint below
enabled: false
# serviceMonitor.additionalLabels -- additional labels for service monitor
additionalLabels: {}
# configs -- clickhouse-operator configs
# @default -- check the values.yaml file for the config content, auto-generated from latest operator release
clickhouseMetrics:
# serviceMonitor.interval for clickhouse-metrics endpoint -- Prometheus scrape interval
interval: 30s
# serviceMonitor.scrapeTimeout for clickhouse-metrics endpoint -- Prometheus ServiceMonitor scrapeTimeout. If empty, Prometheus uses the global scrape timeout unless it is less than the target's scrape interval, in which case the latter is used.
scrapeTimeout: ""
# serviceMonitor.relabelings for clickhouse-metrics endpoint -- Prometheus [RelabelConfigs] to apply to samples before scraping
relabelings: []
# serviceMonitor.metricRelabelings for clickhouse-metrics endpoint -- Prometheus [MetricRelabelConfigs] to apply to samples before ingestion
metricRelabelings: []
operatorMetrics:
# serviceMonitor.interval for operator-metrics endpoint -- Prometheus scrape interval
interval: 30s
# serviceMonitor.scrapeTimeout for operator-metrics endpoint -- Prometheus ServiceMonitor scrapeTimeout. If empty, Prometheus uses the global scrape timeout unless it is less than the target's scrape interval, in which case the latter is used.
scrapeTimeout: ""
# serviceMonitor.relabelings for operator-metrics endpoint -- Prometheus [RelabelConfigs] to apply to samples before scraping
relabelings: []
# serviceMonitor.metricRelabelings for operator-metrics endpoint -- Prometheus [MetricRelabelConfigs] to apply to samples before ingestion
metricRelabelings: []
# configs -- clickhouse operator configs
# @default -- check the `values.yaml` file for the config content (auto-generated from latest operator release)
configs:
confdFiles: null
configdFiles:
@@ -212,12 +241,12 @@ configs:
# In case path is relative - it is relative to the folder where the configuration file you are reading right now is located.
path:
# Path to the folder where ClickHouse configuration files common for all instances within a CHI are located.
common: config.d
common: chi/config.d
# Path to the folder where ClickHouse configuration files unique for each instance (host) within a CHI are located.
host: conf.d
host: chi/conf.d
# Path to the folder where ClickHouse configuration files with users' settings are located.
# Files are common for all instances within a CHI.
user: users.d
user: chi/users.d
################################################
##
## Configuration users section
@@ -287,10 +316,13 @@ configs:
- settings/macros/*: "no"
- settings/remote_servers/*: "no"
- settings/user_directories/*: "no"
# these settings should not lead to pod restarts
- settings/display_secrets_in_show_and_select: "no"
- zookeeper/*: "yes"
- files/*.xml: "yes"
- files/config.d/*.xml: "yes"
- files/config.d/*dict*.xml: "no"
- files/config.d/*no_restart*: "no"
# exceptions in default profile
- profiles/default/background_*_pool_size: "yes"
- profiles/default/max_*_for_server: "yes"
@@ -312,7 +344,6 @@ configs:
# These credentials are used for:
# 1. Metrics requests
# 2. Schema maintenance
# 3. DROP DNS CACHE
# User with these credentials can be specified in additional ClickHouse .xml config files,
# located in 'clickhouse.configuration.file.path.user' folder
username: ""
@@ -339,6 +370,56 @@ configs:
connect: 1
# Timeout to perform SQL queries from the operator to ClickHouse instances. In seconds.
query: 4
################################################
##
## Addons specifies additional configuration sections
## Should it be called something like "templates"?
##
################################################
addons:
rules:
- version: "*"
spec:
configuration:
users:
profiles:
quotas:
settings:
files:
- version: ">= 23.3"
spec:
configuration:
###
### users.d is global, while its contents depend on the CH version, which may vary on a per-host basis
### Given this global nature, it may be better implemented via auto-templates
###
### As a solution, this may be applied to the whole cluster based on any of its hosts
###
### What to do when a host is just created? The CH version is not known before CH starts, yet the user config is required before CH starts.
###
users:
"{clickhouseOperatorUser}/access_management": 1
"{clickhouseOperatorUser}/named_collection_control": 1
"{clickhouseOperatorUser}/show_named_collections": 1
"{clickhouseOperatorUser}/show_named_collections_secrets": 1
profiles:
quotas:
settings:
files:
- version: ">= 23.5"
spec:
configuration:
users:
profiles:
clickhouse_operator/format_display_secrets_in_show_and_select: 1
quotas:
settings:
##
## this may be added on per-host basis into host's conf.d folder
##
display_secrets_in_show_and_select: 1
files:
#################################################
##
## Metrics collection
@@ -352,6 +433,25 @@ configs:
# Upon reaching this timeout metrics collection is aborted and no more metrics are collected in this cycle.
# All collected metrics are returned.
collect: 9
keeper:
configuration:
################################################
##
## Configuration files section
##
################################################
file:
# Each 'path' can be either absolute or relative.
# In case path is absolute - it is used as is
# In case path is relative - it is relative to the folder where the configuration file you are reading right now is located.
path:
# Path to the folder where Keeper configuration files common for all instances within a CHK are located.
common: chk/keeper_config.d
# Path to the folder where Keeper configuration files unique for each instance (host) within a CHK are located.
host: chk/conf.d
# Path to the folder where Keeper configuration files with users' settings are located.
# Files are common for all instances within a CHK.
user: chk/users.d
################################################
##
## Template(s) management section
@@ -367,7 +467,17 @@ configs:
# Path to the folder where ClickHouseInstallation templates .yaml manifests are located.
# Templates are added to the list of all templates and used when CHI is reconciled.
# Templates are applied in sorted alpha-numeric order.
path: templates.d
path: chi/templates.d
chk:
# CHK template updates handling policy
# Possible policy values:
# - ReadOnStart. Accept CHIT updates on the operator's start only.
# - ApplyOnNextReconcile. Accept CHIT updates at any time. Apply new CHITs on the next regular reconcile of the CHK.
policy: ApplyOnNextReconcile
# Path to the folder where ClickHouseInstallation templates .yaml manifests are located.
# Templates are added to the list of all templates and used when CHI is reconciled.
# Templates are applied in sorted alpha-numeric order.
path: chk/templates.d
################################################
##
## Reconcile section
@@ -386,9 +496,9 @@ configs:
# 3. The first shard is always reconciled alone. Concurrency starts from the second shard and onward.
# Thus limiting number of shards being reconciled (and thus having hosts down) in each CHI by both number and percentage
# Max number of concurrent shard reconciles within one CHI in progress
# Max number of concurrent shard reconciles within one cluster in progress
reconcileShardsThreadsNumber: 5
# Max percentage of concurrent shard reconciles within one CHI in progress
# Max percentage of concurrent shard reconciles within one cluster in progress
reconcileShardsMaxConcurrencyPercent: 50
# Reconcile StatefulSet scenario
statefulSet:
@@ -429,6 +539,10 @@ configs:
exclude: true
queries: true
include: false
replicas:
all: no
new: yes
delay: 10
################################################
##
## Annotations management section
@@ -473,6 +587,25 @@ configs:
appendScope: "no"
################################################
##
## Metrics management section
##
################################################
metrics:
labels:
exclude: []
################################################
##
## Status management section
##
################################################
status:
fields:
action: false
actions: false
error: false
errors: false
################################################
##
## StatefulSet management section
##
################################################
@@ -631,20 +764,87 @@ configs:
</default>
</profiles>
</yandex>
# additionalResources -- list of additional resources to create (are processed via `tpl` function), useful for create ClickHouse clusters together with clickhouse-operator, look `kubectl explain chi` for details
keeperConfdFiles: null
keeperConfigdFiles:
01-keeper-01-default-config.xml: |
<!-- IMPORTANT -->
<!-- This file is auto-generated -->
<!-- Do not edit this file - all changes would be lost -->
<!-- Edit appropriate template in the following folder: -->
<!-- deploy/builder/templates-config -->
<!-- IMPORTANT -->
<clickhouse>
<keeper_server>
<coordination_settings>
<min_session_timeout_ms>10000</min_session_timeout_ms>
<operation_timeout_ms>10000</operation_timeout_ms>
<raft_logs_level>information</raft_logs_level>
<session_timeout_ms>100000</session_timeout_ms>
</coordination_settings>
<hostname_checks_enabled>true</hostname_checks_enabled>
<log_storage_path>/var/lib/clickhouse-keeper/coordination/logs</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
<storage_path>/var/lib/clickhouse-keeper</storage_path>
<tcp_port>2181</tcp_port>
</keeper_server>
<listen_host>::</listen_host>
<listen_host>0.0.0.0</listen_host>
<listen_try>1</listen_try>
<logger>
<console>1</console>
<level>information</level>
</logger>
<max_connections>4096</max_connections>
</clickhouse>
01-keeper-02-readiness.xml: |
<!-- IMPORTANT -->
<!-- This file is auto-generated -->
<!-- Do not edit this file - all changes would be lost -->
<!-- Edit appropriate template in the following folder: -->
<!-- deploy/builder/templates-config -->
<!-- IMPORTANT -->
<clickhouse>
<keeper_server>
<http_control>
<port>9182</port>
<readiness>
<endpoint>/ready</endpoint>
</readiness>
</http_control>
</keeper_server>
</clickhouse>
01-keeper-03-enable-reconfig.xml: |-
<!-- IMPORTANT -->
<!-- This file is auto-generated -->
<!-- Do not edit this file - all changes would be lost -->
<!-- Edit appropriate template in the following folder: -->
<!-- deploy/builder/templates-config -->
<!-- IMPORTANT -->
<clickhouse>
<keeper_server>
<enable_reconfiguration>false</enable_reconfiguration>
</keeper_server>
</clickhouse>
keeperTemplatesdFiles:
readme: |-
Templates in this folder are packaged with an operator and available via 'useTemplate'
keeperUsersdFiles: null
# additionalResources -- list of additional resources to create (processed via `tpl` function),
# useful for creating ClickHouse clusters together with clickhouse-operator.
# check `kubectl explain chi` for details
additionalResources: []
# - |
# apiVersion: v1
# kind: ConfigMap
# metadata:
# name: {{ include "altinity-clickhouse-operator.fullname" . }}-cm
# namespace: {{ .Release.Namespace }}
# namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
# - |
# apiVersion: v1
# kind: Secret
# metadata:
# name: {{ include "altinity-clickhouse-operator.fullname" . }}-s
# namespace: {{ .Release.Namespace }}
# namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
# stringData:
# mykey: my-value
# - |
@@ -652,15 +852,16 @@ additionalResources: []
# kind: ClickHouseInstallation
# metadata:
# name: {{ include "altinity-clickhouse-operator.fullname" . }}-chi
# namespace: {{ .Release.Namespace }}
# namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
# spec:
# configuration:
# clusters:
# - name: default
# layout:
# shardsCount: 1
dashboards:
# dashboards.enabled -- provision grafana dashboards as secrets (can be synced by grafana dashboards sidecar https://github.com/grafana/helm-charts/blob/grafana-6.33.1/charts/grafana/values.yaml#L679 )
# dashboards.enabled -- provision grafana dashboards as configMaps (can be synced by grafana dashboards sidecar https://github.com/grafana/helm-charts/blob/grafana-8.3.4/charts/grafana/values.yaml#L778 )
enabled: false
# dashboards.additionalLabels -- labels to add to a configMap with dashboards
additionalLabels:

View File
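The serviceMonitor endpoints introduced in the values.yaml above can be enabled through chart values; a minimal sketch (the `release: prometheus` label is an assumption — use whatever label your Prometheus instance actually selects on):

```yaml
serviceMonitor:
  enabled: true
  additionalLabels:
    release: prometheus   # assumed: label your Prometheus's serviceMonitorSelector matches
  clickhouseMetrics:
    interval: 15s         # clickhouse-metrics endpoint (port 8888)
  operatorMetrics:
    interval: 30s         # operator-metrics endpoint (port 9999)
```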

@@ -1,7 +1,7 @@
FROM bitnami/node:20.15.1 AS build
WORKDIR /app
ARG COMMIT_REF=cdf9095f50c74505870de337725d2a9d0bd20947
ARG COMMIT_REF=4926bc68fabb0914afab574006643c85a597b371
RUN wget -O- https://github.com/cozystack/kubeapps/archive/${COMMIT_REF}.tar.gz | tar xzf - --strip-components=2 kubeapps-${COMMIT_REF}/dashboard
RUN yarn install --frozen-lockfile

View File

@@ -4,7 +4,7 @@
# syntax = docker/dockerfile:1
FROM alpine AS source
ARG COMMIT_REF=cdf9095f50c74505870de337725d2a9d0bd20947
ARG COMMIT_REF=4926bc68fabb0914afab574006643c85a597b371
RUN apk add --no-cache patch
WORKDIR /source
RUN wget -O- https://github.com/cozystack/kubeapps/archive/${COMMIT_REF}.tar.gz | tar xzf - --strip-components=1

View File

@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
appVersion: v0.24.1
appVersion: v0.27.0
description: 'A Helm chart for deploying the Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png
@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
version: 0.24.1
version: 0.27.0

View File

@@ -1,6 +1,6 @@
# flux-operator
![Version: 0.24.1](https://img.shields.io/badge/Version-0.24.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.24.1](https://img.shields.io/badge/AppVersion-v0.24.1-informational?style=flat-square)
![Version: 0.27.0](https://img.shields.io/badge/Version-0.27.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.27.0](https://img.shields.io/badge/AppVersion-v0.27.0-informational?style=flat-square)
The [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator) provides a
declarative API for the installation and upgrade of CNCF [Flux](https://fluxcd.io) and the
@@ -56,7 +56,7 @@ see the Flux Operator [documentation](https://fluxcd.control-plane.io/operator/)
| rbac.createAggregation | bool | `true` | Grant the Kubernetes view, edit and admin roles access to ResourceSet APIs. |
| readinessProbe | object | `{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":5,"periodSeconds":10}` | Container readiness probe settings. |
| reporting | object | `{"interval":"5m"}` | Flux [reporting](https://fluxcd.control-plane.io/operator/fluxreport/) settings. |
| resources | object | `{"limits":{"cpu":"1000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"64Mi"}}` | Container resources requests and limits settings. |
| resources | object | `{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"64Mi"}}` | Container resources requests and limits settings. |
| securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}` | Container security context settings. The default is compliant with the pod security restricted profile. |
| serviceAccount | object | `{"automount":true,"create":true,"name":""}` | Pod service account settings. The name of the service account defaults to the release name. |
| serviceMonitor | object | `{"create":false,"interval":"60s","labels":{},"scrapeTimeout":"30s"}` | Prometheus Operator scraping settings. |

View File

@@ -85,6 +85,16 @@ spec:
required for object-level workload identity.
This feature is only available in Flux v2.6.0 and later.
type: boolean
size:
description: |-
Size defines the vertical scaling profile of the Flux controllers.
The size is used to determine the concurrency and CPU/Memory limits for the Flux controllers.
Accepted values are: 'small', 'medium' and 'large'.
enum:
- small
- medium
- large
type: string
tenantDefaultServiceAccount:
description: |-
TenantDefaultServiceAccount is the name of the service account

View File
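The new `size` field from the FluxInstance CRD above sits under the cluster configuration (per the README table); a sketch of using it, with illustrative metadata and distribution values:

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  cluster:
    size: medium          # accepted values: small, medium, large
  distribution:
    version: "2.x"
    registry: ghcr.io/fluxcd
```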

@@ -1,5 +1,10 @@
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"type": "object",
"required": [
"resources",
"securityContext"
],
"properties": {
"affinity": {
"default": {
@@ -21,16 +26,23 @@
}
}
},
"type": "object",
"properties": {
"nodeAffinity": {
"type": "object",
"properties": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"type": "object",
"properties": {
"nodeSelectorTerms": {
"type": "array",
"items": {
"type": "object",
"properties": {
"matchExpressions": {
"type": "array",
"items": {
"type": "object",
"properties": {
"key": {
"type": "string"
@@ -39,29 +51,22 @@
"type": "string"
},
"values": {
"type": "array",
"items": {
"type": "string"
},
"type": "array"
}
}
},
"type": "object"
},
"type": "array"
}
}
}
},
"type": "object"
},
"type": "array"
}
}
}
},
"type": "object"
}
}
},
"type": "object"
}
}
},
"type": "object"
}
},
"apiPriority": {
"default": {
@@ -69,6 +74,7 @@
"extraServiceAccounts": [],
"level": "workload-high"
},
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
@@ -79,30 +85,41 @@
"level": {
"type": "string"
}
},
"type": "object"
}
},
"commonAnnotations": {
"properties": {},
"type": "object"
},
"commonLabels": {
"properties": {},
"type": "object"
},
"extraArgs": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
},
"type": "array",
"uniqueItems": true
}
},
"extraEnvs": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
},
}
},
"extraVolumeMounts": {
"type": "array",
"uniqueItems": true
"uniqueItems": true,
"items": {
"type": "object"
}
},
"extraVolumes": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
},
"fullnameOverride": {
"type": "string"
@@ -112,21 +129,25 @@
"type": "boolean"
},
"image": {
"type": "object",
"required": [
"repository"
],
"properties": {
"imagePullPolicy": {
"type": "string",
"enum": [
"IfNotPresent",
"Always",
"Never"
],
"type": "string"
]
},
"pullSecrets": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
},
"type": "array",
"uniqueItems": true
}
},
"repository": {
"type": "string"
@@ -134,11 +155,7 @@
"tag": {
"type": "string"
}
},
"required": [
"repository"
],
"type": "object"
}
},
"installCRDs": {
"default": true,
@@ -153,8 +170,10 @@
"initialDelaySeconds": 15,
"periodSeconds": 20
},
"type": "object",
"properties": {
"httpGet": {
"type": "object",
"properties": {
"path": {
"type": "string"
@@ -162,8 +181,7 @@
"port": {
"type": "integer"
}
},
"type": "object"
}
},
"initialDelaySeconds": {
"type": "integer"
@@ -171,18 +189,18 @@
"periodSeconds": {
"type": "integer"
}
},
"type": "object"
}
},
"logLevel": {
"type": "string",
"enum": [
"debug",
"info",
"error"
],
"type": "string"
]
},
"marketplace": {
"type": "object",
"properties": {
"account": {
"type": "string"
@@ -193,10 +211,13 @@
"type": {
"type": "string"
}
},
"type": "object"
}
},
"multitenancy": {
"type": "object",
"required": [
"defaultServiceAccount"
],
"properties": {
"defaultServiceAccount": {
"type": "string"
@@ -204,26 +225,18 @@
"enabled": {
"type": "boolean"
}
},
"required": [
"defaultServiceAccount"
],
"type": "object"
}
},
"nameOverride": {
"type": "string"
},
"nodeSelector": {
"properties": {},
"type": [
"object"
]
"type": "object"
},
"podSecurityContext": {
"default": {
"fsGroup": 1337
},
"properties": {},
"type": "object"
},
"priorityClassName": {
@@ -231,6 +244,7 @@
"type": "string"
},
"rbac": {
"type": "object",
"properties": {
"create": {
"type": "boolean"
@@ -238,8 +252,7 @@
"createAggregation": {
"type": "boolean"
}
},
"type": "object"
}
},
"readinessProbe": {
"default": {
@@ -250,8 +263,10 @@
"initialDelaySeconds": 5,
"periodSeconds": 10
},
"type": "object",
"properties": {
"httpGet": {
"type": "object",
"properties": {
"path": {
"type": "string"
@@ -259,8 +274,7 @@
"port": {
"type": "integer"
}
},
"type": "object"
}
},
"initialDelaySeconds": {
"type": "integer"
@@ -268,23 +282,24 @@
"periodSeconds": {
"type": "integer"
}
},
"type": "object"
}
},
"reporting": {
"type": "object",
"required": [
"interval"
],
"properties": {
"interval": {
"type": "string"
}
},
"required": [
"interval"
],
"type": "object"
}
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
@@ -292,14 +307,14 @@
"memory": {
"type": "string"
}
},
"type": "object"
}
},
"requests": {
"default": {
"cpu": "100m",
"memory": "64Mi"
},
"type": "object",
"properties": {
"cpu": {
"type": "string"
@@ -307,13 +322,12 @@
"memory": {
"type": "string"
}
},
"type": "object"
}
}
},
"type": "object"
}
},
"securityContext": {
"type": "object",
"properties": {
"allowPrivilegeEscalation": {
"default": false,
@@ -325,16 +339,16 @@
"ALL"
]
},
"type": "object",
"properties": {
"drop": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
},
"type": "array",
"uniqueItems": true
}
}
},
"type": "object"
}
},
"readOnlyRootFilesystem": {
"default": true,
@@ -348,15 +362,14 @@
"default": {
"type": "RuntimeDefault"
},
"type": "object",
"properties": {
"type": {
"type": "string"
}
},
"type": "object"
}
}
},
"type": "object"
}
},
"serviceAccount": {
"default": {
@@ -364,6 +377,7 @@
"create": true,
"name": ""
},
"type": "object",
"properties": {
"automount": {
"type": "boolean"
@@ -374,8 +388,7 @@
"name": {
"type": "string"
}
},
"type": "object"
}
},
"serviceMonitor": {
"default": {
@@ -383,6 +396,7 @@
"interval": "60s",
"scrapeTimeout": "30s"
},
"type": "object",
"properties": {
"create": {
"type": "boolean"
@@ -391,26 +405,19 @@
"type": "string"
},
"labels": {
"properties": {},
"type": "object"
},
"scrapeTimeout": {
"type": "string"
}
},
"type": "object"
}
},
"tolerations": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
},
"type": "array",
"uniqueItems": true
}
}
},
"required": [
"resources",
"securityContext"
],
"type": "object"
}
}

View File

@@ -46,7 +46,7 @@ apiPriority: # @schema default: {"enabled":false,"level":"workload-high","extraS
# -- Container resources requests and limits settings.
resources: # @schema required: true
limits:
cpu: 1000m
cpu: 2000m
memory: 1Gi
requests: # @schema default: {"cpu":"100m","memory":"64Mi"}
cpu: 100m

View File

@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
appVersion: v0.24.1
appVersion: v0.27.0
description: 'A Helm chart for deploying a Flux instance managed by Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png
@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
version: 0.24.1
version: 0.27.0

View File

@@ -1,6 +1,6 @@
# flux-instance
![Version: 0.24.1](https://img.shields.io/badge/Version-0.24.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.24.1](https://img.shields.io/badge/AppVersion-v0.24.1-informational?style=flat-square)
![Version: 0.27.0](https://img.shields.io/badge/Version-0.27.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.27.0](https://img.shields.io/badge/AppVersion-v0.27.0-informational?style=flat-square)
This chart is a thin wrapper around the `FluxInstance` custom resource, which is
used by the [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator)
@@ -37,7 +37,9 @@ helm -n flux-system uninstall flux
| commonAnnotations | object | `{}` | Common annotations to add to all deployed objects including pods. |
| commonLabels | object | `{}` | Common labels to add to all deployed objects including pods. |
| fullnameOverride | string | `"flux"` | |
| instance.cluster | object | `{"domain":"cluster.local","multitenant":false,"networkPolicy":true,"tenantDefaultServiceAccount":"default","type":"kubernetes"}` | Cluster https://fluxcd.control-plane.io/operator/fluxinstance/#cluster-configuration |
| healthcheck.enabled | bool | `false` | Enable post-install and post-upgrade health checks. |
| healthcheck.timeout | string | `"5m"` | Health check timeout in Go duration format. |
| instance.cluster | object | `{"domain":"cluster.local","multitenant":false,"networkPolicy":true,"size":"","tenantDefaultServiceAccount":"default","type":"kubernetes"}` | Cluster https://fluxcd.control-plane.io/operator/fluxinstance/#cluster-configuration |
| instance.commonMetadata | object | `{"annotations":{},"labels":{}}` | Common metadata https://fluxcd.control-plane.io/operator/fluxinstance/#common-metadata |
| instance.components | list | `["source-controller","kustomize-controller","helm-controller","notification-controller"]` | Components https://fluxcd.control-plane.io/operator/fluxinstance/#components-configuration |
| instance.distribution | object | `{"artifact":"oci://ghcr.io/controlplaneio-fluxcd/flux-operator-manifests:latest","artifactPullSecret":"","imagePullSecret":"","registry":"ghcr.io/fluxcd","version":"2.x"}` | Distribution https://fluxcd.control-plane.io/operator/fluxinstance/#distribution-configuration |

View File
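The health-check hook documented in the README rows above is driven by two values; a minimal sketch following the table (other `healthcheck.*` fields fall back to chart defaults):

```yaml
healthcheck:
  enabled: true   # run a post-install/post-upgrade Job that waits for the FluxInstance
  timeout: 5m     # Go duration format
```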

@@ -0,0 +1,78 @@
{{- if .Values.healthcheck.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}-healthcheck"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-weight: "-5"
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
{{- with .Values.healthcheck.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ .Values.healthcheck.serviceAccount.name }}
{{- with .Values.healthcheck.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.healthcheck.hostNetwork }}
hostNetwork: true
{{- end }}
containers:
- name: healthcheck
image: "{{ .Values.healthcheck.image.repository }}:{{ .Values.healthcheck.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: "{{ .Values.healthcheck.image.imagePullPolicy }}"
args:
- wait
- instance
- {{ include "flux-instance.fullname" . }}
- --namespace={{ .Release.Namespace }}
- --timeout={{ .Values.healthcheck.timeout }}
{{- range .Values.healthcheck.extraArgs }}
- {{ . }}
{{- end }}
{{- with .Values.healthcheck.envs }}
env:
{{- toYaml . | nindent 12 }}
{{- end }}
securityContext:
{{- toYaml .Values.healthcheck.securityContext | nindent 12 }}
resources:
{{- toYaml .Values.healthcheck.resources | nindent 12 }}
{{- with .Values.healthcheck.volumeMounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.healthcheck.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.healthcheck.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.healthcheck.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.healthcheck.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

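For a release named `flux` in the `flux-system` namespace with default values, the container section of the healthcheck Job above renders approximately as follows (release name and namespace are assumptions for illustration; the image tag falls back to the chart `appVersion`):

```yaml
containers:
  - name: healthcheck
    image: "ghcr.io/controlplaneio-fluxcd/flux-operator-cli:<appVersion>"
    imagePullPolicy: "IfNotPresent"
    args:
      - wait
      - instance
      - flux
      - --namespace=flux-system
      - --timeout=5m
```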

@@ -24,7 +24,12 @@ spec:
imagePullSecret: {{ .Values.instance.distribution.imagePullSecret }}
{{- end }}
components: {{ .Values.instance.components | toYaml | nindent 4 }}
cluster:
{{- range $key, $value := .Values.instance.cluster }}
{{- if not (and (kindIs "string" $value) (eq $value "")) }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
{{- if or .Values.instance.commonMetadata.annotations .Values.instance.commonMetadata.labels }}
commonMetadata:
{{- with .Values.instance.commonMetadata.annotations }}

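With the default cluster values, the range loop above skips the empty `size` string and renders roughly the following (Go templates iterate map keys in sorted order):

```yaml
cluster:
  domain: cluster.local
  multitenant: false
  networkPolicy: true
  tenantDefaultServiceAccount: default
  type: kubernetes
```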

@@ -0,0 +1,17 @@
{{- if .Values.healthcheck.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.healthcheck.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "flux-instance.labels" . | nindent 4 }}
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.commonAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.healthcheck.serviceAccount.automount }}
{{- end }}


@@ -1,20 +1,275 @@
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"type": "object",
"properties": {
"commonAnnotations": {
"properties": {},
"type": "object"
},
"commonLabels": {
"properties": {},
"type": "object"
},
"fullnameOverride": {
"type": "string"
},
"healthcheck": {
"type": "object",
"required": [
"resources",
"securityContext"
],
"properties": {
"affinity": {
"default": {
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/os",
"operator": "In",
"values": [
"linux"
]
}
]
}
]
}
}
},
"type": "object",
"properties": {
"nodeAffinity": {
"type": "object",
"properties": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"type": "object",
"properties": {
"nodeSelectorTerms": {
"type": "array",
"items": {
"type": "object",
"properties": {
"matchExpressions": {
"type": "array",
"items": {
"type": "object",
"properties": {
"key": {
"type": "string"
},
"operator": {
"type": "string"
},
"values": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
}
}
}
}
}
}
}
}
},
"enabled": {
"type": "boolean"
},
"envs": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
},
"extraArgs": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
},
"hostNetwork": {
"default": false,
"type": "boolean"
},
"image": {
"type": "object",
"required": [
"repository"
],
"properties": {
"imagePullPolicy": {
"type": "string",
"enum": [
"IfNotPresent",
"Always",
"Never"
]
},
"pullSecrets": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
},
"repository": {
"type": "string"
},
"tag": {
"type": "string"
}
}
},
"nodeSelector": {
"type": "object"
},
"podSecurityContext": {
"default": {
"fsGroup": 1337
},
"type": "object"
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
},
"requests": {
"default": {
"cpu": "100m",
"memory": "64Mi"
},
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
}
}
},
"securityContext": {
"type": "object",
"properties": {
"allowPrivilegeEscalation": {
"default": false,
"type": "boolean"
},
"capabilities": {
"default": {
"drop": [
"ALL"
]
},
"type": "object",
"properties": {
"drop": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
}
}
},
"readOnlyRootFilesystem": {
"default": true,
"type": "boolean"
},
"runAsNonRoot": {
"default": true,
"type": "boolean"
},
"seccompProfile": {
"default": {
"type": "RuntimeDefault"
},
"type": "object",
"properties": {
"type": {
"type": "string"
}
}
}
}
},
"serviceAccount": {
"default": {
"automount": true,
"create": false,
"name": "flux-operator"
},
"type": "object",
"properties": {
"automount": {
"type": "boolean"
},
"create": {
"type": "boolean"
},
"name": {
"type": "string"
}
}
},
"timeout": {
"default": "5m",
"type": "string"
},
"tolerations": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
},
"volumeMounts": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
},
"volumes": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object"
}
}
}
},
"instance": {
"type": "object",
"required": [
"distribution",
"cluster"
],
"properties": {
"cluster": {
"type": "object",
"properties": {
"domain": {
"type": "string"
@@ -25,37 +280,46 @@
"networkPolicy": {
"type": "boolean"
},
"size": {
"type": "string",
"enum": [
"",
"small",
"medium",
"large"
]
},
"tenantDefaultServiceAccount": {
"type": "string"
},
"type": {
"type": "string",
"enum": [
"kubernetes",
"openshift",
"aws",
"azure",
"gcp"
]
}
}
},
"commonMetadata": {
"type": "object",
"properties": {
"annotations": {
"properties": {},
"type": "object"
},
"labels": {
"properties": {},
"type": "object"
}
}
},
"components": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string",
"enum": [
"source-controller",
"kustomize-controller",
@@ -63,13 +327,15 @@
"notification-controller",
"image-reflector-controller",
"image-automation-controller"
]
}
},
"distribution": {
"type": "object",
"required": [
"version",
"registry"
],
"properties": {
"artifact": {
"type": "string"
@@ -86,39 +352,35 @@
"version": {
"type": "string"
}
}
},
"kustomize": {
"type": "object",
"properties": {
"patches": {
"type": "array",
"items": {
"type": "object"
}
}
}
},
"sharding": {
"type": "object",
"properties": {
"key": {
"type": "string"
},
"shards": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"storage": {
"type": "object",
"properties": {
"class": {
"type": "string"
@@ -126,21 +388,21 @@
"size": {
"type": "string"
}
}
},
"sync": {
"type": "object",
"properties": {
"interval": {
"type": "string"
},
"kind": {
"type": "string",
"enum": [
"GitRepository",
"OCIRepository",
"Bucket"
]
},
"name": {
"type": "string"
@@ -160,19 +422,12 @@
"url": {
"type": "string"
}
}
}
}
},
"nameOverride": {
"type": "string"
}
}
}


@@ -20,6 +20,7 @@ instance:
# -- Cluster https://fluxcd.control-plane.io/operator/fluxinstance/#cluster-configuration
cluster: # @schema required: true
type: kubernetes # @schema enum:[kubernetes,openshift,aws,azure,gcp]
size: "" # @schema enum:['',small,medium,large]
domain: "cluster.local"
networkPolicy: true
multitenant: false
@@ -35,7 +36,7 @@ instance:
# -- Sharding https://fluxcd.control-plane.io/operator/fluxinstance/#sharding-configuration
sharding: # @schema required: false
key: "sharding.fluxcd.io/key"
shards: [ ] # @schema item: string
# -- Sync https://fluxcd.control-plane.io/operator/fluxinstance/#sync-configuration
sync: # @schema required: false
interval: 1m
@@ -48,10 +49,101 @@ instance:
provider: ""
kustomize: # @schema required: false
# -- Kustomize patches https://fluxcd.control-plane.io/operator/fluxinstance/#kustomize-patches
patches: [ ] # @schema item: object
# -- Common annotations to add to all deployed objects including pods.
commonAnnotations: { }
# -- Common labels to add to all deployed objects including pods.
commonLabels: { }
# Healthcheck job settings.
healthcheck:
# -- Enable post-install and post-upgrade health checks.
enabled: false
# -- Health check timeout in Go duration format.
timeout: 5m # @schema default: "5m"
# Container image settings.
# The image tag defaults to the chart appVersion.
# @ignore
image:
repository: ghcr.io/controlplaneio-fluxcd/flux-operator-cli # @schema required: true
tag: ""
pullSecrets: [ ] # @schema item: object ; uniqueItems: true
imagePullPolicy: IfNotPresent # @schema enum:[IfNotPresent, Always, Never]
# Container resources requests and limits settings.
# @ignore
resources: # @schema required: true
limits:
cpu: 1000m
memory: 1Gi
requests: # @schema default: {"cpu":"100m","memory":"64Mi"}
cpu: 100m
memory: 64Mi
# Pod service account settings.
# The name of the service account defaults to the release name.
# @ignore
serviceAccount: # @schema default: {"create":false,"automount":true,"name":"flux-operator"}
create: false
automount: true
name: "flux-operator"
# Pod security context settings.
# @ignore
podSecurityContext: { } # @schema default: {"fsGroup":1337}
# Container security context settings.
# The default is compliant with the pod security restricted profile.
# @ignore
securityContext: # @schema required: true
runAsNonRoot: true # @schema default: true
readOnlyRootFilesystem: true # @schema default: true
allowPrivilegeEscalation: false # @schema default: false
capabilities: # @schema default: {"drop":["ALL"]}
drop: # @schema item: string ; uniqueItems: true
- "ALL"
seccompProfile: # @schema default: {"type":"RuntimeDefault"}
type: "RuntimeDefault"
# Pod affinity and anti-affinity settings.
# @ignore
affinity: # @schema default: {"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/os","operator":"In","values":["linux"]}]}]}}}
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
# Pod tolerations settings.
# @ignore
tolerations: [ ] # @schema item: object ; uniqueItems: true
# Pod Node Selector settings.
# @ignore
nodeSelector: { } # @schema type: object
# If `true`, the container ports (`8080` and `8081`) are exposed on the host network.
# @ignore
hostNetwork: false # @schema default: false
# Pod extra volumes.
# @ignore
volumes: [ ] # @schema item: object ; uniqueItems: true
# Container extra volume mounts.
# @ignore
volumeMounts: [ ] # @schema item: object ; uniqueItems: true
# Container extra environment variables.
# @ignore
envs: [ ] # @schema item: object ; uniqueItems: true
# Container extra arguments.
# @ignore
extraArgs: [ ] # @schema item: string ; uniqueItems: true
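Putting the healthcheck settings above together, a minimal values override that enables the post-install/post-upgrade check with a chart-managed service account might look like this (the timeout value is chosen purely for illustration):

```yaml
healthcheck:
  enabled: true
  timeout: 10m
  serviceAccount:
    create: true
    name: flux-operator
```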