Merge remote-tracking branch 'origin/main' into pr/1482-flux-kingdonb

This commit is contained in:
Timofei Larkin
2025-10-27 17:33:08 +03:00
235 changed files with 14346 additions and 1217 deletions

2
.github/CODEOWNERS vendored
View File

@@ -1 +1 @@
* @kvaps @lllamnyp @klinch0
* @kvaps @lllamnyp @nbykov0

50
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,50 @@
---
name: Bug report
about: Create a report to help us improve
labels: 'bug'
assignees: ''
---
<!--
Thank you for submitting a bug report!
Please fill in the fields below to help us investigate the problem.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**Environment**
- Cozystack version
- Provider: on-prem, Hetzner, and so on
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
When taking the steps to reproduce, what should have happened instead?
**Actual behavior**
A clear and concise description of what actually happens, including error messages, unexpected results, or incorrect functionality observed.
**Logs**
```
Paste any relevant logs here. Please redact tokens, passwords, private keys.
```
**Screenshots**
If applicable, add screenshots to help explain the problem.
**Additional context**
Add any other context about the problem here.
**Checklist**
- [ ] I have checked the documentation
- [ ] I have searched for similar issues
- [ ] I have included all required information
- [ ] I have provided clear steps to reproduce
- [ ] I have included relevant logs

View File

@@ -1,7 +1,8 @@
name: Pull Request
env:
REGISTRY: ${{ vars.OCIR_REPO }}
# TODO: unhardcode this
REGISTRY: iad.ocir.io/idyksih5sir9/cozystack
on:
pull_request:
types: [opened, synchronize, reopened]

View File

@@ -1,3 +1,22 @@
# Code of Conduct
Cozystack follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
# Cozystack Vendor Neutrality Manifesto
Cozystack exists for the cloud-native community. We are committed to a project culture where no single company, product, or commercial agenda directs our roadmap, governance, brand, or releases. Our North Star is user value, technical excellence, and open collaboration under the CNCF umbrella.
## Our Commitments
- **Community-first:** Decisions prioritize the broader community over any vendor interest.
- **Open collaboration:** Ideas, discussions, and outcomes happen in public spaces; contributions are welcomed from all.
- **Merit over affiliation:** Proposals are evaluated on technical merit and user impact, not on who submits them.
- **Inclusive stewardship:** Leadership and maintenance are open to contributors who demonstrate sustained, constructive impact.
- **Technology choice:** We prefer open, pluggable designs that interoperate with multiple ecosystems and providers.
- **Neutral brand & voice:** Our name, logo, website, and documentation do not imply endorsement or preference for any vendor.
- **Transparent practices:** Funding acknowledgments, partnerships, and potential conflicts are communicated openly.
- **User trust:** Security handling, releases, and communications aim to be timely, transparent, and fair to all users.
By contributing to Cozystack, we affirm these principles and work together to keep the project open, welcoming, and vendor-neutral.
*— The Cozystack community*

151
CONTRIBUTOR_LADDER.md Normal file
View File

@@ -0,0 +1,151 @@
# Contributor Ladder
* [Contributor Ladder](#contributor-ladder)
* [Community Participant](#community-participant)
* [Contributor](#contributor)
* [Reviewer](#reviewer)
* [Maintainer](#maintainer)
* [Inactivity](#inactivity)
* [Involuntary Removal](#involuntary-removal-or-demotion)
* [Stepping Down/Emeritus Process](#stepping-downemeritus-process)
* [Contact](#contact)
## Contributor Ladder
Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
### Community Participant
Description: A Community Participant engages with the project and its community, contributing their time, thoughts, etc. Community participants are usually users who have stopped being anonymous and started being active in project discussions.
* Responsibilities:
* Must follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* How users can get involved with the community:
* Participating in community discussions
* Helping other users
* Submitting bug reports
* Commenting on issues
* Trying out new releases
* Attending community events
### Contributor
Description: A Contributor contributes directly to the project and adds value to it. Contributions need not be code. People at the Contributor level may be new contributors, or they may only contribute occasionally.
* Responsibilities include:
* Follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* Follow the project [contributing guide](https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md)
* Requirements (one or several of the below):
* Report and sometimes resolve issues
* Occasionally submit PRs
* Contribute to the documentation
* Show up at meetings, take notes
* Answer questions from other community members
* Submit feedback on issues and PRs
* Test releases and patches and submit reviews
* Run or help run events
* Promote the project in public
* Help run the project infrastructure
* Privileges:
* Invitations to contributor events
* Eligible to become a Maintainer
### Reviewer
Description: A Reviewer has responsibility for specific code, documentation, test, or other project areas. They are collectively responsible, with other Reviewers, for reviewing all changes to those areas and indicating whether those changes are ready to merge. They have a track record of contribution and review in the project.
Reviewers are responsible for a "specific area." This can be a specific code directory, driver, chapter of the docs, test job, event, or other clearly-defined project component that is smaller than an entire repository or subproject. Most often it is one or a set of directories in one or more Git repositories. The "specific area" below refers to this area of responsibility.
Reviewers have all the rights and responsibilities of a Contributor, plus:
* Responsibilities include:
* Continues to contribute regularly, as demonstrated by having at least 15 PRs a year according to [Cozystack devstats](https://cozystack.devstats.cncf.io).
* Following the reviewing guide
* Reviewing most Pull Requests against their specific areas of responsibility
* Reviewing at least 40 PRs per year
* Helping other contributors become reviewers
* Requirements:
* Must have successful contributions to the project, including at least one of the following:
* 10 accepted PRs,
* Reviewed 20 PRs,
* Resolved and closed 20 Issues,
* Become responsible for a key project management area,
* Or some equivalent combination of contributions
* Must have been contributing for at least 6 months
* Must be actively contributing to at least one project area
* Must have two sponsors who are also Reviewers or Maintainers, at least one of whom does not work for the same employer
* Has reviewed, or helped review, at least 20 Pull Requests
* Has analyzed and resolved test failures in their specific area
* Has demonstrated an in-depth knowledge of the specific area
* Commits to being responsible for that specific area
* Is supportive of new and occasional contributors and helps get useful PRs in shape to commit
* Additional privileges:
* Has GitHub or CI/CD rights to approve pull requests in specific directories
* Can recommend and review other contributors to become Reviewers
* May be assigned Issues and Reviews
* May give commands to CI/CD automation
* Can recommend other contributors to become Reviewers
The process of becoming a Reviewer is:
1. The contributor is nominated by opening a PR against the appropriate repository, which adds their GitHub username to the OWNERS file for one or more directories.
2. At least two members of the team that owns that repository or main directory, who are already Approvers, approve the PR.
### Maintainer
Description: Maintainers are very established contributors who are responsible for the entire project. As such, they have the ability to approve PRs against any area of the project, and are expected to participate in making decisions about the strategy and priorities of the project.
A Maintainer must meet the responsibilities and requirements of a Reviewer, plus:
* Responsibilities include:
* Reviewing at least 40 PRs per year, especially PRs that involve multiple parts of the project
* Mentoring new Reviewers
* Writing refactoring PRs
* Participating in CNCF maintainer activities
* Determining strategy and policy for the project
* Participating in, and leading, community meetings
* Requirements
* Experience as a Reviewer for at least 6 months
* Demonstrates a broad knowledge of the project across multiple areas
* Is able to exercise judgment for the good of the project, independent of their employer, friends, or team
* Mentors other contributors
* Can commit to spending at least 10 hours per month working on the project
* Additional privileges:
* Approve PRs to any area of the project
* Represent the project in public as a Maintainer
* Communicate with the CNCF on behalf of the project
* Have a vote in Maintainer decision-making meetings
Process of becoming a maintainer:
1. Any current Maintainer may nominate a current Reviewer to become a new Maintainer, by opening a PR against the root of the cozystack repository adding the nominee as an Approver in the [MAINTAINERS](https://github.com/cozystack/cozystack/blob/main/MAINTAINERS.md) file.
2. The nominee will add a comment to the PR testifying that they agree to all requirements of becoming a Maintainer.
3. A majority of the current Maintainers must then approve the PR.
## Inactivity
It is important for contributors to be and stay active to set an example and show commitment to the project. Inactivity is harmful to the project as it may lead to unexpected delays, contributor attrition, and a loss of trust in the project.
* Inactivity is measured by:
* Periods of no contributions for longer than 6 months
* Periods of no communication for longer than 3 months
* Consequences of being inactive include:
* Involuntary removal or demotion
* Being asked to move to Emeritus status
## Involuntary Removal or Demotion
Involuntary removal or demotion of a contributor happens when responsibilities and requirements aren't being met. This may include repeated patterns of inactivity, an extended period of inactivity, a period of failing to meet the requirements of your role, and/or a violation of the Code of Conduct. This process is important because it protects the community and its deliverables while also opening up opportunities for new contributors to step in.
Involuntary removal or demotion is handled through a vote by a majority of the current Maintainers.
## Stepping Down/Emeritus Process
If and when contributors' commitment levels change, they can consider stepping down (moving down the contributor ladder) or moving to emeritus status (stepping away from the project completely).
Contact the Maintainers about changing to Emeritus status, or reducing your contributor level.
## Contact
* For inquiries, please reach out to: @kvaps, @tym83

View File

@@ -7,6 +7,6 @@
| Kingdon Barrett | [@kingdonb](https://github.com/kingdonb) | Urmanac | FluxCD and flux-operator |
| Timofei Larkin | [@lllamnyp](https://github.com/lllamnyp) | 3commas | Etcd-operator Lead |
| Artem Bortnikov | [@aobort](https://github.com/aobort) | Timescale | Etcd-operator Lead |
| Andrei Gumilev | [@chumkaska](https://github.com/chumkaska) | Ænix | Platform Documentation |
| Timur Tukaev | [@tym83](https://github.com/tym83) | Ænix | Cozystack Website, Marketing, Community Management |
| Kirill Klinchenkov | [@klinch0](https://github.com/klinch0) | Ænix | Core Maintainer |
| Nikita Bykov | [@nbykov0](https://github.com/nbykov0) | Ænix | Maintainer of ARM and stuff |

View File

@@ -15,6 +15,7 @@ build: build-deps
make -C packages/extra/monitoring image
make -C packages/system/cozystack-api image
make -C packages/system/cozystack-controller image
make -C packages/system/lineage-controller-webhook image
make -C packages/system/cilium image
make -C packages/system/kubeovn image
make -C packages/system/kubeovn-webhook image

View File

@@ -51,7 +51,11 @@ type CozystackResourceDefinitionSpec struct {
Release CozystackResourceDefinitionRelease `json:"release"`
// Secret selectors
Secrets CozystackResourceDefinitionSecrets `json:"secrets,omitempty"`
Secrets CozystackResourceDefinitionResources `json:"secrets,omitempty"`
// Service selectors
Services CozystackResourceDefinitionResources `json:"services,omitempty"`
// Ingress selectors
Ingresses CozystackResourceDefinitionResources `json:"ingresses,omitempty"`
// Dashboard configuration for this resource
Dashboard *CozystackResourceDefinitionDashboard `json:"dashboard,omitempty"`
@@ -95,16 +99,46 @@ type CozystackResourceDefinitionRelease struct {
Prefix string `json:"prefix"`
}
type CozystackResourceDefinitionSecrets struct {
// Exclude contains an array of label selectors that target secrets.
// If a secret matches the selector in any of the elements in the array, it is
// CozystackResourceDefinitionResourceSelector extends metav1.LabelSelector with resourceNames support.
// A resource matches this selector only if it satisfies ALL criteria:
// - Label selector conditions (matchExpressions and matchLabels)
// - AND has a name that matches one of the names in resourceNames (if specified)
//
// The resourceNames field supports Go templates with the following variables available:
// - {{ .name }}: The name of the managing application (from apps.cozystack.io/application.name)
// - {{ .kind }}: The lowercased kind of the managing application (from apps.cozystack.io/application.kind)
// - {{ .namespace }}: The namespace of the resource being processed
//
// Example YAML:
// secrets:
// include:
// - matchExpressions:
// - key: badlabel
// operator: DoesNotExist
// matchLabels:
// goodlabel: goodvalue
// resourceNames:
// - "{{ .name }}-secret"
// - "{{ .kind }}-{{ .name }}-tls"
// - "specificname"
type CozystackResourceDefinitionResourceSelector struct {
metav1.LabelSelector `json:",inline"`
// ResourceNames is a list of resource names to match
// If specified, the resource must have one of these exact names to match the selector
// +optional
ResourceNames []string `json:"resourceNames,omitempty"`
}
type CozystackResourceDefinitionResources struct {
// Exclude contains an array of resource selectors that target resources.
// If a resource matches the selector in any of the elements in the array, it is
// hidden from the user, regardless of the matches in the include array.
Exclude []*metav1.LabelSelector `json:"exclude,omitempty"`
// Include contains an array of label selectors that target secrets.
// If a secret matches the selector in any of the elements in the array, and
// matches none of the selectors in the exclude array that secret is marked
// as a tenant secret and is visible to users.
Include []*metav1.LabelSelector `json:"include,omitempty"`
Exclude []*CozystackResourceDefinitionResourceSelector `json:"exclude,omitempty"`
// Include contains an array of resource selectors that target resources.
// If a resource matches the selector in any of the elements in the array, and
// matches none of the selectors in the exclude array that resource is marked
// as a tenant resource and is visible to users.
Include []*CozystackResourceDefinitionResourceSelector `json:"include,omitempty"`
}
// ---- Dashboard types ----
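For illustration, here is a minimal sketch of how one selector entry might be evaluated under the semantics described in the comment above: label conditions are mandatory, and if `resourceNames` is set, one rendered name template must equal the resource's name. The helper below is hypothetical, not the controller's actual code; only the matching rules come from the doc comment.
```
package lineage

import (
	"bytes"
	"text/template"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// selectorMatches is a hypothetical helper: it reports whether a resource
// with the given name and labels satisfies one selector entry. vars carries
// the template variables, e.g. {"name": ..., "kind": ..., "namespace": ...}.
func selectorMatches(ls metav1.LabelSelector, resourceNames []string, name string, lbls, vars map[string]string) (bool, error) {
	sel, err := metav1.LabelSelectorAsSelector(&ls)
	if err != nil {
		return false, err
	}
	if !sel.Matches(labels.Set(lbls)) {
		return false, nil // label conditions are mandatory
	}
	if len(resourceNames) == 0 {
		return true, nil // no name restriction configured
	}
	for _, raw := range resourceNames {
		t, err := template.New("rn").Parse(raw)
		if err != nil {
			return false, err
		}
		var buf bytes.Buffer
		if err := t.Execute(&buf, vars); err != nil {
			return false, err
		}
		if buf.String() == name {
			return true, nil // one rendered name matched exactly
		}
	}
	return false, nil
}
```
A resource matching any `Include` entry and no `Exclude` entry would then be treated as tenant-visible, per the comment on `CozystackResourceDefinitionResources`.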

View File

@@ -22,7 +22,6 @@ package v1alpha1
import (
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
@@ -175,38 +174,59 @@ func (in *CozystackResourceDefinitionRelease) DeepCopy() *CozystackResourceDefin
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionSecrets) DeepCopyInto(out *CozystackResourceDefinitionSecrets) {
func (in *CozystackResourceDefinitionResourceSelector) DeepCopyInto(out *CozystackResourceDefinitionResourceSelector) {
*out = *in
in.LabelSelector.DeepCopyInto(&out.LabelSelector)
if in.ResourceNames != nil {
in, out := &in.ResourceNames, &out.ResourceNames
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionResourceSelector.
func (in *CozystackResourceDefinitionResourceSelector) DeepCopy() *CozystackResourceDefinitionResourceSelector {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionResourceSelector)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionResources) DeepCopyInto(out *CozystackResourceDefinitionResources) {
*out = *in
if in.Exclude != nil {
in, out := &in.Exclude, &out.Exclude
*out = make([]*v1.LabelSelector, len(*in))
*out = make([]*CozystackResourceDefinitionResourceSelector, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(v1.LabelSelector)
*out = new(CozystackResourceDefinitionResourceSelector)
(*in).DeepCopyInto(*out)
}
}
}
if in.Include != nil {
in, out := &in.Include, &out.Include
*out = make([]*v1.LabelSelector, len(*in))
*out = make([]*CozystackResourceDefinitionResourceSelector, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(v1.LabelSelector)
*out = new(CozystackResourceDefinitionResourceSelector)
(*in).DeepCopyInto(*out)
}
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionSecrets.
func (in *CozystackResourceDefinitionSecrets) DeepCopy() *CozystackResourceDefinitionSecrets {
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionResources.
func (in *CozystackResourceDefinitionResources) DeepCopy() *CozystackResourceDefinitionResources {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionSecrets)
out := new(CozystackResourceDefinitionResources)
in.DeepCopyInto(out)
return out
}
@@ -217,6 +237,8 @@ func (in *CozystackResourceDefinitionSpec) DeepCopyInto(out *CozystackResourceDe
out.Application = in.Application
in.Release.DeepCopyInto(&out.Release)
in.Secrets.DeepCopyInto(&out.Secrets)
in.Services.DeepCopyInto(&out.Services)
in.Ingresses.DeepCopyInto(&out.Ingresses)
if in.Dashboard != nil {
in, out := &in.Dashboard, &out.Dashboard
*out = new(CozystackResourceDefinitionDashboard)

View File

@@ -39,7 +39,6 @@ import (
cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
"github.com/cozystack/cozystack/internal/controller"
"github.com/cozystack/cozystack/internal/controller/dashboard"
lcw "github.com/cozystack/cozystack/internal/lineagecontrollerwebhook"
"github.com/cozystack/cozystack/internal/telemetry"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
@@ -222,20 +221,6 @@ func main() {
os.Exit(1)
}
// special one that's both a webhook and a reconciler
lineageControllerWebhook := &lcw.LineageControllerWebhook{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}
if err := lineageControllerWebhook.SetupWithManagerAsController(mgr); err != nil {
setupLog.Error(err, "unable to setup controller", "controller", "LineageController")
os.Exit(1)
}
if err := lineageControllerWebhook.SetupWithManagerAsWebhook(mgr); err != nil {
setupLog.Error(err, "unable to setup webhook", "webhook", "LineageWebhook")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {

View File

@@ -0,0 +1,179 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"crypto/tls"
"flag"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
lcw "github.com/cozystack/cozystack/internal/lineagecontrollerwebhook"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
var secureMetrics bool
var enableHTTP2 bool
var tlsOpts []func(*tls.Config)
flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&secureMetrics, "metrics-secure", true,
"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
flag.BoolVar(&enableHTTP2, "enable-http2", false,
"If set, HTTP/2 will be enabled for the metrics and webhook servers")
opts := zap.Options{
Development: false,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// if the enable-http2 flag is false (the default), http/2 should be disabled
// due to its vulnerabilities. More specifically, disabling http/2 will
// prevent from being vulnerable to the HTTP/2 Stream Cancellation and
// Rapid Reset CVEs. For more information see:
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
// - https://github.com/advisories/GHSA-4374-p667-p6c8
disableHTTP2 := func(c *tls.Config) {
setupLog.Info("disabling http/2")
c.NextProtos = []string{"http/1.1"}
}
if !enableHTTP2 {
tlsOpts = append(tlsOpts, disableHTTP2)
}
webhookServer := webhook.NewServer(webhook.Options{
TLSOpts: tlsOpts,
})
// Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
// More info:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
// - https://book.kubebuilder.io/reference/metrics.html
metricsServerOptions := metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
TLSOpts: tlsOpts,
}
if secureMetrics {
// FilterProvider is used to protect the metrics endpoint with authn/authz.
// These configurations ensure that only authorized users and service accounts
// can access the metrics endpoint. RBAC is configured in 'config/rbac/kustomization.yaml'. More info:
// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization
// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
// generate self-signed certificates for the metrics server. While convenient for development and testing,
// this setup is not recommended for production.
}
// Configure rate limiting for the Kubernetes client
config := ctrl.GetConfigOrDie()
config.QPS = 50.0 // Increased from default 5.0
config.Burst = 100 // Increased from default 10
mgr, err := ctrl.NewManager(config, ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "8796f12d.cozystack.io",
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
// speeds up voluntary leader transitions as the new leader doesn't have to wait
// LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
// the manager stops, so it would be fine to enable this option. However,
// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops, then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
lineageControllerWebhook := &lcw.LineageControllerWebhook{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}
if err := lineageControllerWebhook.SetupWithManagerAsController(mgr); err != nil {
setupLog.Error(err, "unable to setup controller", "controller", "LineageController")
os.Exit(1)
}
if err := lineageControllerWebhook.SetupWithManagerAsWebhook(mgr); err != nil {
setupLog.Error(err, "unable to setup webhook", "webhook", "LineageWebhook")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}

View File

@@ -0,0 +1,18 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0..
-->
## Features and Improvements
## Security
## Fixes
## Dependencies
## Development, Testing, and CI/CD
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.36.0...main

View File

@@ -17,4 +17,4 @@ https://github.com/cozystack/cozystack/releases/tag/v0..
---
**Full Changelog**: **Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.35.0
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.35.0

View File

@@ -0,0 +1,3 @@
# Changes after v0.37.0
* [lineage] Break webhook out into a separate daemonset. Reduce unnecessary webhook calls by marking handled resources and excluding them from consideration by the webhook's object selector (@lllamnyp in #1515).

View File

@@ -0,0 +1,10 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.3
-->
## Fixes
* [seaweedfs] Add a liveness check for the SeaweedFS S3 endpoint to improve health monitoring and enable automatic recovery. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1368)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.2...v0.35.3

View File

@@ -0,0 +1,14 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.4
-->
## Fixes
* [virtual-machine] Fix the regression in VM update hook introduced in https://github.com/cozystack/cozystack/pull/1169 by targeting the correct API resource and avoiding conflicts with KubeVirt resources. (@kvaps in https://github.com/cozystack/cozystack/pull/1376, backported in https://github.com/cozystack/cozystack/pull/1377)
* [cozy-lib] Add the missing template `cozy-lib.resources.flatten`. (@kvaps in https://github.com/cozystack/cozystack/pull/1372, backported in https://github.com/cozystack/cozystack/pull/1375)
* [platform] Fix a boolean override bug in Helm merge. ConfigMap values now correctly take precedence over bundle defaults. (@dyudin0821 in https://github.com/cozystack/cozystack/pull/1385, backported in https://github.com/cozystack/cozystack/pull/1388)
* [seaweedfs] Resolve connectivity issues in SeaweedFS. Increase Nginx ingress timeouts for SeaweedFS S3 endpoint. (@kvaps in https://github.com/cozystack/cozystack/pull/1386, backported in https://github.com/cozystack/cozystack/pull/1390)
* [dx] Remove the BUILDER and PLATFORM autodetect logic in Makefiles. (@kvaps in https://github.com/cozystack/cozystack/pull/1391, backported in https://github.com/cozystack/cozystack/pull/1392)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.3...v0.35.4

View File

@@ -0,0 +1,11 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.5
-->
## Fixes
* [etcd] Ensure that TopologySpreadConstraints consistently target etcd pods. (@kvaps in https://github.com/cozystack/cozystack/pull/1405, backported in https://github.com/cozystack/cozystack/pull/1406)
* [tests] Add resource quota for testing namespaces. (@IvanHunters in https://github.com/cozystack/cozystack/commit/4982cdf5024c8bb9aa794b91d55545ea6b105d17)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.4...v0.35.5

117
docs/changelogs/v0.36.0.md Normal file
View File

@@ -0,0 +1,117 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.0
-->
## Feature Highlights
Release v0.36.0 focuses on the stability, observability, and flexible configuration of managed applications.
### Per-Namespace Resource Limits for Tenants
Resource management for Cozystack tenants has received a final patch and has now graduated to a stable feature.
Platform administrators can define explicit CPU, memory, and storage limits for each tenant's namespace
via the tenant specification.
This prevents any single tenant from consuming more than their share of cluster resources,
ensuring cluster stability and a guaranteed service level for each tenant.
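As a sketch only — the exact field names are assumptions based on these notes (PR #1389), not verified against the tenant chart schema — a tenant spec with explicit limits might look like:
```
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
  name: team-a
  namespace: tenant-root
spec:
  resourceQuotas:        # assumed key; see PR #1389
    cpu: "8"             # total CPU across the tenant's namespace
    memory: 16Gi
    storage: 100Gi
```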
### Kube-OVN Cluster Health Monitor
A new component called the Kube-OVN Plunger continuously monitors the health of the Kube-OVN network's central control cluster.
This external agent gathers OVN cluster status and consensus information, exposing Prometheus metrics and a live event stream via SSE.
As a result, it provides much better visibility into the virtual network layer and helps maintain a reliable and observable network in Cozystack.
This change opens the road to automated Kube-OVN database operations and recovery in specific corner cases.
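Consuming both outputs follows the usual conventions; the address and paths below are placeholders for illustration, not documented endpoints:
```
# Hypothetical: inspect the plunger's Prometheus metrics and follow its SSE stream
curl -s "http://<plunger-address>/metrics" | head
curl -N "http://<plunger-address>/events"   # -N disables buffering for SSE
```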
### Configurable CoreDNS Addon for Kubernetes
Cozystack introduces a dedicated CoreDNS addon for managing cluster DNS with greater flexibility.
CoreDNS is now deployed via a Helm chart and can be tuned through custom values in the cluster specification,
including autoscaling, replica count, and the service IP.
CoreDNS can now be configured in the dashboard and via the Cozystack API.
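A sketch of what tuning the addon might look like in a cluster spec; the `coredns` key and the value names under `valuesOverride` are assumptions based on the notes above, not a verified schema:
```
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
  name: demo
  namespace: tenant-test
spec:
  addons:
    coredns:
      valuesOverride:       # passed through to the CoreDNS Helm chart
        replicaCount: 2
        service:
          clusterIP: 10.95.0.10   # illustrative service IP
```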
### Granular SeaweedFS Service Configuration
The SeaweedFS S3 storage service in Cozystack is now far more configurable at a component level.
The Helm chart for SeaweedFS now includes independent configuration for each component and its resources:
the master nodes, volume servers with support for multiple zones, filers, the backing database, and the S3 gateway.
Administrators can set per-component parameters such as the number of replicas, available CPU, memory, and storage size.
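For illustration, a per-component configuration might look like the sketch below; the component and field names are assumptions drawn from this description (with `resourcesPreset` following the pattern used by other Cozystack apps), not the chart's verified schema:
```
spec:
  master:
    replicas: 3
    resourcesPreset: small
  volume:
    zones:
      zone-a:
        replicas: 2
        size: 100Gi
  filer:
    replicas: 2
  s3:
    replicas: 2
    resources:
      cpu: 500m
      memory: 512Mi
```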
### Server-side Encryption for S3
Cozystack v0.36.0 includes SeaweedFS 3.97, bringing support for server-side encryption of S3 buckets (SSE-C, SSE-KMS, and SSE-S3).
**Breaking change:** upon updating Cozystack, SeaweedFS will be updated to a newer version, and the services specification
will be converted to the new format.
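Once updated, objects can be stored encrypted with any standard S3 client; for example, SSE-S3 with the AWS CLI (the endpoint and bucket below are placeholders):
```
# Upload an object with SSE-S3 (AES256) against the SeaweedFS S3 endpoint
aws --endpoint-url "https://s3.example.org" \
  s3api put-object \
  --bucket my-bucket --key secret.txt --body ./secret.txt \
  --server-side-encryption AES256
```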
### Custom Resource Profiles for Ingress Controller
NGINX controller is now configurable on a per-replica basis.
Configurations include the ingress controller pods' CPU and memory requests/limits, either with direct values or using one of the available presets.
### Cozystack REST API Documentation
[Cozystack REST API reference](https://cozystack.io/docs/cozystack-api/rest/) is now published on the website.
It includes endpoints and methods for listing, creating, updating, and removing each managed application defined as a Cozystack CRD.
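For instance, listing applications follows the usual Kubernetes API conventions; the host, token, and the `ferretdbs` plural below are illustrative:
```
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://<cluster-api>/apis/apps.cozystack.io/v1alpha1/namespaces/tenant-test/ferretdbs"
```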
### Built-in LLDP-Based Neighbor Discovery in Talos
Cozystack now includes the LLDPD extension in its Talos OS image, enabling Link Layer Discovery Protocol (LLDP) out of the box.
This means each node can automatically discover and advertise its network neighbors and topology without any manual setup.
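For example, with lldpd running, discovered neighbors can typically be listed with the standard client (shown generically here; how you invoke it on a Talos node may differ):
```
lldpcli show neighbors
```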
### Use external IP for Egress Traffic in VMs
When a virtual machine has an external IP assigned, that IP is now always used for egress traffic, regardless of the external method configured.
## Major Features and Improvements
* [talos] Add LLDPD (`ghcr.io/siderolabs/lldpd`) as a built-in system extension, enabling LLDP-based neighbor discovery out of the box. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1351 and https://github.com/cozystack/cozystack/pull/1360)
* [kubernetes] Add a configurable CoreDNS addon with valuesOverride, packaged chart, and managed deployment (metrics, autoscaling, HPA, customizable Service). (@klinch0 in https://github.com/cozystack/cozystack/pull/1362)
* [kube-ovn] Implement the Kube-OVN plunger, an external monitoring agent for the ovn-central cluster. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1380, patched in https://github.com/cozystack/cozystack/pull/1414 and https://github.com/cozystack/cozystack/pull/1418)
* [tenant] Enable per-namespace resource quota settings in tenants, with explicit cpu, memory, and storage values. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1389)
* [seaweedfs] Add detailed resource configuration for each component of the SeaweedFS service. (@klinch0 and @kvaps in https://github.com/cozystack/cozystack/pull/1415)
* [ingress] Enable per-replica resource configuration to the ingress controller. (@kvaps in https://github.com/cozystack/cozystack/pull/1416)
* [virtual-machine] Use external IP for egress traffic with `PortList` method. (@kvaps in https://github.com/cozystack/cozystack/pull/1349)
## Fixes
* [cozy-lib] Fix malformed retrieval of `cozyConfig` in the cozy-lib template. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1348)
* [cozy-lib] Add the missing template `cozy-lib.resources.flatten`. (@kvaps in https://github.com/cozystack/cozystack/pull/1372)
* [cozystack-api] Sanitize the OpenAPI v2 schema. (@kvaps in https://github.com/cozystack/cozystack/pull/1353)
* [kube-ovn] Improve northd leader detection. Patch the northd leader check to test against all endpoints instead of just the first one marked as ready. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1363)
* [seaweedfs] Add a liveness check for the SeaweedFS S3 endpoint to improve health monitoring and enable automatic recovery. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1368)
* [seaweedfs] Resolve race conditions in SeaweedFS. Increase deployment timeouts and set install/upgrade remediation to unlimited retries to improve deployment resilience. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1371)
* [seaweedfs] Resolve connectivity issues in SeaweedFS. Increase Nginx ingress timeouts for SeaweedFS S3 endpoint. (@kvaps in https://github.com/cozystack/cozystack/pull/1386)
* [virtual-machine] Fix the regression in VM update hook introduced in https://github.com/cozystack/cozystack/pull/1169. Target the correct API resource and avoid conflicts with KubeVirt resources. (@kvaps in https://github.com/cozystack/cozystack/pull/1376)
* [virtual-machine] Correct app version references in `virtual-machine` and `vm-instance`, ensuring accurate versioning during migrations. (@kvaps in https://github.com/cozystack/cozystack/pull/1378).
* [cozyreport] Fix an error where cozyreport tried to parse non-existent objects and generated garbage output in CI debug logs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1383)
* [platform] Fix a boolean override bug in Helm merge. ConfigMap values now correctly take precedence over bundle defaults. (@dyudin0821 in https://github.com/cozystack/cozystack/pull/1385)
* [kubernetes] CoreDNS release now installs and stores state in the `kube-system` namespace. (@kvaps in https://github.com/cozystack/cozystack/pull/1395)
* [kubernetes] Expose configuration for CoreDNS, enabling setting the image repository and replica count via `values.yaml`. (@kvaps in https://github.com/cozystack/cozystack/pull/1410)
* [etcd] Ensure that TopologySpreadConstraints consistently target etcd pods. (@kvaps in https://github.com/cozystack/cozystack/pull/1405)
* [tenant] Use force-upgrade for ingress controller charts. (@klinch0 in https://github.com/cozystack/cozystack/pull/1404)
* [cozystack-controller] Fix an RBAC error that prevented the workload labelling feature from working. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1419)
* [seaweedfs] Remove VerticalPodAutoscaler for SeaweedFS. (@kvaps in https://github.com/cozystack/cozystack/pull/1421)
## Dependencies
* Update LINSTOR to v1.31.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1358)
* Update SeaweedFS to v3.97. (@kvaps in https://github.com/cozystack/cozystack/pull/1361 and https://github.com/cozystack/cozystack/pull/1373)
* Update Kube-OVN to 1.14.5. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1363)
* Replace Bitnami images with alternatives in all charts. (@kvaps in https://github.com/cozystack/cozystack/pull/1374)
## Documentation
## Development, Testing, and CI/CD
* [dx] Remove the BUILDER and PLATFORM autodetect logic in Makefiles. (@kvaps in https://github.com/cozystack/cozystack/pull/1391)
* [ci] Use the host buildx config in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1015)
* [ci] Add `jq` and `git` to the installer image. (@kvaps in https://github.com/cozystack/cozystack/pull/1417)
* [ci] Source the `REGISTRY` environment variable from actions' variables, not secrets, so external pull requests can work. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1423)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.0...v0.36.0

View File

@@ -0,0 +1,22 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.1
-->
## Major Features and Improvements
* [cozystack-api] Implement recursive, Kubernetes-like defaulting for applications: missing fields in nested objects and arrays are auto-populated safely without mutating shared defaults. (@kvaps in https://github.com/cozystack/cozystack/pull/1432)
## Fixes
* [cozystack-api] Update defaulting API schemas. (@kvaps in https://github.com/cozystack/cozystack/pull/1433)
* [dashboard] Fix Bitnami dependencies. (@kvaps in https://github.com/cozystack/cozystack/pull/1431)
* [seaweedfs] Fix SeaweedFS migration. (@kvaps in https://github.com/cozystack/cozystack/pull/1430)
## Development, Testing, and CI/CD
* [adopters] Add [Hidora](https://hikube.cloud) to the Cozystack adopters list. (@matthieu-robin in https://github.com/cozystack/cozystack/pull/1429)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.36.0...v0.36.1

View File

@@ -0,0 +1,18 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.2
-->
## Features and Improvements
## Security
## Fixes
## Dependencies
## Development, Testing, and CI/CD
---
**Full Changelog**: [v0.36.1...v0.36.2](https://github.com/cozystack/cozystack/compare/v0.36.1...v0.36.2)

117
docs/changelogs/v0.37.0.md Normal file
View File

@@ -0,0 +1,117 @@
# Cozystack v0.37 — “OpenAPI Dashboard & Lineage Everywhere”
We've shipped a big usability push this cycle: a brand-new **OpenAPI-driven dashboard**, lineage labeling across core resource types, and several reliability improvements to smooth upgrades from 0.36 → 0.37. Below are the highlights and the full categorized lists.
## Highlights
* **New OpenAPI-based Dashboard** replaces the old UI, adds module-aware navigation, dynamic branding, and richer Kubernetes resource views ([**@kvaps**](https://github.com/kvaps) in #1269, #1463, #1460).
* **Lineage Webhook** tags Pods, PVCs, Services, Ingresses, and Secrets, adding labels referencing the managing Cozystack application ([**@lllamnyp**](https://github.com/lllamnyp) in #1448, #1452, #1477, #1486, #1497; [**@kvaps**](https://github.com/kvaps) in #1454).
* **Smoother upgrades** with installer and migration hardening, and decoupled upgrades of CRDs and the API server ([**@lllamnyp**](https://github.com/lllamnyp) in #1494, #1498; [**@kvaps**](https://github.com/kvaps) in #1506).
* **Operations quality**: Kubernetes tests with smarter waits/readiness checks ([**@IvanHunters**](https://github.com/IvanHunters) in #1485).
---
## New features
### Dashboard
* Introduce the OpenAPI-based dashboard and controller; implement TenantNamespace, TenantModules, TenantSecret/SecretsTable resources ([**@kvaps**](https://github.com/kvaps) in #1269).
* Module-aware navigation, richer detail views (Services/Secrets/Ingresses), improved sidebars; “Tenant Modules” grouping ([**@kvaps**](https://github.com/kvaps) in #1463).
* Dynamic branding via cluster config (tenant name, footer/title, logo/icon SVGs) ([**@kvaps**](https://github.com/kvaps) in #1460).
* Dashboard: fix namespace listing for unprivileged users and stabilize streamed requests; build-time patching ([**@kvaps**](https://github.com/kvaps) in #1456).
* Dashboard UX set: marketplace hides module resources; consistent navigation/links; prefill “name” in forms; ingress factory; formatted TenantNamespaces tables ([**@kvaps**](https://github.com/kvaps) in #1463).
* **Dashboard**: list modules reliably; remove Tenant from Marketplace; fix field override while typing ([**@kvaps**](https://github.com/kvaps) in #1501, #1503).
* **Dashboard**: correct API group for applications; sidebars; disable auto-expand; fix `/docs` redirect ([**@kvaps**](https://github.com/kvaps) in #1463, #1465, #1462).
* **Dashboard**: show Secrets with empty values correctly ([**@kvaps**](https://github.com/kvaps) in #1480).
* Dashboard configuration refactor: generate static resources at startup; auto-cleanup stale objects; higher controller client throughput ([**@kvaps**](https://github.com/kvaps) in #1457).
### Migration to v0.37
* **Installer/Migrations**: prevent unintended deletion of platform resource definitions; resilient timestamping; tolerant annotations; stronger migrate-then-reconcile flow ([**@kvaps**](https://github.com/kvaps) in #1475; Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498).
* Installer hardening for **migration #20**: packaged apply, ordered waits/readiness checks, RFC3339(nano) stamping; Helm in installer image (Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498).
* **Decoupled API & CozyRDs**: You can now upgrade the Cozystack API server independently of CRDs/CozyRD instances, easing 0.36 → 0.37 migrations ([**@lllamnyp**](https://github.com/lllamnyp) in #1494).
* **Migration #20**: The installer runs migration from packaged Helm charts with ordered waits/readiness checks; annotations are tolerant; timestamps are environment-robust (Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498; [**@kvaps**](https://github.com/kvaps) in #1475).
### Webhook / Lineage
* Add a lineage mutating webhook to auto-label Pods/Secrets/PVCs/Ingresses/WorkloadMonitors with owning app ([**@lllamnyp**](https://github.com/lllamnyp) in #1448, #1497, [**@kvaps**](https://github.com/kvaps) in #1454).
* **Name-based** selectors for Secret visibility (templates supported) ([**@lllamnyp**](https://github.com/lllamnyp) in #1477).
* Select **Services** and **Ingresses** in CRDs/API; treat them as user-facing when configured ([**@lllamnyp**](https://github.com/lllamnyp) in #1486).
* **VictoriaMetrics integration**: Lineage labels are explicitly set on VM resources; `managedMetadata` is configured to avoid controller “fights” over labels ([**@lllamnyp**](https://github.com/lllamnyp) in #1452).
* Webhook **excludes** `default` and `kube-system` to avoid unintended mutations (part of the installer/migration hardening by Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498).
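Putting it together, the lineage labels (`apps.cozystack.io/application.name` and the lowercased `apps.cozystack.io/application.kind`, as defined in the API comment earlier in this diff) make it possible to list everything an application manages; the namespace and values here are illustrative:
```
kubectl -n tenant-test get pods,pvc,svc,ingress,secrets \
  -l apps.cozystack.io/application.kind=ferretdb,apps.cozystack.io/application.name=test
```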
### API / Platform
* Decouple the Cozystack API from Cozystack Resource Definitions to allow independent upgrades ([**@lllamnyp**](https://github.com/lllamnyp) in #1494).
* Add **label selectors** to app definitions for Secret include/exclude ([**@lllamnyp**](https://github.com/lllamnyp) in #1447).
### Monitoring & Ops
* Reduce node labelsets in target relabeling configs on cadvisor/kubelet metrics to reduce cardinality while keeping useful CPU metrics ([**@IvanHunters**](https://github.com/IvanHunters) in #1455).
### Storage & Backups
* PVC expansion in tenant clusters via KubeVirt CSI resizer; RBAC updates (Klinch0 in #1438).
* Velero upgraded to **v1.17.0**, with the node agent enabled by default and a raft of usability features ([**@kvaps**](https://github.com/kvaps) in #1484).
### Kubernetes Tests & Tooling
* Smarter Kubernetes test flows: node readiness checks, kubelet version validation, longer rollout waits, per-component readiness ([**@IvanHunters**](https://github.com/IvanHunters) in #1485).
### UI/Icons
* New **VM-Disk** SVG icon ([**@kvapsova**](https://github.com/kvapsova) in #1435).
---
## Improvements (minor)
* Make the **Info** app deploy irrespective of OIDC settings ([**klinch0**](https://github.com/klinch0) in #1474).
* Move SA token Secret creation to **Info** app ([**@lllamnyp**](https://github.com/lllamnyp) in #1446).
* Explicitly set lineage labels for VictoriaMetrics resources ([**@lllamnyp**](https://github.com/lllamnyp) in #1452).
---
## Bug fixes
* **Kubernetes**: fix MachineDeployment `spec.selector` mismatch to ensure proper targeting ([**@kvaps**](https://github.com/kvaps) in #1502).
* **Old dashboard**: FerretDB spec typo prevented deploy/display ([**@lllamnyp**](https://github.com/lllamnyp) in #1440).
* **SeaweedFS**: fix per-zone size fallback for multi-DC volumes; make migrations more robust ([**@kvaps**](https://github.com/kvaps) in #1476, #1430).
* **CoreDNS**: pin tag to v1.12.4 ([**@kvaps**](https://github.com/kvaps) in #1469).
* **OIDC**: avoid creating KeycloakRealmGroup before operator API is available ([**@lllamnyp**](https://github.com/lllamnyp) in #1495).
* **Kafka**: disable noisy alerts when Kafka isn't deployed ([**@lllamnyp**](https://github.com/lllamnyp) in #1488).
---
## Dependency & version updates
* **Velero → v1.17.0**; Helm chart v11; node agent default-on ([**@kvaps**](https://github.com/kvaps) in #1484).
* **Cilium → v1.17.8** ([**@kvaps**](https://github.com/kvaps) in #1473).
* **Flux Operator → v0.29.0** (Kingdon Barrett in #1466).
---
## Refactors & chores
* Remove legacy `versions_map`; unify packaging targets; tighten HelmRelease defaults; replace many chart versions with build-time placeholders ([**@kvaps**](https://github.com/kvaps) in #1453).
* Pin the CoreDNS image and refresh numerous images ([**@kvaps**](https://github.com/kvaps) in #1469; related image refreshes across the #1448 work).
---
## Documentation & governance
* **Contributor Ladder** created and later updated (Timur Tukaev in #1224; Andrei Kvapil & Timur Tukaev in #1492).
* **Code of Conduct** updated with a Vendor Neutrality Manifesto (Timur Tukaev in #1493).
* **Adopters**: add Hidora (Matthieu Robin in #1429).
* **MAINTAINERS**: add/remove entries (Nikita Bykov in #1487; Timur Tukaev in #1491).
* **Issue templates**: new bug-report template and tweaks (Moriarti).
* **README**: updated dark-theme screenshot ([**@kvaps**](https://github.com/kvaps) in #1459).
---
## Breaking changes & upgrade notes
---
## Security & stability

View File

@@ -0,0 +1,44 @@
#!/usr/bin/env bats
@test "Create DB FerretDB" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: FerretDB
metadata:
name: $name
namespace: tenant-test
spec:
backup:
destinationPath: "s3://bucket/path/to/folder/"
enabled: false
endpointURL: "http://minio-gateway-service:9000"
retentionPolicy: "30d"
s3AccessKey: "<your-access-key>"
s3SecretKey: "<your-secret-key>"
schedule: "0 2 * * * *"
bootstrap:
enabled: false
external: false
quorum:
maxSyncReplicas: 0
minSyncReplicas: 0
replicas: 2
resources: {}
resourcesPreset: "micro"
size: "10Gi"
users:
testuser:
password: xai7Wepo
EOF
sleep 5
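# Wait for the HelmRelease to become ready, then for each Postgres service role (r/ro/rw) to expose port 5432 and receive endpoints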
kubectl -n tenant-test wait hr ferretdb-$name --timeout=100s --for=condition=ready
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
# for some reason it takes longer for the read-only endpoint to be ready
#timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete ferretdb.apps.cozystack.io $name
}

View File

@@ -0,0 +1,121 @@
#!/usr/bin/env bats
@test "Create DB FoundationDB" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: FoundationDB
metadata:
name: $name
namespace: tenant-test
spec:
cluster:
version: "7.3.63"
processCounts:
storage: 3
stateless: -1
cluster_controller: 1
redundancyMode: "double"
storageEngine: "ssd-2"
faultDomain:
key: "foundationdb.org/none"
valueFrom: "\$FDB_ZONE_ID"
storage:
size: "1Gi"
storageClass: ""
resourcesPreset: "small"
backup:
enabled: false
s3:
bucket: ""
endpoint: ""
region: ""
credentials:
accessKeyId: ""
secretAccessKey: ""
retentionPolicy: "7d"
monitoring:
enabled: true
customParameters:
- "knob_disable_posix_kernel_aio=1"
imageType: "unified"
automaticReplacements: true
EOF
sleep 15
# Wait for HelmRelease to be ready
kubectl -n tenant-test wait hr foundationdb-$name --timeout=300s --for=condition=ready
# Wait for FoundationDBCluster to be created (name has foundationdb- prefix)
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name; do sleep 15; done"
# Wait for cluster to become available (initial reconciliation takes time - allow 5 minutes)
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.usable_regions}' | grep -q '1'; do sleep 30; done"
# Check that storage processes are running
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=storage --field-selector=status.phase=Running --no-headers | wc -l) -eq 3 ]; do sleep 15; done"
# Check that log processes are running (these are the stateless processes)
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=log --field-selector=status.phase=Running --no-headers | wc -l) -ge 1 ]; do sleep 15; done"
# Check that cluster controller is running
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=cluster_controller --field-selector=status.phase=Running --no-headers | wc -l) -eq 1 ]; do sleep 15; done"
# Check WorkloadMonitor is created and configured
timeout 120 sh -ec "until kubectl -n tenant-test get workloadmonitor foundationdb-$name; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get workloadmonitor foundationdb-$name -o jsonpath='{.spec.replicas}' | grep -q '3'; do sleep 5; done"
# Check dashboard resource map is created
kubectl -n tenant-test get configmap foundationdb-$name-resourcemap
# Verify cluster is healthy (check cluster status) - allow extra time for initial setup
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.available}' | grep -q 'true'; do sleep 20; done"
# Validate status.configured field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.configured}' | grep -q 'true'; do sleep 10; done"
# Validate status.connectionString field exists and contains expected format
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.connectionString}' | grep -q '@.*\.svc\.cozy\.local'; do sleep 10; done"
# Validate comprehensive status.databaseConfiguration fields
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.logs}' | grep -q '3'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.proxies}' | grep -q '3'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.redundancy_mode}' | grep -q 'double'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.resolvers}' | grep -q '1'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.storage_engine}' | grep -q 'ssd-2'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.usable_regions}' | grep -q '1'; do sleep 10; done"
# Validate status.desiredProcessGroups field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.desiredProcessGroups}' | grep -q '^[0-9][0-9]*$'; do sleep 10; done"
# Validate status.generations.reconciled field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.generations.reconciled}' | grep -q '^[0-9][0-9]*$'; do sleep 10; done"
# Validate status.hasListenIPsForAllPods field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.hasListenIPsForAllPods}' | grep -q 'true'; do sleep 10; done"
# Validate comprehensive status.health fields
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.fullReplication}' | grep -q 'true'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.healthy}' | grep -q 'true'; do sleep 10; done"
# Verify security context is applied correctly (non-root user)
storage_pod=$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=storage --no-headers | head -n1 | awk '{print $1}')
kubectl -n tenant-test get pod "$storage_pod" -o jsonpath='{.spec.containers[0].securityContext.runAsUser}' | grep -q '4059'
kubectl -n tenant-test get pod "$storage_pod" -o jsonpath='{.spec.containers[0].securityContext.runAsGroup}' | grep -q '4059'
# Verify volumeClaimTemplate is properly configured in FoundationDBCluster CRD
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.spec.processes.general.volumeClaimTemplate.spec.resources.requests.storage}' | grep -q '1Gi'; do sleep 10; done"
# Verify PVCs are created with correct storage size (1Gi as specified in test)
timeout 120 sh -ec "until [ \$(kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name --no-headers | wc -l) -ge 3 ]; do sleep 10; done"
kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name -o jsonpath='{.items[*].spec.resources.requests.storage}' | grep -q '1Gi'
# Verify actual PVC storage capacity matches requested size
kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name -o jsonpath='{.items[*].status.capacity.storage}' | grep -q '1Gi'
# Clean up
kubectl -n tenant-test delete foundationdb $name
# Wait for cleanup to complete
timeout 120 sh -ec "while kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name 2>/dev/null; do sleep 10; done"
}

View File

@@ -64,37 +64,90 @@ spec:
EOF
# Wait for the tenant-test namespace to be active
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
# Wait for the Kamaji control plane to be created (retry for up to 10 seconds)
timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-'"${test_name}"'; do sleep 1; done'
# Wait for the tenant control plane to be fully created (timeout after 4 minutes)
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-${test_name} --timeout=4m
# Wait for Kubernetes resources to be ready (timeout after 2 minutes)
kubectl wait tcp -n tenant-test kubernetes-${test_name} --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
# Wait for all required deployments to be available (timeout after 4 minutes)
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-${test_name} kubernetes-${test_name}-cluster-autoscaler kubernetes-${test_name}-kccm kubernetes-${test_name}-kcsi-controller
# Wait for the machine deployment to scale to 2 replicas (timeout after 1 minute)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
# Get the admin kubeconfig and save it to a file
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig
# Update the kubeconfig to use localhost for the API server
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig
# Set up port forwarding to the Kubernetes API server for a 40 second timeout
bash -c 'timeout 40s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Set up port forwarding to the Kubernetes API server for a 200 second timeout
bash -c 'timeout 200s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Verify the Kubernetes version matches what we expect (retry for up to 20 seconds)
timeout 20 sh -ec 'until kubectl --kubeconfig tenantkubeconfig version 2>/dev/null | grep -Fq "Server Version: ${k8s_version}"; do sleep 5; done'
# Wait for the nodes to be ready (timeout after 2 minutes)
timeout 2m bash -c '
until [ "$(kubectl --kubeconfig tenantkubeconfig get nodes -o jsonpath="{.items[*].metadata.name}" | wc -w)" -eq 2 ]; do
sleep 3
done
'
# Verify the nodes are ready
kubectl --kubeconfig tenantkubeconfig wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig tenantkubeconfig get nodes -o wide
# Verify the kubelet version matches what we expect
versions=$(kubectl --kubeconfig tenantkubeconfig get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
node_ok=true
case "$k8s_version" in
v1.32*)
echo "⚠️ TODO: Temporary stub — allowing nodes with v1.33 while k8s_version is v1.32"
;;
esac
for v in $versions; do
case "$k8s_version" in
v1.32|v1.32.*)
case "$v" in
v1.32 | v1.32.* | v1.32-* | v1.33 | v1.33.* | v1.33-*)
;;
*)
node_ok=false
break
;;
esac
;;
*)
case "$v" in
"${k8s_version}" | "${k8s_version}".* | "${k8s_version}"-*)
;;
*)
node_ok=false
break
;;
esac
;;
esac
done
if [ "$node_ok" != true ]; then
echo "Kubelet versions did not match expected ${k8s_version}" >&2
exit 1
fi
# Wait for all machine deployment replicas to be ready (timeout after 10 minutes)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
for component in cilium coredns csi ingress-nginx vsnap-crd; do
kubectl wait hr kubernetes-${test_name}-${component} -n tenant-test --timeout=1m --for=condition=ready
done
# Clean up by deleting the Kubernetes resource
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io $test_name

View File

@@ -53,4 +53,6 @@ kube::codegen::gen_openapi \
"${SCRIPT_ROOT}/pkg/apis"
$CONTROLLER_GEN object:headerFile="hack/boilerplate.go.txt" paths="./api/..."
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=packages/system/cozystack-controller/templates/crds
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=packages/system/cozystack-controller/crds
mv packages/system/cozystack-controller/crds/cozystack.io_cozystackresourcedefinitions.yaml \
packages/system/cozystack-resource-definition-crd/definition/cozystack.io_cozystackresourcedefinitions.yaml

View File

@@ -8,7 +8,7 @@ need yq; need jq; need base64
CHART_YAML="${CHART_YAML:-Chart.yaml}"
VALUES_YAML="${VALUES_YAML:-values.yaml}"
SCHEMA_JSON="${SCHEMA_JSON:-values.schema.json}"
CRD_DIR="../../system/cozystack-api/templates/cozystack-resource-definitions"
CRD_DIR="../../system/cozystack-resource-definitions/cozyrds"
[[ -f "$CHART_YAML" ]] || { echo "No $CHART_YAML found"; exit 1; }
[[ -f "$SCHEMA_JSON" ]] || { echo "No $SCHEMA_JSON found"; exit 1; }

View File

@@ -248,9 +248,10 @@ func servicesTab(kind string) map[string]any {
"customizationId": "factory-details-v1.services",
"pathToItems": []any{"items"},
"labelsSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
"apps.cozystack.io/application.name": "{reqs[0]['metadata','name']}",
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
"apps.cozystack.io/application.name": "{reqs[0]['metadata','name']}",
"internal.cozystack.io/tenantresource": "true",
},
},
},
@@ -273,9 +274,10 @@ func ingressesTab(kind string) map[string]any {
"customizationId": "factory-details-networking.k8s.io.v1.ingresses",
"pathToItems": []any{"items"},
"labelsSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
"apps.cozystack.io/application.name": "{reqs[0]['metadata','name']}",
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
"apps.cozystack.io/application.name": "{reqs[0]['metadata','name']}",
"internal.cozystack.io/tenantresource": "true",
},
},
},

View File

@@ -38,8 +38,8 @@ func (m *Manager) ensureMarketplacePanel(ctx context.Context, crd *cozyv1alpha1.
return reconcile.Result{}, nil
}
// Skip module resources (they don't need MarketplacePanel)
if crd.Spec.Dashboard.Module {
// Skip module and tenant resources (they don't need MarketplacePanel)
if crd.Spec.Dashboard.Module || crd.Spec.Application.Kind == "Tenant" {
err := m.client.Get(ctx, client.ObjectKey{Name: mp.Name}, mp)
if apierrors.IsNotFound(err) {
return reconcile.Result{}, nil

View File

@@ -998,6 +998,15 @@ func createBoolColumn(name, jsonPath string) map[string]any {
}
}
// createReadyColumn creates a Ready column with Boolean type and condition check
func createReadyColumn() map[string]any {
return map[string]any{
"name": "Ready",
"type": "Boolean",
"jsonPath": `.status.conditions[?(@.type=="Ready")].status`,
}
}
// createConverterBytesColumn creates a column with ConverterBytes component
func createConverterBytesColumn(name, jsonPath string) map[string]any {
return map[string]any{

View File

@@ -142,16 +142,16 @@ func CreateAllCustomColumnsOverrides() []*dashboardv1alpha1.CustomColumnsOverrid
createCustomColumnsOverride("stock-namespace-/v1/services", []any{
createCustomColumnWithJsonPath("Name", ".metadata.name", "S", "service", getColorForType("service"), "/openapi-ui/{2}/{reqsJsonPath[0]['.metadata.namespace']['-']}/factory/kube-service-details/{reqsJsonPath[0]['.metadata.name']['-']}"),
createStringColumn("ClusterIP", ".spec.clusterIP"),
createStringColumn("LoadbalancerIP", ".spec.loadBalancerIP"),
createStringColumn("LoadbalancerIP", ".status.loadBalancer.ingress[0].ip"),
createTimestampColumn("Created", ".metadata.creationTimestamp"),
}),
// Stock namespace core cozystack io v1alpha1 tenantmodules
createCustomColumnsOverride("stock-namespace-/core.cozystack.io/v1alpha1/tenantmodules", []any{
createCustomColumnWithJsonPath("Name", ".metadata.name", "M", "module", getColorForType("module"), "/openapi-ui/{2}/{reqsJsonPath[0]['.metadata.namespace']['-']}/factory/{reqsJsonPath[0]['.metadata.name']['-']}-details/{reqsJsonPath[0]['.metadata.name']['-']}"),
createStringColumn("Version", ".spec.version"),
createStringColumn("Status", ".status.phase"),
createReadyColumn(),
createTimestampColumn("Created", ".metadata.creationTimestamp"),
createStringColumn("Version", ".status.version"),
}),
// Factory service details port mapping

View File

@@ -38,6 +38,9 @@ func (l *LineageControllerWebhook) Map(hr *helmv2.HelmRelease) (string, string,
if !ok {
return "", "", "", fmt.Errorf("failed to load chart-app mapping from config")
}
if hr.Spec.Chart == nil {
return "", "", "", fmt.Errorf("cannot map helm release %s/%s to dynamic app", hr.Namespace, hr.Name)
}
s := hr.Spec.Chart.Spec
val, ok := cfg.chartAppMap[chartRef{s.SourceRef.Name, s.Chart}]
if !ok {

View File

@@ -1,34 +1,73 @@
package lineagecontrollerwebhook
import (
"bytes"
"context"
"text/template"
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/log"
)
func matchLabelsToSelector(l map[string]string, s *metav1.LabelSelector) bool {
// TODO: emit warning if error
sel, err := metav1.LabelSelectorAsSelector(s)
if err != nil {
return false
// matchName checks if the provided name matches any of the resource names in the array.
// Each entry in resourceNames is treated as a Go template that gets rendered using the passed context.
// A nil resourceNames array matches any string.
func matchName(ctx context.Context, name string, templateContext map[string]string, resourceNames []string) bool {
if resourceNames == nil {
return true
}
return sel.Matches(labels.Set(l))
logger := log.FromContext(ctx)
for _, templateStr := range resourceNames {
tmpl, err := template.New("resourceName").Parse(templateStr)
if err != nil {
logger.Error(err, "failed to parse resource name template", "template", templateStr)
continue
}
var buf bytes.Buffer
err = tmpl.Execute(&buf, templateContext)
if err != nil {
logger.Error(err, "failed to execute resource name template", "template", templateStr, "context", templateContext)
continue
}
if buf.String() == name {
return true
}
}
return false
}
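// Example (illustrative, not taken from a real manifest): given
// resourceNames ["{{ .name }}-credentials"] and a template context
// {"kind": "mysql", "name": "db1", "namespace": "tenant-x"}, the entry
// renders to "db1-credentials" and matches a Secret with that name.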
func matchLabelsToSelectorArray(l map[string]string, ss []*metav1.LabelSelector) bool {
func matchResourceToSelector(ctx context.Context, name string, templateContext, l map[string]string, s *cozyv1alpha1.CozystackResourceDefinitionResourceSelector) bool {
sel, err := metav1.LabelSelectorAsSelector(&s.LabelSelector)
if err != nil {
log.FromContext(ctx).Error(err, "failed to convert label selector to selector")
return false
}
labelMatches := sel.Matches(labels.Set(l))
nameMatches := matchName(ctx, name, templateContext, s.ResourceNames)
return labelMatches && nameMatches
}
func matchResourceToSelectorArray(ctx context.Context, name string, templateContext, l map[string]string, ss []*cozyv1alpha1.CozystackResourceDefinitionResourceSelector) bool {
for _, s := range ss {
if matchLabelsToSelector(l, s) {
if matchResourceToSelector(ctx, name, templateContext, l, s) {
return true
}
}
return false
}
func matchLabelsToExcludeInclude(l map[string]string, ex, in []*metav1.LabelSelector) bool {
if matchLabelsToSelectorArray(l, ex) {
func matchResourceToExcludeInclude(ctx context.Context, name string, templateContext, l map[string]string, resources *cozyv1alpha1.CozystackResourceDefinitionResources) bool {
if resources == nil {
return false
}
if matchLabelsToSelectorArray(l, in) {
return true
if matchResourceToSelectorArray(ctx, name, templateContext, l, resources.Exclude) {
return false
}
return false
return matchResourceToSelectorArray(ctx, name, templateContext, l, resources.Include)
}

View File

@@ -5,18 +5,20 @@ import (
"encoding/json"
"errors"
"fmt"
"strings"
"github.com/cozystack/cozystack/pkg/lineage"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/discovery"
"k8s.io/client-go/discovery/cached/memory"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"k8s.io/client-go/restmapper"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
corev1alpha1 "github.com/cozystack/cozystack/pkg/apis/core/v1alpha1"
)
var (
@@ -24,6 +26,27 @@ var (
AncestryAmbiguous = fmt.Errorf("object ancestry is ambiguous")
)
const (
ManagedObjectKey = "internal.cozystack.io/managed-by-cozystack"
ManagerGroupKey = "apps.cozystack.io/application.group"
ManagerKindKey = "apps.cozystack.io/application.kind"
ManagerNameKey = "apps.cozystack.io/application.name"
)
// getResourceSelectors returns the appropriate CozystackResourceDefinitionResources for a given GroupKind
func (h *LineageControllerWebhook) getResourceSelectors(gk schema.GroupKind, crd *cozyv1alpha1.CozystackResourceDefinition) *cozyv1alpha1.CozystackResourceDefinitionResources {
switch {
case gk.Group == "" && gk.Kind == "Secret":
return &crd.Spec.Secrets
case gk.Group == "" && gk.Kind == "Service":
return &crd.Spec.Services
case gk.Group == "networking.k8s.io" && gk.Kind == "Ingress":
return &crd.Spec.Ingresses
default:
return nil
}
}
// SetupWithManager registers the handler with the webhook server.
func (h *LineageControllerWebhook) SetupWithManagerAsWebhook(mgr ctrl.Manager) error {
cfg := rest.CopyConfig(mgr.GetConfig())
@@ -34,13 +57,15 @@ func (h *LineageControllerWebhook) SetupWithManagerAsWebhook(mgr ctrl.Manager) e
return err
}
discoClient, err := discovery.NewDiscoveryClientForConfig(cfg)
httpClient, err := rest.HTTPClientFor(cfg)
if err != nil {
return err
}
cachedDisco := memory.NewMemCacheClient(discoClient)
h.mapper = restmapper.NewDeferredDiscoveryRESTMapper(cachedDisco)
h.mapper, err = apiutil.NewDynamicRESTMapper(cfg, httpClient)
if err != nil {
return err
}
h.initConfig()
// Register HTTP path -> handler.
@@ -73,7 +98,7 @@ func (h *LineageControllerWebhook) Handle(ctx context.Context, req admission.Req
labels, err := h.computeLabels(ctx, obj)
for {
if err != nil && errors.Is(err, NoAncestors) {
return admission.Allowed("object not managed by app")
break // not a problem, mark object as unmanaged
}
if err != nil && errors.Is(err, AncestryAmbiguous) {
warn = append(warn, "object ancestry ambiguous, using first ancestor found")
@@ -101,7 +126,7 @@ func (h *LineageControllerWebhook) Handle(ctx context.Context, req admission.Req
func (h *LineageControllerWebhook) computeLabels(ctx context.Context, o *unstructured.Unstructured) (map[string]string, error) {
owners := lineage.WalkOwnershipGraph(ctx, h.dynClient, h.mapper, h, o)
if len(owners) == 0 {
return nil, NoAncestors
return map[string]string{ManagedObjectKey: "false"}, NoAncestors
}
obj, err := owners[0].GetUnstructured(ctx, h.dynClient, h.mapper)
if err != nil {
@@ -117,7 +142,8 @@ func (h *LineageControllerWebhook) computeLabels(ctx context.Context, o *unstruc
}
labels := map[string]string{
// truncate apigroup to first 63 chars
"apps.cozystack.io/application.group": func(s string) string {
ManagedObjectKey: "true",
ManagerGroupKey: func(s string) string {
if len(s) < 63 {
return s
}
@@ -127,22 +153,24 @@ func (h *LineageControllerWebhook) computeLabels(ctx context.Context, o *unstruc
}
return s
}(gv.Group),
"apps.cozystack.io/application.kind": obj.GetKind(),
"apps.cozystack.io/application.name": obj.GetName(),
ManagerKindKey: obj.GetKind(),
ManagerNameKey: obj.GetName(),
}
if o.GetAPIVersion() != "v1" || o.GetKind() != "Secret" {
return labels, err
templateLabels := map[string]string{
"kind": strings.ToLower(obj.GetKind()),
"name": obj.GetName(),
"namespace": o.GetNamespace(),
}
cfg := h.config.Load().(*runtimeConfig)
crd := cfg.appCRDMap[appRef{gv.Group, obj.GetKind()}]
resourceSelectors := h.getResourceSelectors(o.GroupVersionKind().GroupKind(), crd)
// TODO: expand this to work with other resources than Secrets
labels["apps.cozystack.io/tenantresource"] = func(b bool) string {
labels[corev1alpha1.TenantResourceLabelKey] = func(b bool) string {
if b {
return "true"
return corev1alpha1.TenantResourceLabelValue
}
return "false"
}(matchLabelsToExcludeInclude(o.GetLabels(), crd.Spec.Secrets.Exclude, crd.Spec.Secrets.Include))
}(matchResourceToExcludeInclude(ctx, o.GetName(), templateLabels, o.GetLabels(), resourceSelectors))
return labels, err
}

View File

@@ -4,8 +4,6 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
labels:
apps.cozystack.io/tenantresource: "false"
stringData:
backup.sh: |
#!/bin/sh

View File

@@ -0,0 +1 @@
Makefile

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: foundationdb
description: Managed FoundationDB service
icon: /logos/foundationdb.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "7.3.63"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md

View File

@@ -0,0 +1,195 @@
# FoundationDB
A managed FoundationDB service for Cozystack.
## Overview
FoundationDB is a distributed database designed to handle large volumes of structured data across clusters of commodity servers. It organizes data as an ordered key-value store and employs ACID transactions for all operations.
This package provides a managed FoundationDB cluster deployment using the FoundationDB Kubernetes Operator.
## Features
- **High Availability**: Multi-instance deployment with automatic failover
- **ACID Transactions**: Full ACID transaction support across the cluster
- **Scalable**: Easily scale storage and compute resources
- **Backup Integration**: Optional S3-compatible backup storage
- **Monitoring**: Built-in monitoring and alerting through WorkloadMonitor
- **Flexible Configuration**: Support for custom FoundationDB parameters
## Configuration
### Basic Configuration
```yaml
# Cluster process configuration
cluster:
version: "7.3.63"
processCounts:
storage: 3 # Number of storage processes (determines cluster size)
stateless: -1 # Automatically calculated
cluster_controller: 1
faultDomain:
key: "kubernetes.io/hostname"
valueFrom: "spec.nodeName"
```
### Storage
```yaml
storage:
size: "16Gi" # Storage size per instance
storageClass: "" # Storage class (optional)
```
### Resources
```yaml
# Use preset sizing
resourcesPreset: "medium" # small, medium, large, xlarge, 2xlarge
# Or custom resource configuration
resources:
cpu: "2000m"
memory: "4Gi"
```
### Backup (Optional)
```yaml
backup:
enabled: true
s3:
bucket: "my-fdb-backups"
endpoint: "https://s3.amazonaws.com"
region: "us-east-1"
credentials:
accessKeyId: "AKIA..."
secretAccessKey: "..."
retentionPolicy: "7d"
```
### Advanced Configuration
```yaml
# Custom FoundationDB parameters
customParameters:
- "knob_disable_posix_kernel_aio=1"
# Image type (unified is default and recommended for new deployments)
imageType: "unified"
# Enable automatic pod replacements
automaticReplacements: true
# Security context configuration
securityContext:
runAsUser: 4059
runAsGroup: 4059
```
## Prerequisites
- FoundationDB Operator must be installed in the cluster
- Sufficient storage and compute resources
- For backups: S3-compatible storage credentials
## Deployment
1. Install the FoundationDB operator (system package)
2. Deploy this application package with your desired configuration
3. The cluster will be automatically provisioned and configured
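For reference, creating the service from a tenant namespace might look like the following minimal sketch. The `apiVersion` shown is an assumption and may differ between Cozystack releases; the `spec` fields mirror the chart values documented below.
```yaml
apiVersion: apps.cozystack.io/v1alpha1  # assumption: check your release for the exact version
kind: FoundationDB
metadata:
  name: example
spec:
  cluster:
    processCounts:
      storage: 3
  storage:
    size: "16Gi"
```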
## Monitoring
This package includes WorkloadMonitor integration for cluster health monitoring and resource tracking. Monitoring can be disabled by setting:
```yaml
monitoring:
enabled: false
```
## Security
- All containers run with restricted security contexts
- No privilege escalation allowed
- Read-only root filesystem where possible
- Custom security context configurations supported
## Fault Tolerance
FoundationDB is designed for high availability:
- Automatic failure detection and recovery
- Data replication across instances
- Configurable fault domains for rack/zone awareness
- Transaction log redundancy
The included `WorkloadMonitor` is automatically configured based on the `cluster.redundancyMode` value. It sets the `minReplicas` property on the `WorkloadMonitor` resource to ensure the cluster's health status accurately reflects its fault tolerance level. The number of tolerated failures is as follows:
- `single`: 0 failures
- `double`: 1 failure
- `triple` and datacenter-aware modes: 2 failures
For example, with the default configuration (`redundancyMode: double` and 3 storage pods), `minReplicas` will be set to 2.
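For instance, to tolerate two simultaneous failures, a values sketch with five storage processes would be:
```yaml
cluster:
  redundancyMode: "triple"
  processCounts:
    storage: 5   # minReplicas = max(1, 5 - 2) = 3
```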
## Performance Considerations
- Use SSD storage for better performance
- Consider dedicating nodes for storage processes
- Monitor cluster metrics for optimization opportunities
- Scale storage and stateless processes based on workload
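As an illustration, pairing a larger volume with an SSD-backed storage class might look like this (the class name is hypothetical; use whichever class is available in your cluster):
```yaml
storage:
  size: "64Gi"
  storageClass: "fast-ssd"  # hypothetical storage class name
```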
## Support
For issues related to FoundationDB itself, refer to the [FoundationDB documentation](https://apple.github.io/foundationdb/).
For Cozystack-specific issues, consult the Cozystack documentation or support channels.
## Parameters
### Common parameters
| Name | Description | Type | Value |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ | ----------- | ------------------------ |
| `cluster` | Cluster configuration | `object` | `{}` |
| `cluster.processCounts` | Process counts for different roles | `object` | `{}` |
| `cluster.processCounts.stateless` | Number of stateless processes (-1 for automatic) | `int` | `-1` |
| `cluster.processCounts.storage` | Number of storage processes (determines cluster size) | `int` | `3` |
| `cluster.processCounts.cluster_controller` | Number of cluster controller processes | `int` | `1` |
| `cluster.version` | Version of FoundationDB to use | `string` | `7.3.63` |
| `cluster.redundancyMode` | Database redundancy mode (single, double, triple, three_datacenter, three_datacenter_fallback) | `string` | `double` |
| `cluster.storageEngine` | Storage engine (ssd-2, ssd-redwood-v1, ssd-rocksdb-v1, memory) | `string` | `ssd-2` |
| `cluster.faultDomain` | Fault domain configuration | `object` | `{}` |
| `cluster.faultDomain.key` | Fault domain key | `string` | `kubernetes.io/hostname` |
| `cluster.faultDomain.valueFrom` | Fault domain value source | `string` | `spec.nodeName` |
| `storage` | Storage configuration | `object` | `{}` |
| `storage.size` | Size of persistent volumes for each instance | `quantity` | `16Gi` |
| `storage.storageClass` | Storage class (if not set, uses cluster default) | `string` | `""` |
| `resources` | Explicit CPU and memory configuration for each FoundationDB instance. When left empty, the preset defined in `resourcesPreset` is applied. | `*object` | `null` |
| `resources.cpu` | CPU available to each instance | `*quantity` | `null` |
| `resources.memory` | Memory (RAM) available to each instance | `*quantity` | `null` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: `small`, `medium`, `large`, `xlarge`, `2xlarge`. | `string` | `medium` |
| `backup` | Backup configuration | `object` | `{}` |
| `backup.enabled` | Enable backups | `bool` | `false` |
| `backup.s3` | S3 configuration for backups | `object` | `{}` |
| `backup.s3.bucket` | S3 bucket name | `string` | `""` |
| `backup.s3.endpoint` | S3 endpoint URL | `string` | `""` |
| `backup.s3.region` | S3 region | `string` | `us-east-1` |
| `backup.s3.credentials` | S3 credentials | `object` | `{}` |
| `backup.s3.credentials.accessKeyId` | S3 access key ID | `string` | `""` |
| `backup.s3.credentials.secretAccessKey` | S3 secret access key | `string` | `""` |
| `backup.retentionPolicy` | Retention policy for backups | `string` | `7d` |
| `monitoring` | Monitoring configuration | `object` | `{}` |
| `monitoring.enabled` | Enable WorkloadMonitor integration | `bool` | `true` |
### FoundationDB configuration
| Name | Description | Type | Value |
| ---------------------------- | ----------------------------------------- | ---------- | --------- |
| `customParameters` | Custom parameters to pass to FoundationDB | `[]string` | `[]` |
| `imageType` | Container image deployment type | `string` | `unified` |
| `securityContext` | Security context for containers | `object` | `{}` |
| `securityContext.runAsUser` | User ID to run the container | `int` | `4059` |
| `securityContext.runAsGroup` | Group ID to run the container | `int` | `4059` |
| `automaticReplacements` | Enable automatic pod replacements | `bool` | `true` |

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -0,0 +1,106 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="144"
height="144"
viewBox="0 0 144 144"
fill="none"
version="1.1"
id="svg4"
sodipodi:docname="foundationdb.svg"
inkscape:version="1.4.2 (unknown)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview4"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:zoom="6.0902778"
inkscape:cx="72"
inkscape:cy="72.492588"
inkscape:window-width="1920"
inkscape:window-height="1128"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="svg4" />
<rect
width="144"
height="144"
rx="24"
fill="url(#paint0_linear_fdb)"
id="rect1"
style="fill:#ffffff" />
<!-- FoundationDB Icon (scaled and positioned) -->
<!-- FoundationDB Text -->
<defs
id="defs4">
<linearGradient
id="paint0_linear_fdb"
x1="140"
y1="130.5"
x2="4"
y2="9.49999"
gradientUnits="userSpaceOnUse">
<stop
stop-color="#047BFE"
id="stop3" />
<stop
offset="1"
stop-color="#3F9AFB"
id="stop4" />
</linearGradient>
</defs>
<g
id="g1134"
transform="matrix(3.132791,0,0,3.132791,-115.98385,6.9294227)">
<g
transform="matrix(0.08541251,0,0,0.08541251,8.7615159,9.5962543)"
id="g10">
<polygon
style="fill:#3f9afb"
class="st0"
points="457.2,150.5 457.2,98.6 561.4,124 561.6,164.8 666.6,150.9 666.3,98.7 845.8,143 846.4,189.9 667.4,165.8 560.6,177.3 457.1,165.4 354.2,177.6 354.1,165.7 "
id="polygon4" />
<path
style="fill:#0b70e0"
inkscape:connector-curvature="0"
class="st1"
d="m 666.6,183.2 179.6,18.6 v 46 H 353.8 l -0.5,-12.2 h 103.5 c 0,0 0,-34.2 0,-52.3 34.8,3.4 103.8,10.2 103.8,10.2 v 40.9 h 106 z"
id="path6" />
<path
style="fill:#9eccfd"
inkscape:connector-curvature="0"
class="st2"
d="m 561.4,109.1 -0.3,-12.6 c 0,0 68.1,-20.4 103.3,-30.8 0,-16.9 0,-33.2 0,-52.9 61.8,24.8 121.2,48.8 181.2,72.9 0,15 0,29.4 0,45.4 -61.5,-16.9 -121.7,-33.5 -180.2,-49.6 -35.6,9.5 -104,27.6 -104,27.6 z"
id="path8" />
</g>
<polygon
transform="matrix(0.08541251,0,0,0.08541251,8.7795597,9.6869671)"
style="fill:#3f9afb"
class="st0"
points="666.6,150.9 666.3,98.7 845.8,143 846.4,189.9 667.4,165.8 560.6,177.3 457.1,165.4 354.2,177.6 354.1,165.7 457.2,150.5 457.2,98.6 561.4,124 561.6,164.8 "
id="polygon856" />
<path
style="fill:#0b70e0;stroke-width:0.0854125"
inkscape:connector-curvature="0"
class="st1"
d="m 65.715539,25.334539 15.340087,1.588673 v 3.928975 h -42.05712 l -0.04271,-1.042033 h 8.840195 c 0,0 0,-2.921107 0,-4.467074 2.972356,0.290403 8.865819,0.871208 8.865819,0.871208 v 3.493371 h 9.053726 z"
id="path858" />
<path
style="fill:#9eccfd;stroke-width:0.0854125"
inkscape:connector-curvature="0"
class="st2"
d="m 56.730143,19.005472 -0.02562,-1.076198 c 0,0 5.816592,-1.742415 8.823112,-2.630705 0,-1.443471 0,-2.835695 0,-4.518322 5.278493,2.11823 10.351997,4.168131 15.476747,6.226572 0,1.281188 0,2.511128 0,3.877728 -5.252869,-1.443471 -10.394702,-2.861319 -15.391334,-4.23646 -3.040686,0.811419 -8.882901,2.357385 -8.882901,2.357385 z"
id="path860" />
</g>
</svg>


View File

@@ -0,0 +1,47 @@
{{/*
Common resource definitions
*/}}
{{- define "foundationdb.resources" -}}
{{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "foundationdb.labels" -}}
helm.sh/chart: {{ include "foundationdb.chart" . }}
{{ include "foundationdb.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "foundationdb.selectorLabels" -}}
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Chart name and version
*/}}
{{- define "foundationdb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Calculate minReplicas for WorkloadMonitor based on redundancyMode
*/}}
{{- define "foundationdb.minReplicas" -}}
{{- $replicas := .Values.cluster.processCounts.storage -}}
{{- if or (eq .Values.cluster.redundancyMode "triple") (eq .Values.cluster.redundancyMode "three_data_hall") (eq .Values.cluster.redundancyMode "three_datacenter") (eq .Values.cluster.redundancyMode "three_datacenter_fallback") (eq .Values.cluster.redundancyMode "three_data_hall_fallback") }}
{{- print (max 1 (sub $replicas 2)) -}}
{{- else if eq .Values.cluster.redundancyMode "double" }}
{{- print (max 1 (sub $replicas 1)) -}}
{{- else }}
{{- print $replicas -}}
{{- end -}}
{{- end -}}
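{{/*
Illustrative outputs of the helper above: redundancyMode=double with
storage=3 yields minReplicas=2; triple with storage=5 yields 3; single
leaves minReplicas equal to the storage process count.
*/}}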

View File

@@ -0,0 +1,65 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-s3-creds
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
AWS_ACCESS_KEY_ID: {{ .Values.backup.s3.credentials.accessKeyId | b64enc }}
AWS_SECRET_ACCESS_KEY: {{ .Values.backup.s3.credentials.secretAccessKey | b64enc }}
---
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBBackup
metadata:
name: {{ .Release.Name }}-backup
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
clusterName: {{ .Release.Name }}
backupState: Running
backupDeploymentSpec:
podTemplateSpec:
spec:
containers:
- name: foundationdb
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
securityContext:
runAsUser: 0
customParameters:
- backup_agent_snapshot_mode=0
snapshotPeriodSeconds: 3600
blobStoreConfiguration:
accountName: {{ .Values.backup.s3.bucket }}
bucket: {{ .Values.backup.s3.bucket }}
{{- if .Values.backup.s3.endpoint }}
endpoint: {{ .Values.backup.s3.endpoint }}
{{- end }}
credentials:
AWS_ACCESS_KEY_ID:
secretKeyRef:
name: {{ .Release.Name }}-s3-creds
key: AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY:
secretKeyRef:
name: {{ .Release.Name }}-s3-creds
key: AWS_SECRET_ACCESS_KEY
{{- end }}

View File

@@ -0,0 +1,98 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" | default (dict "data" (dict)) }}
{{- $clusterDomain := index $cozyConfig.data "cluster-domain" | default "cozy.local" }}
---
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
version: {{ .Values.cluster.version | quote }}
databaseConfiguration:
redundancy_mode: {{ .Values.cluster.redundancyMode }}
storage_engine: {{ .Values.cluster.storageEngine }}
processCounts:
{{- toYaml .Values.cluster.processCounts | nindent 4 }}
automationOptions:
replacements:
enabled: {{ .Values.automaticReplacements }}
faultDomain:
key: {{ .Values.cluster.faultDomain.key }}
{{- if .Values.cluster.faultDomain.valueFrom }}
valueFrom: {{ .Values.cluster.faultDomain.valueFrom }}
{{- end }}
imageType: {{ .Values.imageType }}
labels:
filterOnOwnerReference: false
matchLabels:
foundationdb.org/fdb-cluster-name: {{ .Release.Name }}
processClassLabels:
- foundationdb.org/fdb-process-class
processGroupIDLabels:
- foundationdb.org/fdb-process-group-id
minimumUptimeSecondsForBounce: 60
processes:
general:
{{- if .Values.customParameters }}
customParameters:
{{- range .Values.customParameters }}
- {{ . }}
{{- end }}
{{- end }}
podTemplate:
metadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
spec:
serviceAccountName: {{ .Release.Name }}-foundationdb
securityContext:
fsGroup: {{ .Values.securityContext.runAsGroup }}
containers:
- name: foundationdb
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 16 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 16 }}
- name: foundationdb-kubernetes-sidecar
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
securityContext:
{{- toYaml .Values.securityContext | nindent 16 }}
initContainers:
- name: foundationdb-kubernetes-init
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
securityContext:
{{- toYaml .Values.securityContext | nindent 16 }}
volumeClaimTemplate:
spec:
{{- if .Values.storage.storageClass }}
storageClassName: {{ .Values.storage.storageClass }}
{{- end }}
resources:
requests:
storage: {{ .Values.storage.size }}
routing:
dnsDomain: {{ $clusterDomain }}
defineDNSLocalityFields: true
sidecarContainer:
enableLivenessProbe: true
enableReadinessProbe: true

View File

@@ -0,0 +1,22 @@
{{- if .Values.monitoring.enabled }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-resourcemap
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.cozystack.io/type: dashboard-resourcemap
data:
resources: |
- apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster
name: {{ .Release.Name }}
{{- if .Values.backup.enabled }}
- apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBBackup
name: {{ .Release.Name }}-backup
{{- end }}
{{- end }}

View File

@@ -0,0 +1,22 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-foundationdb
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- create
- update
- patch
- delete

View File

@@ -0,0 +1,17 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Name }}-foundationdb
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-foundationdb
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-foundationdb
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,9 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-foundationdb
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}

View File

@@ -0,0 +1,20 @@
{{- if .Values.monitoring.enabled }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: foundationdb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.cluster.processCounts.storage }}
minReplicas: {{ include "foundationdb.minReplicas" . }}
kind: foundationdb
type: foundationdb
selector:
foundationdb.org/fdb-cluster-name: {{ .Release.Name }}
foundationdb.org/fdb-process-class: storage
version: {{ .Chart.Version }}
{{- end }}

View File

@@ -0,0 +1,282 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"automaticReplacements": {
"description": "Enable automatic pod replacements",
"type": "boolean",
"default": true
},
"backup": {
"description": "Backup configuration",
"type": "object",
"default": {},
"required": [
"enabled",
"retentionPolicy",
"s3"
],
"properties": {
"enabled": {
"description": "Enable backups",
"type": "boolean",
"default": false
},
"retentionPolicy": {
"description": "Retention policy for backups",
"type": "string",
"default": "7d"
},
"s3": {
"description": "S3 configuration for backups",
"type": "object",
"default": {},
"required": [
"bucket",
"credentials",
"endpoint",
"region"
],
"properties": {
"bucket": {
"description": "S3 bucket name",
"type": "string"
},
"credentials": {
"description": "S3 credentials",
"type": "object",
"default": {},
"required": [
"accessKeyId",
"secretAccessKey"
],
"properties": {
"accessKeyId": {
"description": "S3 access key ID",
"type": "string"
},
"secretAccessKey": {
"description": "S3 secret access key",
"type": "string"
}
}
},
"endpoint": {
"description": "S3 endpoint URL",
"type": "string"
},
"region": {
"description": "S3 region",
"type": "string",
"default": "us-east-1"
}
}
}
}
},
"cluster": {
"description": "Cluster configuration",
"type": "object",
"default": {},
"required": [
"faultDomain",
"processCounts",
"redundancyMode",
"storageEngine",
"version"
],
"properties": {
"faultDomain": {
"description": "Fault domain configuration",
"type": "object",
"default": {},
"required": [
"key",
"valueFrom"
],
"properties": {
"key": {
"description": "Fault domain key",
"type": "string",
"default": "kubernetes.io/hostname"
},
"valueFrom": {
"description": "Fault domain value source",
"type": "string",
"default": "spec.nodeName"
}
}
},
"processCounts": {
"description": "Process counts for different roles",
"type": "object",
"default": {},
"required": [
"cluster_controller",
"stateless",
"storage"
],
"properties": {
"cluster_controller": {
"description": "Number of cluster controller processes",
"type": "integer",
"default": 1
},
"stateless": {
"description": "Number of stateless processes (-1 for automatic)",
"type": "integer",
"default": -1
},
"storage": {
"description": "Number of storage processes (determines cluster size)",
"type": "integer",
"default": 3
}
}
},
"redundancyMode": {
"description": "Database redundancy mode (single, double, triple, three_datacenter, three_datacenter_fallback)",
"type": "string",
"default": "double"
},
"storageEngine": {
"description": "Storage engine (ssd-2, ssd-redwood-v1, ssd-rocksdb-v1, memory)",
"type": "string",
"default": "ssd-2"
},
"version": {
"description": "Version of FoundationDB to use",
"type": "string",
"default": "7.3.63"
}
}
},
"customParameters": {
"description": "Custom parameters to pass to FoundationDB",
"type": "array",
"default": [],
"items": {
"type": "string"
}
},
"imageType": {
"description": "Container image deployment type",
"type": "string",
"default": "unified",
"enum": [
"unified",
"split"
]
},
"monitoring": {
"description": "Monitoring configuration",
"type": "object",
"default": {},
"required": [
"enabled"
],
"properties": {
"enabled": {
"description": "Enable WorkloadMonitor integration",
"type": "boolean",
"default": true
}
}
},
"resources": {
"description": "Explicit CPU and memory configuration for each FoundationDB instance. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "CPU available to each instance",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory (RAM) available to each instance",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted. Allowed values: `small`, `medium`, `large`, `xlarge`, `2xlarge`.",
"type": "string",
"default": "medium",
"enum": [
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"securityContext": {
"description": "Security context for containers",
"type": "object",
"default": {},
"required": [
"runAsGroup",
"runAsUser"
],
"properties": {
"runAsGroup": {
"description": "Group ID to run the container",
"type": "integer",
"default": 4059
},
"runAsUser": {
"description": "User ID to run the container",
"type": "integer",
"default": 4059
}
}
},
"storage": {
"description": "Storage configuration",
"type": "object",
"default": {},
"required": [
"size",
"storageClass"
],
"properties": {
"size": {
"description": "Size of persistent volumes for each instance",
"default": "16Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "Storage class (if not set, uses cluster default)",
"type": "string"
}
}
}
}
}

View File

@@ -0,0 +1,93 @@
# Default values for foundationdb.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
## @section Common parameters
##
## @param cluster {cluster} Cluster configuration
## @field cluster.processCounts {clusterProcessCounts} Process counts for different roles
## @field clusterProcessCounts.stateless {int} Number of stateless processes (-1 for automatic)
## @field clusterProcessCounts.storage {int} Number of storage processes (determines cluster size)
## @field clusterProcessCounts.cluster_controller {int} Number of cluster controller processes
## @field cluster.version {string} Version of FoundationDB to use
## @field cluster.redundancyMode {string} Database redundancy mode (single, double, triple, three_datacenter, three_datacenter_fallback)
## @field cluster.storageEngine {string} Storage engine (ssd-2, ssd-redwood-v1, ssd-rocksdb-v1, memory)
## @field cluster.faultDomain {clusterFaultDomain} Fault domain configuration
## @field clusterFaultDomain.key {string} Fault domain key
## @field clusterFaultDomain.valueFrom {string} Fault domain value source
cluster:
processCounts:
stateless: -1 # Automatically calculated
storage: 3 # Number of storage processes (determines cluster size)
cluster_controller: 1
version: "7.3.63"
redundancyMode: "double" # Database redundancy mode
storageEngine: "ssd-2" # Storage engine
faultDomain:
key: "kubernetes.io/hostname"
valueFrom: "spec.nodeName"
## @param storage {storage} Storage configuration
## @field storage.size {quantity} Size of persistent volumes for each instance
## @field storage.storageClass {string} Storage class (if not set, uses cluster default)
storage:
size: "16Gi"
storageClass: ""
## @param resources {*resources} Explicit CPU and memory configuration for each FoundationDB instance. When left empty, the preset defined in `resourcesPreset` is applied.
## @field resources.cpu {*quantity} CPU available to each instance
## @field resources.memory {*quantity} Memory (RAM) available to each instance
resources: {}
# resources:
# cpu: 2000m
# memory: 4Gi
## @param resourcesPreset {string enum:"small,medium,large,xlarge,2xlarge"} Default sizing preset used when `resources` is omitted. Allowed values: `small`, `medium`, `large`, `xlarge`, `2xlarge`.
resourcesPreset: "medium"
## @param backup {backup} Backup configuration
## @field backup.enabled {bool} Enable backups
## @field backup.s3 {backupS3} S3 configuration for backups
## @field backupS3.bucket {string} S3 bucket name
## @field backupS3.endpoint {string} S3 endpoint URL
## @field backupS3.region {string} S3 region
## @field backupS3.credentials {backupS3Credentials} S3 credentials
## @field backupS3Credentials.accessKeyId {string} S3 access key ID
## @field backupS3Credentials.secretAccessKey {string} S3 secret access key
## @field backup.retentionPolicy {string} Retention policy for backups
backup:
enabled: false
s3:
bucket: ""
endpoint: ""
region: "us-east-1"
credentials:
accessKeyId: ""
secretAccessKey: ""
retentionPolicy: "7d"
## @param monitoring {monitoring} Monitoring configuration
## @field monitoring.enabled {bool} Enable WorkloadMonitor integration
monitoring:
enabled: true
## @section FoundationDB configuration
##
## @param customParameters {[]string} Custom parameters to pass to FoundationDB
customParameters: []
# Example:
# - knob_disable_posix_kernel_aio=1
## @param imageType {string enum:"unified,split"} Container image deployment type
imageType: "unified"
## @param securityContext {securityContext} Security context for containers
## @field securityContext.runAsUser {int} User ID to run the container
## @field securityContext.runAsGroup {int} Group ID to run the container
securityContext:
runAsUser: 4059
runAsGroup: 4059
## @param automaticReplacements {bool} Enable automatic pod replacements
automaticReplacements: true

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:b7633717cd7449c0042ae92d8ca9b36e4d69566561f5c7d44e21058e7d05c6d5
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:50ac1581e3100bd6c477a71161cb455a341ffaf9e5e2f6086802e4e25271e8af

View File

@@ -1,4 +1,4 @@
KUBERNETES_VERSION = v1.32
KUBERNETES_VERSION = v1.33
KUBERNETES_PKG_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
include ../../../scripts/common-envs.mk

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:f0e1d9f9e91be8e4a22be9fbe01a8b0e81aba4230b865fba9608ef7f9fb5745f
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:c8b08084a86251cdd18e237de89b695bca0e4f7eb1f1f6ddc2b903b4d74ea5ff

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:e53f2394c7aa76ad10818ffb945e40006cd77406999e47e036d41b8b0bf094cc
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:a09724a7f95283f9130b3da2a89d81c4c6051c6edf0392a81b6fc90f404b76b6

View File

@@ -266,6 +266,10 @@ metadata:
{{- end }}
spec:
clusterName: {{ $.Release.Name }}
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: {{ $.Release.Name }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ $groupName }}
template:
metadata:
labels:

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.0.0@sha256:a3789db9e9e065ff60cbac70771b4a8aa1460db3194307cf5ca5d4fe1b412b6b
ghcr.io/cozystack/cozystack/mariadb-backup:0.0.0@sha256:1c0beb1b23a109b0e13727b4c73d2c74830e11cede92858ab20101b66f45a858

View File

@@ -4,8 +4,6 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
labels:
apps.cozystack.io/tenantresource: "false"
stringData:
backup.sh: |
#!/bin/sh

View File

@@ -20,8 +20,6 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
labels:
internal.cozystack.io/tenantsecret: "true"
stringData:
{{- range $user, $u := .Values.users }}
{{ quote $user }}: {{ quote (index $passwords $user) }}
@@ -32,8 +30,6 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-init-script
labels:
apps.cozystack.io/tenantresource: "false"
stringData:
init.sh: |
#!/bin/bash

View File

@@ -58,6 +58,8 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ $.Release.Name }}-{{ kebabcase $user }}-credentials
labels:
apps.cozystack.io/user-secret: "true"
type: Opaque
stringData:
username: {{ $user }}

View File

@@ -1,6 +1,7 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- if eq $oidcEnabled "true" }}
{{- if .Capabilities.APIVersions.Has "v1.edp.epam.com/v1" }}
apiVersion: v1.edp.epam.com/v1
kind: KeycloakRealmGroup
metadata:
@@ -51,3 +52,4 @@ spec:
name: keycloakrealm-cozy
kind: ClusterKeycloakRealm
{{- end }}
{{- end }}

View File

@@ -5,6 +5,7 @@ kind: Service
metadata:
name: {{ include "virtual-machine.fullname" . }}
labels:
apps.cozystack.io/user-service: "true"
{{- include "virtual-machine.labels" . | nindent 4 }}
annotations:
networking.cozystack.io/wholeIP: "true"

View File

@@ -5,6 +5,7 @@ kind: Service
metadata:
name: {{ include "virtual-machine.fullname" . }}
labels:
apps.cozystack.io/user-service: "true"
{{- include "virtual-machine.labels" . | nindent 4 }}
annotations:
networking.cozystack.io/wholeIP: "true"

View File

@@ -22,8 +22,6 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-vpn
labels:
apps.cozystack.io/tenantresource: "false"
type: Opaque
stringData:
shadowbox_server_config.json: |

View File

@@ -34,7 +34,7 @@ FROM alpine:3.22
RUN wget -O- https://github.com/cozystack/cozypkg/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.2.0
RUN apk add --no-cache make kubectl coreutils git jq
RUN apk add --no-cache make kubectl helm coreutils git jq
COPY --from=builder /src/scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: initramfs
imageOptions: {}

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: installer
imageOptions: {}

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: iso
imageOptions: {}

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: kernel
imageOptions: {}

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: nocloud
secureboot: false
version: v1.10.6
version: v1.11.3
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.6"
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250708@sha256:83fdaaf4a44e8574f792f2fb9d0dc5f1ff4817179cbba9ebcb4bc3249b732556
- imageRef: ghcr.io/siderolabs/amdgpu:20250708-v1.10.6@sha256:5e67db022f62ae9157d19cbcbcf8c96f19a040e26cfe7e0d5709a15b90413c43
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708@sha256:f0fc731f3ff1bf417e9bd4dd3f7281e25a6f4e849358a1b46eb41a15066c4bd3
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708@sha256:9f4c41baa3795fd1457bbb0826a3618e7425f465e99c4647a459217c8b723e6d
- imageRef: ghcr.io/siderolabs/i915:20250708-v1.10.6@sha256:c7d17f6e4e87c8d344f54a02af20631b6cea0f3053d182649b9977857100ce79
- imageRef: ghcr.io/siderolabs/intel-ucode:20250512@sha256:67a0e0de018229a0d44d950fb730f62311cf7fbf4e267978ebbbc42b5e6a32ae
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708@sha256:97124ee3594ab1529c8153b633f85f2d2de1252ee8222a77f81904dcabd76815
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.6@sha256:ca7fba878c5acb8fdfe130a39472a6c0a5c9dd74d65ba7507c09780a873b29c7
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.6@sha256:4952ef7306cf014823b6a66cf6d29840f4c6b7b362e36f9d6e853846c7dd0025
- imageRef: ghcr.io/siderolabs/lldpd:1.0.19@sha256:73caa3c3a6c325970d0f527963f982698154d5f39c8c045b0fc2eb51d7da7b85
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/lldpd:1.0.20@sha256:4c6370518f5b2e1f03214a6ed54778eaea663fda8850e3f4da174ed69b636172
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }

View File

@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.37.0-alpha.2@sha256:ee9fa3a0c7599bb66be2842050aba58660fe497f6878b6ebd3ca0047c349b3ad
image: ghcr.io/cozystack/cozystack/installer:v0.37.0@sha256:256c5a0f0ae2fc3ad6865b9fda74c42945b38a5384240fa29554617185b60556

View File

@@ -68,6 +68,12 @@ releases:
disableTelemetry: true
{{- end }}
- name: lineage-controller-webhook
releaseName: lineage-controller-webhook
chart: cozy-lineage-controller-webhook
namespace: cozy-system
dependsOn: [cozystack-controller,cilium,cert-manager]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
@@ -155,6 +161,13 @@ releases:
optional: true
dependsOn: [cilium,victoria-metrics-operator]
- name: foundationdb-operator
releaseName: foundationdb-operator
chart: cozy-foundationdb-operator
namespace: cozy-foundationdb-operator
optional: true
dependsOn: [cilium,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
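
For context, every entry in these releases lists follows the same shape; a hypothetical operator gated behind two dependencies would look like:

- name: example-operator              # hypothetical release name
  releaseName: example-operator
  chart: cozy-example-operator        # hypothetical chart name
  namespace: cozy-example-operator
  optional: true                      # not installed unless enabled in the bundle
  dependsOn: [cilium, cert-manager]   # reconciled only after these releases are ready

The lineage-controller-webhook and foundationdb-operator additions below repeat this pattern across the other bundle variants, with dependsOn adjusted to each bundle's networking stack.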

View File

@@ -36,6 +36,12 @@ releases:
disableTelemetry: true
{{- end }}
- name: lineage-controller-webhook
releaseName: lineage-controller-webhook
chart: cozy-lineage-controller-webhook
namespace: cozy-system
dependsOn: [cozystack-controller,cert-manager]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
@@ -116,6 +122,13 @@ releases:
optional: true
dependsOn: [victoria-metrics-operator]
- name: foundationdb-operator
releaseName: foundationdb-operator
chart: cozy-foundationdb-operator
namespace: cozy-foundationdb-operator
optional: true
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -105,6 +105,24 @@ releases:
disableTelemetry: true
{{- end }}
- name: lineage-controller-webhook
releaseName: lineage-controller-webhook
chart: cozy-lineage-controller-webhook
namespace: cozy-system
dependsOn: [cozystack-controller,cilium,kubeovn,cert-manager]
- name: cozystack-resource-definition-crd
releaseName: cozystack-resource-definition-crd
chart: cozystack-resource-definition-crd
namespace: cozy-system
dependsOn: [cilium,kubeovn,cozystack-api,cozystack-controller]
- name: cozystack-resource-definitions
releaseName: cozystack-resource-definitions
chart: cozystack-resource-definitions
namespace: cozy-system
dependsOn: [cilium,kubeovn,cozystack-api,cozystack-controller,cozystack-resource-definition-crd]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
@@ -230,6 +248,12 @@ releases:
namespace: cozy-clickhouse-operator
dependsOn: [cilium,kubeovn,victoria-metrics-operator]
- name: foundationdb-operator
releaseName: foundationdb-operator
chart: cozy-foundationdb-operator
namespace: cozy-foundationdb-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -52,6 +52,24 @@ releases:
disableTelemetry: true
{{- end }}
- name: lineage-controller-webhook
releaseName: lineage-controller-webhook
chart: cozy-lineage-controller-webhook
namespace: cozy-system
dependsOn: [cozystack-controller,cert-manager]
- name: cozystack-resource-definition-crd
releaseName: cozystack-resource-definition-crd
chart: cozystack-resource-definition-crd
namespace: cozy-system
dependsOn: [cozystack-api,cozystack-controller]
- name: cozystack-resource-definitions
releaseName: cozystack-resource-definitions
chart: cozystack-resource-definitions
namespace: cozy-system
dependsOn: [cozystack-api,cozystack-controller,cozystack-resource-definition-crd]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
@@ -123,6 +141,12 @@ releases:
namespace: cozy-clickhouse-operator
dependsOn: [victoria-metrics-operator]
- name: foundationdb-operator
releaseName: foundationdb-operator
chart: cozy-foundationdb-operator
namespace: cozy-foundationdb-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.37.0-alpha.2@sha256:de82519817d9f7447c7f5e957df590cf3150c24c8d048ccbf954550bb0999123
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.37.0@sha256:10afd0a6c39248ec41d0e59ff1bc6c29bd0075b7cc9a512b01cf603ef39c33ea

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.37.0-alpha.2@sha256:868c37a32d767ed75a1ec9ae1a2a0d26a08f89b7b149dd095cb27b94f3d385df
ghcr.io/cozystack/cozystack/matchbox:v0.37.0@sha256:5cca5f56b755285aefa11b1052fe55e1aa83b25bae34aef80cdb77ff63091044

View File

@@ -4,12 +4,12 @@
### Common parameters
| Name | Description | Type | Value |
| ------------------ | ----------------------------------- | ----------- | ------ |
| `size` | Persistent Volume size | `*quantity` | `4Gi` |
| `storageClass` | StorageClass used to store the data | `*string` | `""` |
| `replicas` | Number of etcd replicas | `*int` | `3` |
| `resources` | Resource configuration for etcd | `*object` | `null` |
| `resources.cpu` | The number of CPU cores allocated | `*quantity` | `4` |
| `resources.memory` | The amount of memory allocated | `*quantity` | `1Gi` |
| Name | Description | Type | Value |
| ------------------ | ----------------------------------- | ----------- | ------- |
| `size` | Persistent Volume size | `*quantity` | `4Gi` |
| `storageClass` | StorageClass used to store the data | `*string` | `""` |
| `replicas` | Number of etcd replicas | `*int` | `3` |
| `resources` | Resource configuration for etcd | `*object` | `null` |
| `resources.cpu` | The number of CPU cores allocated | `*quantity` | `1000m` |
| `resources.memory` | The amount of memory allocated | `*quantity` | `512Mi` |

View File

@@ -0,0 +1,21 @@
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: etcd
spec:
targetRef:
apiVersion: apps/v1
kind: StatefulSet
name: etcd
updatePolicy:
updateMode: Auto
resourcePolicy:
containerPolicies:
- containerName: etcd
{{- with dict "cpu" "250m" "memory" "256Mi" }}
minAllowed: {{- get (include "cozy-lib.resources.sanitize" (list . $) | fromYaml) "requests" | toYaml | nindent 8 }}
{{- end }}
{{- with dict "cpu" "5000m" "memory" "8Gi" }}
maxAllowed: {{- get (include "cozy-lib.resources.sanitize" (list . $) | fromYaml) "requests" | toYaml | nindent 8 }}
{{- end }}
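
Assuming cozy-lib.resources.sanitize normalizes a cpu/memory map into a requests: section (which is how it is used here), the template above renders to roughly:

resourcePolicy:
  containerPolicies:
  - containerName: etcd
    minAllowed:
      cpu: 250m
      memory: 256Mi
    maxAllowed:
      cpu: 5000m
      memory: 8Gi

Together with the lowered defaults (1000m / 512Mi), this keeps etcd requests between those bounds while updateMode: Auto lets the autoscaler apply new requests automatically.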

View File

@@ -14,7 +14,7 @@
"properties": {
"cpu": {
"description": "The number of CPU cores allocated",
"default": 4,
"default": "1000m",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
@@ -28,7 +28,7 @@
},
"memory": {
"description": "The amount of memory allocated",
"default": "1Gi",
"default": "512Mi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{

View File

@@ -12,5 +12,5 @@ replicas: 3
## @field resources.cpu {*quantity} The number of CPU cores allocated
## @field resources.memory {*quantity} The amount of memory allocated
resources:
cpu: 4
memory: 1Gi
cpu: 1000m
memory: 512Mi

View File

@@ -10,11 +10,11 @@ rules:
resources:
- secrets
resourceNames:
- {{- if eq $oidcEnabled "true" -}}
kubeconfig-{{ .Release.Namespace }}
{{- else -}}
tenant-{{ .Release.Namespace }}
{{- end }}
{{- if eq $oidcEnabled "true" }}
- kubeconfig-{{ .Release.Namespace }}
{{- else }}
- {{ .Release.Namespace }}
{{- end }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding

View File

@@ -72,6 +72,8 @@
| `alerta.alerts.telegram.token` | Telegram token for your bot | `string` | `""` |
| `alerta.alerts.telegram.chatID` | Specify multiple IDs separated by commas. Get yours at https://t.me/chatid_echo_bot | `string` | `""` |
| `alerta.alerts.telegram.disabledSeverity` | Comma-separated list of severities to exclude from alerts, e.g. "informational,warning" | `string` | `""` |
| `alerta.alerts.slack` | Configuration for Slack alerts | `*object` | `null` |
| `alerta.alerts.slack.url` | Webhook URL for Slack alerts | `*string` | `""` |
### Grafana configuration

View File

@@ -109,9 +109,20 @@ spec:
- name: AUTH_REQUIRED
value: "True"
{{- $plugins := list }}
{{- if and .Values.alerta.alerts.telegram.chatID .Values.alerta.alerts.telegram.token }}
{{- $plugins = append $plugins "telegram" }}
{{- end }}
{{- if .Values.alerta.alerts.slack.url }}
{{- $plugins = append $plugins "slack" }}
{{- end }}
{{- if gt (len $plugins) 0 }}
- name: "PLUGINS"
value: "telegram"
value: "{{ default "" (join "," $plugins) }}"
{{- end }}
{{- if and .Values.alerta.alerts.telegram.chatID .Values.alerta.alerts.telegram.token }}
- name: TELEGRAM_CHAT_ID
value: "{{ .Values.alerta.alerts.telegram.chatID }}"
- name: TELEGRAM_TOKEN
@@ -122,6 +133,11 @@ spec:
value: "{{ .Values.alerta.alerts.telegram.disabledSeverity }}"
{{- end }}
{{- if .Values.alerta.alerts.slack.url }}
- name: "SLACK_WEBHOOK_URL"
value: "{{ .Values.alerta.alerts.slack.url }}"
{{- end }}
ports:
- name: http
containerPort: 8080
@@ -192,8 +208,6 @@ apiVersion: v1
kind: Secret
metadata:
name: alertmanager
labels:
apps.cozystack.io/tenantresource: "false"
type: Opaque
stringData:
alertmanager.yaml: |
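
With both channels configured, the $plugins list collects every enabled integration and the block above renders as (token, chat ID, and webhook are hypothetical):

- name: "PLUGINS"
  value: "telegram,slack"
- name: TELEGRAM_CHAT_ID
  value: "-1001234567890"
- name: TELEGRAM_TOKEN
  value: "123456:ABC-hypothetical"
- name: "SLACK_WEBHOOK_URL"
  value: "https://hooks.slack.com/services/T000/B000/XXXX"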

View File

@@ -12,6 +12,17 @@
"type": "object",
"default": {},
"properties": {
"slack": {
"description": "Configuration for Slack alerts",
"type": "object",
"default": {},
"properties": {
"url": {
"description": "Webhook URL for Slack alerts",
"type": "string"
}
}
},
"telegram": {
"description": "Configuration for Telegram alerts",
"type": "object",

View File

@@ -90,6 +90,8 @@ logsStorages:
## @field telegramAlerts.token {string} Telegram token for your bot
## @field telegramAlerts.chatID {string} Specify multiple IDs separated by commas. Get yours at https://t.me/chatid_echo_bot
## @field telegramAlerts.disabledSeverity {string} Comma-separated list of severities to exclude from alerts, e.g. "informational,warning"
## @field alerts.slack {*slackAlerts} Configuration for Slack alerts
## @field slackAlerts.url {*string} Webhook URL for Slack alerts
alerta:
storage: 10Gi
storageClassName: ""
@@ -112,6 +114,9 @@ alerta:
chatID: ""
disabledSeverity: ""
slack:
url: ""
## @section Grafana configuration
## @param grafana {grafana} Configuration for Grafana
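
A tenant enabling both alert channels would therefore set (all values hypothetical):

alerta:
  alerts:
    telegram:
      token: "123456:ABC-hypothetical"
      chatID: "-1001234567890"
    slack:
      url: "https://hooks.slack.com/services/T000/B000/XXXX"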

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.37.0-alpha.2@sha256:3bde5040e9e6ef1afa000a8cfdb7a2ed2e503d4913fffc992fccf48f0e61057c
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.37.0@sha256:f166f09cdc9cdbb758209883819ab8261a3793bc1d7a6b6685efd5a2b2930847

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:40da96d516d4400366c2f451d8b87a57fa8dc3cb5ce2d958318bd18cd2514528
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:7348bec610f08bd902c88c9a9f28fdd644727e2728a1e4103f88f0c99febd5e7

View File

@@ -3,6 +3,7 @@
{{- $accessKeyID := index $bucketInfo.spec.secretS3 "accessKeyID" }}
{{- $accessSecretKey := index $bucketInfo.spec.secretS3 "accessSecretKey" }}
{{- $endpoint := index $bucketInfo.spec.secretS3 "endpoint" }}
{{- $bucketName := index $bucketInfo.spec "bucketName" }}
apiVersion: v1
kind: Secret
@@ -13,6 +14,7 @@ stringData:
accessKey: {{ $accessKeyID | quote }}
secretKey: {{ $accessSecretKey | quote }}
endpoint: {{ trimPrefix "https://" $endpoint }}
bucketName: {{ $bucketName | quote }}
---
apiVersion: v1
kind: Secret
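
Rendered with hypothetical bucket data, the first secret now also exposes the bucket name to consumers:

stringData:
  accessKey: "AKIAEXAMPLE"
  secretKey: "secret-example"
  endpoint: s3.example.com          # https:// prefix trimmed by the template
  bucketName: "tenant-foo-bucket"   # newly added field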

View File

@@ -79,7 +79,7 @@ annotations:
Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can
be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
apiVersion: v2
appVersion: 1.17.5
appVersion: 1.17.8
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +95,4 @@ kubeVersion: '>= 1.21.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
version: 1.17.5
version: 1.17.8

View File

@@ -1,6 +1,6 @@
# cilium
![Version: 1.17.5](https://img.shields.io/badge/Version-1.17.5-informational?style=flat-square) ![AppVersion: 1.17.5](https://img.shields.io/badge/AppVersion-1.17.5-informational?style=flat-square)
![Version: 1.17.8](https://img.shields.io/badge/Version-1.17.8-informational?style=flat-square) ![AppVersion: 1.17.8](https://img.shields.io/badge/AppVersion-1.17.8-informational?style=flat-square)
Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -85,7 +85,7 @@ contributors across the globe, there is almost always someone available to help.
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:d82f458899c9696cb26a7c02d5568f81c8c8223f8661bb2a7988b269c8b9051e","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -197,7 +197,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.5","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:3ac210d94d37a77ec010f9ac4c705edc8f15f22afa2b9a6f0e2a7d64d2360586","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.8","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
@@ -378,7 +378,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.healthPort | int | `9878` | TCP port for the health API. |
| envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626","useDigest":true}` | Envoy container image. |
| envoy.image | object | `{"digest":"sha256:06fbc4e55d926dd82ff2a0049919248dcc6be5354609b09012b01bc9c5b0ee28","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.33.9-1757932127-3c04e8f2f1027d106b96f8ef4a0215e81dbaaece","useDigest":true}` | Envoy container image. |
| envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
@@ -429,7 +429,6 @@ contributors across the globe, there is almost always someone available to help.
| etcd.enabled | bool | `false` | Enable etcd mode for the agent. |
| etcd.endpoints | list | `["https://CHANGE-ME:2379"]` | List of etcd endpoints |
| etcd.ssl | bool | `false` | Enable use of TLS/SSL for connectivity to etcd. |
| externalIPs.enabled | bool | `false` | Enable ExternalIPs service support. |
| externalWorkloads | object | `{"enabled":false}` | Configure external workloads support |
| externalWorkloads.enabled | bool | `false` | Enable support for external workloads, such as VMs (false by default). |
| extraArgs | list | `[]` | Additional agent container arguments. |
@@ -519,7 +518,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.5","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.image | object | `{"digest":"sha256:2e576bf7a02291c07bffbc1ca0a66a6c70f4c3eb155480e5b3ac027bedd2858b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.8","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -586,7 +585,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.backend.extraEnv | list | `[]` | Additional hubble-ui backend environment variables. |
| hubble.ui.backend.extraVolumeMounts | list | `[]` | Additional hubble-ui backend volumeMounts. |
| hubble.ui.backend.extraVolumes | list | `[]` | Additional hubble-ui backend volumes. |
| hubble.ui.backend.image | object | `{"digest":"sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.13.2","useDigest":true}` | Hubble-ui backend image. |
| hubble.ui.backend.image | object | `{"digest":"sha256:db1454e45dc39ca41fbf7cad31eec95d99e5b9949c39daaad0fa81ef29d56953","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.13.3","useDigest":true}` | Hubble-ui backend image. |
| hubble.ui.backend.livenessProbe.enabled | bool | `false` | Enable liveness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
| hubble.ui.backend.readinessProbe.enabled | bool | `false` | Enable readiness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
| hubble.ui.backend.resources | object | `{}` | Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment. |
@@ -596,7 +595,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.frontend.extraEnv | list | `[]` | Additional hubble-ui frontend environment variables. |
| hubble.ui.frontend.extraVolumeMounts | list | `[]` | Additional hubble-ui frontend volumeMounts. |
| hubble.ui.frontend.extraVolumes | list | `[]` | Additional hubble-ui frontend volumes. |
| hubble.ui.frontend.image | object | `{"digest":"sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.13.2","useDigest":true}` | Hubble-ui frontend image. |
| hubble.ui.frontend.image | object | `{"digest":"sha256:661d5de7050182d495c6497ff0b007a7a1e379648e60830dd68c4d78ae21761d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.13.3","useDigest":true}` | Hubble-ui frontend image. |
| hubble.ui.frontend.resources | object | `{}` | Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment. |
| hubble.ui.frontend.securityContext | object | `{}` | Hubble-ui frontend security context. |
| hubble.ui.frontend.server.ipv6 | object | `{"enabled":true}` | Controls server listener for ipv6 |
@@ -626,7 +625,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Agent container image. |
| image | object | `{"digest":"sha256:6d7ea72ed311eeca4c75a1f17617a3d596fb6038d30d00799090679f82a01636","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.8","useDigest":true}` | Agent container image. |
| imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -737,7 +736,7 @@ contributors across the globe, there is almost always someone available to help.
| nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. |
| nodeinit.extraVolumeMounts | list | `[]` | Additional nodeinit volumeMounts. |
| nodeinit.extraVolumes | list | `[]` | Additional nodeinit volumes. |
| nodeinit.image | object | `{"digest":"sha256:8d7b41c4ca45860254b3c19e20210462ef89479bb6331d6760c4e609d651b29c","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"c54c7edeab7fde4da68e59acd319ab24af242c3f","useDigest":true}` | node-init image. |
| nodeinit.image | object | `{"digest":"sha256:5bdca3c2dec2c79f58d45a7a560bf1098c2126350c901379fe850b7f78d3d757","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"1755531540-60ee83e","useDigest":true}` | node-init image. |
| nodeinit.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for nodeinit pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| nodeinit.podAnnotations | object | `{}` | Annotations to be added to node-init pods. |
| nodeinit.podLabels | object | `{}` | Labels to be added to node-init pods. |
@@ -764,7 +763,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.hostNetwork | bool | `true` | HostNetwork setting |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259","awsDigest":"sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3","azureDigest":"sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026","genericDigest":"sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.5","useDigest":true}` | cilium-operator image. |
| operator.image | object | `{"alibabacloudDigest":"sha256:72c25a405ad8e58d2cf03f7ea2b6696ed1edcfb51716b5f85e45c6c4fcaa6056","awsDigest":"sha256:28012f7d0f4f23e9f6c7d6a5dd931afa326bbac3e8103f3f6f22b9670847dffa","azureDigest":"sha256:619f9febf3efef2724a26522b253e4595cd33c274f5f49925e29a795fdc2d2d7","genericDigest":"sha256:5468807b9c31997f3a1a14558ec7c20c5b962a2df6db633b7afbe2f45a15da1c","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.8","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -801,7 +800,7 @@ contributors across the globe, there is almost always someone available to help.
| pmtuDiscovery.enabled | bool | `false` | Enable path MTU discovery to send ICMP fragmentation-needed replies to the client. |
| podAnnotations | object | `{}` | Annotations to be added to agent pods |
| podLabels | object | `{}` | Labels to be added to agent pods |
| podSecurityContext | object | `{"appArmorProfile":{"type":"Unconfined"}}` | Security Context for cilium-agent pods. |
| podSecurityContext | object | `{"appArmorProfile":{"type":"Unconfined"},"seccompProfile":{"type":"Unconfined"}}` | Security Context for cilium-agent pods. |
| podSecurityContext.appArmorProfile | object | `{"type":"Unconfined"}` | AppArmorProfile options for the `cilium-agent` and init containers |
| policyCIDRMatchMode | string | `nil` | policyCIDRMatchMode is a list of entities that may be selected by CIDR selector. The possible value is "nodes". |
| policyEnforcementMode | string | `"default"` | The agent can be put into one of the three policy enforcement modes: default, always and never. ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes |
@@ -814,7 +813,7 @@ contributors across the globe, there is almost always someone available to help.
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Cilium pre-flight image. |
| preflight.image | object | `{"digest":"sha256:6d7ea72ed311eeca4c75a1f17617a3d596fb6038d30d00799090679f82a01636","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.8","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |

View File

@@ -3,6 +3,24 @@ _extensions.tpl contains template blocks that are intended to allow packagers
to modify or extend the default chart behaviors.
*/}}
{{/*
Allow packagers to add extra volumes to cilium-agent.
*/}}
{{- define "cilium-agent.volumes.extra" }}
{{- end }}
{{- define "cilium-agent.volumeMounts.extra" }}
{{- end }}
{{/*
Allow packagers to set dnsPolicy for cilium-agent.
*/}}
{{- define "cilium-agent.dnsPolicy" }}
{{- if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
{{- end }}
{{/*
Intentionally empty to allow downstream chart packagers to add extra
containers to hubble-relay without having to modify the deployment manifest

View File

@@ -399,6 +399,7 @@ spec:
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "cilium-agent.volumeMounts.extra" . | nindent 8 }}
{{- if .Values.monitor.enabled }}
- name: cilium-monitor
image: {{ include "cilium.image" .Values.image | quote }}
@@ -768,9 +769,7 @@ spec:
automountServiceAccountToken: {{ .Values.serviceAccounts.cilium.automount }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
hostNetwork: true
{{- if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
{{- include "cilium-agent.dnsPolicy" . | nindent 6 }}
{{- if (eq .Values.scheduling.mode "anti-affinity") }}
{{- with .Values.affinity }}
affinity:
@@ -1063,4 +1062,5 @@ spec:
{{- with .Values.extraVolumes }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- include "cilium-agent.volumes.extra" . | nindent 6 }}
{{- end }}

View File

@@ -735,7 +735,7 @@ data:
kube-proxy-replacement: {{ $kubeProxyReplacement | quote }}
{{- if ne $kubeProxyReplacement "disabled" }}
{{- if eq $kubeProxyReplacement "true" }}
kube-proxy-replacement-healthz-bind-address: {{ default "" .Values.kubeProxyReplacementHealthzBindAddr | quote}}
{{- end }}
@@ -755,17 +755,13 @@ data:
{{- end }}
{{- if hasKey .Values "hostPort" }}
{{- if eq $kubeProxyReplacement "partial" }}
{{- if eq $kubeProxyReplacement "false" }}
enable-host-port: {{ .Values.hostPort.enabled | quote }}
{{- end }}
{{- end }}
{{- if hasKey .Values "externalIPs" }}
{{- if eq $kubeProxyReplacement "partial" }}
enable-external-ips: {{ .Values.externalIPs.enabled | quote }}
{{- end }}
{{- end }}
{{- if hasKey .Values "nodePort" }}
{{- if or (eq $kubeProxyReplacement "partial") (eq $kubeProxyReplacement "false") }}
{{- if eq $kubeProxyReplacement "false" }}
enable-node-port: {{ .Values.nodePort.enabled | quote }}
{{- end }}
{{- if hasKey .Values.nodePort "range" }}
@@ -1031,7 +1027,7 @@ data:
hubble-drop-events-interval: {{ .Values.hubble.dropEventEmitter.interval | quote }}
hubble-drop-events-reasons: {{ .Values.hubble.dropEventEmitter.reasons | join " " | quote }}
{{- end }}
{{- if .Values.hubble.preferIpv6 }}
{{- if or (eq .Values.hubble.preferIpv6 true) (eq .Values.ipv4.enabled false) }}
hubble-prefer-ipv6: "true"
{{- end }}
{{- if (not (kindIs "invalid" .Values.hubble.skipUnknownCGroupIDs)) }}
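
These conditionals track cilium's move from the legacy partial/strict strings to boolean-style "true"/"false" for kube-proxy-replacement; the per-feature toggles now apply only when replacement is off, e.g.:

kubeProxyReplacement: "false"   # keep kube-proxy; enable selected features only
hostPort:
  enabled: true
nodePort:
  enabled: true

Note that the enable-external-ips block is dropped entirely, matching the removal of the externalIPs option from the values and schema below.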

View File

@@ -222,6 +222,9 @@ spec:
name: cilium-config
key: enable-k8s-endpoint-slice
optional: true
{{- with .Values.clustermesh.apiserver.extraEnv }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
readinessProbe:
httpGet:
path: /readyz
@@ -229,9 +232,6 @@ spec:
{{- with .Values.clustermesh.apiserver.readinessProbe }}
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- with .Values.clustermesh.apiserver.extraEnv }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
ports:
- name: apiserv-health
containerPort: {{ .Values.clustermesh.apiserver.healthPort }}

View File

@@ -535,10 +535,16 @@
"default": {
"properties": {
"burstLimit": {
"type": "null"
"type": [
"null",
"integer"
]
},
"rateLimit": {
"type": "null"
"type": [
"null",
"integer"
]
}
},
"type": "object"
@@ -2351,14 +2357,6 @@
},
"type": "object"
},
"externalIPs": {
"properties": {
"enabled": {
"type": "boolean"
}
},
"type": "object"
},
"externalWorkloads": {
"properties": {
"enabled": {
@@ -4653,6 +4651,14 @@
}
},
"type": "object"
},
"seccompProfile": {
"properties": {
"type": {
"type": "string"
}
},
"type": "object"
}
},
"type": "object"

View File

@@ -191,10 +191,10 @@ image:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.17.5"
tag: "v1.17.8"
pullPolicy: "IfNotPresent"
# cilium-digest
digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
digest: "sha256:6d7ea72ed311eeca4c75a1f17617a3d596fb6038d30d00799090679f82a01636"
useDigest: true
# -- Scheduling configurations for cilium pods
scheduling:
@@ -270,6 +270,8 @@ podSecurityContext:
# -- AppArmorProfile options for the `cilium-agent` and init containers
appArmorProfile:
type: "Unconfined"
seccompProfile:
type: "Unconfined"
# -- Annotations to be added to agent pods
podAnnotations: {}
# -- Labels to be added to agent pods
@@ -508,6 +510,9 @@ bpf:
events:
# -- Default settings for all types of events except dbg and pcap.
default:
# @schema
# type: [null, integer]
# @schema
# -- (int) Configure the limit of messages per second that can be written to
# BPF events map. The number of messages is averaged, meaning that if no messages
# were written to the map over 5 seconds, it's possible to write more events
@@ -516,6 +521,9 @@ bpf:
# and rateLimit to 0 disables BPF events rate limiting.
# @default -- `0`
rateLimit: ~
# @schema
# type: [null, integer]
# @schema
# -- (int) Configure the maximum number of messages that can be written to BPF events
# map in 1 second. If burstLimit is greater than 0, non-zero value for rateLimit must
# also be provided lest the configuration is considered invalid. Setting both burstLimit
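
With the schema widened to [null, integer], these limits can now be set as plain integers instead of being left null, e.g.:

bpf:
  events:
    default:
      rateLimit: 1000    # average messages/sec written to the BPF events map
      burstLimit: 2000   # peak messages in one second; requires rateLimit > 0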
@@ -1071,9 +1079,6 @@ eni:
# -- Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances
# are going to be used to create new ENIs
instanceTagsFilter: []
externalIPs:
# -- Enable ExternalIPs service support.
enabled: false
# fragmentTracking enables IPv4 fragment tracking support in the datapath.
# fragmentTracking: true
gke:
@@ -1440,9 +1445,9 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-relay"
tag: "v1.17.5"
tag: "v1.17.8"
# hubble-relay-digest
digest: "sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff"
digest: "sha256:2e576bf7a02291c07bffbc1ca0a66a6c70f4c3eb155480e5b3ac027bedd2858b"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Specifies the resources for the hubble-relay pods
@@ -1691,8 +1696,8 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-ui-backend"
tag: "v0.13.2"
digest: "sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15"
tag: "v0.13.3"
digest: "sha256:db1454e45dc39ca41fbf7cad31eec95d99e5b9949c39daaad0fa81ef29d56953"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Hubble-ui backend security context.
@@ -1725,8 +1730,8 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-ui"
tag: "v0.13.2"
digest: "sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392"
tag: "v0.13.3"
digest: "sha256:661d5de7050182d495c6497ff0b007a7a1e379648e60830dd68c4d78ae21761d"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Hubble-ui frontend security context.
@@ -2353,9 +2358,9 @@ envoy:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626"
tag: "v1.33.9-1757932127-3c04e8f2f1027d106b96f8ef4a0215e81dbaaece"
pullPolicy: "IfNotPresent"
digest: "sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e"
digest: "sha256:06fbc4e55d926dd82ff2a0049919248dcc6be5354609b09012b01bc9c5b0ee28"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
extraContainers: []
@@ -2710,15 +2715,15 @@ operator:
# @schema
override: ~
repository: "quay.io/cilium/operator"
tag: "v1.17.5"
tag: "v1.17.8"
# operator-generic-digest
genericDigest: "sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e"
genericDigest: "sha256:5468807b9c31997f3a1a14558ec7c20c5b962a2df6db633b7afbe2f45a15da1c"
# operator-azure-digest
azureDigest: "sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026"
azureDigest: "sha256:619f9febf3efef2724a26522b253e4595cd33c274f5f49925e29a795fdc2d2d7"
# operator-aws-digest
awsDigest: "sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3"
awsDigest: "sha256:28012f7d0f4f23e9f6c7d6a5dd931afa326bbac3e8103f3f6f22b9670847dffa"
# operator-alibabacloud-digest
alibabacloudDigest: "sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259"
alibabacloudDigest: "sha256:72c25a405ad8e58d2cf03f7ea2b6696ed1edcfb51716b5f85e45c6c4fcaa6056"
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -2910,8 +2915,8 @@ nodeinit:
# @schema
override: ~
repository: "quay.io/cilium/startup-script"
tag: "c54c7edeab7fde4da68e59acd319ab24af242c3f"
digest: "sha256:8d7b41c4ca45860254b3c19e20210462ef89479bb6331d6760c4e609d651b29c"
tag: "1755531540-60ee83e"
digest: "sha256:5bdca3c2dec2c79f58d45a7a560bf1098c2126350c901379fe850b7f78d3d757"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the nodeinit pod.
@@ -2993,9 +2998,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.17.5"
tag: "v1.17.8"
# cilium-digest
digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
digest: "sha256:6d7ea72ed311eeca4c75a1f17617a3d596fb6038d30d00799090679f82a01636"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the preflight pod.
@@ -3142,9 +3147,9 @@ clustermesh:
# @schema
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
tag: "v1.17.5"
tag: "v1.17.8"
# clustermesh-apiserver-digest
digest: "sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d"
digest: "sha256:3ac210d94d37a77ec010f9ac4c705edc8f15f22afa2b9a6f0e2a7d64d2360586"
useDigest: true
pullPolicy: "IfNotPresent"
# -- TCP port for the clustermesh-apiserver health API.
@@ -3653,7 +3658,7 @@ authentication:
override: ~
repository: "docker.io/library/busybox"
tag: "1.37.0"
digest: "sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b"
digest: "sha256:d82f458899c9696cb26a7c02d5568f81c8c8223f8661bb2a7988b269c8b9051e"
useDigest: true
pullPolicy: "IfNotPresent"
# SPIRE agent configuration

View File

@@ -271,6 +271,8 @@ podSecurityContext:
# -- AppArmorProfile options for the `cilium-agent` and init containers
appArmorProfile:
type: "Unconfined"
seccompProfile:
type: "Unconfined"
# -- Annotations to be added to agent pods
podAnnotations: {}
# -- Labels to be added to agent pods
@@ -513,6 +515,9 @@ bpf:
events:
# -- Default settings for all types of events except dbg and pcap.
default:
# @schema
# type: [null, integer]
# @schema
# -- (int) Configure the limit of messages per second that can be written to
# BPF events map. The number of messages is averaged, meaning that if no messages
# were written to the map over 5 seconds, it's possible to write more events
@@ -521,6 +526,9 @@ bpf:
# and rateLimit to 0 disables BPF events rate limiting.
# @default -- `0`
rateLimit: ~
# @schema
# type: [null, integer]
# @schema
# -- (int) Configure the maximum number of messages that can be written to BPF events
# map in 1 second. If burstLimit is greater than 0, non-zero value for rateLimit must
# also be provided lest the configuration is considered invalid. Setting both burstLimit
@@ -1084,9 +1092,6 @@ eni:
# -- Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances
# are going to be used to create new ENIs
instanceTagsFilter: []
externalIPs:
# -- Enable ExternalIPs service support.
enabled: false
# fragmentTracking enables IPv4 fragment tracking support in the datapath.
# fragmentTracking: true
gke:

View File

@@ -1,2 +1,2 @@
ARG VERSION=v1.17.5
ARG VERSION=v1.17.8
FROM quay.io/cilium/cilium:${VERSION}

View File

@@ -14,7 +14,7 @@ cilium:
mode: "kubernetes"
image:
repository: ghcr.io/cozystack/cozystack/cilium
tag: 1.17.5
digest: "sha256:2def2dccfc17870be6e1d63584c25b32e812f21c9cdcfa06deadd2787606654d"
tag: 1.17.8
digest: "sha256:81262986a41487bfa3d0465091d3a386def5bd1ab476350bd4af2fdee5846fe6"
envoy:
enabled: false

Some files were not shown because too many files have changed in this diff.