Mirror of https://github.com/cozystack/cozystack.git (synced 2026-03-03 13:38:56 +00:00)

Compare commits: maintainer...v1.0.0 (2275 commits)
[Commit list omitted: SHA1-only entries from ab26d71cc7 through 7cfb90df10; the author and date columns are empty in the source.]
.github/CODEOWNERS (2 changes)
@@ -1 +1 @@
-* @kvaps
+* @kvaps @lllamnyp @lexfrei @androndo @IvanHunters @sircthulhu
.github/ISSUE_TEMPLATE/bug_report.md (new file, 50 lines)
@@ -0,0 +1,50 @@
---
name: Bug report
about: Create a report to help us improve
labels: 'bug'
assignees: ''

---
<!--
Thank you for submitting a bug report!
Please fill in the fields below to help us investigate the problem.
-->

**Describe the bug**
A clear and concise description of what the bug is.

**Environment**
- Cozystack version
- Provider: on-prem, Hetzner, and so on

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behaviour**
When taking the steps to reproduce, what should have happened differently?

**Actual behaviour**
A clear and concise description of what happens when the bug occurs. Explain how the system currently behaves, including error messages, unexpected results, or incorrect functionality observed during execution.


**Logs**
```
Paste any relevant logs here. Please redact tokens, passwords, private keys.
```

**Screenshots**
If applicable, add screenshots to help explain the problem.

**Additional context**
Add any other context about the problem here.

**Checklist**
- [ ] I have checked the documentation
- [ ] I have searched for similar issues
- [ ] I have included all required information
- [ ] I have provided clear steps to reproduce
- [ ] I have included relevant logs
.github/PULL_REQUEST_TEMPLATE.md (new file, 24 lines)
@@ -0,0 +1,24 @@
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of the Cozystack component:
  - For system components: [platform], [system], [linstor], [cilium], [kube-ovn], [dashboard], [cluster-api], etc.
  - For managed apps: [apps], [tenant], [kubernetes], [postgres], [virtual-machine], etc.
  - For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats, even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported to a previous version.
-->

## What this PR does


### Release note

<!-- Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title.
- Follow the guidelines at https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```
.github/workflows/auto-release.yaml (new file, 188 lines)
@@ -0,0 +1,188 @@
name: Auto Patch Release

on:
  schedule:
    # Run daily at 2:00 AM CET (1:00 UTC in winter, 0:00 UTC in summer)
    # Using 1:00 UTC to approximate 2:00 AM CET
    - cron: '0 1 * * *'
  workflow_dispatch: # Allow manual trigger

concurrency:
  group: auto-release-${{ github.workflow }}
  cancel-in-progress: false

jobs:
  auto-release:
    name: Auto Patch Release
    runs-on: [self-hosted]
    permissions:
      contents: write
      pull-requests: read

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Configure git
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        run: |
          git config user.name "cozystack-bot"
          git config user.email "217169706+cozystack-bot@users.noreply.github.com"
          git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
          git config --unset-all http.https://github.com/.extraheader || true

      - name: Process release branches
        uses: actions/github-script@v7
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        with:
          github-token: ${{ secrets.GH_PAT }}
          script: |
            const { execSync } = require('child_process');

            // Configure git to use PAT for authentication
            execSync('git config user.name "cozystack-bot"', { encoding: 'utf8' });
            execSync('git config user.email "217169706+cozystack-bot@users.noreply.github.com"', { encoding: 'utf8' });
            execSync(`git remote set-url origin https://cozystack-bot:${process.env.GH_PAT}@github.com/${process.env.GITHUB_REPOSITORY}`, { encoding: 'utf8' });
            // Remove GITHUB_TOKEN extraheader to ensure PAT is used (needed to trigger other workflows)
            execSync('git config --unset-all http.https://github.com/.extraheader || true', { encoding: 'utf8' });

            // Get all release-X.Y branches
            const branches = execSync('git branch -r | grep -E "origin/release-[0-9]+\\.[0-9]+$" | sed "s|origin/||" | tr -d " "', { encoding: 'utf8' })
              .split('\n')
              .filter(b => b.trim())
              .filter(b => /^release-\d+\.\d+$/.test(b));

            console.log(`Found ${branches.length} release branches: ${branches.join(', ')}`);

            // Get all published releases (not draft)
            const allReleases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo,
              per_page: 100
            });

            // Filter to only published releases (not draft) with tags matching vX.Y.Z (no suffixes)
            const publishedReleases = allReleases.data
              .filter(r => !r.draft)
              .filter(r => /^v\d+\.\d+\.\d+$/.test(r.tag_name));

            console.log(`Found ${publishedReleases.length} published releases without suffixes`);

            for (const branch of branches) {
              console.log(`\n=== Processing branch: ${branch} ===`);

              // Extract X.Y from branch name (release-X.Y)
              const match = branch.match(/^release-(\d+\.\d+)$/);
              if (!match) {
                console.log(`  ⚠️ Branch ${branch} doesn't match pattern, skipping`);
                continue;
              }

              const [major, minor] = match[1].split('.');
              const versionPrefix = `v${major}.${minor}.`;

              console.log(`  Looking for releases with prefix: ${versionPrefix}`);

              // Find the latest published release for this branch (vX.Y.Z without suffixes)
              const branchReleases = publishedReleases
                .filter(r => r.tag_name.startsWith(versionPrefix))
                .filter(r => /^v\d+\.\d+\.\d+$/.test(r.tag_name)); // Ensure no suffixes

              if (branchReleases.length === 0) {
                console.log(`  ⚠️ No published releases found for ${branch}, skipping`);
                continue;
              }

              // Sort by version (descending) to get the latest
              branchReleases.sort((a, b) => {
                const aVersion = a.tag_name.match(/^v(\d+)\.(\d+)\.(\d+)$/);
                const bVersion = b.tag_name.match(/^v(\d+)\.(\d+)\.(\d+)$/);
                if (!aVersion || !bVersion) return 0;

                const aNum = parseInt(aVersion[1]) * 10000 + parseInt(aVersion[2]) * 100 + parseInt(aVersion[3]);
                const bNum = parseInt(bVersion[1]) * 10000 + parseInt(bVersion[2]) * 100 + parseInt(bVersion[3]);
                return bNum - aNum;
              });

              const latestRelease = branchReleases[0];
              console.log(`  ✅ Latest published release: ${latestRelease.tag_name}`);

              // Get the commit SHA for this release tag
              let releaseCommitSha;
              try {
                releaseCommitSha = execSync(`git rev-list -n 1 ${latestRelease.tag_name}`, { encoding: 'utf8' }).trim();
                console.log(`  Release commit SHA: ${releaseCommitSha}`);
              } catch (error) {
                console.log(`  ⚠️ Could not find commit for tag ${latestRelease.tag_name}, skipping`);
                continue;
              }

              // Checkout the branch
              execSync(`git fetch origin ${branch}:${branch}`, { encoding: 'utf8' });
              execSync(`git checkout ${branch}`, { encoding: 'utf8' });

              // Get the latest commit on the branch
              const latestBranchCommit = execSync('git rev-parse HEAD', { encoding: 'utf8' }).trim();
              console.log(`  Latest branch commit: ${latestBranchCommit}`);

              // Check if there are new commits after the release
              const commitsAfterRelease = execSync(
                `git rev-list ${releaseCommitSha}..HEAD --oneline`,
                { encoding: 'utf8' }
              ).trim();

              if (!commitsAfterRelease) {
                console.log(`  ℹ️ No new commits after ${latestRelease.tag_name}, skipping`);
                continue;
              }

              console.log(`  ✅ Found new commits after release:`);
              console.log(commitsAfterRelease);

              // Calculate next version (Z+1)
              const versionMatch = latestRelease.tag_name.match(/^v(\d+)\.(\d+)\.(\d+)$/);
              if (!versionMatch) {
                console.log(`  ❌ Could not parse version from ${latestRelease.tag_name}, skipping`);
                continue;
              }

              const nextPatch = parseInt(versionMatch[3]) + 1;
              const nextTag = `v${versionMatch[1]}.${versionMatch[2]}.${nextPatch}`;

              console.log(`  🏷️ Creating new tag: ${nextTag} on commit ${latestBranchCommit}`);

              // Create and push the tag with base_ref for workflow triggering
              try {
                // Delete local tag if exists to force update
                try {
                  execSync(`git tag -d ${nextTag}`, { encoding: 'utf8' });
                } catch (e) {
                  // Tag doesn't exist locally, that's fine
                }

                // Delete remote tag if exists
                try {
                  execSync(`git push origin :refs/tags/${nextTag}`, { encoding: 'utf8' });
                } catch (e) {
                  // Tag doesn't exist remotely, that's fine
                }

                // Create tag locally
                execSync(`git tag ${nextTag} ${latestBranchCommit}`, { encoding: 'utf8' });

                // Push tag with HEAD reference to preserve base_ref
                execSync(`git push origin HEAD:refs/tags/${nextTag}`, { encoding: 'utf8' });
                console.log(`  ✅ Successfully created and pushed tag ${nextTag}`);
              } catch (error) {
                console.log(`  ❌ Error creating/pushing tag ${nextTag}: ${error.message}`);
                core.setFailed(`Failed to create tag ${nextTag} for branch ${branch}`);
              }
            }

            console.log(`\n✅ Finished processing all release branches`);
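
The core of the workflow above is its version arithmetic: each vX.Y.Z tag is reduced to a single comparable integer (major*10000 + minor*100 + patch) so the newest release sorts first, and the next tag is that release with its patch component incremented. A minimal standalone sketch of just that logic, in plain Node.js with illustrative tag values (the function name toNum is not from the workflow):

// Sketch: the tag-sorting and patch-bump logic from the workflow, in isolation.
// The tag list is illustrative; the workflow reads real published release tags.
const tags = ['v1.0.2', 'v1.0.10', 'v1.0.9'];

// Reduce vX.Y.Z to one comparable number, as the workflow's comparator does.
// Note: this encoding assumes minor and patch each stay below 100.
function toNum(tag) {
  const m = tag.match(/^v(\d+)\.(\d+)\.(\d+)$/);
  if (!m) return -1;
  return parseInt(m[1]) * 10000 + parseInt(m[2]) * 100 + parseInt(m[3]);
}

// Sort descending so the latest release comes first.
const latest = [...tags].sort((a, b) => toNum(b) - toNum(a))[0];

// Bump the patch component (Z+1) to form the next tag.
const m = latest.match(/^v(\d+)\.(\d+)\.(\d+)$/);
const nextTag = `v${m[1]}.${m[2]}.${parseInt(m[3]) + 1}`;

console.log(latest, '->', nextTag); // v1.0.10 -> v1.0.11

Encoding versions as a single integer keeps the comparator trivial and also sorts v1.0.10 above v1.0.9, which a plain string sort would get wrong.
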
129
.github/workflows/backport.yaml
vendored
Normal file
129
.github/workflows/backport.yaml
vendored
Normal file
@@ -0,0 +1,129 @@
name: Automatic Backport

on:
  pull_request_target:
    types: [closed, labeled] # fires when PR is closed (merged) or labeled

concurrency:
  group: backport-${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true

permissions:
  contents: write
  pull-requests: write

jobs:
  # Determine which backports are needed
  prepare:
    if: |
      github.event.pull_request.merged == true &&
      (
        contains(github.event.pull_request.labels.*.name, 'backport') ||
        contains(github.event.pull_request.labels.*.name, 'backport-previous') ||
        (github.event.action == 'labeled' && (github.event.label.name == 'backport' || github.event.label.name == 'backport-previous'))
      )
    runs-on: [self-hosted]
    outputs:
      backport_current: ${{ steps.labels.outputs.backport }}
      backport_previous: ${{ steps.labels.outputs.backport_previous }}
      current_branch: ${{ steps.branches.outputs.current_branch }}
      previous_branch: ${{ steps.branches.outputs.previous_branch }}
    steps:
      - name: Check which labels are present
        id: labels
        uses: actions/github-script@v7
        with:
          script: |
            const pr = context.payload.pull_request;
            const labels = pr.labels.map(l => l.name);
            const isBackport = labels.includes('backport');
            const isBackportPrevious = labels.includes('backport-previous');

            core.setOutput('backport', isBackport ? 'true' : 'false');
            core.setOutput('backport_previous', isBackportPrevious ? 'true' : 'false');

            console.log(`backport label: ${isBackport}, backport-previous label: ${isBackportPrevious}`);

      - name: Determine target branches
        id: branches
        uses: actions/github-script@v7
        with:
          script: |
            // Get latest release
            let latestRelease;
            try {
              latestRelease = await github.rest.repos.getLatestRelease({
                owner: context.repo.owner,
                repo: context.repo.repo
              });
            } catch (e) {
              core.setFailed('No existing releases found; cannot determine backport target.');
              return;
            }

            const [maj, min] = latestRelease.data.tag_name.replace(/^v/, '').split('.');
            const currentBranch = `release-${maj}.${min}`;
            const prevMin = parseInt(min) - 1;
            const previousBranch = prevMin >= 0 ? `release-${maj}.${prevMin}` : '';

            core.setOutput('current_branch', currentBranch);
            core.setOutput('previous_branch', previousBranch);

            console.log(`Current branch: ${currentBranch}, Previous branch: ${previousBranch || 'N/A'}`);

            // Verify previous branch exists if we need it
            if (previousBranch && '${{ steps.labels.outputs.backport_previous }}' === 'true') {
              try {
                await github.rest.repos.getBranch({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  branch: previousBranch
                });
                console.log(`Previous branch ${previousBranch} exists`);
              } catch (e) {
                core.setFailed(`Previous branch ${previousBranch} does not exist.`);
                return;
              }
            }

  backport:
    needs: prepare
    if: |
      github.event.pull_request.merged == true &&
      (needs.prepare.outputs.backport_current == 'true' || needs.prepare.outputs.backport_previous == 'true')
    runs-on: [self-hosted]
    strategy:
      matrix:
        backport_type: [current, previous]
    steps:
      # 1. Determine target branch based on matrix
      - name: Set target branch
        id: target
        if: |
          (matrix.backport_type == 'current' && needs.prepare.outputs.backport_current == 'true') ||
          (matrix.backport_type == 'previous' && needs.prepare.outputs.backport_previous == 'true')
        run: |
          if [ "${{ matrix.backport_type }}" == "current" ]; then
            echo "branch=${{ needs.prepare.outputs.current_branch }}" >> $GITHUB_OUTPUT
            echo "Target branch: ${{ needs.prepare.outputs.current_branch }}"
          else
            echo "branch=${{ needs.prepare.outputs.previous_branch }}" >> $GITHUB_OUTPUT
            echo "Target branch: ${{ needs.prepare.outputs.previous_branch }}"
          fi

      # 2. Checkout (required by backport‑action)
      - name: Checkout repository
        if: steps.target.outcome == 'success'
        uses: actions/checkout@v4

      # 3. Create the back‑port pull request
      - name: Create back‑port PR
        id: backport
        if: steps.target.outcome == 'success'
        uses: korthout/backport-action@v3.2.1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          label_pattern: '' # don't read labels for targets
          target_branches: ${{ steps.target.outputs.branch }}
          merge_commits: skip
          conflict_resolution: draft_commit_conflicts
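To make the prepare job's branch arithmetic concrete, here is the same derivation run outside the workflow (the sample tag is hypothetical). Note that decrementing the minor version only finds a previous line within the same major series; when the minor is already 0, `previousBranch` stays empty and the `backport-previous` path is effectively disabled:

```js
// Sketch of the target-branch derivation from the latest release tag.
// Mirrors the 'Determine target branches' step; the sample tag is made up.
const tagName = 'v0.31.4';
const [maj, min] = tagName.replace(/^v/, '').split('.');
const currentBranch = `release-${maj}.${min}`;                          // release-0.31
const prevMin = parseInt(min) - 1;
const previousBranch = prevMin >= 0 ? `release-${maj}.${prevMin}` : ''; // release-0.30
console.log(currentBranch, previousBranch || 'N/A');
```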
.github/workflows/pre-commit.yml (21 lines changed)
@@ -1,6 +1,12 @@
 name: Pre-Commit Checks

-on: [push, pull_request]
+on:
+  pull_request:
+    types: [opened, synchronize, reopened]
+
+concurrency:
+  group: pre-commit-${{ github.workflow }}-${{ github.event.pull_request.number }}
+  cancel-in-progress: true

 jobs:
   pre-commit:
@@ -8,6 +14,9 @@ jobs:
     steps:
       - name: Checkout code
         uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+          fetch-tags: true

       - name: Set up Python
         uses: actions/setup-python@v4
@@ -19,15 +28,7 @@ jobs:

       - name: Install generate
         run: |
-          sudo apt update
-          sudo apt install curl -y
-          curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
-          sudo apt install nodejs -y
-          git clone https://github.com/bitnami/readme-generator-for-helm
-          cd ./readme-generator-for-helm
-          npm install
-          npm install -g pkg
-          pkg . -o /usr/local/bin/readme-generator
+          curl -sSL https://github.com/cozystack/cozyvalues-gen/releases/download/v1.0.6/cozyvalues-gen-linux-amd64.tar.gz | tar -xzvf- -C /usr/local/bin/ cozyvalues-gen

       - name: Run pre-commit hooks
         run: |
.github/workflows/pull-requests-release.yaml (new file, 204 lines)
@@ -0,0 +1,204 @@
name: "Releasing PR"

on:
  pull_request:
    types: [closed]
    paths-ignore:
      - 'docs/**/*'

# Cancel in‑flight runs for the same PR when a new push arrives.
concurrency:
  group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true

jobs:
  finalize:
    name: Finalize Release
    runs-on: [self-hosted]
    permissions:
      contents: write

    if: |
      github.event.pull_request.merged == true &&
      contains(github.event.pull_request.labels.*.name, 'release')

    steps:
      # Extract tag from branch name (branch = release-X.Y.Z*)
      - name: Extract tag from branch name
        id: get_tag
        uses: actions/github-script@v7
        with:
          script: |
            const branch = context.payload.pull_request.head.ref;
            const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
            if (!m) {
              core.setFailed(`Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
              return;
            }
            const tag = `v${m[1]}`;
            core.setOutput('tag', tag);
            console.log(`✅ Tag to publish: ${tag}`);

      # Checkout repo & create / push annotated tag
      - name: Checkout repo
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Create tag on merge commit
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        run: |
          git config user.name "cozystack-bot"
          git config user.email "217169706+cozystack-bot@users.noreply.github.com"
          git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
          git tag -f ${{ steps.get_tag.outputs.tag }} ${{ github.sha }}
          git push -f origin ${{ steps.get_tag.outputs.tag }}

      # Ensure maintenance branch release-X.Y
      - name: Ensure maintenance branch release-X.Y
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GH_PAT }}
          script: |
            const tag = '${{ steps.get_tag.outputs.tag }}'; // e.g. v0.1.3 or v0.1.3-rc3
            const match = tag.match(/^v(\d+)\.(\d+)\.\d+(?:[-\w\.]+)?$/);
            if (!match) {
              core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-suffix'`);
              return;
            }
            const line = `${match[1]}.${match[2]}`;
            const branch = `release-${line}`;

            // Get main branch commit for the tag
            const ref = await github.rest.git.getRef({
              owner: context.repo.owner,
              repo: context.repo.repo,
              ref: `tags/${tag}`
            });

            const commitSha = ref.data.object.sha;

            try {
              await github.rest.repos.getBranch({
                owner: context.repo.owner,
                repo: context.repo.repo,
                branch
              });

              await github.rest.git.updateRef({
                owner: context.repo.owner,
                repo: context.repo.repo,
                ref: `heads/${branch}`,
                sha: commitSha,
                force: true
              });
              console.log(`🔁 Force-updated '${branch}' to ${commitSha}`);
            } catch (err) {
              if (err.status === 404) {
                await github.rest.git.createRef({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  ref: `refs/heads/${branch}`,
                  sha: commitSha
                });
                console.log(`✅ Created branch '${branch}' at ${commitSha}`);
              } else {
                console.error('Unexpected error --', err);
                core.setFailed(`Unexpected error creating/updating branch: ${err.message}`);
                throw err;
              }
            }

      # Publish draft release and ensure correct latest flag
      - name: Publish draft release
        uses: actions/github-script@v7
        with:
          script: |
            const tag = '${{ steps.get_tag.outputs.tag }}';
            const m = tag.match(/^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/);
            if (!m) {
              core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-(alpha|beta|rc).N'`);
              return;
            }
            const isRc = Boolean(m[2]);

            // Parse semver string to comparable numbers
            function parseSemver(v) {
              const match = v.replace(/^v/, '').match(/^(\d+)\.(\d+)\.(\d+)/);
              if (!match) return null;
              return {
                major: parseInt(match[1]),
                minor: parseInt(match[2]),
                patch: parseInt(match[3])
              };
            }

            // Compare two semver objects
            function compareSemver(a, b) {
              if (a.major !== b.major) return a.major - b.major;
              if (a.minor !== b.minor) return a.minor - b.minor;
              return a.patch - b.patch;
            }

            const currentSemver = parseSemver(tag);

            // Get all releases
            const releases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo,
              per_page: 100
            });

            // Find draft release to publish
            const draft = releases.data.find(r => r.tag_name === tag && r.draft);
            if (!draft) throw new Error(`Draft release for ${tag} not found`);

            // Find max semver among published releases (excluding current draft)
            const publishedReleases = releases.data
              .filter(r => !r.draft && !r.prerelease)
              .filter(r => /^v\d+\.\d+\.\d+$/.test(r.tag_name))
              .map(r => ({ id: r.id, tag: r.tag_name, semver: parseSemver(r.tag_name) }))
              .filter(r => r.semver !== null);

            let maxRelease = null;
            for (const rel of publishedReleases) {
              if (!maxRelease || compareSemver(rel.semver, maxRelease.semver) > 0) {
                maxRelease = rel;
              }
            }

            // Determine if this release should be latest
            const isOutdated = maxRelease && compareSemver(currentSemver, maxRelease.semver) < 0;
            const makeLatest = (isRc || isOutdated) ? 'false' : 'true';

            if (isRc) {
              console.log(`🏷️ ${tag} is a prerelease, make_latest: false`);
            } else if (isOutdated) {
              console.log(`🏷️ ${tag} < ${maxRelease.tag} (max semver), make_latest: false`);
            } else {
              console.log(`🏷️ ${tag} is the highest version, make_latest: true`);
            }

            // Publish the release
            await github.rest.repos.updateRelease({
              owner: context.repo.owner,
              repo: context.repo.repo,
              release_id: draft.id,
              draft: false,
              prerelease: isRc,
              make_latest: makeLatest
            });
            console.log(`🚀 Published release ${tag}`);

            // If this is a backport/outdated release, ensure the correct release is marked as latest
            if (isOutdated && maxRelease) {
              console.log(`🔧 Ensuring ${maxRelease.tag} remains the latest release...`);
              await github.rest.repos.updateRelease({
                owner: context.repo.owner,
                repo: context.repo.repo,
                release_id: maxRelease.id,
                make_latest: 'true'
              });
              console.log(`✅ Restored ${maxRelease.tag} as latest release`);
            }
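The `make_latest` decision above boils down to two checks: prereleases never become latest, and a published tag only becomes latest if no stable release with a higher semver already exists. A compact sketch of that decision, reusing the same helper shapes (the release list is hypothetical):

```js
// Hedged sketch of the make_latest decision, extracted for clarity.
// parseSemver/compareSemver mirror the workflow's helpers; the sample
// releases are made up.
function parseSemver(v) {
  const m = v.replace(/^v/, '').match(/^(\d+)\.(\d+)\.(\d+)/);
  return m ? { major: +m[1], minor: +m[2], patch: +m[3] } : null;
}
function compareSemver(a, b) {
  if (a.major !== b.major) return a.major - b.major;
  if (a.minor !== b.minor) return a.minor - b.minor;
  return a.patch - b.patch;
}

const published = ['v0.31.2', 'v0.30.7'].map(parseSemver);
const max = published.reduce((m, s) => (compareSemver(s, m) > 0 ? s : m));
const current = parseSemver('v0.30.8'); // a backported patch release
const isOutdated = compareSemver(current, max) < 0;
console.log(isOutdated ? 'make_latest: false' : 'make_latest: true'); // false — v0.31.2 stays latest
```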
.github/workflows/pull-requests.yaml (new file, 291 lines)
@@ -0,0 +1,291 @@
name: Pull Request

env:
  # TODO: unhardcode this
  REGISTRY: iad.ocir.io/idyksih5sir9/cozystack
on:
  pull_request:
    types: [opened, synchronize, reopened]
    paths-ignore:
      - 'docs/**/*'

# Cancel in‑flight runs for the same PR when a new push arrives.
concurrency:
  group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true

jobs:
  build:
    name: Build
    runs-on: [self-hosted]
    permissions:
      contents: read
      packages: write

    # Never run when the PR carries the "release" label.
    if: |
      !contains(github.event.pull_request.labels.*.name, 'release')

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Run unit tests
        run: make unit-tests

      - name: Set up Docker config
        run: |
          if [ -d ~/.docker ]; then
            cp -r ~/.docker "${{ runner.temp }}/.docker"
          fi

      - name: Login to Oracle Container Registry
        if: ${{ !github.event.pull_request.head.repo.fork }}
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.OCIR_USER }}
          password: ${{ secrets.OCIR_TOKEN }}
          registry: iad.ocir.io
        env:
          DOCKER_CONFIG: ${{ runner.temp }}/.docker

      - name: Build
        run: make build
        env:
          DOCKER_CONFIG: ${{ runner.temp }}/.docker

      - name: Build Talos image
        run: make -C packages/core/talos talos-nocloud

      - name: Save git diff as patch
        if: "!contains(github.event.pull_request.labels.*.name, 'release')"
        run: git diff HEAD > _out/assets/pr.patch

      - name: Upload git diff patch
        if: "!contains(github.event.pull_request.labels.*.name, 'release')"
        uses: actions/upload-artifact@v4
        with:
          name: pr-patch
          path: _out/assets/pr.patch

      - name: Upload Talos image
        uses: actions/upload-artifact@v4
        with:
          name: talos-image
          path: _out/assets/nocloud-amd64.raw.xz

  resolve_assets:
    name: "Resolve assets"
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'release')
    outputs:
      disk_id: ${{ steps.fetch_assets.outputs.disk_id }}

    steps:
      - name: Checkout code
        if: contains(github.event.pull_request.labels.*.name, 'release')
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Extract tag from PR branch (release PR)
        if: contains(github.event.pull_request.labels.*.name, 'release')
        id: get_tag
        uses: actions/github-script@v7
        with:
          script: |
            const branch = context.payload.pull_request.head.ref;
            const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
            if (!m) {
              core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
              return;
            }
            core.setOutput('tag', `v${m[1]}`);

      - name: Find draft release & asset IDs (release PR)
        if: contains(github.event.pull_request.labels.*.name, 'release')
        id: fetch_assets
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GH_PAT }}
          script: |
            const tag = '${{ steps.get_tag.outputs.tag }}';
            const releases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo,
              per_page: 100
            });
            const draft = releases.data.find(r => r.tag_name === tag && r.draft);
            if (!draft) {
              core.setFailed(`Draft release '${tag}' not found`);
              return;
            }
            const find = (n) => draft.assets.find(a => a.name === n)?.id;
            const diskId = find('nocloud-amd64.raw.xz');
            if (!diskId) {
              core.setFailed('Required assets missing in draft release');
              return;
            }
            core.setOutput('disk_id', diskId);

  e2e:
    name: "E2E Tests"
    runs-on: ${{ contains(github.event.pull_request.labels.*.name, 'debug') && 'self-hosted' || 'oracle-vm-24cpu-96gb-x86-64' }}
    #runs-on: [oracle-vm-32cpu-128gb-x86-64]
    permissions:
      contents: read
      packages: read
    needs: ["build", "resolve_assets"]
    if: ${{ always() && (needs.build.result == 'success' || needs.resolve_assets.result == 'success') }}

    steps:
      # ▸ Checkout and prepare the codebase
      - name: Checkout code
        uses: actions/checkout@v4

      # ▸ Regular PR path – download artefacts produced by the *build* job
      - name: "Download Talos image (regular PR)"
        if: "!contains(github.event.pull_request.labels.*.name, 'release')"
        uses: actions/download-artifact@v4
        with:
          name: talos-image
          path: _out/assets

      - name: Download PR patch
        if: "!contains(github.event.pull_request.labels.*.name, 'release')"
        uses: actions/download-artifact@v4
        with:
          name: pr-patch
          path: _out/assets

      - name: Apply patch
        if: "!contains(github.event.pull_request.labels.*.name, 'release')"
        run: |
          git apply _out/assets/pr.patch

      # ▸ Release PR path – fetch artefacts from the corresponding draft release
      - name: Download assets from draft release (release PR)
        if: contains(github.event.pull_request.labels.*.name, 'release')
        run: |
          mkdir -p _out/assets
          curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
            -o _out/assets/nocloud-amd64.raw.xz \
            "https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.disk_id }}"
        env:
          GH_PAT: ${{ secrets.GH_PAT }}

      - name: Set sandbox ID
        run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV

      # ▸ Prepare environment
      - name: Prepare workspace
        run: |
          rm -rf /tmp/$SANDBOX_NAME
          cp -r ${{ github.workspace }} /tmp/$SANDBOX_NAME

      - name: Prepare environment
        run: |
          cd /tmp/$SANDBOX_NAME
          attempt=0
          until make SANDBOX_NAME=$SANDBOX_NAME prepare-env; do
            attempt=$((attempt + 1))
            if [ $attempt -ge 3 ]; then
              echo "❌ Attempt $attempt failed, exiting..."
              exit 1
            fi
            echo "❌ Attempt $attempt failed, retrying..."
          done
          echo "✅ The task completed successfully after $attempt attempts"

      # ▸ Install Cozystack
      - name: Install Cozystack into sandbox
        run: |
          cd /tmp/$SANDBOX_NAME
          attempt=0
          until make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack; do
            attempt=$((attempt + 1))
            if [ $attempt -ge 3 ]; then
              echo "❌ Attempt $attempt failed, exiting..."
              exit 1
            fi
            echo "❌ Attempt $attempt failed, retrying..."
          done
          echo "✅ The task completed successfully after $attempt attempts"

      - name: Run OpenAPI tests
        run: |
          cd /tmp/$SANDBOX_NAME
          make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-openapi

      # ▸ Run E2E tests
      - name: Run E2E tests
        id: e2e_tests
        run: |
          cd /tmp/$SANDBOX_NAME
          failed_tests=""
          for app in $(ls hack/e2e-apps/*.bats | xargs -n1 basename | cut -d. -f1); do
            echo "::group::Testing $app"
            attempt=0
            success=false
            until [ $attempt -ge 3 ]; do
              if make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps-$app; then
                success=true
                break
              fi
              attempt=$((attempt + 1))
              echo "❌ Attempt $attempt failed, retrying..."
            done
            if [ "$success" = true ]; then
              echo "✅ Test $app completed successfully"
            else
              echo "❌ Test $app failed after $attempt attempts"
              failed_tests="$failed_tests $app"
            fi
            echo "::endgroup::"
          done
          if [ -n "$failed_tests" ]; then
            echo "❌ Failed tests:$failed_tests"
            exit 1
          fi
          echo "✅ All E2E tests passed"

      # ▸ Collect debug information (always runs)
      - name: Collect report
        if: always()
        run: |
          cd /tmp/$SANDBOX_NAME
          make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-report || true

      - name: Upload cozyreport.tgz
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: cozyreport
          path: /tmp/${{ env.SANDBOX_NAME }}/_out/cozyreport.tgz

      - name: Collect images list
        if: always()
        run: |
          cd /tmp/$SANDBOX_NAME
          make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-images || true

      - name: Upload image list
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: image-list
          path: /tmp/${{ env.SANDBOX_NAME }}/_out/images.txt

      # ▸ Tear down environment (always runs)
      - name: Tear down sandbox
        if: always()
        run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete || true

      - name: Remove workspace
        if: always()
        run: rm -rf /tmp/$SANDBOX_NAME
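The retry loops in the steps above share one pattern: up to three attempts, failing the job after the last one. For clarity, the same control flow expressed as a standalone sketch (JavaScript here purely for illustration; the workflow itself uses shell, and `runTask` is a hypothetical stand-in for the `make` invocations):

```js
// Illustrative re-expression of the workflow's retry pattern:
// up to 3 attempts, fail only after the last one.
async function withRetries(runTask, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await runTask();
      console.log(`✅ Completed on attempt ${attempt}`);
      return;
    } catch (e) {
      console.log(`❌ Attempt ${attempt} failed${attempt < maxAttempts ? ', retrying...' : ''}`);
    }
  }
  throw new Error(`Task failed after ${maxAttempts} attempts`);
}
```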
.github/workflows/retest.yaml (new file, 78 lines)
@@ -0,0 +1,78 @@
name: Retest

on:
  issue_comment:
    types: [created]

jobs:
  retest:
    name: Retest PR
    runs-on: ubuntu-latest
    if: |
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/retest')
    permissions:
      actions: write
      pull-requests: read

    steps:
      - name: Rerun from Prepare environment
        uses: actions/github-script@v7
        with:
          script: |
            const prNumber = context.issue.number;

            // Get the PR to find the head SHA
            const pr = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: prNumber
            });

            // Find the latest workflow run for this PR
            const runs = await github.rest.actions.listWorkflowRuns({
              owner: context.repo.owner,
              repo: context.repo.repo,
              workflow_id: 'pull-requests.yaml',
              head_sha: pr.data.head.sha
            });

            if (runs.data.workflow_runs.length === 0) {
              core.setFailed('No workflow runs found for this PR');
              return;
            }

            const latestRun = runs.data.workflow_runs[0];
            console.log(`Found workflow run: ${latestRun.id} (${latestRun.status})`);

            // Check if workflow is waiting for approval (fork PRs)
            if (latestRun.conclusion === 'action_required') {
              core.setFailed('Workflow is waiting for approval. A maintainer must approve the workflow first.');
              return;
            }

            // Get jobs for this run
            const jobs = await github.rest.actions.listJobsForWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: latestRun.id
            });

            // Find "Prepare environment" job
            const prepareJob = jobs.data.jobs.find(j => j.name === 'Prepare environment');

            if (!prepareJob) {
              core.setFailed('Could not find "Prepare environment" job');
              return;
            }

            console.log(`Found job: ${prepareJob.name} (id: ${prepareJob.id}, status: ${prepareJob.status})`);

            // Rerun the job
            await github.rest.actions.reRunJobForWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              job_id: prepareJob.id
            });

            console.log(`✅ Triggered rerun of job "${prepareJob.name}" (${prepareJob.id})`);
.github/workflows/tags.yaml (new file, 372 lines)
@@ -0,0 +1,372 @@
name: Versioned Tag

on:
  push:
    tags:
      - 'v*.*.*' # vX.Y.Z
      - 'v*.*.*-rc.*' # vX.Y.Z-rc.N
      - 'v*.*.*-beta.*' # vX.Y.Z-beta.N
      - 'v*.*.*-alpha.*' # vX.Y.Z-alpha.N

concurrency:
  group: tags-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  prepare-release:
    name: Prepare Release
    runs-on: [self-hosted]
    permissions:
      contents: write
      packages: write
      pull-requests: write
      actions: write

    steps:
      # Check if a non-draft release with this tag already exists
      - name: Check if release already exists
        id: check_release
        uses: actions/github-script@v7
        with:
          script: |
            const tag = context.ref.replace('refs/tags/', '');
            const releases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo
            });
            const exists = releases.data.some(r => r.tag_name === tag && !r.draft);
            core.setOutput('skip', exists);
            console.log(exists ? `Release ${tag} already published` : `No published release ${tag}`);

      # If a published release already exists, skip the rest of the workflow
      - name: Skip if release already exists
        if: steps.check_release.outputs.skip == 'true'
        run: echo "Release already exists, skipping workflow."

      # Parse tag meta-data (rc?, maintenance line, etc.)
      - name: Parse tag
        if: steps.check_release.outputs.skip == 'false'
        id: tag
        uses: actions/github-script@v7
        with:
          script: |
            const ref = context.ref.replace('refs/tags/', ''); // e.g. v0.31.5-rc.1
            const m = ref.match(/^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/); // ['0.31.5', '-rc.1' | '-beta.1' | …]
            if (!m) {
              core.setFailed(`❌ tag '${ref}' must match 'vX.Y.Z' or 'vX.Y.Z-(alpha|beta|rc).N'`);
              return;
            }
            const version = m[1] + (m[2] ?? ''); // 0.31.5-rc.1
            const isRc = Boolean(m[2]);
            const [maj, min] = m[1].split('.');
            core.setOutput('tag', ref); // v0.31.5-rc.1
            core.setOutput('version', version); // 0.31.5-rc.1
            core.setOutput('is_rc', isRc); // true
            core.setOutput('line', `${maj}.${min}`); // 0.31

      # Detect base branch (main or release-X.Y) the tag was pushed from
      - name: Get base branch
        if: steps.check_release.outputs.skip == 'false'
        id: get_base
        uses: actions/github-script@v7
        with:
          script: |
            const baseRef = context.payload.base_ref;
            if (!baseRef) {
              core.setFailed(`❌ base_ref is empty. Push the tag via 'git push origin HEAD:refs/tags/<tag>'.`);
              return;
            }
            const branch = baseRef.replace('refs/heads/', '');
            const ok = branch === 'main' || /^release-\d+\.\d+$/.test(branch);
            if (!ok) {
              core.setFailed(`❌ Tagged commit must belong to 'main' or 'release-X.Y'. Got '${branch}'`);
              return;
            }
            core.setOutput('branch', branch);

      # Checkout & login once
      - name: Checkout code
        if: steps.check_release.outputs.skip == 'false'
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Login to GHCR
        if: steps.check_release.outputs.skip == 'false'
        uses: docker/login-action@v3
        with:
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
          registry: ghcr.io
        env:
          DOCKER_CONFIG: ${{ runner.temp }}/.docker

      # Build project artifacts
      - name: Build
        if: steps.check_release.outputs.skip == 'false'
        run: make build
        env:
          DOCKER_CONFIG: ${{ runner.temp }}/.docker

      # Commit built artifacts
      - name: Commit release artifacts
        if: steps.check_release.outputs.skip == 'false'
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        run: |
          git config user.name "cozystack-bot"
          git config user.email "217169706+cozystack-bot@users.noreply.github.com"
          git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
          git config --unset-all http.https://github.com/.extraheader || true
          git add .
          git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
          git push origin HEAD || true

      # Create or reuse draft release
      - name: Create / reuse draft release
        if: steps.check_release.outputs.skip == 'false'
        id: release
        uses: actions/github-script@v7
        with:
          script: |
            const tag = '${{ steps.tag.outputs.tag }}';
            const isRc = ${{ steps.tag.outputs.is_rc }};
            const releases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo
            });

            let rel = releases.data.find(r => r.tag_name === tag);
            if (!rel) {
              // Unwrap .data so rel has the same shape as entries from listReleases
              rel = (await github.rest.repos.createRelease({
                owner: context.repo.owner,
                repo: context.repo.repo,
                tag_name: tag,
                name: tag,
                draft: true,
                prerelease: isRc // no make_latest for drafts
              })).data;
              console.log(`Draft release created for ${tag}`);
            } else {
              console.log(`Re-using existing release ${tag}`);
            }

            core.setOutput('upload_url', rel.upload_url);

      # Build + upload assets (optional)
      - name: Build & upload assets
        if: steps.check_release.outputs.skip == 'false'
        run: |
          make assets
          make upload_assets VERSION=${{ steps.tag.outputs.tag }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      # Create release-X.Y.Z branch and push (force-update)
      - name: Create release branch
        if: steps.check_release.outputs.skip == 'false'
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        run: |
          git config user.name "cozystack-bot"
          git config user.email "217169706+cozystack-bot@users.noreply.github.com"
          git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
          BRANCH="release-${GITHUB_REF#refs/tags/v}"
          git branch -f "$BRANCH"
          git push -f origin "$BRANCH"

      # Create pull request into original base branch (if absent)
      - name: Create pull request if not exists
        if: steps.check_release.outputs.skip == 'false'
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GH_PAT }}
          script: |
            const version = context.ref.replace('refs/tags/v', '');
            const base = '${{ steps.get_base.outputs.branch }}';
            const head = `release-${version}`;

            const prs = await github.rest.pulls.list({
              owner: context.repo.owner,
              repo: context.repo.repo,
              head: `${context.repo.owner}:${head}`,
              base
            });
            if (prs.data.length === 0) {
              const pr = await github.rest.pulls.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                head,
                base,
                title: `Release v${version}`,
                body: `This PR prepares the release \`v${version}\`.`,
                draft: false
              });
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: pr.data.number,
                labels: ['release']
              });
              console.log(`Created PR #${pr.data.number}`);
            } else {
              console.log(`PR already exists from ${head} to ${base}`);
            }

  generate-changelog:
    name: Generate Changelog
    runs-on: [self-hosted]
    needs: [prepare-release]
    permissions:
      contents: write
      pull-requests: write
    if: needs.prepare-release.result == 'success'
    steps:
      - name: Parse tag
        id: tag
        uses: actions/github-script@v7
        with:
          script: |
            const ref = context.ref.replace('refs/tags/', '');
            const m = ref.match(/^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/);
            if (!m) {
              core.setFailed(`❌ tag '${ref}' must match 'vX.Y.Z' or 'vX.Y.Z-(alpha|beta|rc).N'`);
              return;
            }
            const version = m[1] + (m[2] ?? '');

            core.setOutput('version', version);
            core.setOutput('tag', ref);

      - name: Checkout main branch
        uses: actions/checkout@v4
        with:
          ref: main
          fetch-depth: 0
          fetch-tags: true
          token: ${{ secrets.GH_PAT }}

      - name: Check if changelog already exists
        id: check_changelog
        run: |
          CHANGELOG_FILE="docs/changelogs/v${{ steps.tag.outputs.version }}.md"
          if [ -f "$CHANGELOG_FILE" ]; then
            echo "exists=true" >> $GITHUB_OUTPUT
            echo "Changelog file $CHANGELOG_FILE already exists"
          else
            echo "exists=false" >> $GITHUB_OUTPUT
            echo "Changelog file $CHANGELOG_FILE does not exist"
          fi

      - name: Setup Node.js
        if: steps.check_changelog.outputs.exists == 'false'
        uses: actions/setup-node@v4
        with:
          node-version: 22

      - name: Install GitHub Copilot CLI
        if: steps.check_changelog.outputs.exists == 'false'
        run: npm i -g @github/copilot

      - name: Generate changelog using AI
        if: steps.check_changelog.outputs.exists == 'false'
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_TOKEN: ${{ secrets.GH_PAT }}
        run: |
          copilot --prompt "prepare changelog file for tagged release v${{ steps.tag.outputs.version }}, use @docs/agents/changelog.md for it. Create the changelog file at docs/changelogs/v${{ steps.tag.outputs.version }}.md" \
            --allow-all-tools --allow-all-paths < /dev/null

      - name: Create changelog branch and commit
        if: steps.check_changelog.outputs.exists == 'false'
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
        run: |
          git config user.name "cozystack-bot"
          git config user.email "217169706+cozystack-bot@users.noreply.github.com"
          git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}

          CHANGELOG_FILE="docs/changelogs/v${{ steps.tag.outputs.version }}.md"
          CHANGELOG_BRANCH="changelog-v${{ steps.tag.outputs.version }}"

          if [ -f "$CHANGELOG_FILE" ]; then
            # Fetch latest main branch
            git fetch origin main

            # Delete local branch if it exists
            git branch -D "$CHANGELOG_BRANCH" 2>/dev/null || true

            # Create and checkout new branch from main
            git checkout -b "$CHANGELOG_BRANCH" origin/main

            # Add and commit changelog
            git add "$CHANGELOG_FILE"
            if git diff --staged --quiet; then
              echo "⚠️ No changes to commit (file may already be committed)"
            else
              git commit -m "docs: add changelog for v${{ steps.tag.outputs.version }}" -s
              echo "✅ Changelog committed to branch $CHANGELOG_BRANCH"
            fi

            # Push the branch (force push to update if it exists)
            git push -f origin "$CHANGELOG_BRANCH"
          else
            echo "⚠️ Changelog file was not generated"
            exit 1
          fi

      - name: Create PR for changelog
        if: steps.check_changelog.outputs.exists == 'false'
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GH_PAT }}
          script: |
            const version = '${{ steps.tag.outputs.version }}';
            const changelogBranch = `changelog-v${version}`;
            const baseBranch = 'main';

            // Check if PR already exists
            const prs = await github.rest.pulls.list({
              owner: context.repo.owner,
              repo: context.repo.repo,
              head: `${context.repo.owner}:${changelogBranch}`,
              base: baseBranch,
              state: 'open'
            });

            if (prs.data.length > 0) {
              const pr = prs.data[0];
              console.log(`PR #${pr.number} already exists for changelog branch ${changelogBranch}`);

              // Update PR body with latest info
              const body = `This PR adds the changelog for release \`v${version}\`.\n\n✅ Changelog has been automatically generated in \`docs/changelogs/v${version}.md\`.`;
              await github.rest.pulls.update({
                owner: context.repo.owner,
                repo: context.repo.repo,
                pull_number: pr.number,
                body: body
              });
              console.log(`Updated existing PR #${pr.number}`);
            } else {
              // Create new PR
              const pr = await github.rest.pulls.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                head: changelogBranch,
                base: baseBranch,
                title: `docs: add changelog for v${version}`,
                body: `This PR adds the changelog for release \`v${version}\`.\n\n✅ Changelog has been automatically generated in \`docs/changelogs/v${version}.md\`.`,
                draft: false
              });

              // Add label if needed
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: pr.data.number,
                labels: ['documentation', 'automated']
              });

              console.log(`Created PR #${pr.data.number} for changelog`);
            }
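For reference, the tag regex used by both "Parse tag" steps accepts plain `vX.Y.Z` tags and dotted prerelease suffixes only. A quick check against sample tags (samples hypothetical):

```js
// Illustrative check of the 'Parse tag' regex against sample tags.
const re = /^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/;
for (const tag of ['v0.31.5', 'v0.31.5-rc.1', 'v0.31.5-rc1', 'v0.31']) {
  const m = tag.match(re);
  if (!m) { console.log(`${tag}: rejected`); continue; }
  const [maj, min] = m[1].split('.');
  console.log(`${tag}: version=${m[1] + (m[2] ?? '')} is_rc=${Boolean(m[2])} line=${maj}.${min}`);
}
// v0.31.5:      version=0.31.5      is_rc=false line=0.31
// v0.31.5-rc.1: version=0.31.5-rc.1 is_rc=true  line=0.31
// v0.31.5-rc1:  rejected (missing dot after 'rc')
// v0.31:        rejected (patch component required)
```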
.github/workflows/update-releasenotes.yaml (new file, 92 lines)
@@ -0,0 +1,92 @@
name: Update Release Notes

on:
  push:
    branches:
      - main

concurrency:
  group: update-releasenotes-${{ github.workflow }}
  cancel-in-progress: false

jobs:
  update-releasenotes:
    name: Update Release Notes
    runs-on: [self-hosted]
    permissions:
      contents: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Update release notes from changelogs
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const path = require('path');

            const changelogDir = 'docs/changelogs';

            // Get releases from first page
            const releases = await github.rest.repos.listReleases({
              owner: context.repo.owner,
              repo: context.repo.repo,
              per_page: 30
            });

            console.log(`Found ${releases.data.length} releases (first page only)`);

            // Process each release
            for (const release of releases.data) {
              const tag = release.tag_name;
              const changelogFile = `${tag}.md`;
              const changelogPath = path.join(changelogDir, changelogFile);

              console.log(`\nProcessing release: ${tag}`);

              // Check if changelog file exists
              if (!fs.existsSync(changelogPath)) {
                console.log(` ⚠️ Changelog file ${changelogFile} does not exist, skipping...`);
                continue;
              }

              // Read changelog file content
              let changelogContent;
              try {
                changelogContent = fs.readFileSync(changelogPath, 'utf8');
              } catch (error) {
                console.log(` ❌ Error reading file ${changelogPath}: ${error.message}`);
                continue;
              }

              if (!changelogContent.trim()) {
                console.log(` ⚠️ Changelog file ${changelogFile} is empty, skipping...`);
                continue;
              }

              // Check if content is already up to date
              const currentBody = release.body || '';
              if (currentBody.trim() === changelogContent.trim()) {
                console.log(` ✓ Content is already up to date, skipping...`);
                continue;
              }

              // Update release notes
              try {
                await github.rest.repos.updateRelease({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  release_id: release.id,
                  body: changelogContent
                });
                console.log(` ✅ Successfully updated release notes for ${tag}`);
              } catch (error) {
                console.log(` ❌ Error updating release ${tag}: ${error.message}`);
                core.setFailed(`Failed to update release notes for ${tag}`);
              }
            }
.gitignore (6 lines changed)
@@ -1,6 +1,8 @@
_out
_repos
.git
.idea
.vscode

# User-specific stuff
.idea/**/workspace.xml
@@ -75,4 +77,6 @@ fabric.properties
.idea/caches/build_file_checksums.ser

.DS_Store
**/.DS_Store

tmp/
.pre-commit-config.yaml (changed)
@@ -1,23 +1,17 @@
 repos:
   - repo: local
     hooks:
       - id: gen-versions-map
         name: Generate versions map and check for changes
         entry: sh -c 'make -C packages/apps check-version-map && make -C packages/extra check-version-map'
         language: system
         types: [file]
         pass_filenames: false
         description: Run the script and fail if it generates changes
       - id: run-make-generate
         name: Run 'make generate' in all app directories
         entry: |
-          /bin/bash -c '
-          for dir in ./packages/apps/*/; do
+          flock -x .git/pre-commit.lock sh -c '
+          for dir in ./packages/apps/*/ ./packages/extra/*/; do
             if [ -d "$dir" ]; then
               echo "Running make generate in $dir"
-              (cd "$dir" && make generate)
+              make generate -C "$dir" || exit $?
             fi
           done
-          git diff --color=always | cat
           '
-        language: script
+        language: system
         files: ^.*$
ADOPTERS.md (changed)
@@ -13,8 +13,8 @@ but it means a lot to us.

 To add your organization to this list, you can either:

-- [open a pull request](https://github.com/aenix-io/cozystack/pulls) to directly update this file, or
-- [edit this file](https://github.com/aenix-io/cozystack/blob/main/ADOPTERS.md) directly in GitHub
+- [open a pull request](https://github.com/cozystack/cozystack/pulls) to directly update this file, or
+- [edit this file](https://github.com/cozystack/cozystack/blob/main/ADOPTERS.md) directly in GitHub

 Feel free to ask in the Slack chat if you have any questions and/or require
 assistance with updating this list.
@@ -30,3 +30,6 @@ This list is sorted in chronological order, based on the submission date.
 | [Bootstack](https://bootstack.app/) | @mrkhachaturov | 2024-08-01 | At Bootstack, we utilize a Kubernetes operator specifically designed to simplify and streamline cloud infrastructure creation. |
 | [gohost](https://gohost.kz/) | @karabass_off | 2024-02-01 | Our company has been working in the market of Kazakhstan for more than 15 years, providing clients with a standard set of services: VPS/VDC, IaaS, shared hosting, etc. Now we are expanding the lineup by introducing a Bare Metal Kubernetes cluster under Cozystack management. |
 | [Urmanac](https://urmanac.com) | @kingdonb | 2024-12-04 | Urmanac is the future home of a hosting platform for the knowledge base of a community of personal server enthusiasts. We use Cozystack to provide support services for web sites hosted using both conventional deployments and on SpinKube, with WASM. |
+| [Hidora](https://hikube.cloud) | @matthieu-robin | 2025-09-17 | Hidora is a Swiss cloud provider delivering managed services and infrastructure solutions through datacenters located in Switzerland, ensuring data sovereignty and reliability. Its sovereign cloud platform, Hikube, is designed to run workloads with high availability across multiple datacenters, providing enterprises with a secure and scalable foundation for their applications based on Cozystack. |
+| [QOSI](https://qosi.kz) | @tabu-a | 2025-10-04 | QOSI is a non-profit organization driving open-source adoption and digital sovereignty across Kazakhstan and Central Asia. We use Cozystack as a platform for deploying sovereign, GPU-enabled clouds and educational environments under the National AI Program. Our goal is to accelerate the region’s transition toward open, self-hosted cloud-native technologies. |
+| [Cloupard](https://cloupard.kz/) | @serjiott | 2025-12-18 | Cloupard is a public cloud provider offering IaaS and PaaS services via datacenters in Kazakhstan and Uzbekistan. Uses Cozystack on bare metal to extend its managed PaaS offerings. |
AGENTS.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# AI Agents Overview

This file provides structured guidance for AI coding assistants and agents
working with the **Cozystack** project.

## Activation

**CRITICAL**: When the user asks you to do something that matches the scope of a documented process, you MUST read the corresponding documentation file and follow the instructions exactly as written.

- **Commits, PRs, git operations** (e.g., "create a commit", "make a PR", "fix review comments", "rebase", "cherry-pick")
  - Read: [`contributing.md`](./docs/agents/contributing.md)
  - Action: Read the entire file and follow ALL instructions step-by-step

- **Changelog generation** (e.g., "generate changelog", "create changelog", "prepare changelog for version X")
  - Read: [`changelog.md`](./docs/agents/changelog.md)
  - Action: Read the entire file and follow ALL steps in the checklist. Do NOT skip any mandatory steps

- **Release creation** (e.g., "create release", "prepare release", "tag release", "make a release")
  - Read: [`releasing.md`](./docs/agents/releasing.md)
  - Action: Read the file and follow the referenced release process in `docs/release.md`

- **Project structure, conventions, code layout** (e.g., "where should I put X", "what's the convention for Y", "how is the project organized")
  - Read: [`overview.md`](./docs/agents/overview.md)
  - Action: Read relevant sections to understand project structure and conventions

- **General questions about contributing**
  - Read: [`contributing.md`](./docs/agents/contributing.md)
  - Action: Read the file to understand git workflow, commit format, PR process

**Important rules:**
- ✅ **ONLY read the file if the task matches the documented process scope** - do not read files for tasks that don't match their purpose
- ✅ **ALWAYS read the file FIRST** before starting the task (when applicable)
- ✅ **Follow instructions EXACTLY** as written in the documentation
- ✅ **Do NOT skip mandatory steps** (especially in changelog.md)
- ✅ **Do NOT assume** you know the process - always check the documentation when the task matches
- ❌ **Do NOT read files** for tasks that are outside their documented scope
- 📖 **Note**: [`overview.md`](./docs/agents/overview.md) can be useful as a reference to understand project structure and conventions, even when not explicitly required by the task

## Project Overview

**Cozystack** is a Kubernetes-based platform for building cloud infrastructure with managed services (databases, VMs, K8s clusters), multi-tenancy, and GitOps delivery.

## Quick Reference

### Code Structure
- `packages/core/` - Core platform charts (installer, platform)
- `packages/system/` - System components (CSI, CNI, operators)
- `packages/apps/` - User-facing applications in catalog
- `packages/extra/` - Tenant-specific modules
- `cmd/`, `internal/`, `pkg/` - Go code
- `api/` - Kubernetes CRDs

### Conventions
- **Helm Charts**: Umbrella pattern, vendored upstream charts in `charts/`
- **Go Code**: Controller-runtime patterns, kubebuilder style
- **Git Commits**: `[component] Description` format with `--signoff`

### What NOT to Do
- ❌ Edit `/vendor/`, `zz_generated.*.go`, upstream charts directly
- ❌ Modify `go.mod`/`go.sum` manually (use `go get`)
- ❌ Force push to main/master
- ❌ Commit built artifacts from `_out`
CODE_OF_CONDUCT.md (changed)
@@ -1,3 +1,22 @@
 # Code of Conduct

 Cozystack follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+# Cozystack Vendor Neutrality Manifesto
+
+Cozystack exists for the cloud-native community. We are committed to a project culture where no single company, product, or commercial agenda directs our roadmap, governance, brand, or releases. Our North Star is user value, technical excellence, and open collaboration under the CNCF umbrella.
+
+## Our Commitments
+
+- **Community-first:** Decisions prioritize the broader community over any vendor interest.
+- **Open collaboration:** Ideas, discussions, and outcomes happen in public spaces; contributions are welcomed from all.
+- **Merit over affiliation:** Proposals are evaluated on technical merit and user impact, not on who submits them.
+- **Inclusive stewardship:** Leadership and maintenance are open to contributors who demonstrate sustained, constructive impact.
+- **Technology choice:** We prefer open, pluggable designs that interoperate with multiple ecosystems and providers.
+- **Neutral brand & voice:** Our name, logo, website, and documentation do not imply endorsement or preference for any vendor.
+- **Transparent practices:** Funding acknowledgments, partnerships, and potential conflicts are communicated openly.
+- **User trust:** Security handling, releases, and communications aim to be timely, transparent, and fair to all users.
+
+By contributing to Cozystack, we affirm these principles and work together to keep the project open, welcoming, and vendor-neutral.
+
+*— The Cozystack community*
CONTRIBUTING.md (changed)
@@ -6,13 +6,13 @@ As you get started, you are in the best position to give us feedbacks on areas o

 * Problems found while setting up the development environment
 * Gaps in our documentation
-* Bugs in our Github actions
+* Bugs in our GitHub actions

-First, though, it is important that you read the [code of conduct](CODE_OF_CONDUCT.md).
+First, though, it is important that you read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

 The guidelines below are a starting point. We don't want to limit your
 creativity, passion, and initiative. If you think there's a better way, please
-feel free to bring it up in a Github discussion, or open a pull request. We're
+feel free to bring it up in a GitHub discussion, or open a pull request. We're
 certain there are always better ways to do things, we just need to start some
 constructive dialogue!
@@ -23,9 +23,9 @@ We welcome many types of contributions including:
 * New features
 * Builds, CI/CD
 * Bug fixes
-* [Documentation](https://github.com/aenix-io/cozystack-website/tree/main)
+* [Documentation](https://github.com/cozystack/cozystack-website/tree/main)
 * Issue Triage
-* Answering questions on Slack or Github Discussions
+* Answering questions on Slack or GitHub Discussions
 * Web design
 * Communications / Social Media / Blog Posts
 * Events participation
@@ -34,7 +34,7 @@ We welcome many types of contributions including:
 ## Ask for Help

 The best way to reach us with a question when contributing is to drop a line in
-our [Telegram channel](https://t.me/cozystack), or start a new Github discussion.
+our [Telegram channel](https://t.me/cozystack), or start a new GitHub discussion.

 ## Raising Issues

CONTRIBUTOR_LADDER.md (new file, 151 lines)
@@ -0,0 +1,151 @@
|
||||
# Contributor Ladder
|
||||
|
||||
* [Contributor Ladder](#contributor-ladder)
|
||||
* [Community Participant](#community-participant)
|
||||
* [Contributor](#contributor)
|
||||
* [Reviewer](#reviewer)
|
||||
* [Maintainer](#maintainer)
|
||||
* [Inactivity](#inactivity)
|
||||
* [Involuntary Removal](#involuntary-removal-or-demotion)
|
||||
* [Stepping Down/Emeritus Process](#stepping-downemeritus-process)
|
||||
* [Contact](#contact)
|
||||
|
||||
|
||||
## Contributor Ladder
|
||||
|
||||
Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
|
||||
|
||||
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
|
||||
|
||||
|
||||
### Community Participant
|
||||
Description: A Community Participant engages with the project and its community, contributing their time, thoughts, etc. Community participants are usually users who have stopped being anonymous and started being active in project discussions.
|
||||
|
||||
* Responsibilities:
|
||||
* Must follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
|
||||
* How users can get involved with the community:
|
||||
* Participating in community discussions
|
||||
* Helping other users
|
||||
* Submitting bug reports
|
||||
* Commenting on issues
|
||||
* Trying out new releases
|
||||
* Attending community events
|
||||
|
||||
|
||||
### Contributor
|
||||
Description: A Contributor contributes directly to the project and adds value to it. Contributions need not be code. People at the Contributor level may be new contributors, or they may only contribute occasionally.
|
||||
|
||||
* Responsibilities include:
|
||||
* Follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
|
||||
* Follow the project [contributing guide] (https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md)
|
||||
* Requirements (one or several of the below):
|
||||
* Report and sometimes resolve issues
|
||||
* Occasionally submit PRs
|
||||
* Contribute to the documentation
|
||||
* Show up at meetings, takes notes
|
||||
* Answer questions from other community members
|
||||
* Submit feedback on issues and PRs
|
||||
* Test releases and patches and submit reviews
|
||||
* Run or helps run events
|
||||
* Promote the project in public
|
||||
* Help run the project infrastructure
|
||||
* Privileges:
|
||||
* Invitations to contributor events
|
||||
* Eligible to become a Maintainer
|
||||
|
||||
|
||||
### Reviewer
|
||||
Description: A Reviewer has responsibility for specific code, documentation, test, or other project areas. They are collectively responsible, with other Reviewers, for reviewing all changes to those areas and indicating whether those changes are ready to merge. They have a track record of contribution and review in the project.
|
||||
|
||||
Reviewers are responsible for a "specific area." This can be a specific code directory, driver, chapter of the docs, test job, event, or other clearly-defined project component that is smaller than an entire repository or subproject. Most often it is one or a set of directories in one or more Git repositories. The "specific area" below refers to this area of responsibility.
|
||||
|
||||
Reviewers have all the rights and responsibilities of a Contributor, plus:
|
||||
|
||||
* Responsibilities include:
|
||||
* Continuing to contribute regularly, as demonstrated by having at least 15 PRs a year, as recorded in [Cozystack devstats](https://cozystack.devstats.cncf.io).
|
||||
* Following the reviewing guide
|
||||
* Reviewing most Pull Requests against their specific areas of responsibility
|
||||
* Reviewing at least 40 PRs per year
|
||||
* Helping other contributors become reviewers
|
||||
* Requirements:
|
||||
* Must have successful contributions to the project, including at least one of the following:
|
||||
* 10 accepted PRs,
|
||||
* Reviewed 20 PRs,
|
||||
* Resolved and closed 20 Issues,
|
||||
* Become responsible for a key project management area,
|
||||
* Or some equivalent combination of contributions
|
||||
* Must have been contributing for at least 6 months
|
||||
* Must be actively contributing to at least one project area
|
||||
* Must have two sponsors who are also Reviewers or Maintainers, at least one of whom does not work for the same employer
|
||||
* Has reviewed, or helped review, at least 20 Pull Requests
|
||||
* Has analyzed and resolved test failures in their specific area
|
||||
* Has demonstrated an in-depth knowledge of the specific area
|
||||
* Commits to being responsible for that specific area
|
||||
* Is supportive of new and occasional contributors and helps get useful PRs in shape to commit
|
||||
* Additional privileges:
|
||||
* Has GitHub or CI/CD rights to approve pull requests in specific directories
|
||||
* Can recommend and review other contributors to become Reviewers
|
||||
* May be assigned Issues and Reviews
|
||||
* May give commands to CI/CD automation
|
||||
|
||||
|
||||
The process of becoming a Reviewer is:
|
||||
1. The contributor is nominated by opening a PR against the appropriate repository, which adds their GitHub username to the OWNERS file for one or more directories.
|
||||
2. At least two members of the team that owns that repository or main directory, who are already Approvers, approve the PR.
|
||||
|
||||
|
||||
### Maintainer
|
||||
Description: Maintainers are very established contributors who are responsible for the entire project. As such, they have the ability to approve PRs against any area of the project, and are expected to participate in making decisions about the strategy and priorities of the project.
|
||||
|
||||
A Maintainer must meet the responsibilities and requirements of a Reviewer, plus:
|
||||
|
||||
* Responsibilities include:
|
||||
* Reviewing at least 40 PRs per year, especially PRs that involve multiple parts of the project
|
||||
* Mentoring new Reviewers
|
||||
* Writing refactoring PRs
|
||||
* Participating in CNCF maintainer activities
|
||||
* Determining strategy and policy for the project
|
||||
* Participating in, and leading, community meetings
|
||||
* Requirements:
|
||||
* Experience as a Reviewer for at least 6 months
|
||||
* Demonstrates a broad knowledge of the project across multiple areas
|
||||
* Is able to exercise judgment for the good of the project, independent of their employer, friends, or team
|
||||
* Mentors other contributors
|
||||
* Can commit to spending at least 10 hours per month working on the project
|
||||
* Additional privileges:
|
||||
* Approve PRs to any area of the project
|
||||
* Represent the project in public as a Maintainer
|
||||
* Communicate with the CNCF on behalf of the project
|
||||
* Have a vote in Maintainer decision-making meetings
|
||||
|
||||
|
||||
The process of becoming a Maintainer is:
|
||||
1. Any current Maintainer may nominate a current Reviewer to become a new Maintainer, by opening a PR against the root of the cozystack repository adding the nominee as an Approver in the [MAINTAINERS](https://github.com/cozystack/cozystack/blob/main/MAINTAINERS.md) file.
|
||||
2. The nominee will add a comment to the PR testifying that they agree to all requirements of becoming a Maintainer.
|
||||
3. A majority of the current Maintainers must then approve the PR.
|
||||
|
||||
|
||||
## Inactivity
|
||||
It is important for contributors to be and stay active to set an example and show commitment to the project. Inactivity is harmful to the project as it may lead to unexpected delays, contributor attrition, and a loss of trust in the project.
|
||||
|
||||
* Inactivity is measured by:
|
||||
* Periods of no contributions for longer than 6 months
|
||||
* Periods of no communication for longer than 3 months
|
||||
* Consequences of being inactive include:
|
||||
* Involuntary removal or demotion
|
||||
* Being asked to move to Emeritus status
|
||||
|
||||
## Involuntary Removal or Demotion
|
||||
|
||||
Involuntary removal or demotion of a contributor happens when responsibilities and requirements aren't being met. This may include repeated patterns of inactivity, an extended period of inactivity, a period of failing to meet the requirements of your role, and/or a violation of the Code of Conduct. This process is important because it protects the community and its deliverables while also opening up opportunities for new contributors to step in.
|
||||
|
||||
Involuntary removal or demotion is handled through a vote by a majority of the current Maintainers.
|
||||
|
||||
## Stepping Down/Emeritus Process
|
||||
If and when contributors' commitment levels change, they can consider stepping down (moving down the contributor ladder) or moving to emeritus status (stepping away from the project completely).
|
||||
|
||||
Contact the Maintainers about changing to Emeritus status, or reducing your contributor level.
|
||||
|
||||
## Contact
|
||||
* For inquiries, please reach out to: @kvaps, @tym83
|
||||
91
GOVERNANCE.md
Normal file
@@ -0,0 +1,91 @@
|
||||
# Cozystack Governance
|
||||
|
||||
This document defines the governance structure of the Cozystack community, outlining how members collaborate to achieve shared goals.
|
||||
|
||||
## Overview
|
||||
|
||||
**Cozystack**, a Cloud Native Computing Foundation (CNCF) project, is committed
|
||||
to building an open, inclusive, productive, and self-governing open source
|
||||
community focused on developing a high-quality open source PaaS and framework for building clouds.
|
||||
|
||||
## Code Repositories
|
||||
|
||||
The following code repositories are governed by the Cozystack community and
|
||||
maintained under the `cozystack` namespace:
|
||||
|
||||
* **[Cozystack](https://github.com/cozystack/cozystack):** Main Cozystack codebase
|
||||
* **[website](https://github.com/cozystack/website):** Cozystack website and documentation sources
|
||||
* **[Talm](https://github.com/cozystack/talm):** Tool for managing Talos Linux the GitOps way
|
||||
* **[cozy-proxy](https://github.com/cozystack/cozy-proxy):** A simple kube-proxy addon for 1:1 NAT services in Kubernetes with NFT backend
|
||||
* **[cozystack-telemetry-server](https://github.com/cozystack/cozystack-telemetry-server):** Cozystack telemetry
|
||||
* **[talos-bootstrap](https://github.com/cozystack/talos-bootstrap):** An interactive Talos Linux installer
|
||||
* **[talos-meta-tool](https://github.com/cozystack/talos-meta-tool):** Tool for writing network metadata into META partition
|
||||
|
||||
## Community Roles
|
||||
|
||||
* **Users:** Members that engage with the Cozystack community via any medium, including Slack, Telegram, GitHub, and mailing lists.
|
||||
* **Contributors:** Members contributing to the project by writing and reviewing code, writing documentation,
|
||||
responding to issues, participating in proposal discussions, and so on.
|
||||
* **Directors:** Non-technical project leaders.
|
||||
* **Maintainers**: Technical project leaders.
|
||||
|
||||
## Contributors
|
||||
|
||||
Cozystack is for everyone. Anyone can become a Cozystack contributor simply by
|
||||
contributing to the project, whether through code, documentation, blog posts,
|
||||
community management, or other means.
|
||||
As with all Cozystack community members, contributors are expected to follow the
|
||||
[Cozystack Code of Conduct](https://github.com/cozystack/cozystack/blob/main/CODE_OF_CONDUCT.md).
|
||||
|
||||
All contributions to Cozystack code, documentation, or other components in the
|
||||
Cozystack GitHub organisation must follow the
|
||||
[contributing guidelines](https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md).
|
||||
Whether these contributions are merged into the project is the prerogative of the maintainers.
|
||||
|
||||
## Directors
|
||||
|
||||
Directors are responsible for non-technical leadership functions within the project.
|
||||
This includes representing Cozystack and its maintainers to the community, to the press,
|
||||
and to the outside world; interfacing with CNCF and other governance entities;
|
||||
and participating in project decision-making processes when appropriate.
|
||||
|
||||
Directors are elected by a majority vote of the maintainers.
|
||||
|
||||
## Maintainers
|
||||
|
||||
Maintainers have the right to merge code into the project.
|
||||
Anyone can become a Cozystack maintainer (see "Becoming a maintainer" below).
|
||||
|
||||
### Expectations
|
||||
|
||||
Cozystack maintainers are expected to:
|
||||
|
||||
* Review pull requests, triage issues, and fix bugs in their areas of
|
||||
expertise, ensuring that all changes go through the project's code review
|
||||
and integration processes.
|
||||
* Monitor cncf-cozystack-* emails, the Cozystack Slack channels in Kubernetes
|
||||
and CNCF Slack workspaces, Telegram groups, and help out when possible.
|
||||
* Rapidly respond to any time-sensitive security release processes.
|
||||
* Attend Cozystack community meetings.
|
||||
|
||||
If a maintainer is no longer interested in or cannot perform the duties
|
||||
listed above, they should move themselves to emeritus status.
|
||||
If necessary, this can also occur through the decision-making process outlined below.
|
||||
|
||||
### Becoming a Maintainer
|
||||
|
||||
Anyone can become a Cozystack maintainer. Maintainers should be extremely
|
||||
proficient in cloud native technologies and/or Go; have relevant domain expertise;
|
||||
have the time and ability to meet the maintainer's expectations above;
|
||||
and demonstrate the ability to work with the existing maintainers and project processes.
|
||||
|
||||
To become a maintainer, start by expressing interest to existing maintainers.
|
||||
Existing maintainers will then ask you to demonstrate the qualifications above
|
||||
by contributing PRs, doing code reviews, and other such tasks under their guidance.
|
||||
After several months of working together, maintainers will decide whether to grant maintainer status.
|
||||
|
||||
## Project Decision-making Process
|
||||
|
||||
Ideally, all project decisions are resolved by consensus of maintainers and directors.
|
||||
If this is not possible, a vote will be called.
|
||||
The voting process is a simple majority in which each maintainer and director receives one vote.
|
||||
@@ -1,7 +1,12 @@
|
||||
# The Cozystack Maintainers
|
||||
|
||||
| Maintainer | GitHub Username | Company |
|
||||
| ---------- | --------------- | ------- |
|
||||
| Andrei Kvapil | [@kvaps](https://github.com/kvaps) | Ænix |
|
||||
| George Gaál | [@gecube](https://github.com/gecube) | Ænix |
|
||||
| Eduard Generalov | [@egeneralov](https://github.com/egeneralov) | Ænix |
|
||||
| Maintainer | GitHub Username | Company | Responsibility |
|
||||
| ---------- | --------------- | ------- | --------------------------------- |
|
||||
| Andrei Kvapil | [@kvaps](https://github.com/kvaps) | Ænix | Core Maintainer |
|
||||
| George Gaál | [@gecube](https://github.com/gecube) | Ænix | DevOps Practices in Platform, Developer Advocate |
|
||||
| Kingdon Barrett | [@kingdonb](https://github.com/kingdonb) | Urmanac | FluxCD and flux-operator |
|
||||
| Timofei Larkin | [@lllamnyp](https://github.com/lllamnyp) | 3commas | Etcd-operator Lead |
|
||||
| Artem Bortnikov | [@aobort](https://github.com/aobort) | Timescale | Etcd-operator Lead |
|
||||
| Timur Tukaev | [@tym83](https://github.com/tym83) | Ænix | Cozystack Website, Marketing, Community Management |
|
||||
| Kirill Klinchenkov | [@klinch0](https://github.com/klinch0) | Ænix | Core Maintainer |
|
||||
| Nikita Bykov | [@nbykov0](https://github.com/nbykov0) | Ænix | ARM Support |
|
||||
|
||||
92
Makefile
@@ -1,39 +1,93 @@
|
||||
.PHONY: manifests repos assets
|
||||
.PHONY: manifests assets unit-tests helm-unit-tests
|
||||
|
||||
build:
|
||||
include hack/common-envs.mk
|
||||
|
||||
build-deps:
|
||||
@command -V find docker skopeo jq gh helm > /dev/null
|
||||
@yq --version | grep -q "mikefarah" || (echo "mikefarah/yq is required" && exit 1)
|
||||
@tar --version | grep -q GNU || (echo "GNU tar is required" && exit 1)
|
||||
@sed --version | grep -q GNU || (echo "GNU sed is required" && exit 1)
|
||||
@awk --version | grep -q GNU || (echo "GNU awk is required" && exit 1)
|
||||
|
||||
build: build-deps
|
||||
make -C packages/apps/http-cache image
|
||||
make -C packages/apps/postgres image
|
||||
make -C packages/apps/mysql image
|
||||
make -C packages/apps/mariadb image
|
||||
make -C packages/apps/clickhouse image
|
||||
make -C packages/apps/kubernetes image
|
||||
make -C packages/system/monitoring image
|
||||
make -C packages/system/cozystack-api image
|
||||
make -C packages/system/cozystack-controller image
|
||||
make -C packages/system/backup-controller image
|
||||
make -C packages/system/backupstrategy-controller image
|
||||
make -C packages/system/lineage-controller-webhook image
|
||||
make -C packages/system/cilium image
|
||||
make -C packages/system/kubeovn image
|
||||
make -C packages/system/linstor image
|
||||
make -C packages/system/kubeovn-webhook image
|
||||
make -C packages/system/kubeovn-plunger image
|
||||
make -C packages/system/dashboard image
|
||||
make -C packages/system/metallb image
|
||||
make -C packages/system/kamaji image
|
||||
make -C packages/system/bucket image
|
||||
make -C packages/system/objectstorage-controller image
|
||||
make -C packages/system/grafana-operator image
|
||||
make -C packages/core/testing image
|
||||
make -C packages/core/talos image
|
||||
make -C packages/core/platform image
|
||||
make -C packages/core/installer image
|
||||
make manifests
|
||||
|
||||
manifests:
|
||||
(cd packages/core/installer/; helm template -n cozy-installer installer .) > manifests/cozystack-installer.yaml
|
||||
sed -i 's|@sha256:[^"]\+||' manifests/cozystack-installer.yaml
|
||||
mkdir -p _out/assets
|
||||
cat internal/crdinstall/manifests/*.yaml > _out/assets/cozystack-crds.yaml
|
||||
# Talos variant (default)
|
||||
helm template installer packages/core/installer -n cozy-system \
|
||||
--show-only templates/cozystack-operator.yaml \
|
||||
> _out/assets/cozystack-operator-talos.yaml
|
||||
# Generic Kubernetes variant (k3s, kubeadm, RKE2)
|
||||
helm template installer packages/core/installer -n cozy-system \
|
||||
--set cozystackOperator.variant=generic \
|
||||
--set cozystack.apiServerHost=REPLACE_ME \
|
||||
--show-only templates/cozystack-operator.yaml \
|
||||
> _out/assets/cozystack-operator-generic.yaml
|
||||
# Hosted variant (managed Kubernetes)
|
||||
helm template installer packages/core/installer -n cozy-system \
|
||||
--set cozystackOperator.variant=hosted \
|
||||
--show-only templates/cozystack-operator.yaml \
|
||||
> _out/assets/cozystack-operator-hosted.yaml
|
||||
|
||||
repos:
|
||||
rm -rf _out
|
||||
make -C packages/apps check-version-map
|
||||
make -C packages/extra check-version-map
|
||||
make -C packages/system repo
|
||||
make -C packages/apps repo
|
||||
make -C packages/extra repo
|
||||
mkdir -p _out/logos
|
||||
cp ./packages/apps/*/logos/*.svg ./packages/extra/*/logos/*.svg _out/logos/
|
||||
cozypkg:
|
||||
go build -ldflags "-X github.com/cozystack/cozystack/cmd/cozypkg/cmd.Version=v$(COZYSTACK_VERSION)" -o _out/bin/cozypkg ./cmd/cozypkg
|
||||
|
||||
assets:
|
||||
make -C packages/core/installer/ assets
|
||||
assets: assets-talos assets-cozypkg
|
||||
|
||||
assets-talos:
|
||||
make -C packages/core/talos assets
|
||||
|
||||
assets-cozypkg: assets-cozypkg-linux-amd64 assets-cozypkg-linux-arm64 assets-cozypkg-darwin-amd64 assets-cozypkg-darwin-arm64 assets-cozypkg-windows-amd64 assets-cozypkg-windows-arm64
|
||||
(cd _out/assets/ && sha256sum cozypkg-*.tar.gz) > _out/assets/cozypkg-checksums.txt
|
||||
|
||||
assets-cozypkg-%:
|
||||
$(eval EXT := $(if $(filter windows,$(firstword $(subst -, ,$*))),.exe,))
|
||||
mkdir -p _out/assets
|
||||
GOOS=$(firstword $(subst -, ,$*)) GOARCH=$(lastword $(subst -, ,$*)) go build -ldflags "-X github.com/cozystack/cozystack/cmd/cozypkg/cmd.Version=v$(COZYSTACK_VERSION)" -o _out/bin/cozypkg-$*/cozypkg$(EXT) ./cmd/cozypkg
|
||||
cp LICENSE _out/bin/cozypkg-$*/LICENSE
|
||||
tar -C _out/bin/cozypkg-$* -czf _out/assets/cozypkg-$*.tar.gz LICENSE cozypkg$(EXT)
|
||||
|
||||
test:
|
||||
make -C packages/core/testing apply
|
||||
make -C packages/core/testing test
|
||||
make -C packages/core/testing test-applications
|
||||
|
||||
unit-tests: helm-unit-tests
|
||||
|
||||
helm-unit-tests:
|
||||
hack/helm-unit-tests.sh
|
||||
|
||||
prepare-env:
|
||||
make -C packages/core/testing apply
|
||||
make -C packages/core/testing prepare-cluster
|
||||
|
||||
generate:
|
||||
hack/update-codegen.sh
|
||||
|
||||
upload_assets: manifests
|
||||
hack/upload-assets.sh
|
||||
|
||||
63
README.md
@@ -2,63 +2,68 @@
|
||||

|
||||
|
||||
[](https://opensource.org/)
|
||||
[](https://opensource.org/licenses/)
|
||||
[](https://aenix.io/contact-us/#meet)
|
||||
[](https://aenix.io/cozystack/)
|
||||
[](https://github.com/aenix-io/cozystack)
|
||||
[](https://github.com/aenix-io/cozystack)
|
||||
[](https://opensource.org/licenses/)
|
||||
[](https://cozystack.io/support/)
|
||||
[](https://github.com/cozystack/cozystack)
|
||||
[](https://github.com/cozystack/cozystack/releases/latest)
|
||||
[](https://github.com/cozystack/cozystack/graphs/contributors)
|
||||
|
||||
# Cozystack
|
||||
|
||||
**Cozystack** is a free PaaS platform and framework for building clouds.
|
||||
|
||||
With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
|
||||
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
|
||||
|
||||
You can use Cozystack to build your own cloud or to provide a cost-effective development environments.
|
||||
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
|
||||
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
|
||||
|
||||
Use Cozystack to build your own cloud or provide a cost-effective development environment.
|
||||
|
||||

|
||||
|
||||
## Use-Cases
|
||||
|
||||
* [**Using Cozystack to build public cloud**](https://cozystack.io/docs/use-cases/public-cloud/)
|
||||
You can use Cozystack as backend for a public cloud
|
||||
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
|
||||
You can use Cozystack as a backend for a public cloud
|
||||
|
||||
* [**Using Cozystack to build private cloud**](https://cozystack.io/docs/use-cases/private-cloud/)
|
||||
You can use Cozystack as platform to build a private cloud powered by Infrastructure-as-Code approach
|
||||
* [**Using Cozystack to build a private cloud**](https://cozystack.io/docs/guides/use-cases/private-cloud/)
|
||||
You can use Cozystack as a platform to build a private cloud powered by the Infrastructure-as-Code approach
|
||||
|
||||
* [**Using Cozystack as Kubernetes distribution**](https://cozystack.io/docs/use-cases/kubernetes-distribution/)
|
||||
You can use Cozystack as Kubernetes distribution for Bare Metal
|
||||
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
|
||||
You can use Cozystack as a Kubernetes distribution for Bare Metal
|
||||
|
||||
## Screenshot
|
||||
|
||||

|
||||
|
||||
## Documentation
|
||||
|
||||
The documentation is located on official [cozystack.io](https://cozystack.io) website.
|
||||
The documentation is located on the [cozystack.io](https://cozystack.io) website.
|
||||
|
||||
Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
|
||||
Read the [Getting Started](https://cozystack.io/docs/getting-started/) section for a quick start.
|
||||
|
||||
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/troubleshooting/), and work your way through the process that we've outlined.
|
||||
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/operations/troubleshooting/) and work your way through the process that we've outlined.
|
||||
|
||||
## Versioning
|
||||
|
||||
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
|
||||
A full list of the available releases is available in the GitHub repository's [Release](https://github.com/aenix-io/cozystack/releases) section.
|
||||
A full list of the available releases is available in the GitHub repository's [Release](https://github.com/cozystack/cozystack/releases) section.
|
||||
|
||||
- [Roadmap](https://github.com/orgs/aenix-io/projects/2)
|
||||
- [Roadmap](https://cozystack.io/docs/roadmap/)
|
||||
|
||||
## Contributions
|
||||
|
||||
Contributions are highly appreciated and very welcome!
|
||||
|
||||
In case of bugs, please, check if the issue has been already opened by checking the [GitHub Issues](https://github.com/aenix-io/cozystack/issues) section.
|
||||
In case it isn't, you can open a new one: a detailed report will help us to replicate it, assess it, and work on a fix.
|
||||
In case of bugs, please check if the issue has already been opened by checking the [GitHub Issues](https://github.com/cozystack/cozystack/issues) section.
|
||||
If it isn't, you can open a new one. A detailed report will help us replicate it, assess it, and work on a fix.
|
||||
|
||||
You can express your intention in working on the fix on your own.
|
||||
You can express your intention to work on the fix on your own.
|
||||
Commits are used to generate the changelog, and their author will be referenced in it.
|
||||
|
||||
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
|
||||
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
|
||||
|
||||
You can join our weekly community meetings (just add this events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
|
||||
## Community
|
||||
|
||||
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
|
||||
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
|
||||
|
||||
## License
|
||||
|
||||
@@ -67,8 +72,4 @@ The code is provided as-is with no warranties.
|
||||
|
||||
## Commercial Support
|
||||
|
||||
[**Ænix**](https://aenix.io) offers enterprise-grade support, available 24/7.
|
||||
|
||||
We provide all types of assistance, including consultations, development of missing features, design, assistance with installation, and integration.
|
||||
|
||||
[Contact us](https://aenix.io/contact/)
|
||||
A list of companies providing commercial support for this project can be found on the [official site](https://cozystack.io/support/).
|
||||
|
||||
1
api/.gitattributes
vendored
Normal file
@@ -0,0 +1 @@
|
||||
zz_generated_deepcopy.go linguist-generated
|
||||
@@ -1,4 +1,5 @@
|
||||
API rule violation: list_type_missing,github.com/aenix.io/cozystack/pkg/apis/apps/v1alpha1,ApplicationStatus,Conditions
|
||||
API rule violation: list_type_missing,github.com/cozystack/cozystack/pkg/apis/apps/v1alpha1,ApplicationStatus,Conditions
|
||||
API rule violation: list_type_missing,github.com/cozystack/cozystack/pkg/apis/core/v1alpha1,TenantModuleStatus,Conditions
|
||||
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Ref
|
||||
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Schema
|
||||
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XEmbeddedResource
|
||||
|
||||
37
api/backups/strategy/v1alpha1/groupversion_info.go
Normal file
@@ -0,0 +1,37 @@
|
||||
/*
|
||||
Copyright 2025.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
|
||||
// +kubebuilder:object:generate=true
|
||||
// +groupName=strategy.backups.cozystack.io
|
||||
package v1alpha1
|
||||
|
||||
import (
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
)
|
||||
|
||||
var (
|
||||
GroupVersion = schema.GroupVersion{Group: "strategy.backups.cozystack.io", Version: "v1alpha1"}
|
||||
SchemeBuilder = runtime.NewSchemeBuilder(addGroupVersion)
|
||||
AddToScheme = SchemeBuilder.AddToScheme
|
||||
)
|
||||
|
||||
func addGroupVersion(scheme *runtime.Scheme) error {
|
||||
metav1.AddToGroupVersion(scheme, GroupVersion)
|
||||
return nil
|
||||
}
|
||||
63
api/backups/strategy/v1alpha1/job_types.go
Normal file
@@ -0,0 +1,63 @@
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
// Package v1alpha1 defines strategy.backups.cozystack.io API types.
|
||||
//
|
||||
// Group: strategy.backups.cozystack.io
|
||||
// Version: v1alpha1
|
||||
package v1alpha1
|
||||
|
||||
import (
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
)
|
||||
|
||||
func init() {
|
||||
SchemeBuilder.Register(func(s *runtime.Scheme) error {
|
||||
s.AddKnownTypes(GroupVersion,
|
||||
&Job{},
|
||||
&JobList{},
|
||||
)
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
const (
|
||||
JobStrategyKind = "Job"
|
||||
)
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:subresource:status
|
||||
// +kubebuilder:resource:scope=Cluster
|
||||
|
||||
// Job defines a backup strategy using a one-shot Job
|
||||
type Job struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec JobSpec `json:"spec,omitempty"`
|
||||
Status JobStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
|
||||
// JobList contains a list of backup Jobs.
|
||||
type JobList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []Job `json:"items"`
|
||||
}
|
||||
|
||||
// JobSpec specifies the desired behavior of a backup job.
|
||||
type JobSpec struct {
|
||||
// Template holds a PodTemplateSpec with the right shape to
|
||||
// run a single pod to completion and create a tarball with
|
||||
// a given app's data. Helm-like Go templates are supported.
|
||||
// The values of the source application are available under
|
||||
// `.Values`. `.Release.Name` and `.Release.Namespace` are
|
||||
// also exported.
|
||||
Template corev1.PodTemplateSpec `json:"template"`
|
||||
}
|
||||
|
||||
type JobStatus struct {
|
||||
Conditions []metav1.Condition `json:"conditions,omitempty"`
|
||||
}
|
||||
65
api/backups/strategy/v1alpha1/velero_types.go
Normal file
@@ -0,0 +1,65 @@
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
// Package v1alpha1 defines strategy.backups.cozystack.io API types.
|
||||
//
|
||||
// Group: strategy.backups.cozystack.io
|
||||
// Version: v1alpha1
|
||||
package v1alpha1
|
||||
|
||||
import (
|
||||
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
)
|
||||
|
||||
func init() {
|
||||
SchemeBuilder.Register(func(s *runtime.Scheme) error {
|
||||
s.AddKnownTypes(GroupVersion,
|
||||
&Velero{},
|
||||
&VeleroList{},
|
||||
)
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
const (
|
||||
VeleroStrategyKind = "Velero"
|
||||
)
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:subresource:status
|
||||
// +kubebuilder:resource:scope=Cluster
|
||||
|
||||
// Velero defines a backup strategy using Velero as the driver.
|
||||
type Velero struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec VeleroSpec `json:"spec,omitempty"`
|
||||
Status VeleroStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
|
||||
// VeleroList contains a list of Velero backup strategies.
|
||||
type VeleroList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []Velero `json:"items"`
|
||||
}
|
||||
|
||||
// VeleroSpec specifies the desired strategy for backing up with Velero.
|
||||
type VeleroSpec struct {
|
||||
Template VeleroTemplate `json:"template"`
|
||||
}
|
||||
|
||||
// VeleroTemplate describes the data a backup.velero.io object should have when
|
||||
// templated from a Velero backup strategy.
|
||||
type VeleroTemplate struct {
|
||||
Spec velerov1.BackupSpec `json:"spec"`
|
||||
// +optional
|
||||
RestoreSpec *velerov1.RestoreSpec `json:"restoreSpec,omitempty"`
|
||||
}
|
||||
|
||||
type VeleroStatus struct {
|
||||
Conditions []metav1.Condition `json:"conditions,omitempty"`
|
||||
}
|
||||
242
api/backups/strategy/v1alpha1/zz_generated.deepcopy.go
Normal file
@@ -0,0 +1,242 @@
|
||||
//go:build !ignore_autogenerated
|
||||
|
||||
/*
|
||||
Copyright 2025 The Cozystack Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Code generated by controller-gen. DO NOT EDIT.
|
||||
|
||||
package v1alpha1
|
||||
|
||||
import (
|
||||
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
)
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *Job) DeepCopyInto(out *Job) {
|
||||
*out = *in
|
||||
out.TypeMeta = in.TypeMeta
|
||||
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
|
||||
in.Spec.DeepCopyInto(&out.Spec)
|
||||
in.Status.DeepCopyInto(&out.Status)
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Job.
|
||||
func (in *Job) DeepCopy() *Job {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(Job)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
|
||||
func (in *Job) DeepCopyObject() runtime.Object {
|
||||
if c := in.DeepCopy(); c != nil {
|
||||
return c
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *JobList) DeepCopyInto(out *JobList) {
|
||||
*out = *in
|
||||
out.TypeMeta = in.TypeMeta
|
||||
in.ListMeta.DeepCopyInto(&out.ListMeta)
|
||||
if in.Items != nil {
|
||||
in, out := &in.Items, &out.Items
|
||||
*out = make([]Job, len(*in))
|
||||
for i := range *in {
|
||||
(*in)[i].DeepCopyInto(&(*out)[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobList.
|
||||
func (in *JobList) DeepCopy() *JobList {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(JobList)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
|
||||
func (in *JobList) DeepCopyObject() runtime.Object {
|
||||
if c := in.DeepCopy(); c != nil {
|
||||
return c
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *JobSpec) DeepCopyInto(out *JobSpec) {
|
||||
*out = *in
|
||||
in.Template.DeepCopyInto(&out.Template)
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobSpec.
|
||||
func (in *JobSpec) DeepCopy() *JobSpec {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(JobSpec)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *JobStatus) DeepCopyInto(out *JobStatus) {
|
||||
*out = *in
|
||||
if in.Conditions != nil {
|
||||
in, out := &in.Conditions, &out.Conditions
|
||||
*out = make([]v1.Condition, len(*in))
|
||||
for i := range *in {
|
||||
(*in)[i].DeepCopyInto(&(*out)[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobStatus.
|
||||
func (in *JobStatus) DeepCopy() *JobStatus {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(JobStatus)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *Velero) DeepCopyInto(out *Velero) {
|
||||
*out = *in
|
||||
out.TypeMeta = in.TypeMeta
|
||||
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
|
||||
in.Spec.DeepCopyInto(&out.Spec)
|
||||
in.Status.DeepCopyInto(&out.Status)
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Velero.
|
||||
func (in *Velero) DeepCopy() *Velero {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(Velero)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
|
||||
func (in *Velero) DeepCopyObject() runtime.Object {
|
||||
if c := in.DeepCopy(); c != nil {
|
||||
return c
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *VeleroList) DeepCopyInto(out *VeleroList) {
|
||||
*out = *in
|
||||
out.TypeMeta = in.TypeMeta
|
||||
in.ListMeta.DeepCopyInto(&out.ListMeta)
|
||||
if in.Items != nil {
|
||||
in, out := &in.Items, &out.Items
|
||||
*out = make([]Velero, len(*in))
|
||||
for i := range *in {
|
||||
(*in)[i].DeepCopyInto(&(*out)[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroList.
|
||||
func (in *VeleroList) DeepCopy() *VeleroList {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(VeleroList)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
|
||||
func (in *VeleroList) DeepCopyObject() runtime.Object {
|
||||
if c := in.DeepCopy(); c != nil {
|
||||
return c
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *VeleroSpec) DeepCopyInto(out *VeleroSpec) {
|
||||
*out = *in
|
||||
in.Template.DeepCopyInto(&out.Template)
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroSpec.
|
||||
func (in *VeleroSpec) DeepCopy() *VeleroSpec {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(VeleroSpec)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *VeleroStatus) DeepCopyInto(out *VeleroStatus) {
|
||||
*out = *in
|
||||
if in.Conditions != nil {
|
||||
in, out := &in.Conditions, &out.Conditions
|
||||
*out = make([]v1.Condition, len(*in))
|
||||
for i := range *in {
|
||||
(*in)[i].DeepCopyInto(&(*out)[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroStatus.
|
||||
func (in *VeleroStatus) DeepCopy() *VeleroStatus {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(VeleroStatus)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *VeleroTemplate) DeepCopyInto(out *VeleroTemplate) {
|
||||
*out = *in
|
||||
in.Spec.DeepCopyInto(&out.Spec)
|
||||
if in.RestoreSpec != nil {
|
||||
in, out := &in.RestoreSpec, &out.RestoreSpec
|
||||
*out = new(velerov1.RestoreSpec)
|
||||
(*in).DeepCopyInto(*out)
|
||||
}
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroTemplate.
|
||||
func (in *VeleroTemplate) DeepCopy() *VeleroTemplate {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(VeleroTemplate)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
478
api/backups/v1alpha1/DESIGN.md
Normal file
@@ -0,0 +1,478 @@
|
||||
# Cozystack Backups – Core API & Contracts (Draft)
|
||||
|
||||
## 1. Overview
|
||||
|
||||
Cozystack’s backup subsystem provides a generic, composable way to back up and restore managed applications:
|
||||
|
||||
* Every **application instance** can have one or more **backup plans**.
|
||||
* Backups are stored in configurable **storage locations**.
|
||||
* The mechanics of *how* a backup/restore is performed are delegated to **strategy drivers**, each implementing driver-specific **BackupStrategy** CRDs.
|
||||
|
||||
The core API:
|
||||
|
||||
* Orchestrates **when** backups happen and **where** they’re stored.
|
||||
* Tracks **what** backups exist and their status.
|
||||
* Defines contracts with drivers via shared resources (`BackupJob`, `Backup`, `RestoreJob`).
|
||||
|
||||
It does **not** implement the backup logic itself.
|
||||
|
||||
This document covers only the **core** API and its contracts with drivers, not driver implementations.
|
||||
|
||||
---
|
||||
|
||||
## 2. Goals and non-goals
|
||||
|
||||
### Goals
|
||||
|
||||
* Provide a **stable core API** for:
|
||||
|
||||
* Declaring **backup plans** per application.
|
||||
* Configuring **storage targets** (S3, in-cluster bucket, etc.).
|
||||
* Tracking **backup artifacts**.
|
||||
* Initiating and tracking **restores**.
|
||||
* Allow multiple **strategy drivers** to plug in, each supporting specific kinds of applications and strategies.
|
||||
* Let application/product authors implement backup for their kinds by:
|
||||
|
||||
* Creating **Plan** objects referencing a **driver-specific strategy**.
|
||||
* Not having to write a backup engine themselves.
|
||||
|
||||
### Non-goals
|
||||
|
||||
* Implement backup logic for any specific application or storage backend.
|
||||
* Define the internal structure of driver-specific strategy CRDs.
|
||||
* Handle tenant-facing UI/UX (that’s built on top of these APIs).
|
||||
|
||||
---
|
||||
|
||||
## 3. Architecture
|
||||
|
||||
High-level components:
|
||||
|
||||
* **Core backups controller(s)** (Cozystack-owned):
|
||||
|
||||
* Group: `backups.cozystack.io`
|
||||
* Own:
|
||||
|
||||
* `Plan`
|
||||
* `BackupJob`
|
||||
* `Backup`
|
||||
* `RestoreJob`
|
||||
* Responsibilities:
|
||||
|
||||
* Schedule backups based on `Plan`.
|
||||
* Create `BackupJob` objects when due.
|
||||
* Provide stable contracts for drivers to:
|
||||
|
||||
* Perform backups and create `Backup`s.
|
||||
* Perform restores based on `Backup`s.
|
||||
|
||||
* **Strategy drivers** (pluggable, possibly third-party):
|
||||
|
||||
* Their own API groups, e.g. `jobdriver.backups.cozystack.io`.
|
||||
* Own **strategy CRDs** (e.g. `JobBackupStrategy`).
|
||||
* Implement controllers that:
|
||||
|
||||
* Watch `BackupJob` / `RestoreJob`.
|
||||
* Match runs whose `strategyRef` GVK they support.
|
||||
* Execute backup/restore logic.
|
||||
* Create and update `Backup` and run statuses.
|
||||
|
||||
Strategy drivers and core communicate entirely via Kubernetes objects; there are no webhook/HTTP calls between them.
|
||||
|
||||
* **Storage drivers** (pluggable, possibly third-party):
|
||||
|
||||
* **TBD**
|
||||
|
||||
---
|
||||
|
||||
## 4. Core API resources
|
||||
|
||||
### 4.1 Plan
|
||||
|
||||
**Group/Kind**
|
||||
`backups.cozystack.io/v1alpha1, Kind=Plan`
|
||||
|
||||
**Purpose**
|
||||
Describe **when**, **how**, and **where** to back up a specific managed application.
|
||||
|
||||
**Key fields (spec)**
|
||||
|
||||
```go
|
||||
type PlanSpec struct {
|
||||
// Application to back up.
|
||||
// If apiGroup is not specified, it defaults to "apps.cozystack.io".
|
||||
ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`
|
||||
|
||||
// BackupClassName references a BackupClass that contains strategy and other parameters (e.g. storage reference).
|
||||
// The BackupClass will be resolved to determine the appropriate strategy and parameters
|
||||
// based on the ApplicationRef.
|
||||
BackupClassName string `json:"backupClassName"`
|
||||
|
||||
// When backups should run.
|
||||
Schedule PlanSchedule `json:"schedule"`
|
||||
}
|
||||
```
|
||||
|
||||
`PlanSchedule` (initially) supports only cron:
|
||||
|
||||
```go
|
||||
type PlanScheduleType string
|
||||
|
||||
const (
|
||||
PlanScheduleTypeEmpty PlanScheduleType = ""
|
||||
PlanScheduleTypeCron PlanScheduleType = "cron"
|
||||
)
|
||||
```
|
||||
|
||||
```go
|
||||
type PlanSchedule struct {
|
||||
// Type is the schedule type. Currently only "cron" is supported.
|
||||
// Defaults to "cron".
|
||||
Type PlanScheduleType `json:"type,omitempty"`
|
||||
|
||||
// Cron expression (required for cron type).
|
||||
Cron string `json:"cron,omitempty"`
|
||||
}
|
||||
```
|
||||
|
||||
**Plan reconciliation contract**
|
||||
|
||||
Core Plan controller:
|
||||
|
||||
1. **Read schedule** from `spec.schedule` and compute the next fire time.
|
||||
2. When due:
|
||||
|
||||
* Create a `BackupJob` in the same namespace:
|
||||
|
||||
* `spec.planRef.name = plan.Name`
|
||||
* `spec.applicationRef = plan.spec.applicationRef` (normalized with default apiGroup if not specified)
|
||||
* `spec.backupClassName = plan.spec.backupClassName`
|
||||
* Set `ownerReferences` so the `BackupJob` is owned by the `Plan`.
|
||||
|
||||
**Note:** The `BackupJob` controller resolves the `BackupClass` to determine the appropriate strategy and parameters, based on the `ApplicationRef`. The strategy template is processed with a context containing the `Application` object and `Parameters` from the `BackupClass`.
|
||||
|
||||
The Plan controller does **not**:
|
||||
|
||||
* Execute backups itself.
|
||||
* Modify driver resources or `Backup` objects.
|
||||
* Touch `BackupJob.spec` after creation.
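
To make step 1 of the contract concrete, here is a minimal sketch of computing a Plan's next fire time. It assumes the widely used `github.com/robfig/cron/v3` parser; the actual core controller may use a different implementation.

```go
// Minimal scheduling sketch, assuming github.com/robfig/cron/v3.
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

// nextFireTime parses a standard 5-field cron expression and returns
// the next time a BackupJob should be created after `now`.
func nextFireTime(cronExpr string, now time.Time) (time.Time, error) {
	sched, err := cron.ParseStandard(cronExpr)
	if err != nil {
		return time.Time{}, fmt.Errorf("invalid cron expression %q: %w", cronExpr, err)
	}
	return sched.Next(now), nil
}

func main() {
	// A Plan with schedule {type: cron, cron: "0 2 * * *"} fires daily at 02:00.
	next, err := nextFireTime("0 2 * * *", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println("next BackupJob due at:", next)
}
```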
|
||||
|
||||
---
|
||||
|
||||
### 4.2 BackupClass
|
||||
|
||||
**Group/Kind**
|
||||
`backups.cozystack.io/v1alpha1, Kind=BackupClass`
|
||||
|
||||
**Purpose**
|
||||
Define a class of backup configurations that encapsulate strategy and parameters per application type. `BackupClass` is a cluster-scoped resource that allows admins to configure backup strategies and parameters in a reusable way.
|
||||
|
||||
**Key fields (spec)**
|
||||
|
||||
```go
|
||||
type BackupClassSpec struct {
|
||||
// Strategies is a list of backup strategies, each matching a specific application type.
|
||||
Strategies []BackupClassStrategy `json:"strategies"`
|
||||
}
|
||||
|
||||
type BackupClassStrategy struct {
|
||||
// StrategyRef references the driver-specific BackupStrategy (e.g., Velero).
|
||||
StrategyRef corev1.TypedLocalObjectReference `json:"strategyRef"`
|
||||
|
||||
// Application specifies which application types this strategy applies to.
|
||||
// If apiGroup is not specified, it defaults to "apps.cozystack.io".
|
||||
Application ApplicationSelector `json:"application"`
|
||||
|
||||
// Parameters holds strategy-specific parameters, like storage reference.
|
||||
// Common parameters include:
|
||||
// - backupStorageLocationName: Name of Velero BackupStorageLocation
|
||||
// +optional
|
||||
Parameters map[string]string `json:"parameters,omitempty"`
|
||||
}
|
||||
|
||||
type ApplicationSelector struct {
|
||||
// APIGroup is the API group of the application.
|
||||
// If not specified, defaults to "apps.cozystack.io".
|
||||
// +optional
|
||||
APIGroup *string `json:"apiGroup,omitempty"`
|
||||
|
||||
// Kind is the kind of the application (e.g., VirtualMachine, MariaDB).
|
||||
Kind string `json:"kind"`
|
||||
}
|
||||
```
|
||||
|
||||
**BackupClass resolution**
|
||||
|
||||
* When a `BackupJob` or `Plan` references a `BackupClass` via `backupClassName`, the controller:
|
||||
1. Fetches the `BackupClass` by name.
|
||||
2. Matches the `ApplicationRef` against strategies in the `BackupClass`:
|
||||
* Normalizes `ApplicationRef.apiGroup` (defaults to `"apps.cozystack.io"` if not specified).
|
||||
* Finds a strategy where `ApplicationSelector` matches the `ApplicationRef` (apiGroup and kind).
|
||||
3. Returns the matched `StrategyRef` and `Parameters`.
|
||||
* Strategy templates (e.g., Velero's `backupTemplate.spec`) are processed with a context containing:
|
||||
* `Application`: The application object being backed up.
|
||||
* `Parameters`: The parameters from the matched `BackupClassStrategy`.
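
The matching rules above can be sketched in a few lines of Go, reusing the types defined in this document. `resolveStrategy` and `apiGroupOrDefault` are hypothetical helpers, not part of the API (assumed imports: `fmt` and `corev1 "k8s.io/api/core/v1"`).

```go
const defaultAPIGroup = "apps.cozystack.io"

// apiGroupOrDefault normalizes an optional apiGroup pointer.
func apiGroupOrDefault(g *string) string {
	if g != nil && *g != "" {
		return *g
	}
	return defaultAPIGroup
}

// resolveStrategy returns the first strategy in the BackupClass whose
// ApplicationSelector matches the given application reference.
func resolveStrategy(class BackupClassSpec, ref corev1.TypedLocalObjectReference) (*BackupClassStrategy, error) {
	group := apiGroupOrDefault(ref.APIGroup)
	for i := range class.Strategies {
		s := &class.Strategies[i]
		if apiGroupOrDefault(s.Application.APIGroup) == group && s.Application.Kind == ref.Kind {
			return s, nil
		}
	}
	return nil, fmt.Errorf("no strategy in BackupClass matches %s/%s", group, ref.Kind)
}
```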
|
||||
|
||||
**Parameters**
|
||||
|
||||
* Parameters are passed via `Parameters` in the `BackupClass` (e.g., `backupStorageLocationName` for Velero).
|
||||
* The driver uses these parameters to resolve the actual resources (e.g., Velero's `BackupStorageLocation` CRD).
|
||||
|
||||
---
|
||||
|
||||
### 4.3 BackupJob
|
||||
|
||||
**Group/Kind**
|
||||
`backups.cozystack.io/v1alpha1, Kind=BackupJob`
|
||||
|
||||
**Purpose**
|
||||
Represent a single **execution** of a backup operation, typically created when a `Plan` fires or when a user triggers an ad-hoc backup.
|
||||
|
||||
**Key fields (spec)**
|
||||
|
||||
```go
|
||||
type BackupJobSpec struct {
|
||||
// Plan that triggered this run, if any.
|
||||
PlanRef *corev1.LocalObjectReference `json:"planRef,omitempty"`
|
||||
|
||||
// Application to back up.
|
||||
// If apiGroup is not specified, it defaults to "apps.cozystack.io".
|
||||
ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`
|
||||
|
||||
// BackupClassName references a BackupClass that contains strategy and related parameters.
|
||||
// The BackupClass will be resolved to determine the appropriate strategy and parameters
|
||||
// based on the ApplicationRef.
|
||||
BackupClassName string `json:"backupClassName"`
|
||||
}
|
||||
```
|
||||
|
||||
**Key fields (status)**
|
||||
|
||||
```go
|
||||
type BackupJobStatus struct {
|
||||
Phase BackupJobPhase `json:"phase,omitempty"`
|
||||
BackupRef *corev1.LocalObjectReference `json:"backupRef,omitempty"`
|
||||
StartedAt *metav1.Time `json:"startedAt,omitempty"`
|
||||
CompletedAt *metav1.Time `json:"completedAt,omitempty"`
|
||||
Message string `json:"message,omitempty"`
|
||||
Conditions []metav1.Condition `json:"conditions,omitempty"`
|
||||
}
|
||||
```
|
||||
|
||||
`BackupJobPhase` is one of: `Pending`, `Running`, `Succeeded`, `Failed`.
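
In Go, these phases would naturally be a string-typed enum; the constant names below are assumptions (only the values are fixed by this document):

```go
type BackupJobPhase string

const (
	BackupJobPhasePending   BackupJobPhase = "Pending"
	BackupJobPhaseRunning   BackupJobPhase = "Running"
	BackupJobPhaseSucceeded BackupJobPhase = "Succeeded"
	BackupJobPhaseFailed    BackupJobPhase = "Failed"
)
```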
|
||||
|
||||
**BackupJob contract with drivers**
|
||||
|
||||
* Core **creates** `BackupJob` and must treat `spec` as immutable afterwards.
|
||||
* Each driver controller:
|
||||
|
||||
* Watches `BackupJob`.
|
||||
* Resolves the `BackupClass` referenced by `spec.backupClassName`.
|
||||
* Matches the `ApplicationRef` against strategies in the `BackupClass` to find the appropriate strategy.
|
||||
* Reconciles runs where the resolved strategy's `apiGroup/kind` matches its **strategy type(s)**.
|
||||
* Driver responsibilities:
|
||||
|
||||
1. On first reconcile:
|
||||
|
||||
* Set `status.startedAt` if unset.
|
||||
* Set `status.phase = Running`.
|
||||
2. Resolve inputs:
|
||||
|
||||
* Resolve `BackupClass` from `spec.backupClassName`.
|
||||
* Match `ApplicationRef` against `BackupClass` strategies to get `StrategyRef` and `Parameters`.
|
||||
* Read `Strategy` (driver-owned CRD) from `StrategyRef`.
|
||||
* Read `Application` from `ApplicationRef`.
|
||||
* Extract parameters from `Parameters` (e.g., `backupStorageLocationName` for Velero).
|
||||
* Process strategy template with context: `Application` object and `Parameters` from `BackupClass`.
|
||||
3. Execute backup logic (implementation-specific).
|
||||
4. On success:
|
||||
|
||||
* Create a `Backup` resource (see below).
|
||||
* Set `status.backupRef` to the created `Backup`.
|
||||
* Set `status.completedAt`.
|
||||
* Set `status.phase = Succeeded`.
|
||||
5. On failure:
|
||||
|
||||
* Set `status.completedAt`.
|
||||
* Set `status.phase = Failed`.
|
||||
* Set `status.message` and conditions.
|
||||
|
||||
Drivers must **not** modify `BackupJob.spec` or delete `BackupJob` themselves.
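
As a rough illustration of the responsibilities above, here is a controller-runtime-style sketch of a driver's reconcile loop. The import path, phase string values (matching the phases listed earlier), and the `runBackup` helper are illustrative assumptions, not a prescribed implementation.

```go
package driver

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Assumed import path for the core types described in this document.
	backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1"
)

type Reconciler struct {
	client.Client
}

// runBackup stands in for the driver-specific backup logic (step 3).
func (r *Reconciler) runBackup(ctx context.Context, job *backupsv1alpha1.BackupJob) (*backupsv1alpha1.Backup, error) {
	panic("driver-specific")
}

func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var job backupsv1alpha1.BackupJob
	if err := r.Get(ctx, req.NamespacedName, &job); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Per the contract, drivers never touch spec: only status is written.
	if job.Status.StartedAt == nil {
		now := metav1.Now()
		job.Status.StartedAt = &now
		job.Status.Phase = "Running"
		return ctrl.Result{}, r.Status().Update(ctx, &job)
	}
	backup, err := r.runBackup(ctx, &job)
	now := metav1.Now()
	job.Status.CompletedAt = &now
	if err != nil {
		job.Status.Phase = "Failed"
		job.Status.Message = err.Error()
	} else {
		job.Status.Phase = "Succeeded"
		job.Status.BackupRef = &corev1.LocalObjectReference{Name: backup.Name}
	}
	return ctrl.Result{}, r.Status().Update(ctx, &job)
}
```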
|
||||
|
||||
---
|
||||
|
||||
### 4.4 Backup
|
||||
|
||||
**Group/Kind**
|
||||
`backups.cozystack.io/v1alpha1, Kind=Backup`
|
||||
|
||||
**Purpose**
|
||||
Represent a single **backup artifact** for a given application, decoupled from a particular run and usable as a stable, listable “thing you can restore from”.
|
||||
|
||||
**Key fields (spec)**
|
||||
|
||||
```go
|
||||
type BackupSpec struct {
|
||||
ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`
|
||||
PlanRef *corev1.LocalObjectReference `json:"planRef,omitempty"`
|
||||
StrategyRef corev1.TypedLocalObjectReference `json:"strategyRef"`
|
||||
TakenAt metav1.Time `json:"takenAt"`
|
||||
DriverMetadata map[string]string `json:"driverMetadata,omitempty"`
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** Parameters are not stored directly in `Backup`. Instead, they are resolved from `BackupClass` parameters when the backup was created. The storage location is managed by the driver (e.g., Velero's `BackupStorageLocation`) and referenced via parameters in the `BackupClass`.
|
||||
|
||||
**Key fields (status)**
|
||||
|
||||
```go
|
||||
type BackupStatus struct {
|
||||
Phase BackupPhase `json:"phase,omitempty"` // Pending, Ready, Failed, etc.
|
||||
Artifact *BackupArtifact `json:"artifact,omitempty"`
|
||||
Conditions []metav1.Condition `json:"conditions,omitempty"`
|
||||
}
|
||||
```
|
||||
|
||||
`BackupArtifact` describes the artifact (URI, size, checksum).
|
||||
|
||||
**Backup contract with drivers**
|
||||
|
||||
* On successful completion of a `BackupJob`, the **driver**:
|
||||
|
||||
* Creates a `Backup` in the same namespace (typically owned by the `BackupJob`).
|
||||
* Populates `spec` fields with:
|
||||
|
||||
* The application reference.
|
||||
* The strategy reference (resolved from `BackupClass` during `BackupJob` execution).
|
||||
* `takenAt`.
|
||||
* Optional `driverMetadata`.
|
||||
* Sets `status` with:
|
||||
|
||||
* `phase = Ready` (or equivalent when fully usable).
|
||||
* `artifact` describing the stored object.
|
||||
* Core:
|
||||
|
||||
* Treats `Backup` spec as mostly immutable and opaque.
|
||||
* Uses it to:
|
||||
|
||||
* List backups for a given application/plan.
|
||||
* Anchor `RestoreJob` operations.
|
||||
* Implement higher-level policies (retention) if needed.
|
||||
|
||||
**Note:** Parameters are resolved from `BackupClass` when the `BackupJob` is created. The driver uses these parameters to determine where to store backups. The storage location itself is managed by the driver (e.g., Velero's `BackupStorageLocation` CRD) and is not directly referenced in the `Backup` resource. When restoring, the driver resolves the storage location from the original `BackupClass` parameters or from the driver's own metadata.
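
Continuing the reconcile sketch above, a driver might populate the `Backup` like this after a successful run. This is a fragment assumed to execute inside the driver's backup logic (e.g. `runBackup`); `resolvedStrategyRef` and `params` come from `BackupClass` resolution, and the `driverMetadata` key shown is only an example.

```go
backup := &backupsv1alpha1.Backup{
	ObjectMeta: metav1.ObjectMeta{
		Name:      job.Name, // e.g. reuse the BackupJob name
		Namespace: job.Namespace,
	},
	Spec: backupsv1alpha1.BackupSpec{
		ApplicationRef: job.Spec.ApplicationRef,
		PlanRef:        job.Spec.PlanRef,
		StrategyRef:    resolvedStrategyRef,
		TakenAt:        metav1.Now(),
		// Record where the artifact went so restores can find it later.
		DriverMetadata: map[string]string{
			"backupStorageLocationName": params["backupStorageLocationName"],
		},
	},
}
// Owning the Backup by its BackupJob (as the contract suggests) ties
// artifact bookkeeping to the run; drivers may choose another model.
if err := controllerutil.SetOwnerReference(&job, backup, r.Scheme()); err != nil {
	return nil, err
}
if err := r.Create(ctx, backup); err != nil {
	return nil, err
}
```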
|
||||
|
||||
---

### 4.5 RestoreJob

**Group/Kind**
`backups.cozystack.io/v1alpha1, Kind=RestoreJob`

**Purpose**
Represent a single **restore operation** from a `Backup`, either back into the same application or into a new target application.

**Key fields (spec)**

```go
type RestoreJobSpec struct {
    // Backup to restore from.
    BackupRef corev1.LocalObjectReference `json:"backupRef"`

    // Target application; if omitted, drivers SHOULD restore into
    // backup.spec.applicationRef.
    TargetApplicationRef *corev1.TypedLocalObjectReference `json:"targetApplicationRef,omitempty"`
}
```

**Key fields (status)**

```go
type RestoreJobStatus struct {
    Phase       RestoreJobPhase    `json:"phase,omitempty"` // Pending, Running, Succeeded, Failed
    StartedAt   *metav1.Time       `json:"startedAt,omitempty"`
    CompletedAt *metav1.Time       `json:"completedAt,omitempty"`
    Message     string             `json:"message,omitempty"`
    Conditions  []metav1.Condition `json:"conditions,omitempty"`
}
```

**RestoreJob contract with drivers**

* RestoreJob is created either manually or by core.
* Driver controller:

  1. Watches `RestoreJob`.
  2. On reconcile:

     * Fetches the referenced `Backup`.
     * Determines the effective:

       * **Strategy**: `backup.spec.strategyRef`.
       * **Storage**: resolved from driver metadata or `BackupClass` parameters (e.g., a `backupStorageLocationName` stored in `driverMetadata` or resolved from the original `BackupClass`).
       * **Target application**: `spec.targetApplicationRef` or `backup.spec.applicationRef`.
     * If the effective strategy's GVK is one of its supported strategy types, the driver is responsible.
  3. Behaviour (see the condensed sketch after this list):

     * On first reconcile, set `status.startedAt` and `phase = Running`.
     * Resolve the `Backup`, the storage location (from driver metadata or `BackupClass`), the `Strategy`, and the target application.
     * Execute restore logic (implementation-specific).
     * On success:

       * Set `status.completedAt`.
       * Set `status.phase = Succeeded`.
     * On failure:

       * Set `status.completedAt`.
       * Set `status.phase = Failed`.
       * Set `status.message` and conditions.

Drivers must not modify `RestoreJob.spec` or delete `RestoreJob`.
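To make the contract concrete, here is a condensed sketch of a driver's `RestoreJob` reconciler, assuming a controller-runtime based driver and the same assumed import path as above; `ownsStrategy` and `restore` are hypothetical helpers standing in for the GVK filter (a fuller sketch appears in section 5) and the driver-specific restore logic:

```go
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1" // assumed import path
)

// RestoreReconciler is an assumed controller-runtime reconciler owned by a driver.
type RestoreReconciler struct {
    client.Client
}

func (r *RestoreReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var rj backupsv1alpha1.RestoreJob
    if err := r.Get(ctx, req.NamespacedName, &rj); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // Terminal phases are written once; nothing further to do.
    if rj.Status.Phase == backupsv1alpha1.RestoreJobPhaseSucceeded ||
        rj.Status.Phase == backupsv1alpha1.RestoreJobPhaseFailed {
        return ctrl.Result{}, nil
    }

    // Fetch the referenced Backup; its strategyRef decides ownership.
    var backup backupsv1alpha1.Backup
    key := client.ObjectKey{Namespace: rj.Namespace, Name: rj.Spec.BackupRef.Name}
    if err := r.Get(ctx, key, &backup); err != nil {
        return ctrl.Result{}, err
    }
    if !ownsStrategy(backup.Spec.StrategyRef) {
        return ctrl.Result{}, nil // another driver's strategy; ignore
    }

    // First reconcile: mark the run as started.
    if rj.Status.StartedAt == nil {
        now := metav1.Now()
        rj.Status.StartedAt = &now
        rj.Status.Phase = backupsv1alpha1.RestoreJobPhaseRunning
        if err := r.Status().Update(ctx, &rj); err != nil {
            return ctrl.Result{}, err
        }
    }

    // Effective target: spec.targetApplicationRef or the original application.
    target := backup.Spec.ApplicationRef
    if rj.Spec.TargetApplicationRef != nil {
        target = *rj.Spec.TargetApplicationRef
    }

    restoreErr := restore(ctx, &backup, target)
    done := metav1.Now()
    rj.Status.CompletedAt = &done
    if restoreErr != nil {
        rj.Status.Phase = backupsv1alpha1.RestoreJobPhaseFailed
        rj.Status.Message = restoreErr.Error()
    } else {
        rj.Status.Phase = backupsv1alpha1.RestoreJobPhaseSucceeded
    }
    return ctrl.Result{}, r.Status().Update(ctx, &rj)
}

// ownsStrategy is a hypothetical GVK filter; see the sketch in section 5.
func ownsStrategy(ref corev1.TypedLocalObjectReference) bool { return true }

// restore stands in for the driver-specific restore logic.
func restore(ctx context.Context, b *backupsv1alpha1.Backup, target corev1.TypedLocalObjectReference) error {
    return nil
}
```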
---

## 5. Strategy drivers (high-level)

Strategy drivers are separate controllers that:

* Define their own **strategy CRDs** (e.g. `JobBackupStrategy`) in their own API groups:

  * e.g. `jobdriver.backups.cozystack.io/v1alpha1, Kind=JobBackupStrategy`
* Implement the **BackupJob contract**:

  * Watch `BackupJob`.
  * Filter by the strategy GVK resolved from the referenced `BackupClass`.
  * Execute backup logic.
  * Create/update `Backup`.
* Implement the **RestoreJob contract** (see the ownership sketch after this list):

  * Watch `RestoreJob`.
  * Resolve the `Backup`, then the effective `strategyRef`.
  * Filter by the effective strategy GVK.
  * Execute restore logic.

The core backups API **does not** dictate:

* The fields and structure of driver strategy specs.
* How drivers implement backup/restore internally (Jobs, snapshots, native operator CRDs, etc.).

Drivers are interchangeable as long as they respect:

* The `BackupJob` and `RestoreJob` contracts.
* The shape and semantics of `Backup` objects.
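As an illustration of the "filter by strategy GVK" step, a driver's ownership check can be as small as comparing the resolved `strategyRef` against the driver's own strategy kinds. The group and kind below are taken from the `JobBackupStrategy` example above; treat this as a sketch, not a prescribed implementation:

```go
package main

import corev1 "k8s.io/api/core/v1"

// Example values for the JobBackupStrategy driver mentioned above.
const (
    driverStrategyGroup = "jobdriver.backups.cozystack.io"
    driverStrategyKind  = "JobBackupStrategy"
)

// ownsStrategy reports whether a resolved strategyRef belongs to this driver.
// Drivers supporting several strategy kinds would check against a set instead.
func ownsStrategy(ref corev1.TypedLocalObjectReference) bool {
    return ref.APIGroup != nil &&
        *ref.APIGroup == driverStrategyGroup &&
        ref.Kind == driverStrategyKind
}
```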
---

## 6. Summary

The Cozystack backups core API:

* Uses a single group, `backups.cozystack.io`, for all core CRDs.
* Cleanly separates:

  * **When** (Plan schedule) – core-owned.
  * **How & where** (BackupClass) – the central configuration unit that encapsulates strategy and parameters (e.g., the storage reference) per application type, resolved per BackupJob/Plan.
  * **Execution** (BackupJob) – created by a Plan when its schedule fires; resolves the BackupClass to obtain strategy and parameters, then delegates to the driver.
  * **What backup artifacts exist** (Backup) – driver-created but cluster-visible.
  * **Restore lifecycle** (RestoreJob) – shared contract boundary.
* Allows multiple strategy drivers to implement backup/restore logic without entangling their implementations with the core API.

A rough wiring example follows, using the v1alpha1 types introduced in the file listings below.
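The snippet builds a `BackupClass` and a `Plan` that references it; all names, the driver group/kind, and the `backupStorageLocationName` value are illustrative only:

```go
package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1" // assumed import path
)

// exampleObjects wires a cluster-scoped BackupClass to a namespaced Plan.
func exampleObjects() (*backupsv1alpha1.BackupClass, *backupsv1alpha1.Plan) {
    strategyGroup := "jobdriver.backups.cozystack.io" // example driver group

    class := &backupsv1alpha1.BackupClass{
        ObjectMeta: metav1.ObjectMeta{Name: "default"},
        Spec: backupsv1alpha1.BackupClassSpec{
            Strategies: []backupsv1alpha1.BackupClassStrategy{{
                // Which driver handles this application type, and how.
                StrategyRef: corev1.TypedLocalObjectReference{
                    APIGroup: &strategyGroup,
                    Kind:     "JobBackupStrategy",
                    Name:     "velero-default",
                },
                Application: backupsv1alpha1.ApplicationSelector{Kind: "MariaDB"},
                Parameters: map[string]string{
                    "backupStorageLocationName": "default", // driver-interpreted
                },
            }},
        },
    }

    plan := &backupsv1alpha1.Plan{
        ObjectMeta: metav1.ObjectMeta{Name: "mariadb-nightly", Namespace: "tenant-a"},
        Spec: backupsv1alpha1.PlanSpec{
            // apiGroup defaults to "apps.cozystack.io" when omitted.
            ApplicationRef:  corev1.TypedLocalObjectReference{Kind: "MariaDB", Name: "db1"},
            BackupClassName: class.Name,
            Schedule: backupsv1alpha1.PlanSchedule{
                Type: backupsv1alpha1.PlanScheduleTypeCron,
                Cron: "0 3 * * *", // nightly at 03:00
            },
        },
    }
    return class, plan
}
```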
114  api/backups/v1alpha1/backup_types.go  Normal file
@@ -0,0 +1,114 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines backups.cozystack.io API types.
//
// Group: backups.cozystack.io
// Version: v1alpha1
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

func init() {
    SchemeBuilder.Register(func(s *runtime.Scheme) error {
        s.AddKnownTypes(GroupVersion,
            &Backup{},
            &BackupList{},
        )
        return nil
    })
}

// BackupPhase represents the lifecycle phase of a Backup.
type BackupPhase string

const (
    BackupPhaseEmpty   BackupPhase = ""
    BackupPhasePending BackupPhase = "Pending"
    BackupPhaseReady   BackupPhase = "Ready"
    BackupPhaseFailed  BackupPhase = "Failed"
)

// BackupArtifact describes the stored backup object (tarball, snapshot, etc.).
type BackupArtifact struct {
    // URI is a driver-/storage-specific URI pointing to the backup artifact.
    // For example: s3://bucket/prefix/file.tar.gz
    URI string `json:"uri"`

    // SizeBytes is the size of the artifact in bytes, if known.
    // +optional
    SizeBytes int64 `json:"sizeBytes,omitempty"`

    // Checksum is the checksum of the artifact, if computed.
    // For example: "sha256:<hex>".
    // +optional
    Checksum string `json:"checksum,omitempty"`
}

// BackupSpec describes an immutable backup artifact produced by a BackupJob.
type BackupSpec struct {
    // ApplicationRef refers to the application that was backed up.
    ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`

    // PlanRef refers to the Plan that produced this backup, if any.
    // For manually triggered backups, this can be omitted.
    // +optional
    PlanRef *corev1.LocalObjectReference `json:"planRef,omitempty"`

    // StrategyRef refers to the driver-specific BackupStrategy that was used
    // to create this backup. This allows the driver to later perform restores.
    StrategyRef corev1.TypedLocalObjectReference `json:"strategyRef"`

    // TakenAt is the time at which the backup was taken (as reported by the
    // driver). It may differ slightly from metadata.creationTimestamp.
    TakenAt metav1.Time `json:"takenAt"`

    // DriverMetadata holds driver-specific, opaque metadata associated with
    // this backup (for example snapshot IDs, schema versions, etc.).
    // This data is not interpreted by the core backup controllers.
    // +optional
    DriverMetadata map[string]string `json:"driverMetadata,omitempty"`
}

// BackupStatus represents the observed state of a Backup.
type BackupStatus struct {
    // Phase is a simple, high-level summary of the backup's state.
    // Typical values are: Pending, Ready, Failed.
    // +optional
    Phase BackupPhase `json:"phase,omitempty"`

    // Artifact describes the stored backup object, if available.
    // +optional
    Artifact *BackupArtifact `json:"artifact,omitempty"`

    // Conditions represents the latest available observations of a Backup's state.
    // +optional
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// The field indexing on applicationRef will be needed later to display per-app backup resources.

// +kubebuilder:object:root=true
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.apiGroup`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.kind`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.name`

// Backup represents a single backup artifact for a given application.
type Backup struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   BackupSpec   `json:"spec,omitempty"`
    Status BackupStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// BackupList contains a list of Backups.
type BackupList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []Backup `json:"items"`
}
85  api/backups/v1alpha1/backupclass_types.go  Normal file
@@ -0,0 +1,85 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines backups.cozystack.io API types.
//
// Group: backups.cozystack.io
// Version: v1alpha1
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

func init() {
    SchemeBuilder.Register(func(s *runtime.Scheme) error {
        s.AddKnownTypes(GroupVersion,
            &BackupClass{},
            &BackupClassList{},
        )
        return nil
    })
}

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster
// +kubebuilder:subresource:status

// BackupClass defines a class of backup configurations that can be referenced
// by BackupJob and Plan resources. It encapsulates strategy and storage configuration
// per application type.
type BackupClass struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   BackupClassSpec   `json:"spec,omitempty"`
    Status BackupClassStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// BackupClassList contains a list of BackupClasses.
type BackupClassList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []BackupClass `json:"items"`
}

// BackupClassSpec defines the desired state of a BackupClass.
type BackupClassSpec struct {
    // Strategies is a list of backup strategies, each matching a specific application type.
    Strategies []BackupClassStrategy `json:"strategies"`
}

// BackupClassStrategy defines a backup strategy for a specific application type.
type BackupClassStrategy struct {
    // StrategyRef references the driver-specific BackupStrategy (e.g., Velero).
    StrategyRef corev1.TypedLocalObjectReference `json:"strategyRef"`

    // Application specifies which application types this strategy applies to.
    Application ApplicationSelector `json:"application"`

    // Parameters holds strategy-specific and storage-specific parameters.
    // Common parameters include:
    // - backupStorageLocationName: Name of Velero BackupStorageLocation
    // +optional
    Parameters map[string]string `json:"parameters,omitempty"`
}

// ApplicationSelector specifies which application types a strategy applies to.
type ApplicationSelector struct {
    // APIGroup is the API group of the application.
    // If not specified, defaults to "apps.cozystack.io".
    // +optional
    APIGroup *string `json:"apiGroup,omitempty"`

    // Kind is the kind of the application (e.g., VirtualMachine, MariaDB).
    Kind string `json:"kind"`
}

// BackupClassStatus defines the observed state of a BackupClass.
type BackupClassStatus struct {
    // Conditions represents the latest available observations of a BackupClass's state.
    // +optional
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}
131  api/backups/v1alpha1/backupjob_types.go  Normal file
@@ -0,0 +1,131 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines backups.cozystack.io API types.
//
// Group: backups.cozystack.io
// Version: v1alpha1
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

func init() {
    SchemeBuilder.Register(func(s *runtime.Scheme) error {
        s.AddKnownTypes(GroupVersion,
            &BackupJob{},
            &BackupJobList{},
        )
        return nil
    })
}

const (
    OwningJobNameLabel      = thisGroup + "/owned-by.BackupJobName"
    OwningJobNamespaceLabel = thisGroup + "/owned-by.BackupJobNamespace"

    // DefaultApplicationAPIGroup is the default API group for applications
    // when not specified in ApplicationRef or ApplicationSelector.
    DefaultApplicationAPIGroup = "apps.cozystack.io"
)

// BackupJobPhase represents the lifecycle phase of a BackupJob.
type BackupJobPhase string

const (
    BackupJobPhaseEmpty     BackupJobPhase = ""
    BackupJobPhasePending   BackupJobPhase = "Pending"
    BackupJobPhaseRunning   BackupJobPhase = "Running"
    BackupJobPhaseSucceeded BackupJobPhase = "Succeeded"
    BackupJobPhaseFailed    BackupJobPhase = "Failed"
)

// BackupJobSpec describes the execution of a single backup operation.
type BackupJobSpec struct {
    // PlanRef refers to the Plan that requested this backup run.
    // For ad-hoc/manual backups, this can be omitted.
    // +optional
    PlanRef *corev1.LocalObjectReference `json:"planRef,omitempty"`

    // ApplicationRef holds a reference to the managed application whose state
    // is being backed up.
    // If apiGroup is not specified, it defaults to "apps.cozystack.io".
    ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`

    // BackupClassName references a BackupClass that contains strategy and storage configuration.
    // The BackupClass will be resolved to determine the appropriate strategy and storage
    // based on the ApplicationRef.
    // This field is immutable once the BackupJob is created.
    // +kubebuilder:validation:MinLength=1
    // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="backupClassName is immutable"
    BackupClassName string `json:"backupClassName"`
}

// BackupJobStatus represents the observed state of a BackupJob.
type BackupJobStatus struct {
    // Phase is a high-level summary of the run's state.
    // Typical values: Pending, Running, Succeeded, Failed.
    // +optional
    Phase BackupJobPhase `json:"phase,omitempty"`

    // BackupRef refers to the Backup object created by this run, if any.
    // +optional
    BackupRef *corev1.LocalObjectReference `json:"backupRef,omitempty"`

    // StartedAt is the time at which the backup run started.
    // +optional
    StartedAt *metav1.Time `json:"startedAt,omitempty"`

    // CompletedAt is the time at which the backup run completed (successfully
    // or otherwise).
    // +optional
    CompletedAt *metav1.Time `json:"completedAt,omitempty"`

    // Message is a human-readable message indicating details about why the
    // backup run is in its current phase, if any.
    // +optional
    Message string `json:"message,omitempty"`

    // Conditions represents the latest available observations of a BackupJob's state.
    // +optional
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// The field indexing on applicationRef will be needed later to display per-app backup resources.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase",priority=0
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.apiGroup`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.kind`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.name`

// BackupJob represents a single execution of a backup.
// It is typically created by a Plan controller when a schedule fires.
type BackupJob struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   BackupJobSpec   `json:"spec,omitempty"`
    Status BackupJobStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// BackupJobList contains a list of BackupJobs.
type BackupJobList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []BackupJob `json:"items"`
}

// NormalizeApplicationRef sets the default apiGroup to DefaultApplicationAPIGroup if it's not specified.
// This function is exported so it can be used by other packages (e.g., controllers, factories).
func NormalizeApplicationRef(ref corev1.TypedLocalObjectReference) corev1.TypedLocalObjectReference {
    if ref.APIGroup == nil || *ref.APIGroup == "" {
        apiGroup := DefaultApplicationAPIGroup
        ref.APIGroup = &apiGroup
    }
    return ref
}
42  api/backups/v1alpha1/groupversion_info.go  Normal file
@@ -0,0 +1,42 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=backups.cozystack.io
package v1alpha1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
)

const (
    thisGroup   = "backups.cozystack.io"
    thisVersion = "v1alpha1"
)

var (
    GroupVersion  = schema.GroupVersion{Group: thisGroup, Version: thisVersion}
    SchemeBuilder = runtime.NewSchemeBuilder(addGroupVersion)
    AddToScheme   = SchemeBuilder.AddToScheme
)

func addGroupVersion(scheme *runtime.Scheme) error {
    metav1.AddToGroupVersion(scheme, GroupVersion)
    return nil
}
96  api/backups/v1alpha1/plan_types.go  Normal file
@@ -0,0 +1,96 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines backups.cozystack.io API types.
//
// Group: backups.cozystack.io
// Version: v1alpha1
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

func init() {
    SchemeBuilder.Register(func(s *runtime.Scheme) error {
        s.AddKnownTypes(GroupVersion,
            &Plan{},
            &PlanList{},
        )
        return nil
    })
}

// PlanScheduleType is the type of a Plan's schedule specification.
type PlanScheduleType string

const (
    PlanScheduleTypeEmpty PlanScheduleType = ""
    PlanScheduleTypeCron  PlanScheduleType = "cron"
)

// Conditions
const (
    PlanConditionError = "Error"
)

// The field indexing on applicationRef will be needed later to display per-app backup resources.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.apiGroup`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.kind`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.name`

// Plan describes the schedule, method and storage location for the
// backup of a given target application.
type Plan struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   PlanSpec   `json:"spec,omitempty"`
    Status PlanStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// PlanList contains a list of backup Plans.
type PlanList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []Plan `json:"items"`
}

// PlanSpec references the storage, the strategy, the application to be
// backed up and specifies the timetable on which the backups will run.
type PlanSpec struct {
    // ApplicationRef holds a reference to the managed application,
    // whose state and configuration must be backed up.
    // If apiGroup is not specified, it defaults to "apps.cozystack.io".
    ApplicationRef corev1.TypedLocalObjectReference `json:"applicationRef"`

    // BackupClassName references a BackupClass that contains strategy and storage configuration.
    // The BackupClass will be resolved to determine the appropriate strategy and storage
    // based on the ApplicationRef.
    BackupClassName string `json:"backupClassName"`

    // Schedule specifies when backup copies are created.
    Schedule PlanSchedule `json:"schedule"`
}

// PlanSchedule specifies when backup copies are created.
type PlanSchedule struct {
    // Type is the type of schedule specification. Supported values are
    // [`cron`]. If omitted, defaults to `cron`.
    // +optional
    Type PlanScheduleType `json:"type,omitempty"`

    // Cron contains the cron spec for scheduling backups. Must be
    // specified if the schedule type is `cron`. Since only `cron` is
    // supported, omitting this field is not allowed.
    // +optional
    Cron string `json:"cron,omitempty"`
}

type PlanStatus struct {
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}
93  api/backups/v1alpha1/restorejob_types.go  Normal file
@@ -0,0 +1,93 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines backups.cozystack.io API types.
//
// Group: backups.cozystack.io
// Version: v1alpha1
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

func init() {
    SchemeBuilder.Register(func(s *runtime.Scheme) error {
        s.AddKnownTypes(GroupVersion,
            &RestoreJob{},
            &RestoreJobList{},
        )
        return nil
    })
}

// RestoreJobPhase represents the lifecycle phase of a RestoreJob.
type RestoreJobPhase string

const (
    RestoreJobPhaseEmpty     RestoreJobPhase = ""
    RestoreJobPhasePending   RestoreJobPhase = "Pending"
    RestoreJobPhaseRunning   RestoreJobPhase = "Running"
    RestoreJobPhaseSucceeded RestoreJobPhase = "Succeeded"
    RestoreJobPhaseFailed    RestoreJobPhase = "Failed"
)

// RestoreJobSpec describes the execution of a single restore operation.
type RestoreJobSpec struct {
    // BackupRef refers to the Backup that should be restored.
    BackupRef corev1.LocalObjectReference `json:"backupRef"`

    // TargetApplicationRef refers to the application into which the backup
    // should be restored. If omitted, the driver SHOULD restore into the same
    // application as referenced by backup.spec.applicationRef.
    // +optional
    TargetApplicationRef *corev1.TypedLocalObjectReference `json:"targetApplicationRef,omitempty"`
}

// RestoreJobStatus represents the observed state of a RestoreJob.
type RestoreJobStatus struct {
    // Phase is a high-level summary of the run's state.
    // Typical values: Pending, Running, Succeeded, Failed.
    // +optional
    Phase RestoreJobPhase `json:"phase,omitempty"`

    // StartedAt is the time at which the restore run started.
    // +optional
    StartedAt *metav1.Time `json:"startedAt,omitempty"`

    // CompletedAt is the time at which the restore run completed (successfully
    // or otherwise).
    // +optional
    CompletedAt *metav1.Time `json:"completedAt,omitempty"`

    // Message is a human-readable message indicating details about why the
    // restore run is in its current phase, if any.
    // +optional
    Message string `json:"message,omitempty"`

    // Conditions represents the latest available observations of a RestoreJob's state.
    // +optional
    Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase",priority=0

// RestoreJob represents a single execution of a restore from a Backup.
type RestoreJob struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   RestoreJobSpec   `json:"spec,omitempty"`
    Status RestoreJobStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// RestoreJobList contains a list of RestoreJobs.
type RestoreJobList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []RestoreJob `json:"items"`
}
643  api/backups/v1alpha1/zz_generated.deepcopy.go  Normal file
@@ -0,0 +1,643 @@
//go:build !ignore_autogenerated

/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by controller-gen. DO NOT EDIT.

package v1alpha1

import (
    "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationSelector) DeepCopyInto(out *ApplicationSelector) {
    *out = *in
    if in.APIGroup != nil {
        in, out := &in.APIGroup, &out.APIGroup
        *out = new(string)
        **out = **in
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationSelector.
func (in *ApplicationSelector) DeepCopy() *ApplicationSelector {
    if in == nil {
        return nil
    }
    out := new(ApplicationSelector)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Backup) DeepCopyInto(out *Backup) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    in.Spec.DeepCopyInto(&out.Spec)
    in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Backup.
func (in *Backup) DeepCopy() *Backup {
    if in == nil {
        return nil
    }
    out := new(Backup)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Backup) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupArtifact) DeepCopyInto(out *BackupArtifact) {
    *out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupArtifact.
func (in *BackupArtifact) DeepCopy() *BackupArtifact {
    if in == nil {
        return nil
    }
    out := new(BackupArtifact)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupClass) DeepCopyInto(out *BackupClass) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    in.Spec.DeepCopyInto(&out.Spec)
    in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupClass.
func (in *BackupClass) DeepCopy() *BackupClass {
    if in == nil {
        return nil
    }
    out := new(BackupClass)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BackupClass) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupClassList) DeepCopyInto(out *BackupClassList) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ListMeta.DeepCopyInto(&out.ListMeta)
    if in.Items != nil {
        in, out := &in.Items, &out.Items
        *out = make([]BackupClass, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupClassList.
func (in *BackupClassList) DeepCopy() *BackupClassList {
    if in == nil {
        return nil
    }
    out := new(BackupClassList)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BackupClassList) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupClassSpec) DeepCopyInto(out *BackupClassSpec) {
    *out = *in
    if in.Strategies != nil {
        in, out := &in.Strategies, &out.Strategies
        *out = make([]BackupClassStrategy, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupClassSpec.
func (in *BackupClassSpec) DeepCopy() *BackupClassSpec {
    if in == nil {
        return nil
    }
    out := new(BackupClassSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupClassStatus) DeepCopyInto(out *BackupClassStatus) {
    *out = *in
    if in.Conditions != nil {
        in, out := &in.Conditions, &out.Conditions
        *out = make([]metav1.Condition, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupClassStatus.
func (in *BackupClassStatus) DeepCopy() *BackupClassStatus {
    if in == nil {
        return nil
    }
    out := new(BackupClassStatus)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupClassStrategy) DeepCopyInto(out *BackupClassStrategy) {
    *out = *in
    in.StrategyRef.DeepCopyInto(&out.StrategyRef)
    in.Application.DeepCopyInto(&out.Application)
    if in.Parameters != nil {
        in, out := &in.Parameters, &out.Parameters
        *out = make(map[string]string, len(*in))
        for key, val := range *in {
            (*out)[key] = val
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupClassStrategy.
func (in *BackupClassStrategy) DeepCopy() *BackupClassStrategy {
    if in == nil {
        return nil
    }
    out := new(BackupClassStrategy)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupJob) DeepCopyInto(out *BackupJob) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    in.Spec.DeepCopyInto(&out.Spec)
    in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupJob.
func (in *BackupJob) DeepCopy() *BackupJob {
    if in == nil {
        return nil
    }
    out := new(BackupJob)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BackupJob) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupJobList) DeepCopyInto(out *BackupJobList) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ListMeta.DeepCopyInto(&out.ListMeta)
    if in.Items != nil {
        in, out := &in.Items, &out.Items
        *out = make([]BackupJob, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupJobList.
func (in *BackupJobList) DeepCopy() *BackupJobList {
    if in == nil {
        return nil
    }
    out := new(BackupJobList)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BackupJobList) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupJobSpec) DeepCopyInto(out *BackupJobSpec) {
    *out = *in
    if in.PlanRef != nil {
        in, out := &in.PlanRef, &out.PlanRef
        *out = new(v1.LocalObjectReference)
        **out = **in
    }
    in.ApplicationRef.DeepCopyInto(&out.ApplicationRef)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupJobSpec.
func (in *BackupJobSpec) DeepCopy() *BackupJobSpec {
    if in == nil {
        return nil
    }
    out := new(BackupJobSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupJobStatus) DeepCopyInto(out *BackupJobStatus) {
    *out = *in
    if in.BackupRef != nil {
        in, out := &in.BackupRef, &out.BackupRef
        *out = new(v1.LocalObjectReference)
        **out = **in
    }
    if in.StartedAt != nil {
        in, out := &in.StartedAt, &out.StartedAt
        *out = (*in).DeepCopy()
    }
    if in.CompletedAt != nil {
        in, out := &in.CompletedAt, &out.CompletedAt
        *out = (*in).DeepCopy()
    }
    if in.Conditions != nil {
        in, out := &in.Conditions, &out.Conditions
        *out = make([]metav1.Condition, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupJobStatus.
func (in *BackupJobStatus) DeepCopy() *BackupJobStatus {
    if in == nil {
        return nil
    }
    out := new(BackupJobStatus)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupList) DeepCopyInto(out *BackupList) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ListMeta.DeepCopyInto(&out.ListMeta)
    if in.Items != nil {
        in, out := &in.Items, &out.Items
        *out = make([]Backup, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupList.
func (in *BackupList) DeepCopy() *BackupList {
    if in == nil {
        return nil
    }
    out := new(BackupList)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BackupList) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupSpec) DeepCopyInto(out *BackupSpec) {
    *out = *in
    in.ApplicationRef.DeepCopyInto(&out.ApplicationRef)
    if in.PlanRef != nil {
        in, out := &in.PlanRef, &out.PlanRef
        *out = new(v1.LocalObjectReference)
        **out = **in
    }
    in.StrategyRef.DeepCopyInto(&out.StrategyRef)
    in.TakenAt.DeepCopyInto(&out.TakenAt)
    if in.DriverMetadata != nil {
        in, out := &in.DriverMetadata, &out.DriverMetadata
        *out = make(map[string]string, len(*in))
        for key, val := range *in {
            (*out)[key] = val
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupSpec.
func (in *BackupSpec) DeepCopy() *BackupSpec {
    if in == nil {
        return nil
    }
    out := new(BackupSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BackupStatus) DeepCopyInto(out *BackupStatus) {
    *out = *in
    if in.Artifact != nil {
        in, out := &in.Artifact, &out.Artifact
        *out = new(BackupArtifact)
        **out = **in
    }
    if in.Conditions != nil {
        in, out := &in.Conditions, &out.Conditions
        *out = make([]metav1.Condition, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupStatus.
func (in *BackupStatus) DeepCopy() *BackupStatus {
    if in == nil {
        return nil
    }
    out := new(BackupStatus)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Plan) DeepCopyInto(out *Plan) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    in.Spec.DeepCopyInto(&out.Spec)
    in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Plan.
func (in *Plan) DeepCopy() *Plan {
    if in == nil {
        return nil
    }
    out := new(Plan)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Plan) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PlanList) DeepCopyInto(out *PlanList) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ListMeta.DeepCopyInto(&out.ListMeta)
    if in.Items != nil {
        in, out := &in.Items, &out.Items
        *out = make([]Plan, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PlanList.
func (in *PlanList) DeepCopy() *PlanList {
    if in == nil {
        return nil
    }
    out := new(PlanList)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PlanList) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PlanSchedule) DeepCopyInto(out *PlanSchedule) {
    *out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PlanSchedule.
func (in *PlanSchedule) DeepCopy() *PlanSchedule {
    if in == nil {
        return nil
    }
    out := new(PlanSchedule)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PlanSpec) DeepCopyInto(out *PlanSpec) {
    *out = *in
    in.ApplicationRef.DeepCopyInto(&out.ApplicationRef)
    out.Schedule = in.Schedule
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PlanSpec.
func (in *PlanSpec) DeepCopy() *PlanSpec {
    if in == nil {
        return nil
    }
    out := new(PlanSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PlanStatus) DeepCopyInto(out *PlanStatus) {
    *out = *in
    if in.Conditions != nil {
        in, out := &in.Conditions, &out.Conditions
        *out = make([]metav1.Condition, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PlanStatus.
func (in *PlanStatus) DeepCopy() *PlanStatus {
    if in == nil {
        return nil
    }
    out := new(PlanStatus)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreJob) DeepCopyInto(out *RestoreJob) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    in.Spec.DeepCopyInto(&out.Spec)
    in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreJob.
func (in *RestoreJob) DeepCopy() *RestoreJob {
    if in == nil {
        return nil
    }
    out := new(RestoreJob)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RestoreJob) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreJobList) DeepCopyInto(out *RestoreJobList) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ListMeta.DeepCopyInto(&out.ListMeta)
    if in.Items != nil {
        in, out := &in.Items, &out.Items
        *out = make([]RestoreJob, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreJobList.
func (in *RestoreJobList) DeepCopy() *RestoreJobList {
    if in == nil {
        return nil
    }
    out := new(RestoreJobList)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RestoreJobList) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreJobSpec) DeepCopyInto(out *RestoreJobSpec) {
    *out = *in
    out.BackupRef = in.BackupRef
    if in.TargetApplicationRef != nil {
        in, out := &in.TargetApplicationRef, &out.TargetApplicationRef
        *out = new(v1.TypedLocalObjectReference)
        (*in).DeepCopyInto(*out)
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreJobSpec.
func (in *RestoreJobSpec) DeepCopy() *RestoreJobSpec {
    if in == nil {
        return nil
    }
    out := new(RestoreJobSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreJobStatus) DeepCopyInto(out *RestoreJobStatus) {
    *out = *in
    if in.StartedAt != nil {
        in, out := &in.StartedAt, &out.StartedAt
        *out = (*in).DeepCopy()
    }
    if in.CompletedAt != nil {
        in, out := &in.CompletedAt, &out.CompletedAt
        *out = (*in).DeepCopy()
    }
    if in.Conditions != nil {
        in, out := &in.Conditions, &out.Conditions
        *out = make([]metav1.Condition, len(*in))
        for i := range *in {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreJobStatus.
func (in *RestoreJobStatus) DeepCopy() *RestoreJobStatus {
    if in == nil {
        return nil
    }
    out := new(RestoreJobStatus)
    in.DeepCopyInto(out)
    return out
}
277  api/dashboard/v1alpha1/dashboard_resources.go  Normal file
@@ -0,0 +1,277 @@
// SPDX-License-Identifier: Apache-2.0
|
||||
// Package v1alpha1 defines front.in-cloud.io API types.
|
||||
//
|
||||
// Group: dashboard.cozystack.io
|
||||
// Version: v1alpha1
|
||||
package v1alpha1
|
||||
|
||||
import (
|
||||
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
)
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// Shared shapes
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// CommonStatus is a generic Status block with Kubernetes conditions.
|
||||
type CommonStatus struct {
|
||||
// ObservedGeneration reflects the most recent generation observed by the controller.
|
||||
// +optional
|
||||
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
|
||||
|
||||
// Conditions represent the latest available observations of an object's state.
|
||||
// +optional
|
||||
Conditions []metav1.Condition `json:"conditions,omitempty"`
|
||||
}
|
||||
|
||||
// ArbitrarySpec holds schemaless user data and preserves unknown fields.
|
||||
// We map the entire .spec to a single JSON payload to mirror the CRDs you provided.
|
||||
// NOTE: Using apiextensionsv1.JSON avoids losing arbitrary structure during round-trips.
|
||||
type ArbitrarySpec struct {
|
||||
// +kubebuilder:validation:XPreserveUnknownFields
|
||||
// +kubebuilder:pruning:PreserveUnknownFields
|
||||
v1.JSON `json:",inline"`
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// Sidebar
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:resource:path=sidebars,scope=Cluster
|
||||
// +kubebuilder:subresource:status
|
||||
type Sidebar struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec ArbitrarySpec `json:"spec"`
|
||||
Status CommonStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
type SidebarList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []Sidebar `json:"items"`
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// CustomFormsPrefill (shortName: cfp)
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:resource:path=customformsprefills,scope=Cluster,shortName=cfp
|
||||
// +kubebuilder:subresource:status
|
||||
type CustomFormsPrefill struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec ArbitrarySpec `json:"spec"`
|
||||
Status CommonStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
type CustomFormsPrefillList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []CustomFormsPrefill `json:"items"`
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// BreadcrumbInside
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:resource:path=breadcrumbsinside,scope=Cluster
|
||||
// +kubebuilder:subresource:status
|
||||
type BreadcrumbInside struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec ArbitrarySpec `json:"spec"`
|
||||
Status CommonStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
type BreadcrumbInsideList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []BreadcrumbInside `json:"items"`
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// CustomFormsOverride (shortName: cfo)
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
// +kubebuilder:resource:path=customformsoverrides,scope=Cluster,shortName=cfo
|
||||
// +kubebuilder:subresource:status
|
||||
type CustomFormsOverride struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ObjectMeta `json:"metadata,omitempty"`
|
||||
|
||||
Spec ArbitrarySpec `json:"spec"`
|
||||
Status CommonStatus `json:"status,omitempty"`
|
||||
}
|
||||
|
||||
// +kubebuilder:object:root=true
|
||||
type CustomFormsOverrideList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
Items []CustomFormsOverride `json:"items"`
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// TableUriMapping
|
||||
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=tableurimappings,scope=Cluster
// +kubebuilder:subresource:status
type TableUriMapping struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type TableUriMappingList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []TableUriMapping `json:"items"`
}

// -----------------------------------------------------------------------------
// Breadcrumb
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=breadcrumbs,scope=Cluster
// +kubebuilder:subresource:status
type Breadcrumb struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type BreadcrumbList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Breadcrumb `json:"items"`
}

// -----------------------------------------------------------------------------
// MarketplacePanel
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=marketplacepanels,scope=Cluster
// +kubebuilder:subresource:status
type MarketplacePanel struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type MarketplacePanelList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []MarketplacePanel `json:"items"`
}

// -----------------------------------------------------------------------------
// Navigation
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=navigations,scope=Cluster
// +kubebuilder:subresource:status
type Navigation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type NavigationList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Navigation `json:"items"`
}

// -----------------------------------------------------------------------------
// CustomColumnsOverride
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=customcolumnsoverrides,scope=Cluster
// +kubebuilder:subresource:status
type CustomColumnsOverride struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type CustomColumnsOverrideList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CustomColumnsOverride `json:"items"`
}

// -----------------------------------------------------------------------------
// Factory
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=factories,scope=Cluster
// +kubebuilder:subresource:status
type Factory struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type FactoryList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Factory `json:"items"`
}

// -----------------------------------------------------------------------------
// CustomFormsOverrideMapping
// -----------------------------------------------------------------------------

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=cfomappings,scope=Cluster
// +kubebuilder:subresource:status
type CFOMapping struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArbitrarySpec `json:"spec"`
	Status CommonStatus  `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
type CFOMappingList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CFOMapping `json:"items"`
}
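Every dashboard kind above shares the same schemaless shape: an ArbitrarySpec payload plus a CommonStatus. A minimal sketch of creating one such object with a controller-runtime client, assuming (as the generated deepcopy further down suggests) that ArbitrarySpec wraps a raw apiextensionsv1.JSON value in a field named JSON; the import path and JSON payload are illustrative, not taken from the diff:

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	dashboardv1alpha1 "github.com/cozystack/cozystack/api/dashboard/v1alpha1" // assumed import path
)

func main() {
	scheme := runtime.NewScheme()
	_ = dashboardv1alpha1.AddToScheme(scheme)

	c, err := client.New(config.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	nav := &dashboardv1alpha1.Navigation{
		ObjectMeta: metav1.ObjectMeta{Name: "main-menu"}, // illustrative name
		Spec: dashboardv1alpha1.ArbitrarySpec{
			// Opaque UI payload; interpreted by the dashboard, not validated by the API schema.
			JSON: apiextensionsv1.JSON{Raw: []byte(`{"items":[{"label":"Tenants","path":"/tenants"}]}`)},
		},
	}
	_ = c.Create(context.TODO(), nav)
}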
78  api/dashboard/v1alpha1/groupversion_info.go  Normal file
@@ -0,0 +1,78 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=dashboard.cozystack.io
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

var (
	// GroupVersion is group version used to register these objects.
	GroupVersion = schema.GroupVersion{Group: "dashboard.cozystack.io", Version: "v1alpha1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

func addKnownTypes(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(
		GroupVersion,

		&Sidebar{},
		&SidebarList{},

		&CustomFormsPrefill{},
		&CustomFormsPrefillList{},

		&BreadcrumbInside{},
		&BreadcrumbInsideList{},

		&CustomFormsOverride{},
		&CustomFormsOverrideList{},

		&TableUriMapping{},
		&TableUriMappingList{},

		&Breadcrumb{},
		&BreadcrumbList{},

		&MarketplacePanel{},
		&MarketplacePanelList{},

		&Navigation{},
		&NavigationList{},

		&CustomColumnsOverride{},
		&CustomColumnsOverrideList{},

		&Factory{},
		&FactoryList{},

		&CFOMapping{},
		&CFOMappingList{},
	)
	metav1.AddToGroupVersion(scheme, GroupVersion)
	return nil
}
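This group registers its kinds through a hand-written addKnownTypes rather than per-file init() registration. A small sketch of verifying that registration by resolving a registered Go type back to its GroupVersionKind; only the import path is assumed:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"

	dashboardv1alpha1 "github.com/cozystack/cozystack/api/dashboard/v1alpha1" // assumed import path
)

func main() {
	s := runtime.NewScheme()
	if err := dashboardv1alpha1.AddToScheme(s); err != nil {
		panic(err)
	}
	// ObjectKinds maps a registered Go type to the GroupVersionKinds it was registered under.
	gvks, _, err := s.ObjectKinds(&dashboardv1alpha1.Sidebar{})
	if err != nil {
		panic(err)
	}
	fmt.Println(gvks) // expected: [dashboard.cozystack.io/v1alpha1, Kind=Sidebar]
}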
713  api/dashboard/v1alpha1/zz_generated.deepcopy.go  Normal file
@@ -0,0 +1,713 @@
//go:build !ignore_autogenerated

/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by controller-gen. DO NOT EDIT.

package v1alpha1

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ArbitrarySpec) DeepCopyInto(out *ArbitrarySpec) {
	*out = *in
	in.JSON.DeepCopyInto(&out.JSON)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArbitrarySpec.
func (in *ArbitrarySpec) DeepCopy() *ArbitrarySpec {
	if in == nil {
		return nil
	}
	out := new(ArbitrarySpec)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Breadcrumb) DeepCopyInto(out *Breadcrumb) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Breadcrumb.
func (in *Breadcrumb) DeepCopy() *Breadcrumb {
	if in == nil {
		return nil
	}
	out := new(Breadcrumb)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Breadcrumb) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbInside) DeepCopyInto(out *BreadcrumbInside) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbInside.
func (in *BreadcrumbInside) DeepCopy() *BreadcrumbInside {
	if in == nil {
		return nil
	}
	out := new(BreadcrumbInside)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbInside) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbInsideList) DeepCopyInto(out *BreadcrumbInsideList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]BreadcrumbInside, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbInsideList.
func (in *BreadcrumbInsideList) DeepCopy() *BreadcrumbInsideList {
	if in == nil {
		return nil
	}
	out := new(BreadcrumbInsideList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbInsideList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbList) DeepCopyInto(out *BreadcrumbList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Breadcrumb, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbList.
func (in *BreadcrumbList) DeepCopy() *BreadcrumbList {
	if in == nil {
		return nil
	}
	out := new(BreadcrumbList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CFOMapping) DeepCopyInto(out *CFOMapping) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CFOMapping.
func (in *CFOMapping) DeepCopy() *CFOMapping {
	if in == nil {
		return nil
	}
	out := new(CFOMapping)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CFOMapping) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CFOMappingList) DeepCopyInto(out *CFOMappingList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]CFOMapping, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CFOMappingList.
func (in *CFOMappingList) DeepCopy() *CFOMappingList {
	if in == nil {
		return nil
	}
	out := new(CFOMappingList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CFOMappingList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CommonStatus) DeepCopyInto(out *CommonStatus) {
	*out = *in
	if in.Conditions != nil {
		in, out := &in.Conditions, &out.Conditions
		*out = make([]v1.Condition, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CommonStatus.
func (in *CommonStatus) DeepCopy() *CommonStatus {
	if in == nil {
		return nil
	}
	out := new(CommonStatus)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomColumnsOverride) DeepCopyInto(out *CustomColumnsOverride) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomColumnsOverride.
func (in *CustomColumnsOverride) DeepCopy() *CustomColumnsOverride {
	if in == nil {
		return nil
	}
	out := new(CustomColumnsOverride)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomColumnsOverride) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomColumnsOverrideList) DeepCopyInto(out *CustomColumnsOverrideList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]CustomColumnsOverride, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomColumnsOverrideList.
func (in *CustomColumnsOverrideList) DeepCopy() *CustomColumnsOverrideList {
	if in == nil {
		return nil
	}
	out := new(CustomColumnsOverrideList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomColumnsOverrideList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsOverride) DeepCopyInto(out *CustomFormsOverride) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsOverride.
func (in *CustomFormsOverride) DeepCopy() *CustomFormsOverride {
	if in == nil {
		return nil
	}
	out := new(CustomFormsOverride)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsOverride) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsOverrideList) DeepCopyInto(out *CustomFormsOverrideList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]CustomFormsOverride, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsOverrideList.
func (in *CustomFormsOverrideList) DeepCopy() *CustomFormsOverrideList {
	if in == nil {
		return nil
	}
	out := new(CustomFormsOverrideList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsOverrideList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsPrefill) DeepCopyInto(out *CustomFormsPrefill) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsPrefill.
func (in *CustomFormsPrefill) DeepCopy() *CustomFormsPrefill {
	if in == nil {
		return nil
	}
	out := new(CustomFormsPrefill)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsPrefill) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsPrefillList) DeepCopyInto(out *CustomFormsPrefillList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]CustomFormsPrefill, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsPrefillList.
func (in *CustomFormsPrefillList) DeepCopy() *CustomFormsPrefillList {
	if in == nil {
		return nil
	}
	out := new(CustomFormsPrefillList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsPrefillList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Factory) DeepCopyInto(out *Factory) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Factory.
func (in *Factory) DeepCopy() *Factory {
	if in == nil {
		return nil
	}
	out := new(Factory)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Factory) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FactoryList) DeepCopyInto(out *FactoryList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Factory, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FactoryList.
func (in *FactoryList) DeepCopy() *FactoryList {
	if in == nil {
		return nil
	}
	out := new(FactoryList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *FactoryList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MarketplacePanel) DeepCopyInto(out *MarketplacePanel) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MarketplacePanel.
func (in *MarketplacePanel) DeepCopy() *MarketplacePanel {
	if in == nil {
		return nil
	}
	out := new(MarketplacePanel)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *MarketplacePanel) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MarketplacePanelList) DeepCopyInto(out *MarketplacePanelList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]MarketplacePanel, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MarketplacePanelList.
func (in *MarketplacePanelList) DeepCopy() *MarketplacePanelList {
	if in == nil {
		return nil
	}
	out := new(MarketplacePanelList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *MarketplacePanelList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Navigation) DeepCopyInto(out *Navigation) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Navigation.
func (in *Navigation) DeepCopy() *Navigation {
	if in == nil {
		return nil
	}
	out := new(Navigation)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Navigation) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NavigationList) DeepCopyInto(out *NavigationList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Navigation, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NavigationList.
func (in *NavigationList) DeepCopy() *NavigationList {
	if in == nil {
		return nil
	}
	out := new(NavigationList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *NavigationList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Sidebar) DeepCopyInto(out *Sidebar) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sidebar.
func (in *Sidebar) DeepCopy() *Sidebar {
	if in == nil {
		return nil
	}
	out := new(Sidebar)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Sidebar) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SidebarList) DeepCopyInto(out *SidebarList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Sidebar, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SidebarList.
func (in *SidebarList) DeepCopy() *SidebarList {
	if in == nil {
		return nil
	}
	out := new(SidebarList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *SidebarList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TableUriMapping) DeepCopyInto(out *TableUriMapping) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TableUriMapping.
func (in *TableUriMapping) DeepCopy() *TableUriMapping {
	if in == nil {
		return nil
	}
	out := new(TableUriMapping)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TableUriMapping) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TableUriMappingList) DeepCopyInto(out *TableUriMappingList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]TableUriMapping, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TableUriMappingList.
func (in *TableUriMappingList) DeepCopy() *TableUriMappingList {
	if in == nil {
		return nil
	}
	out := new(TableUriMappingList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TableUriMappingList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}
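The generated DeepCopyObject methods are what make these structs satisfy runtime.Object, which the scheme registration above requires. In practice the copies matter when mutating objects fetched from an informer cache, which must be treated as read-only. A minimal sketch (helper name and import path are illustrative):

package main

import (
	dashboardv1alpha1 "github.com/cozystack/cozystack/api/dashboard/v1alpha1" // assumed import path
)

// mutateCopy illustrates copying before mutating a cached object.
func mutateCopy(cached *dashboardv1alpha1.Breadcrumb) *dashboardv1alpha1.Breadcrumb {
	obj := cached.DeepCopy() // never write to the cached instance itself
	if obj.Labels == nil {
		obj.Labels = map[string]string{}
	}
	obj.Labels["edited"] = "true"
	return obj
}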
177  api/v1alpha1/applicationdefinitions_types.go  Normal file
@@ -0,0 +1,177 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	helmv2 "github.com/fluxcd/helm-controller/api/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster

// ApplicationDefinition is the Schema for the applicationdefinitions API
type ApplicationDefinition struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ApplicationDefinitionSpec `json:"spec,omitempty"`
}

// +kubebuilder:object:root=true

// ApplicationDefinitionList contains a list of ApplicationDefinitions
type ApplicationDefinitionList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ApplicationDefinition `json:"items"`
}

func init() {
	SchemeBuilder.Register(&ApplicationDefinition{}, &ApplicationDefinitionList{})
}

type ApplicationDefinitionSpec struct {
	// Application configuration
	Application ApplicationDefinitionApplication `json:"application"`
	// Release configuration
	Release ApplicationDefinitionRelease `json:"release"`

	// Secret selectors
	Secrets ApplicationDefinitionResources `json:"secrets,omitempty"`
	// Service selectors
	Services ApplicationDefinitionResources `json:"services,omitempty"`
	// Ingress selectors
	Ingresses ApplicationDefinitionResources `json:"ingresses,omitempty"`

	// Dashboard configuration for this resource
	Dashboard *ApplicationDefinitionDashboard `json:"dashboard,omitempty"`
}

type ApplicationDefinitionApplication struct {
	// Kind of the application, used for UI and API
	Kind string `json:"kind"`
	// OpenAPI schema for the application, used for API validation
	OpenAPISchema string `json:"openAPISchema"`
	// Plural name of the application, used for UI and API
	Plural string `json:"plural"`
	// Singular name of the application, used for UI and API
	Singular string `json:"singular"`
}

type ApplicationDefinitionRelease struct {
	// Reference to the chart source
	ChartRef *helmv2.CrossNamespaceSourceReference `json:"chartRef"`
	// Labels for the release
	Labels map[string]string `json:"labels,omitempty"`
	// Prefix for the release name
	Prefix string `json:"prefix"`
}

// ApplicationDefinitionResourceSelector extends metav1.LabelSelector with resourceNames support.
// A resource matches this selector only if it satisfies ALL criteria:
//   - Label selector conditions (matchExpressions and matchLabels)
//   - AND has a name that matches one of the names in resourceNames (if specified)
//
// The resourceNames field supports Go templates with the following variables available:
//   - {{ .name }}: The name of the managing application (from apps.cozystack.io/application.name)
//   - {{ .kind }}: The lowercased kind of the managing application (from apps.cozystack.io/application.kind)
//   - {{ .namespace }}: The namespace of the resource being processed
//
// Example YAML:
//
//	secrets:
//	  include:
//	    - matchExpressions:
//	        - key: badlabel
//	          operator: DoesNotExist
//	      matchLabels:
//	        goodlabel: goodvalue
//	      resourceNames:
//	        - "{{ .name }}-secret"
//	        - "{{ .kind }}-{{ .name }}-tls"
//	        - "specificname"
type ApplicationDefinitionResourceSelector struct {
	metav1.LabelSelector `json:",inline"`
	// ResourceNames is a list of resource names to match
	// If specified, the resource must have one of these exact names to match the selector
	// +optional
	ResourceNames []string `json:"resourceNames,omitempty"`
}

type ApplicationDefinitionResources struct {
	// Exclude contains an array of resource selectors that target resources.
	// If a resource matches the selector in any of the elements in the array, it is
	// hidden from the user, regardless of the matches in the include array.
	Exclude []*ApplicationDefinitionResourceSelector `json:"exclude,omitempty"`
	// Include contains an array of resource selectors that target resources.
	// If a resource matches the selector in any of the elements in the array, and
	// matches none of the selectors in the exclude array, that resource is marked
	// as a tenant resource and is visible to users.
	Include []*ApplicationDefinitionResourceSelector `json:"include,omitempty"`
}

// ---- Dashboard types ----

// DashboardTab enumerates allowed UI tabs.
// +kubebuilder:validation:Enum=workloads;ingresses;services;secrets;yaml
type DashboardTab string

const (
	DashboardTabWorkloads DashboardTab = "workloads"
	DashboardTabIngresses DashboardTab = "ingresses"
	DashboardTabServices  DashboardTab = "services"
	DashboardTabSecrets   DashboardTab = "secrets"
	DashboardTabYAML      DashboardTab = "yaml"
)

// ApplicationDefinitionDashboard describes how this resource appears in the UI.
type ApplicationDefinitionDashboard struct {
	// Human-readable name shown in the UI (e.g., "Bucket")
	Singular string `json:"singular"`
	// Plural human-readable name (e.g., "Buckets")
	Plural string `json:"plural"`
	// Hard-coded name used in the UI (e.g., "bucket")
	// +optional
	Name string `json:"name,omitempty"`
	// Whether this resource is singular (not a collection) in the UI
	// +optional
	SingularResource bool `json:"singularResource,omitempty"`
	// Order weight for sorting resources in the UI (lower first)
	// +optional
	Weight int `json:"weight,omitempty"`
	// Short description shown in catalogs or headers (e.g., "S3 compatible storage")
	// +optional
	Description string `json:"description,omitempty"`
	// Icon encoded as a string (e.g., inline SVG, base64, or data URI)
	// +optional
	Icon string `json:"icon,omitempty"`
	// Category used to group resources in the UI (e.g., "Storage", "Networking")
	Category string `json:"category"`
	// Free-form tags for search and filtering
	// +optional
	Tags []string `json:"tags,omitempty"`
	// Which tabs to show for this resource
	// +optional
	Tabs []DashboardTab `json:"tabs,omitempty"`
	// Order of keys in the YAML view
	// +optional
	KeysOrder [][]string `json:"keysOrder,omitempty"`
	// Whether this resource is a module (tenant module)
	// +optional
	Module bool `json:"module,omitempty"`
}
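The ApplicationDefinitionResourceSelector doc comment above promises Go-template rendering of resourceNames against .name, .kind, and .namespace. A minimal sketch of what that rendering contract could look like using the standard text/template package; the helper name and error handling are hypothetical illustrations, not the project's actual implementation:

package v1alpha1

import (
	"bytes"
	"text/template"
)

// renderResourceNames expands each resourceNames entry against the documented
// template variables. Hypothetical helper; the real controller may differ.
func renderResourceNames(names []string, name, kind, namespace string) ([]string, error) {
	data := map[string]string{"name": name, "kind": kind, "namespace": namespace}
	out := make([]string, 0, len(names))
	for _, n := range names {
		tpl, err := template.New("resourceName").Parse(n)
		if err != nil {
			return nil, err
		}
		var buf bytes.Buffer
		if err := tpl.Execute(&buf, data); err != nil {
			return nil, err
		}
		out = append(out, buf.String())
	}
	return out, nil
}

With name "mybucket" and kind "bucket", the entry "{{ .kind }}-{{ .name }}-tls" would render to "bucket-mybucket-tls", matching the example in the doc comment.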
36  api/v1alpha1/groupversion_info.go  Normal file
@@ -0,0 +1,36 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=cozystack.io
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion is group version used to register these objects.
	GroupVersion = schema.GroupVersion{Group: "cozystack.io", Version: "v1alpha1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)
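Note the contrast with the dashboard group earlier in this diff: that package enumerates its kinds in a hand-written addKnownTypes, while this one uses controller-runtime's scheme.Builder and lets each *_types.go file register itself via init(). Consumers combine the two groups the same way either way; a small sketch (the cozystack import paths are assumed):

package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"

	dashboardv1alpha1 "github.com/cozystack/cozystack/api/dashboard/v1alpha1" // assumed import path
	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"                // assumed import path
)

// newScheme builds one scheme holding built-in and both custom API groups.
func newScheme() *runtime.Scheme {
	s := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(s)    // built-in Kubernetes kinds
	_ = cozyv1alpha1.AddToScheme(s)      // cozystack.io/v1alpha1 (scheme.Builder + init())
	_ = dashboardv1alpha1.AddToScheme(s) // dashboard.cozystack.io/v1alpha1 (hand-written addKnownTypes)
	return s
}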
100  api/v1alpha1/package_types.go  Normal file
@@ -0,0 +1,100 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster,shortName={pkg,pkgs}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Variant",type="string",JSONPath=".spec.variant",description="Selected variant"
// +kubebuilder:printcolumn:name="Ready",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status",description="Ready status"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].message",description="Ready message"

// Package is the Schema for the packages API
type Package struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PackageSpec   `json:"spec,omitempty"`
	Status PackageStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// PackageList contains a list of Packages
type PackageList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Package `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Package{}, &PackageList{})
}

// PackageSpec defines the desired state of Package
type PackageSpec struct {
	// Variant is the name of the variant to use from the PackageSource
	// If not specified, defaults to "default"
	// +optional
	Variant string `json:"variant,omitempty"`

	// IgnoreDependencies is a list of package source dependencies to ignore
	// Dependencies listed here will not be installed even if they are specified in the PackageSource
	// +optional
	IgnoreDependencies []string `json:"ignoreDependencies,omitempty"`

	// Components is a map of release name to component overrides
	// Allows overriding values and enabling/disabling specific components from the PackageSource
	// +optional
	Components map[string]PackageComponent `json:"components,omitempty"`
}

// PackageComponent defines overrides for a specific component
type PackageComponent struct {
	// Enabled indicates whether this component should be installed
	// If false, the component will be disabled even if it's defined in the PackageSource
	// +optional
	Enabled *bool `json:"enabled,omitempty"`

	// Values contains Helm chart values as a JSON object
	// These values will be merged with the default values from the PackageSource
	// +optional
	Values *apiextensionsv1.JSON `json:"values,omitempty"`
}

// PackageStatus defines the observed state of Package
type PackageStatus struct {
	// Conditions represents the latest available observations of a Package's state
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// Dependencies tracks the readiness status of each dependency
	// Key is the dependency package name, value indicates if the dependency is ready
	// +optional
	Dependencies map[string]DependencyStatus `json:"dependencies,omitempty"`
}

// DependencyStatus represents the readiness status of a dependency
type DependencyStatus struct {
	// Ready indicates whether the dependency is ready
	Ready bool `json:"ready"`
}
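A sketch of how a Package might be constructed in Go, showing the two per-component override knobs (raw Helm values and an enable/disable flag). The package and component names are illustrative only, and the import path is assumed:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1" // assumed import path
)

func examplePackage() *cozyv1alpha1.Package {
	disabled := false
	return &cozyv1alpha1.Package{
		ObjectMeta: metav1.ObjectMeta{Name: "networking"}, // illustrative name
		Spec: cozyv1alpha1.PackageSpec{
			Variant: "default", // also the implied default when omitted
			Components: map[string]cozyv1alpha1.PackageComponent{
				// Override chart values for one component (merged with the source's defaults) ...
				"cilium": {
					Values: &apiextensionsv1.JSON{Raw: []byte(`{"hubble":{"enabled":true}}`)},
				},
				// ... and switch another component off entirely.
				"kube-ovn": {Enabled: &disabled},
			},
		},
	}
}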
171  api/v1alpha1/packagesource_types.go  Normal file
@@ -0,0 +1,171 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster,shortName={pks}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Variants",type="string",JSONPath=".status.variants",description="Package variants (comma-separated)"
// +kubebuilder:printcolumn:name="Ready",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status",description="Ready status"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].message",description="Ready message"

// PackageSource is the Schema for the packagesources API
type PackageSource struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PackageSourceSpec   `json:"spec,omitempty"`
	Status PackageSourceStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// PackageSourceList contains a list of PackageSources
type PackageSourceList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []PackageSource `json:"items"`
}

func init() {
	SchemeBuilder.Register(&PackageSource{}, &PackageSourceList{})
}

// PackageSourceSpec defines the desired state of PackageSource
type PackageSourceSpec struct {
	// SourceRef is the source reference for the package source charts
	// +optional
	SourceRef *PackageSourceRef `json:"sourceRef,omitempty"`

	// Variants is a list of package source variants
	// Each variant defines components, applications, dependencies, and libraries for a specific configuration
	// +optional
	Variants []Variant `json:"variants,omitempty"`
}

// Variant defines a single variant configuration
type Variant struct {
	// Name is the unique identifier for this variant
	// +required
	Name string `json:"name"`

	// DependsOn is a list of package source dependencies
	// For example: "cozystack.networking"
	// +optional
	DependsOn []string `json:"dependsOn,omitempty"`

	// Libraries is a list of Helm library charts used by components in this variant
	// +optional
	Libraries []Library `json:"libraries,omitempty"`

	// Components is a list of Helm releases to be installed as part of this variant
	// +optional
	Components []Component `json:"components,omitempty"`
}

// Library defines a Helm library chart
type Library struct {
	// Name is the optional name for the library as placed in charts
	// +optional
	Name string `json:"name,omitempty"`

	// Path is the path to the library chart directory
	// +required
	Path string `json:"path"`
}

// PackageSourceRef defines the source reference for package source charts
type PackageSourceRef struct {
	// Kind of the source reference
	// +kubebuilder:validation:Enum=GitRepository;OCIRepository
	// +required
	Kind string `json:"kind"`

	// Name of the source reference
	// +required
	Name string `json:"name"`

	// Namespace of the source reference
	// +required
	Namespace string `json:"namespace"`

	// Path is the base path where packages are located in the source.
	// For GitRepository, defaults to "packages" if not specified.
	// For OCIRepository, defaults to empty string (root) if not specified.
	// +optional
	Path string `json:"path,omitempty"`
}

// ComponentInstall defines installation parameters for a component
type ComponentInstall struct {
	// ReleaseName is the name of the HelmRelease resource that will be created
	// If not specified, defaults to the component Name field
	// +optional
	ReleaseName string `json:"releaseName,omitempty"`

	// Namespace is the Kubernetes namespace where the release will be installed
	// +optional
	Namespace string `json:"namespace,omitempty"`

	// Privileged indicates whether this release requires privileged access
	// +optional
	Privileged bool `json:"privileged,omitempty"`

	// DependsOn is a list of component names that must be installed before this component
	// +optional
	DependsOn []string `json:"dependsOn,omitempty"`
}

// Component defines a single Helm release component within a package source
type Component struct {
	// Name is the unique identifier for this component within the package source
	// +required
	Name string `json:"name"`

	// Path is the path to the Helm chart directory
	// +required
	Path string `json:"path"`

	// Install defines installation parameters for this component
	// +optional
	Install *ComponentInstall `json:"install,omitempty"`

	// Libraries is a list of library names that this component depends on
	// These libraries must be defined at the variant level
	// +optional
	Libraries []string `json:"libraries,omitempty"`

	// ValuesFiles is a list of values file names to use
	// +optional
	ValuesFiles []string `json:"valuesFiles,omitempty"`
}

// PackageSourceStatus defines the observed state of PackageSource
type PackageSourceStatus struct {
	// Variants is a comma-separated list of package variant names
	// This field is populated by the controller based on the names in spec.variants
	// +optional
	Variants string `json:"variants,omitempty"`

	// Conditions represents the latest available observations of a PackageSource's state
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
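The status doc comment says status.variants is derived from spec.variants. A minimal sketch of that derivation; the helper name is hypothetical and the project's actual controller logic may differ:

package v1alpha1

import "strings"

// variantNames joins the variant names into the comma-separated form that
// status.variants and the "Variants" printcolumn expect. Hypothetical helper.
func variantNames(src *PackageSource) string {
	names := make([]string, 0, len(src.Spec.Variants))
	for _, v := range src.Spec.Variants {
		names = append(names, v.Name)
	}
	return strings.Join(names, ",")
}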
70  api/v1alpha1/workload_types.go  Normal file
@@ -0,0 +1,70 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WorkloadStatus defines the observed state of Workload
type WorkloadStatus struct {
	// Kind represents the type of workload (redis, postgres, etc.)
	// +required
	Kind string `json:"kind"`

	// Type represents the specific role of the workload (redis, sentinel, etc.)
	// If not specified, defaults to Kind
	// +optional
	Type string `json:"type,omitempty"`

	// Resources specifies the compute resources allocated to this workload
	// +required
	Resources map[string]resource.Quantity `json:"resources"`

	// Operational indicates if all pods of the workload are ready
	// +optional
	Operational bool `json:"operational"`
}

// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Kind",type="string",JSONPath=".status.kind"
// +kubebuilder:printcolumn:name="Type",type="string",JSONPath=".status.type"
// +kubebuilder:printcolumn:name="CPU",type="string",JSONPath=".status.resources.cpu"
// +kubebuilder:printcolumn:name="Memory",type="string",JSONPath=".status.resources.memory"
// +kubebuilder:printcolumn:name="Operational",type="boolean",JSONPath=`.status.operational`

// Workload is the Schema for the workloads API
type Workload struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Status WorkloadStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// WorkloadList contains a list of Workload
type WorkloadList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Workload `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Workload{}, &WorkloadList{})
}
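Workload is status-only (no spec), so producers fill WorkloadStatus directly. A sketch of building one, assuming the conventional "cpu" and "memory" keys that the CPU/Memory printcolumns above select; the kind, type, and quantities are illustrative:

package main

import (
	"k8s.io/apimachinery/pkg/api/resource"

	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1" // assumed import path
)

func exampleStatus() cozyv1alpha1.WorkloadStatus {
	return cozyv1alpha1.WorkloadStatus{
		Kind: "redis",    // illustrative values
		Type: "sentinel", // role within the kind; defaults to Kind when empty
		Resources: map[string]resource.Quantity{
			// "cpu" and "memory" keys line up with the printcolumn JSONPaths.
			"cpu":    resource.MustParse("500m"),
			"memory": resource.MustParse("512Mi"),
		},
		Operational: true,
	}
}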
91  api/v1alpha1/workloadmonitor_types.go  Normal file
@@ -0,0 +1,91 @@
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WorkloadMonitorSpec defines the desired state of WorkloadMonitor
type WorkloadMonitorSpec struct {
	// Selector is a label selector to find workloads to monitor
	// +required
	Selector map[string]string `json:"selector"`

	// Kind specifies the kind of the workload
	// +optional
	Kind string `json:"kind,omitempty"`

	// Type specifies the type of the workload
	// +optional
	Type string `json:"type,omitempty"`

	// Version specifies the version of the workload
	// +optional
	Version string `json:"version,omitempty"`

	// MinReplicas specifies the minimum number of replicas that should be available
	// +kubebuilder:validation:Minimum=0
	// +optional
	MinReplicas *int32 `json:"minReplicas,omitempty"`

	// Replicas is the desired number of replicas.
	// If not specified, observedReplicas is used as the target.
	// +kubebuilder:validation:Minimum=0
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`
}

// WorkloadMonitorStatus defines the observed state of WorkloadMonitor
type WorkloadMonitorStatus struct {
	// Operational indicates if the workload meets all operational requirements
	// +optional
	Operational *bool `json:"operational,omitempty"`

	// AvailableReplicas is the number of ready replicas
	// +optional
	AvailableReplicas int32 `json:"availableReplicas"`

	// ObservedReplicas is the total number of pods observed
	// +optional
	ObservedReplicas int32 `json:"observedReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Kind",type="string",JSONPath=".spec.kind"
// +kubebuilder:printcolumn:name="Type",type="string",JSONPath=".spec.type"
// +kubebuilder:printcolumn:name="Version",type="string",JSONPath=".spec.version"
// +kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas"
// +kubebuilder:printcolumn:name="MinReplicas",type="integer",JSONPath=".spec.minReplicas"
// +kubebuilder:printcolumn:name="Available",type="integer",JSONPath=".status.availableReplicas"
// +kubebuilder:printcolumn:name="Observed",type="integer",JSONPath=".status.observedReplicas"
// +kubebuilder:printcolumn:name="Operational",type="boolean",JSONPath=".status.operational"

// WorkloadMonitor is the Schema for the workloadmonitors API
type WorkloadMonitor struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkloadMonitorSpec   `json:"spec,omitempty"`
	Status WorkloadMonitorStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// WorkloadMonitorList contains a list of WorkloadMonitor
type WorkloadMonitorList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []WorkloadMonitor `json:"items"`
}

func init() {
	SchemeBuilder.Register(&WorkloadMonitor{}, &WorkloadMonitorList{})
}

// GetSelector returns the label selector from the spec
func (w *WorkloadMonitor) GetSelector() map[string]string {
	return w.Spec.Selector
}

// Selector specifies the label selector for workloads
type Selector map[string]string
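
The spec and status fields above imply a simple readiness rule, though the actual controller logic is not part of this hunk. A minimal sketch under that assumption (the isOperational helper is hypothetical): a monitor is operational when availableReplicas reaches minReplicas, falling back to replicas, then to observedReplicas.

// isOperational sketches how a reconciler might derive .status.operational
// from a WorkloadMonitor's spec and observed counts. Hypothetical helper,
// not part of the diff above.
func isOperational(spec WorkloadMonitorSpec, status WorkloadMonitorStatus) bool {
	// Prefer the explicit floor; fall back to the desired count,
	// then to whatever was actually observed.
	var target int32
	switch {
	case spec.MinReplicas != nil:
		target = *spec.MinReplicas
	case spec.Replicas != nil:
		target = *spec.Replicas
	default:
		target = status.ObservedReplicas
	}
	return status.AvailableReplicas >= target
}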
api/v1alpha1/zz_generated.deepcopy.go (new file, 835 lines)
@@ -0,0 +1,835 @@
//go:build !ignore_autogenerated

/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by controller-gen. DO NOT EDIT.

package v1alpha1

import (
	"github.com/fluxcd/helm-controller/api/v2"
	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	runtime "k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinition) DeepCopyInto(out *ApplicationDefinition) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinition.
func (in *ApplicationDefinition) DeepCopy() *ApplicationDefinition {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinition)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ApplicationDefinition) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionApplication) DeepCopyInto(out *ApplicationDefinitionApplication) {
	*out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionApplication.
func (in *ApplicationDefinitionApplication) DeepCopy() *ApplicationDefinitionApplication {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionApplication)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionDashboard) DeepCopyInto(out *ApplicationDefinitionDashboard) {
	*out = *in
	if in.Tags != nil {
		in, out := &in.Tags, &out.Tags
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
	if in.Tabs != nil {
		in, out := &in.Tabs, &out.Tabs
		*out = make([]DashboardTab, len(*in))
		copy(*out, *in)
	}
	if in.KeysOrder != nil {
		in, out := &in.KeysOrder, &out.KeysOrder
		*out = make([][]string, len(*in))
		for i := range *in {
			if (*in)[i] != nil {
				in, out := &(*in)[i], &(*out)[i]
				*out = make([]string, len(*in))
				copy(*out, *in)
			}
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionDashboard.
func (in *ApplicationDefinitionDashboard) DeepCopy() *ApplicationDefinitionDashboard {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionDashboard)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionList) DeepCopyInto(out *ApplicationDefinitionList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]ApplicationDefinition, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionList.
func (in *ApplicationDefinitionList) DeepCopy() *ApplicationDefinitionList {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ApplicationDefinitionList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionRelease) DeepCopyInto(out *ApplicationDefinitionRelease) {
	*out = *in
	if in.ChartRef != nil {
		in, out := &in.ChartRef, &out.ChartRef
		*out = new(v2.CrossNamespaceSourceReference)
		**out = **in
	}
	if in.Labels != nil {
		in, out := &in.Labels, &out.Labels
		*out = make(map[string]string, len(*in))
		for key, val := range *in {
			(*out)[key] = val
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionRelease.
func (in *ApplicationDefinitionRelease) DeepCopy() *ApplicationDefinitionRelease {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionRelease)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionResourceSelector) DeepCopyInto(out *ApplicationDefinitionResourceSelector) {
	*out = *in
	in.LabelSelector.DeepCopyInto(&out.LabelSelector)
	if in.ResourceNames != nil {
		in, out := &in.ResourceNames, &out.ResourceNames
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionResourceSelector.
func (in *ApplicationDefinitionResourceSelector) DeepCopy() *ApplicationDefinitionResourceSelector {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionResourceSelector)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionResources) DeepCopyInto(out *ApplicationDefinitionResources) {
	*out = *in
	if in.Exclude != nil {
		in, out := &in.Exclude, &out.Exclude
		*out = make([]*ApplicationDefinitionResourceSelector, len(*in))
		for i := range *in {
			if (*in)[i] != nil {
				in, out := &(*in)[i], &(*out)[i]
				*out = new(ApplicationDefinitionResourceSelector)
				(*in).DeepCopyInto(*out)
			}
		}
	}
	if in.Include != nil {
		in, out := &in.Include, &out.Include
		*out = make([]*ApplicationDefinitionResourceSelector, len(*in))
		for i := range *in {
			if (*in)[i] != nil {
				in, out := &(*in)[i], &(*out)[i]
				*out = new(ApplicationDefinitionResourceSelector)
				(*in).DeepCopyInto(*out)
			}
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionResources.
func (in *ApplicationDefinitionResources) DeepCopy() *ApplicationDefinitionResources {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionResources)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationDefinitionSpec) DeepCopyInto(out *ApplicationDefinitionSpec) {
	*out = *in
	out.Application = in.Application
	in.Release.DeepCopyInto(&out.Release)
	in.Secrets.DeepCopyInto(&out.Secrets)
	in.Services.DeepCopyInto(&out.Services)
	in.Ingresses.DeepCopyInto(&out.Ingresses)
	if in.Dashboard != nil {
		in, out := &in.Dashboard, &out.Dashboard
		*out = new(ApplicationDefinitionDashboard)
		(*in).DeepCopyInto(*out)
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationDefinitionSpec.
func (in *ApplicationDefinitionSpec) DeepCopy() *ApplicationDefinitionSpec {
	if in == nil {
		return nil
	}
	out := new(ApplicationDefinitionSpec)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Component) DeepCopyInto(out *Component) {
	*out = *in
	if in.Install != nil {
		in, out := &in.Install, &out.Install
		*out = new(ComponentInstall)
		(*in).DeepCopyInto(*out)
	}
	if in.Libraries != nil {
		in, out := &in.Libraries, &out.Libraries
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
	if in.ValuesFiles != nil {
		in, out := &in.ValuesFiles, &out.ValuesFiles
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Component.
func (in *Component) DeepCopy() *Component {
	if in == nil {
		return nil
	}
	out := new(Component)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ComponentInstall) DeepCopyInto(out *ComponentInstall) {
	*out = *in
	if in.DependsOn != nil {
		in, out := &in.DependsOn, &out.DependsOn
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ComponentInstall.
func (in *ComponentInstall) DeepCopy() *ComponentInstall {
	if in == nil {
		return nil
	}
	out := new(ComponentInstall)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DependencyStatus) DeepCopyInto(out *DependencyStatus) {
	*out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DependencyStatus.
func (in *DependencyStatus) DeepCopy() *DependencyStatus {
	if in == nil {
		return nil
	}
	out := new(DependencyStatus)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Library) DeepCopyInto(out *Library) {
	*out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Library.
func (in *Library) DeepCopy() *Library {
	if in == nil {
		return nil
	}
	out := new(Library)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Package) DeepCopyInto(out *Package) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Package.
func (in *Package) DeepCopy() *Package {
	if in == nil {
		return nil
	}
	out := new(Package)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Package) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageComponent) DeepCopyInto(out *PackageComponent) {
	*out = *in
	if in.Enabled != nil {
		in, out := &in.Enabled, &out.Enabled
		*out = new(bool)
		**out = **in
	}
	if in.Values != nil {
		in, out := &in.Values, &out.Values
		*out = new(v1.JSON)
		(*in).DeepCopyInto(*out)
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageComponent.
func (in *PackageComponent) DeepCopy() *PackageComponent {
	if in == nil {
		return nil
	}
	out := new(PackageComponent)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageList) DeepCopyInto(out *PackageList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Package, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageList.
func (in *PackageList) DeepCopy() *PackageList {
	if in == nil {
		return nil
	}
	out := new(PackageList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PackageList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSource) DeepCopyInto(out *PackageSource) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSource.
func (in *PackageSource) DeepCopy() *PackageSource {
	if in == nil {
		return nil
	}
	out := new(PackageSource)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PackageSource) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSourceList) DeepCopyInto(out *PackageSourceList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]PackageSource, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSourceList.
func (in *PackageSourceList) DeepCopy() *PackageSourceList {
	if in == nil {
		return nil
	}
	out := new(PackageSourceList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *PackageSourceList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSourceRef) DeepCopyInto(out *PackageSourceRef) {
	*out = *in
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSourceRef.
func (in *PackageSourceRef) DeepCopy() *PackageSourceRef {
	if in == nil {
		return nil
	}
	out := new(PackageSourceRef)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSourceSpec) DeepCopyInto(out *PackageSourceSpec) {
	*out = *in
	if in.SourceRef != nil {
		in, out := &in.SourceRef, &out.SourceRef
		*out = new(PackageSourceRef)
		**out = **in
	}
	if in.Variants != nil {
		in, out := &in.Variants, &out.Variants
		*out = make([]Variant, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSourceSpec.
func (in *PackageSourceSpec) DeepCopy() *PackageSourceSpec {
	if in == nil {
		return nil
	}
	out := new(PackageSourceSpec)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSourceStatus) DeepCopyInto(out *PackageSourceStatus) {
	*out = *in
	if in.Conditions != nil {
		in, out := &in.Conditions, &out.Conditions
		*out = make([]metav1.Condition, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSourceStatus.
func (in *PackageSourceStatus) DeepCopy() *PackageSourceStatus {
	if in == nil {
		return nil
	}
	out := new(PackageSourceStatus)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageSpec) DeepCopyInto(out *PackageSpec) {
	*out = *in
	if in.IgnoreDependencies != nil {
		in, out := &in.IgnoreDependencies, &out.IgnoreDependencies
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
	if in.Components != nil {
		in, out := &in.Components, &out.Components
		*out = make(map[string]PackageComponent, len(*in))
		for key, val := range *in {
			(*out)[key] = *val.DeepCopy()
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageSpec.
func (in *PackageSpec) DeepCopy() *PackageSpec {
	if in == nil {
		return nil
	}
	out := new(PackageSpec)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PackageStatus) DeepCopyInto(out *PackageStatus) {
	*out = *in
	if in.Conditions != nil {
		in, out := &in.Conditions, &out.Conditions
		*out = make([]metav1.Condition, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
	if in.Dependencies != nil {
		in, out := &in.Dependencies, &out.Dependencies
		*out = make(map[string]DependencyStatus, len(*in))
		for key, val := range *in {
			(*out)[key] = val
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PackageStatus.
func (in *PackageStatus) DeepCopy() *PackageStatus {
	if in == nil {
		return nil
	}
	out := new(PackageStatus)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in Selector) DeepCopyInto(out *Selector) {
	{
		in := &in
		*out = make(Selector, len(*in))
		for key, val := range *in {
			(*out)[key] = val
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Selector.
func (in Selector) DeepCopy() Selector {
	if in == nil {
		return nil
	}
	out := new(Selector)
	in.DeepCopyInto(out)
	return *out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Variant) DeepCopyInto(out *Variant) {
	*out = *in
	if in.DependsOn != nil {
		in, out := &in.DependsOn, &out.DependsOn
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
	if in.Libraries != nil {
		in, out := &in.Libraries, &out.Libraries
		*out = make([]Library, len(*in))
		copy(*out, *in)
	}
	if in.Components != nil {
		in, out := &in.Components, &out.Components
		*out = make([]Component, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Variant.
func (in *Variant) DeepCopy() *Variant {
	if in == nil {
		return nil
	}
	out := new(Variant)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Workload) DeepCopyInto(out *Workload) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Workload.
func (in *Workload) DeepCopy() *Workload {
	if in == nil {
		return nil
	}
	out := new(Workload)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Workload) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadList) DeepCopyInto(out *WorkloadList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]Workload, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadList.
func (in *WorkloadList) DeepCopy() *WorkloadList {
	if in == nil {
		return nil
	}
	out := new(WorkloadList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitor) DeepCopyInto(out *WorkloadMonitor) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitor.
func (in *WorkloadMonitor) DeepCopy() *WorkloadMonitor {
	if in == nil {
		return nil
	}
	out := new(WorkloadMonitor)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadMonitor) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorList) DeepCopyInto(out *WorkloadMonitorList) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ListMeta.DeepCopyInto(&out.ListMeta)
	if in.Items != nil {
		in, out := &in.Items, &out.Items
		*out = make([]WorkloadMonitor, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorList.
func (in *WorkloadMonitorList) DeepCopy() *WorkloadMonitorList {
	if in == nil {
		return nil
	}
	out := new(WorkloadMonitorList)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadMonitorList) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorSpec) DeepCopyInto(out *WorkloadMonitorSpec) {
	*out = *in
	if in.Selector != nil {
		in, out := &in.Selector, &out.Selector
		*out = make(map[string]string, len(*in))
		for key, val := range *in {
			(*out)[key] = val
		}
	}
	if in.MinReplicas != nil {
		in, out := &in.MinReplicas, &out.MinReplicas
		*out = new(int32)
		**out = **in
	}
	if in.Replicas != nil {
		in, out := &in.Replicas, &out.Replicas
		*out = new(int32)
		**out = **in
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorSpec.
func (in *WorkloadMonitorSpec) DeepCopy() *WorkloadMonitorSpec {
	if in == nil {
		return nil
	}
	out := new(WorkloadMonitorSpec)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorStatus) DeepCopyInto(out *WorkloadMonitorStatus) {
	*out = *in
	if in.Operational != nil {
		in, out := &in.Operational, &out.Operational
		*out = new(bool)
		**out = **in
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorStatus.
func (in *WorkloadMonitorStatus) DeepCopy() *WorkloadMonitorStatus {
	if in == nil {
		return nil
	}
	out := new(WorkloadMonitorStatus)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadStatus) DeepCopyInto(out *WorkloadStatus) {
	*out = *in
	if in.Resources != nil {
		in, out := &in.Resources, &out.Resources
		*out = make(map[string]resource.Quantity, len(*in))
		for key, val := range *in {
			(*out)[key] = val.DeepCopy()
		}
	}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadStatus.
func (in *WorkloadStatus) DeepCopy() *WorkloadStatus {
	if in == nil {
		return nil
	}
	out := new(WorkloadStatus)
	in.DeepCopyInto(out)
	return out
}
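
Because each root type above implements DeepCopyObject, they all satisfy runtime.Object and can be handed safely to client-go caches and informers. A small illustration of the aliasing these generated copies prevent (values are made up; assumes fmt is imported):

func deepCopyExample() {
	orig := &WorkloadMonitor{
		Spec: WorkloadMonitorSpec{Selector: map[string]string{"app": "db"}},
	}
	cp := orig.DeepCopy()
	cp.Spec.Selector["app"] = "cache" // mutate the copy only

	// A plain struct assignment would have shared the Selector map;
	// DeepCopy allocates a fresh one, so the original is untouched.
	fmt.Println(orig.Spec.Selector["app"]) // db
}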
cmd/backup-controller/main.go (new file, 181 lines)
@@ -0,0 +1,181 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"crypto/tls"
	"flag"
	"os"

	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	_ "k8s.io/client-go/plugin/pkg/client/auth"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1"
	"github.com/cozystack/cozystack/internal/backupcontroller"
	// +kubebuilder:scaffold:imports
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	utilruntime.Must(backupsv1alpha1.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var tlsOpts []func(*tls.Config)
	flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
		"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	flag.BoolVar(&secureMetrics, "metrics-secure", true,
		"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	opts := zap.Options{
		Development: false,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	// If the enable-http2 flag is false (the default), HTTP/2 should be disabled
	// due to its vulnerabilities. Specifically, disabling HTTP/2 prevents the
	// servers from being vulnerable to the HTTP/2 Stream Cancellation and
	// Rapid Reset CVEs. For more information see:
	// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
	// - https://github.com/advisories/GHSA-4374-p667-p6c8
	disableHTTP2 := func(c *tls.Config) {
		setupLog.Info("disabling http/2")
		c.NextProtos = []string{"http/1.1"}
	}

	if !enableHTTP2 {
		tlsOpts = append(tlsOpts, disableHTTP2)
	}

	webhookServer := webhook.NewServer(webhook.Options{
		TLSOpts: tlsOpts,
	})

	// The metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
	// More info:
	// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
	// - https://book.kubebuilder.io/reference/metrics.html
	metricsServerOptions := metricsserver.Options{
		BindAddress:   metricsAddr,
		SecureServing: secureMetrics,
		TLSOpts:       tlsOpts,
	}

	if secureMetrics {
		// FilterProvider is used to protect the metrics endpoint with authn/authz.
		// These configurations ensure that only authorized users and service accounts
		// can access the metrics endpoint. The RBAC rules are configured in 'config/rbac/kustomization.yaml'. More info:
		// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
		metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

		// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
		// generate self-signed certificates for the metrics server. While convenient for development and testing,
		// this setup is not recommended for production.
	}

	// Configure rate limiting for the Kubernetes client.
	config := ctrl.GetConfigOrDie()
	config.QPS = 50.0  // increased from the default of 5.0
	config.Burst = 100 // increased from the default of 10

	mgr, err := ctrl.NewManager(config, ctrl.Options{
		Scheme:                 scheme,
		Metrics:                metricsServerOptions,
		WebhookServer:          webhookServer,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "core.backups.cozystack.io",
		Cache: cache.Options{
			ByObject: map[client.Object]cache.ByObject{
				&backupsv1alpha1.BackupClass{}: {},
			},
		},
		// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
		// when the Manager ends. This requires the binary to end immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Setting it significantly
		// speeds up voluntary leader transitions, as the new leader does not have to wait
		// for the LeaseDuration to expire first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you perform, or intend to perform, any operation such as cleanup
		// after the manager stops, then enabling it might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err = (&backupcontroller.PlanReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Plan")
		os.Exit(1)
	}

	// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
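
The PlanReconciler wired in above lives in internal/backupcontroller and is not shown in this diff. As a rough sketch of the shape SetupWithManager implies (the Plan type and all method bodies here are assumptions, not the actual implementation; assumes "context" is also imported):

// Assumed skeleton only; the real PlanReconciler is defined in
// internal/backupcontroller and may differ.
type PlanReconciler struct {
	Client client.Client
	Scheme *runtime.Scheme
}

func (r *PlanReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the object named by req, compare desired and observed
	// state, and requeue if more work remains.
	return ctrl.Result{}, nil
}

func (r *PlanReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&backupsv1alpha1.Plan{}). // assumes a Plan type in the backups API group
		Complete(r)
}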
cmd/backupstrategy-controller/main.go (new file, 195 lines)
@@ -0,0 +1,195 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"crypto/tls"
	"flag"
	"os"

	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	_ "k8s.io/client-go/plugin/pkg/client/auth"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	strategyv1alpha1 "github.com/cozystack/cozystack/api/backups/strategy/v1alpha1"
	backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1"
	"github.com/cozystack/cozystack/internal/backupcontroller"
	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	// +kubebuilder:scaffold:imports
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	utilruntime.Must(backupsv1alpha1.AddToScheme(scheme))
	utilruntime.Must(strategyv1alpha1.AddToScheme(scheme))
	utilruntime.Must(velerov1.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var tlsOpts []func(*tls.Config)
	flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
		"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	flag.BoolVar(&secureMetrics, "metrics-secure", true,
		"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	opts := zap.Options{
		Development: false,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	// If the enable-http2 flag is false (the default), HTTP/2 should be disabled
	// due to its vulnerabilities. Specifically, disabling HTTP/2 prevents the
	// servers from being vulnerable to the HTTP/2 Stream Cancellation and
	// Rapid Reset CVEs. For more information see:
	// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
	// - https://github.com/advisories/GHSA-4374-p667-p6c8
	disableHTTP2 := func(c *tls.Config) {
		setupLog.Info("disabling http/2")
		c.NextProtos = []string{"http/1.1"}
	}

	if !enableHTTP2 {
		tlsOpts = append(tlsOpts, disableHTTP2)
	}

	webhookServer := webhook.NewServer(webhook.Options{
		TLSOpts: tlsOpts,
	})

	// The metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
	// More info:
	// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
	// - https://book.kubebuilder.io/reference/metrics.html
	metricsServerOptions := metricsserver.Options{
		BindAddress:   metricsAddr,
		SecureServing: secureMetrics,
		TLSOpts:       tlsOpts,
	}

	if secureMetrics {
		// FilterProvider is used to protect the metrics endpoint with authn/authz.
		// These configurations ensure that only authorized users and service accounts
		// can access the metrics endpoint. The RBAC rules are configured in 'config/rbac/kustomization.yaml'. More info:
		// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
		metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

		// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
		// generate self-signed certificates for the metrics server. While convenient for development and testing,
		// this setup is not recommended for production.
	}

	// Configure rate limiting for the Kubernetes client.
	config := ctrl.GetConfigOrDie()
	config.QPS = 50.0  // increased from the default of 5.0
	config.Burst = 100 // increased from the default of 10

	mgr, err := ctrl.NewManager(config, ctrl.Options{
		Scheme:                 scheme,
		Metrics:                metricsServerOptions,
		WebhookServer:          webhookServer,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "strategy.backups.cozystack.io",
		Cache: cache.Options{
			ByObject: map[client.Object]cache.ByObject{
				&backupsv1alpha1.BackupClass{}: {},
			},
		},
		// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
		// when the Manager ends. This requires the binary to end immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Setting it significantly
		// speeds up voluntary leader transitions, as the new leader does not have to wait
		// for the LeaseDuration to expire first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you perform, or intend to perform, any operation such as cleanup
		// after the manager stops, then enabling it might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err = (&backupcontroller.BackupJobReconciler{
		Client:   mgr.GetClient(),
		Scheme:   mgr.GetScheme(),
		Recorder: mgr.GetEventRecorderFor("backup-controller"),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "BackupJob")
		os.Exit(1)
	}

	if err = (&backupcontroller.RestoreJobReconciler{
		Client:   mgr.GetClient(),
		Scheme:   mgr.GetScheme(),
		Recorder: mgr.GetEventRecorderFor("restore-controller"),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "RestoreJob")
		os.Exit(1)
	}

	// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
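
Unlike the backup-controller binary, both reconcilers here receive an event recorder. A sketch of how such a recorder is typically used inside Reconcile (the helper, object, reason, and message are illustrative; assumes corev1 "k8s.io/api/core/v1" is imported):

// recordBackupStarted is a hypothetical helper showing typical use of the
// recorder wired into BackupJobReconciler above.
func recordBackupStarted(r *backupcontroller.BackupJobReconciler, obj runtime.Object) {
	// Event(object, eventtype, reason, message) attaches a Kubernetes
	// Event to the given object, visible via `kubectl describe`.
	r.Recorder.Event(obj, corev1.EventTypeNormal, "BackupStarted",
		"created Velero backup for this job")
}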
cmd/cozypkg/cmd/add.go (new file, 549 lines)
@@ -0,0 +1,549 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/spf13/cobra"
	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer/yaml"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

var addCmdFlags struct {
	files      []string
	kubeconfig string
}

var addCmd = &cobra.Command{
	Use:   "add [package]...",
	Short: "Install PackageSource and its dependencies interactively",
	Long: `Install PackageSource and its dependencies interactively.

You can specify packages as arguments or use the -f flag to read from files.
Multiple -f flags can be specified, and they can point to files or directories.`,
	Args: cobra.ArbitraryArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		ctx := context.Background()

		// Collect package names from arguments and files.
		packageNames := make(map[string]bool)
		packagesFromFiles := make(map[string]string) // packageName -> filePath

		for _, arg := range args {
			packageNames[arg] = true
		}

		// Read packages from files.
		for _, filePath := range addCmdFlags.files {
			packages, err := readPackagesFromFile(filePath)
			if err != nil {
				return fmt.Errorf("failed to read packages from %s: %w", filePath, err)
			}
			for _, pkg := range packages {
				packageNames[pkg] = true
				if oldPath, ok := packagesFromFiles[pkg]; ok {
					fmt.Fprintf(os.Stderr, "warning: package %q is defined in both %s and %s, using the latter\n", pkg, oldPath, filePath)
				}
				packagesFromFiles[pkg] = filePath
			}
		}

		if len(packageNames) == 0 {
			return fmt.Errorf("no packages specified")
		}

		// Create the Kubernetes client config.
		var config *rest.Config
		var err error

		if addCmdFlags.kubeconfig != "" {
			config, err = clientcmd.BuildConfigFromFlags("", addCmdFlags.kubeconfig)
			if err != nil {
				return fmt.Errorf("failed to load kubeconfig from %s: %w", addCmdFlags.kubeconfig, err)
			}
		} else {
			config, err = ctrl.GetConfig()
			if err != nil {
				return fmt.Errorf("failed to get kubeconfig: %w", err)
			}
		}

		scheme := runtime.NewScheme()
		utilruntime.Must(clientgoscheme.AddToScheme(scheme))
		utilruntime.Must(cozyv1alpha1.AddToScheme(scheme))

		k8sClient, err := client.New(config, client.Options{Scheme: scheme})
		if err != nil {
			return fmt.Errorf("failed to create k8s client: %w", err)
		}

		// Process each package.
		for packageName := range packageNames {
			// Check if the package comes from a file.
			if filePath, fromFile := packagesFromFiles[packageName]; fromFile {
				// Try to create the Package directly from the file.
				if err := createPackageFromFile(ctx, k8sClient, filePath, packageName); err == nil {
					fmt.Fprintf(os.Stderr, "✓ Added Package %s\n", packageName)
					continue
				}
				// If that failed, fall back to interactive installation.
			}

			// Interactive installation from a PackageSource.
			if err := installPackage(ctx, k8sClient, packageName); err != nil {
				return err
			}
		}

		return nil
	},
}
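
For orientation, the invocations this command definition accepts look like the following (the cozypkg binary name is inferred from the cmd/cozypkg path; registration of the -f and --kubeconfig flags is outside this hunk):

// Illustrative invocations of the command defined above:
//
//	cozypkg add ingress-nginx monitoring
//	cozypkg add -f packages/ -f extra.yaml --kubeconfig ~/.kube/config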

func readPackagesFromFile(filePath string) ([]string, error) {
	info, err := os.Stat(filePath)
	if err != nil {
		return nil, err
	}

	var packages []string

	if info.IsDir() {
		// Read all YAML files from the directory.
		err := filepath.Walk(filePath, func(path string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			if info.IsDir() || !strings.HasSuffix(path, ".yaml") && !strings.HasSuffix(path, ".yml") {
				return nil
			}

			pkgs, err := readPackagesFromYAMLFile(path)
			if err != nil {
				return fmt.Errorf("failed to read %s: %w", path, err)
			}
			packages = append(packages, pkgs...)
			return nil
		})
		if err != nil {
			return nil, err
		}
	} else {
		packages, err = readPackagesFromYAMLFile(filePath)
		if err != nil {
			return nil, err
		}
	}

	return packages, nil
}

func readPackagesFromYAMLFile(filePath string) ([]string, error) {
	data, err := os.ReadFile(filePath)
	if err != nil {
		return nil, err
	}

	var packages []string

	// Split YAML documents (in case of multiple resources).
	documents := strings.Split(string(data), "---")

	for _, doc := range documents {
		doc = strings.TrimSpace(doc)
		if doc == "" {
			continue
		}

		// Parse using the Kubernetes decoder.
		decoder := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
		obj := &unstructured.Unstructured{}
		_, _, err := decoder.Decode([]byte(doc), nil, obj)
		if err != nil {
			continue
		}

		// Check if it's a Package.
		if obj.GetKind() == "Package" {
			name := obj.GetName()
			if name != "" {
				packages = append(packages, name)
			}
			continue
		}

		// Check if it's a PackageSource.
		if obj.GetKind() == "PackageSource" {
			name := obj.GetName()
			if name != "" {
				packages = append(packages, name)
			}
			continue
		}

		// Try to parse it as a PackageList or PackageSourceList.
		if obj.GetKind() == "PackageList" || obj.GetKind() == "PackageSourceList" {
			items, found, err := unstructured.NestedSlice(obj.Object, "items")
			if err == nil && found {
				for _, item := range items {
					if itemMap, ok := item.(map[string]interface{}); ok {
						if metadata, ok := itemMap["metadata"].(map[string]interface{}); ok {
							if name, ok := metadata["name"].(string); ok && name != "" {
								packages = append(packages, name)
							}
						}
					}
				}
			}
			continue
		}
	}

	// Return an empty list if no packages were found; don't error out.
	// Whether any packages were specified at all is checked later in RunE.
	return packages, nil
}
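
readPackagesFromYAMLFile recognizes four kinds (Package, PackageSource, and their List forms) and collects metadata.name from each document. A quick self-contained illustration (the file path, package names, and apiVersion value are made up):

func exampleReadPackages() {
	data := []byte(`apiVersion: cozystack.io/v1alpha1
kind: Package
metadata:
  name: ingress-nginx
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
  name: monitoring
`)
	if err := os.WriteFile("/tmp/pkgs.yaml", data, 0o644); err != nil {
		panic(err)
	}
	names, err := readPackagesFromYAMLFile("/tmp/pkgs.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Println(names) // [ingress-nginx monitoring]
}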

// buildDependencyTree builds a dependency tree starting from the root PackageSource.
// It returns both the dependency tree and a map from each dependency to its requester.
func buildDependencyTree(ctx context.Context, k8sClient client.Client, rootName string) (map[string][]string, map[string]string, error) {
	tree := make(map[string][]string)
	dependencyRequesters := make(map[string]string) // dep -> requester
	visited := make(map[string]bool)

	// Ensure the root is in the tree even if it has no dependencies.
	tree[rootName] = []string{}

	var buildTree func(string) error
	buildTree = func(pkgName string) error {
		if visited[pkgName] {
			return nil
		}
		visited[pkgName] = true

		// Get the PackageSource.
		ps := &cozyv1alpha1.PackageSource{}
		if err := k8sClient.Get(ctx, client.ObjectKey{Name: pkgName}, ps); err != nil {
			// If the PackageSource doesn't exist, just skip it.
			return nil
		}

		// Collect all dependencies from all variants.
		deps := make(map[string]bool)
		for _, variant := range ps.Spec.Variants {
			for _, dep := range variant.DependsOn {
				deps[dep] = true
			}
		}

		// Add dependencies to the tree.
		for dep := range deps {
			if _, exists := tree[pkgName]; !exists {
				tree[pkgName] = []string{}
			}
			tree[pkgName] = append(tree[pkgName], dep)
			// Track who requested this dependency.
			dependencyRequesters[dep] = pkgName
			// Recursively build the tree for the dependency.
			if err := buildTree(dep); err != nil {
				return err
			}
		}

		return nil
	}

	if err := buildTree(rootName); err != nil {
		return nil, nil, err
	}

	return tree, dependencyRequesters, nil
}
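
For a chain of PackageSources a -> b -> c (each variant's dependsOn naming the next), the two maps come out as sketched below. Note that a leaf like c appears in the tree only as a value; only the root is seeded unconditionally. The package names are hypothetical:

// Expected shape of buildDependencyTree's results for a -> b -> c.
func exampleDependencyMaps() (map[string][]string, map[string]string) {
	tree := map[string][]string{
		"a": {"b"}, // the root is always seeded, even when it has no deps
		"b": {"c"},
	}
	requesters := map[string]string{
		"b": "a", // b was pulled in by a
		"c": "b", // c was pulled in by b
	}
	return tree, requesters
}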

// topologicalSort performs a topological sort on the dependency tree.
// It returns an order with dependencies before dependents (leaves before the root).
func topologicalSort(tree map[string][]string) ([]string, error) {
	// Build the reverse graph (dependencies -> dependents).
	reverseGraph := make(map[string][]string)
	allNodes := make(map[string]bool)

	for node, deps := range tree {
		allNodes[node] = true
		for _, dep := range deps {
			allNodes[dep] = true
			reverseGraph[dep] = append(reverseGraph[dep], node)
		}
	}

	// Calculate in-degrees (how many dependencies each node has).
	inDegree := make(map[string]int)
	for node := range allNodes {
		inDegree[node] = 0
	}
	for node, deps := range tree {
		inDegree[node] = len(deps)
	}

	// Kahn's algorithm: start with nodes that have no dependencies.
	var queue []string
	for node, degree := range inDegree {
		if degree == 0 {
			queue = append(queue, node)
		}
	}

	var result []string
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		result = append(result, node)

		// Process dependents (nodes that depend on this node).
		for _, dependent := range reverseGraph[node] {
			inDegree[dependent]--
			if inDegree[dependent] == 0 {
				queue = append(queue, dependent)
			}
		}
	}

	// Check for cycles.
	if len(result) != len(allNodes) {
		return nil, fmt.Errorf("dependency cycle detected")
	}

	return result, nil
}
|
||||
|
||||
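
// Usage sketch (hypothetical names). Nodes with zero unresolved
// dependencies are dequeued first, so leaves come out before the
// packages that need them:
//
//	order, err := topologicalSort(map[string][]string{
//		"platform": {"ingress"},
//		"ingress":  {"cert-manager"},
//	})
//	// err == nil
//	// order == []string{"cert-manager", "ingress", "platform"}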
// createPackageFromFile creates a Package resource directly from a YAML file
func createPackageFromFile(ctx context.Context, k8sClient client.Client, filePath string, packageName string) error {
	data, err := os.ReadFile(filePath)
	if err != nil {
		return err
	}

	// Split the YAML documents
	documents := strings.Split(string(data), "---")

	for _, doc := range documents {
		doc = strings.TrimSpace(doc)
		if doc == "" {
			continue
		}

		// Parse using the Kubernetes decoder
		decoder := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
		obj := &unstructured.Unstructured{}
		_, _, err := decoder.Decode([]byte(doc), nil, obj)
		if err != nil {
			continue
		}

		// Check if it's a Package with a matching name
		if obj.GetKind() == "Package" && obj.GetName() == packageName {
			// Convert to a typed Package
			var pkg cozyv1alpha1.Package
			if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &pkg); err != nil {
				return fmt.Errorf("failed to convert Package: %w", err)
			}

			// Create the Package
			if err := k8sClient.Create(ctx, &pkg); err != nil {
				return fmt.Errorf("failed to create Package: %w", err)
			}

			return nil
		}
	}

	return fmt.Errorf("Package %s not found in file", packageName)
}
func installPackage(ctx context.Context, k8sClient client.Client, packageSourceName string) error {
	// Get the PackageSource
	packageSource := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(ctx, client.ObjectKey{Name: packageSourceName}, packageSource); err != nil {
		return fmt.Errorf("failed to get PackageSource %s: %w", packageSourceName, err)
	}

	// Build the dependency tree
	dependencyTree, dependencyRequesters, err := buildDependencyTree(ctx, k8sClient, packageSourceName)
	if err != nil {
		return fmt.Errorf("failed to build dependency tree: %w", err)
	}

	// Topological sort (install dependencies before the packages that need them)
	installOrder, err := topologicalSort(dependencyTree)
	if err != nil {
		return fmt.Errorf("failed to sort dependencies: %w", err)
	}

	// Get all PackageSources for variant selection
	var allPackageSources cozyv1alpha1.PackageSourceList
	if err := k8sClient.List(ctx, &allPackageSources); err != nil {
		return fmt.Errorf("failed to list PackageSources: %w", err)
	}

	packageSourceMap := make(map[string]*cozyv1alpha1.PackageSource)
	for i := range allPackageSources.Items {
		packageSourceMap[allPackageSources.Items[i].Name] = &allPackageSources.Items[i]
	}

	// Get all installed Packages
	var installedPackages cozyv1alpha1.PackageList
	if err := k8sClient.List(ctx, &installedPackages); err != nil {
		return fmt.Errorf("failed to list Packages: %w", err)
	}

	installedMap := make(map[string]*cozyv1alpha1.Package)
	for i := range installedPackages.Items {
		installedMap[installedPackages.Items[i].Name] = &installedPackages.Items[i]
	}

	// First, collect all variant selections
	fmt.Fprintf(os.Stderr, "Installing %s and its dependencies...\n\n", packageSourceName)
	packageVariants := make(map[string]string) // packageName -> variant

	for _, pkgName := range installOrder {
		// Check if it is already installed
		if installed, exists := installedMap[pkgName]; exists {
			variant := installed.Spec.Variant
			if variant == "" {
				variant = "default"
			}
			fmt.Fprintf(os.Stderr, "✓ %s (already installed, variant: %s)\n", pkgName, variant)
			packageVariants[pkgName] = variant
			continue
		}

		// Get the PackageSource for this dependency
		ps, exists := packageSourceMap[pkgName]
		if !exists {
			requester := dependencyRequesters[pkgName]
			if requester != "" {
				return fmt.Errorf("PackageSource %s not found (required by %s)", pkgName, requester)
			}
			return fmt.Errorf("PackageSource %s not found", pkgName)
		}

		// Select a variant interactively
		variant, err := selectVariantInteractive(ps)
		if err != nil {
			return fmt.Errorf("failed to select variant for %s: %w", pkgName, err)
		}

		packageVariants[pkgName] = variant
	}

	// Now create all Package resources
	for _, pkgName := range installOrder {
		// Skip if already installed
		if _, exists := installedMap[pkgName]; exists {
			continue
		}

		variant := packageVariants[pkgName]

		// Create the Package
		pkg := &cozyv1alpha1.Package{
			ObjectMeta: metav1.ObjectMeta{
				Name: pkgName,
			},
			Spec: cozyv1alpha1.PackageSpec{
				Variant: variant,
			},
		}

		if err := k8sClient.Create(ctx, pkg); err != nil {
			return fmt.Errorf("failed to create Package %s: %w", pkgName, err)
		}

		fmt.Fprintf(os.Stderr, "✓ Added Package %s\n", pkgName)
	}

	return nil
}
// selectVariantInteractive prompts the user to select a variant
func selectVariantInteractive(ps *cozyv1alpha1.PackageSource) (string, error) {
	if len(ps.Spec.Variants) == 0 {
		return "", fmt.Errorf("no variants available for PackageSource %s", ps.Name)
	}

	reader := bufio.NewReader(os.Stdin)

	fmt.Fprintf(os.Stderr, "\nPackageSource: %s\n", ps.Name)
	fmt.Fprintf(os.Stderr, "Available variants:\n")
	for i, variant := range ps.Spec.Variants {
		fmt.Fprintf(os.Stderr, "  %d. %s\n", i+1, variant.Name)
	}

	// If there is only one variant, use it as the default
	defaultVariant := ps.Spec.Variants[0].Name
	var prompt string
	if len(ps.Spec.Variants) == 1 {
		prompt = "Select variant [1]: "
	} else {
		prompt = fmt.Sprintf("Select variant (1-%d): ", len(ps.Spec.Variants))
	}

	for {
		// Use Fprint here: prompt is not a format string
		fmt.Fprint(os.Stderr, prompt)
		input, err := reader.ReadString('\n')
		if err != nil {
			return "", fmt.Errorf("failed to read input: %w", err)
		}

		input = strings.TrimSpace(input)

		// If the input is empty and there is a default variant, use it
		if input == "" && len(ps.Spec.Variants) == 1 {
			return defaultVariant, nil
		}

		choice, err := strconv.Atoi(input)
		if err != nil || choice < 1 || choice > len(ps.Spec.Variants) {
			fmt.Fprintf(os.Stderr, "Invalid choice. Please enter a number between 1 and %d.\n", len(ps.Spec.Variants))
			continue
		}

		return ps.Spec.Variants[choice-1].Name, nil
	}
}
func init() {
	rootCmd.AddCommand(addCmd)
	addCmd.Flags().StringArrayVarP(&addCmdFlags.files, "file", "f", []string{}, "Read packages from file or directory (can be specified multiple times)")
	addCmd.Flags().StringVar(&addCmdFlags.kubeconfig, "kubeconfig", "", "Path to kubeconfig file (defaults to ~/.kube/config or KUBECONFIG env var)")
}
cmd/cozypkg/cmd/del.go  (new file, 396 lines)
@@ -0,0 +1,396 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/spf13/cobra"
	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

var delCmdFlags struct {
	files      []string
	kubeconfig string
}

var delCmd = &cobra.Command{
	Use:   "del [package]...",
	Short: "Delete Package resources",
	Long: `Delete Package resources.

You can specify packages as arguments or use the -f flag to read them from files.
Multiple -f flags can be specified, and they can point to files or directories.`,
	Args: cobra.ArbitraryArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		ctx := context.Background()

		// Collect package names from arguments and files
		packageNames := make(map[string]bool)
		packagesFromFiles := make(map[string]string) // packageName -> filePath

		for _, arg := range args {
			packageNames[arg] = true
		}

		// Read packages from files (reuses the helper from add.go)
		for _, filePath := range delCmdFlags.files {
			packages, err := readPackagesFromFile(filePath)
			if err != nil {
				return fmt.Errorf("failed to read packages from %s: %w", filePath, err)
			}
			for _, pkg := range packages {
				packageNames[pkg] = true
				if oldPath, ok := packagesFromFiles[pkg]; ok {
					fmt.Fprintf(os.Stderr, "warning: package %q is defined in both %s and %s, using the latter\n", pkg, oldPath, filePath)
				}
				packagesFromFiles[pkg] = filePath
			}
		}

		if len(packageNames) == 0 {
			return fmt.Errorf("no packages specified")
		}

		// Create the Kubernetes client config
		var config *rest.Config
		var err error

		if delCmdFlags.kubeconfig != "" {
			config, err = clientcmd.BuildConfigFromFlags("", delCmdFlags.kubeconfig)
			if err != nil {
				return fmt.Errorf("failed to load kubeconfig from %s: %w", delCmdFlags.kubeconfig, err)
			}
		} else {
			config, err = ctrl.GetConfig()
			if err != nil {
				return fmt.Errorf("failed to get kubeconfig: %w", err)
			}
		}

		scheme := runtime.NewScheme()
		utilruntime.Must(clientgoscheme.AddToScheme(scheme))
		utilruntime.Must(cozyv1alpha1.AddToScheme(scheme))

		k8sClient, err := client.New(config, client.Options{Scheme: scheme})
		if err != nil {
			return fmt.Errorf("failed to create k8s client: %w", err)
		}

		// Check which of the requested packages are installed
		var installedPackages cozyv1alpha1.PackageList
		if err := k8sClient.List(ctx, &installedPackages); err != nil {
			return fmt.Errorf("failed to list Packages: %w", err)
		}
		installedMap := make(map[string]bool)
		for _, pkg := range installedPackages.Items {
			installedMap[pkg.Name] = true
		}

		// Warn about requested packages that are not installed
		for pkgName := range packageNames {
			if !installedMap[pkgName] {
				fmt.Fprintf(os.Stderr, "⚠ Package %s is not installed, skipping\n", pkgName)
			}
		}

		// Find all packages to delete (including dependents)
		packagesToDelete, err := findPackagesToDelete(ctx, k8sClient, packageNames)
		if err != nil {
			return fmt.Errorf("failed to analyze dependencies: %w", err)
		}

		if len(packagesToDelete) == 0 {
			fmt.Fprintf(os.Stderr, "No packages found to delete\n")
			return nil
		}

		// Show the packages to be deleted and ask for confirmation
		if err := confirmDeletion(packagesToDelete, packageNames); err != nil {
			return err
		}

		// Delete packages in reverse topological order (dependents first, then dependencies)
		deleteOrder, err := getDeleteOrder(ctx, k8sClient, packagesToDelete)
		if err != nil {
			return fmt.Errorf("failed to determine delete order: %w", err)
		}

		// Delete each package
		for _, packageName := range deleteOrder {
			pkg := &cozyv1alpha1.Package{}
			pkg.Name = packageName
			if err := k8sClient.Delete(ctx, pkg); err != nil {
				if apierrors.IsNotFound(err) {
					fmt.Fprintf(os.Stderr, "⚠ Package %s not found, skipping\n", packageName)
					continue
				}
				return fmt.Errorf("failed to delete Package %s: %w", packageName, err)
			}
			fmt.Fprintf(os.Stderr, "✓ Deleted Package %s\n", packageName)
		}

		return nil
	},
}
// findPackagesToDelete finds all packages that need to be deleted, including dependents
func findPackagesToDelete(ctx context.Context, k8sClient client.Client, requestedPackages map[string]bool) (map[string]bool, error) {
	// Get all installed Packages
	var installedPackages cozyv1alpha1.PackageList
	if err := k8sClient.List(ctx, &installedPackages); err != nil {
		return nil, fmt.Errorf("failed to list Packages: %w", err)
	}

	installedMap := make(map[string]bool)
	for _, pkg := range installedPackages.Items {
		installedMap[pkg.Name] = true
	}

	// Get all PackageSources to build the dependency graph
	var packageSources cozyv1alpha1.PackageSourceList
	if err := k8sClient.List(ctx, &packageSources); err != nil {
		return nil, fmt.Errorf("failed to list PackageSources: %w", err)
	}

	// Build the reverse dependency graph (dependency -> dependents).
	// This tells us, for each package, which packages depend on it.
	reverseDeps := make(map[string][]string)
	for _, ps := range packageSources.Items {
		// Only consider installed packages
		if !installedMap[ps.Name] {
			continue
		}
		for _, variant := range ps.Spec.Variants {
			for _, dep := range variant.DependsOn {
				// Only consider installed dependencies
				if installedMap[dep] {
					reverseDeps[dep] = append(reverseDeps[dep], ps.Name)
				}
			}
		}
	}

	// Find all packages to delete (requested + their dependents)
	packagesToDelete := make(map[string]bool)
	visited := make(map[string]bool)

	var findDependents func(string)
	findDependents = func(pkgName string) {
		if visited[pkgName] {
			return
		}
		visited[pkgName] = true

		// Only add the package if it's installed
		if installedMap[pkgName] {
			packagesToDelete[pkgName] = true
		}

		// Recursively find all dependents
		for _, dependent := range reverseDeps[pkgName] {
			if installedMap[dependent] {
				findDependents(dependent)
			}
		}
	}

	// Start from the requested packages
	for pkgName := range requestedPackages {
		if !installedMap[pkgName] {
			continue
		}
		findDependents(pkgName)
	}

	return packagesToDelete, nil
}
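
// Sketch of the closure this computes (hypothetical names). With
//
//	reverseDeps == {"cert-manager": {"ingress"}, "ingress": {"platform"}}
//
// and all three packages installed, requesting {"cert-manager"} yields
// packagesToDelete == {"cert-manager", "ingress", "platform"}: deleting a
// package always drags its installed dependents along.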
// confirmDeletion shows the list of packages to be deleted and asks for user confirmation
func confirmDeletion(packagesToDelete map[string]bool, requestedPackages map[string]bool) error {
	// Separate the requested packages from their dependents
	var requested []string
	var dependents []string

	for pkg := range packagesToDelete {
		if requestedPackages[pkg] {
			requested = append(requested, pkg)
		} else {
			dependents = append(dependents, pkg)
		}
	}

	fmt.Fprintf(os.Stderr, "\nThe following packages will be deleted:\n\n")

	if len(requested) > 0 {
		fmt.Fprintf(os.Stderr, "Requested packages:\n")
		for _, pkg := range requested {
			fmt.Fprintf(os.Stderr, "  - %s\n", pkg)
		}
		fmt.Fprintf(os.Stderr, "\n")
	}

	if len(dependents) > 0 {
		fmt.Fprintf(os.Stderr, "Dependent packages (will also be deleted):\n")
		for _, pkg := range dependents {
			fmt.Fprintf(os.Stderr, "  - %s\n", pkg)
		}
		fmt.Fprintf(os.Stderr, "\n")
	}

	fmt.Fprintf(os.Stderr, "Total: %d package(s)\n\n", len(packagesToDelete))
	fmt.Fprintf(os.Stderr, "Do you want to continue? [y/N]: ")

	reader := bufio.NewReader(os.Stdin)
	input, err := reader.ReadString('\n')
	if err != nil {
		return fmt.Errorf("failed to read input: %w", err)
	}

	input = strings.TrimSpace(strings.ToLower(input))
	if input != "y" && input != "yes" {
		return fmt.Errorf("deletion cancelled")
	}

	return nil
}
// getDeleteOrder returns packages in reverse topological order (dependents first, then dependencies).
// This ensures we delete dependents before their dependencies.
func getDeleteOrder(ctx context.Context, k8sClient client.Client, packagesToDelete map[string]bool) ([]string, error) {
	// Get all PackageSources to build the dependency graph
	var packageSources cozyv1alpha1.PackageSourceList
	if err := k8sClient.List(ctx, &packageSources); err != nil {
		return nil, fmt.Errorf("failed to list PackageSources: %w", err)
	}

	// Build the forward dependency graph (package -> dependencies)
	dependencyGraph := make(map[string][]string)
	for _, ps := range packageSources.Items {
		if !packagesToDelete[ps.Name] {
			continue
		}
		deps := make(map[string]bool)
		for _, variant := range ps.Spec.Variants {
			for _, dep := range variant.DependsOn {
				if packagesToDelete[dep] {
					deps[dep] = true
				}
			}
		}
		var depList []string
		for dep := range deps {
			depList = append(depList, dep)
		}
		dependencyGraph[ps.Name] = depList
	}

	// Build the reverse graph for the topological sort
	reverseGraph := make(map[string][]string)
	allNodes := make(map[string]bool)

	for node, deps := range dependencyGraph {
		allNodes[node] = true
		for _, dep := range deps {
			allNodes[dep] = true
			reverseGraph[dep] = append(reverseGraph[dep], node)
		}
	}

	// Add nodes that have no dependencies
	for pkg := range packagesToDelete {
		if !allNodes[pkg] {
			allNodes[pkg] = true
			dependencyGraph[pkg] = []string{}
		}
	}

	// Calculate in-degrees
	inDegree := make(map[string]int)
	for node := range allNodes {
		inDegree[node] = 0
	}
	for node, deps := range dependencyGraph {
		inDegree[node] = len(deps)
	}

	// Kahn's algorithm - start with nodes that have no dependencies
	var queue []string
	for node, degree := range inDegree {
		if degree == 0 {
			queue = append(queue, node)
		}
	}

	var result []string
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		result = append(result, node)

		// Process dependents
		for _, dependent := range reverseGraph[node] {
			inDegree[dependent]--
			if inDegree[dependent] == 0 {
				queue = append(queue, dependent)
			}
		}
	}

	// Check for cycles: if not all nodes were processed, there's a cycle
	if len(result) != len(allNodes) {
		// Find the unprocessed nodes
		processed := make(map[string]bool)
		for _, node := range result {
			processed[node] = true
		}
		var unprocessed []string
		for node := range allNodes {
			if !processed[node] {
				unprocessed = append(unprocessed, node)
			}
		}
		return nil, fmt.Errorf("dependency cycle detected: the following packages form a cycle and cannot be deleted: %v", unprocessed)
	}

	// Reverse the result to get dependents first, then dependencies
	for i, j := 0, len(result)-1; i < j; i, j = i+1, j-1 {
		result[i], result[j] = result[j], result[i]
	}

	return result, nil
}
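
// Continuing the hypothetical example above: Kahn's algorithm emits
// dependencies first, and the final reversal turns that into a safe
// delete order.
//
//	Kahn order:     ["cert-manager", "ingress", "platform"]  (dependencies first)
//	After reversal: ["platform", "ingress", "cert-manager"]  (dependents first)
//
// No package is deleted while another package that depends on it is
// still installed.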
func init() {
	rootCmd.AddCommand(delCmd)
	delCmd.Flags().StringArrayVarP(&delCmdFlags.files, "file", "f", []string{}, "Read packages from file or directory (can be specified multiple times)")
	delCmd.Flags().StringVar(&delCmdFlags.kubeconfig, "kubeconfig", "", "Path to kubeconfig file (defaults to ~/.kube/config or KUBECONFIG env var)")
}
cmd/cozypkg/cmd/dependencies.go  (new file, 564 lines)
@@ -0,0 +1,564 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"context"
	"fmt"
	"os"
	"strings"

	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	"github.com/emicklei/dot"
	"github.com/spf13/cobra"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	_ "k8s.io/client-go/plugin/pkg/client/auth"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

var dotCmdFlags struct {
	installed  bool
	components bool
	files      []string
	kubeconfig string
}

var dotCmd = &cobra.Command{
	Use:   "dot [package]...",
	Short: "Generate the dependency graph in graphviz DOT format",
	Long: `Generate the dependency graph in graphviz DOT format.

Pipe the output through the "dot" program (part of the graphviz package) to render the graph:

  cozypkg dot | dot -Tpng > graph.png

By default, shows dependencies for all PackageSource resources.
Use --installed to show only installed Package resources.
Specify packages as arguments or use the -f flag to read them from files.`,
	Args: cobra.ArbitraryArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		ctx := context.Background()

		// Collect package names from arguments and files
		packageNames := make(map[string]bool)
		for _, arg := range args {
			packageNames[arg] = true
		}

		// Read packages from files (reuses the helper from add.go)
		for _, filePath := range dotCmdFlags.files {
			packages, err := readPackagesFromFile(filePath)
			if err != nil {
				return fmt.Errorf("failed to read packages from %s: %w", filePath, err)
			}
			for _, pkg := range packages {
				packageNames[pkg] = true
			}
		}

		// Convert to a slice; empty means all packages
		var selectedPackages []string
		if len(packageNames) > 0 {
			for pkg := range packageNames {
				selectedPackages = append(selectedPackages, pkg)
			}
		}

		// If a single package was specified, pass it via packageName for
		// backward compatibility; for multiple packages, use selectedPackages.
		var packageName string
		if len(selectedPackages) == 1 {
			packageName = selectedPackages[0]
		} else if len(selectedPackages) > 1 {
			packageName = ""
		}

		// packagesOnly is the inverse of the components flag
		// (components=false means packagesOnly=true)
		packagesOnly := !dotCmdFlags.components
		graph, allNodes, edgeVariants, packageNames, err := buildGraphFromCluster(ctx, dotCmdFlags.kubeconfig, packagesOnly, dotCmdFlags.installed, packageName, selectedPackages)
		if err != nil {
			return fmt.Errorf("error getting PackageSource dependencies: %w", err)
		}

		dotGraph := generateDOTGraph(graph, allNodes, packagesOnly, edgeVariants, packageNames)
		dotGraph.Write(os.Stdout)

		return nil
	},
}

func init() {
	rootCmd.AddCommand(dotCmd)
	dotCmd.Flags().BoolVarP(&dotCmdFlags.installed, "installed", "i", false, "show dependencies only for installed Package resources")
	dotCmd.Flags().BoolVar(&dotCmdFlags.components, "components", false, "show component-level dependencies")
	dotCmd.Flags().StringArrayVarP(&dotCmdFlags.files, "file", "f", []string{}, "Read packages from file or directory (can be specified multiple times)")
	dotCmd.Flags().StringVar(&dotCmdFlags.kubeconfig, "kubeconfig", "", "Path to kubeconfig file (defaults to ~/.kube/config or KUBECONFIG env var)")
}

var (
	dependenciesScheme = runtime.NewScheme()
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(dependenciesScheme))
	utilruntime.Must(cozyv1alpha1.AddToScheme(dependenciesScheme))
}
// buildGraphFromCluster builds a dependency graph from the PackageSource resources in the cluster.
// It returns: graph, allNodes, edgeVariants (map[edgeKey]variants), packageNames, error.
func buildGraphFromCluster(ctx context.Context, kubeconfig string, packagesOnly bool, installedOnly bool, packageName string, selectedPackages []string) (map[string][]string, map[string]bool, map[string][]string, map[string]bool, error) {
	// Create the Kubernetes client config
	var config *rest.Config
	var err error

	if kubeconfig != "" {
		// Load the kubeconfig from the explicit path
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, nil, nil, nil, fmt.Errorf("failed to load kubeconfig from %s: %w", kubeconfig, err)
		}
	} else {
		// Use default kubeconfig loading (from the env var or ~/.kube/config)
		config, err = ctrl.GetConfig()
		if err != nil {
			return nil, nil, nil, nil, fmt.Errorf("failed to get kubeconfig: %w", err)
		}
	}

	k8sClient, err := client.New(config, client.Options{Scheme: dependenciesScheme})
	if err != nil {
		return nil, nil, nil, nil, fmt.Errorf("failed to create k8s client: %w", err)
	}

	// Get the installed Packages if needed
	installedPackages := make(map[string]bool)
	if installedOnly {
		var packageList cozyv1alpha1.PackageList
		if err := k8sClient.List(ctx, &packageList); err != nil {
			return nil, nil, nil, nil, fmt.Errorf("failed to list Packages: %w", err)
		}
		for _, pkg := range packageList.Items {
			installedPackages[pkg.Name] = true
		}
	}

	// List all PackageSource resources
	var packageSourceList cozyv1alpha1.PackageSourceList
	if err := k8sClient.List(ctx, &packageSourceList); err != nil {
		return nil, nil, nil, nil, fmt.Errorf("failed to list PackageSources: %w", err)
	}

	// Build maps of existing packages and components
	packageNames := make(map[string]bool)
	allExistingComponents := make(map[string]bool) // "package.component" -> true
	for _, ps := range packageSourceList.Items {
		if ps.Name != "" {
			packageNames[ps.Name] = true
			for _, variant := range ps.Spec.Variants {
				for _, component := range variant.Components {
					if component.Install != nil {
						componentFullName := fmt.Sprintf("%s.%s", ps.Name, component.Name)
						allExistingComponents[componentFullName] = true
					}
				}
			}
		}
	}

	graph := make(map[string][]string)
	allNodes := make(map[string]bool)
	edgeVariants := make(map[string][]string)      // key: "source->target", value: list of variant names
	existingEdges := make(map[string]bool)         // key: "source->target", to avoid duplicates
	componentHasLocalDeps := make(map[string]bool) // componentName -> has component dependencies of its own

	// Process each PackageSource
	for _, ps := range packageSourceList.Items {
		psName := ps.Name
		if psName == "" {
			continue
		}

		// Filter by package name if specified
		if packageName != "" && psName != packageName {
			continue
		}

		// Filter by selected packages if specified
		if len(selectedPackages) > 0 {
			found := false
			for _, selected := range selectedPackages {
				if psName == selected {
					found = true
					break
				}
			}
			if !found {
				continue
			}
		}

		// Filter by installed packages if the flag is set
		if installedOnly && !installedPackages[psName] {
			continue
		}

		allNodes[psName] = true

		// Track package dependencies per variant
		packageDepVariants := make(map[string]map[string]bool) // dep -> variant -> true
		allVariantNames := make(map[string]bool)
		for _, v := range ps.Spec.Variants {
			allVariantNames[v.Name] = true
		}

		// Track component dependencies per variant
		componentDepVariants := make(map[string]map[string]map[string]bool) // componentName -> dep -> variant -> true
		componentVariants := make(map[string]map[string]bool)               // componentName -> variant -> true

		// Extract dependencies from the variants
		for _, variant := range ps.Spec.Variants {
			// Variant-level (package-level) dependencies
			for _, dep := range variant.DependsOn {
				// If installedOnly is set, only include dependencies that are installed
				if installedOnly && !installedPackages[dep] {
					continue
				}

				// Track which variant this dependency comes from
				if packageDepVariants[dep] == nil {
					packageDepVariants[dep] = make(map[string]bool)
				}
				packageDepVariants[dep][variant.Name] = true

				edgeKey := fmt.Sprintf("%s->%s", psName, dep)
				if !existingEdges[edgeKey] {
					graph[psName] = append(graph[psName], dep)
					existingEdges[edgeKey] = true
				}

				// Add to allNodes only if the package exists; if it doesn't,
				// it will be shown as missing (red)
				if packageNames[dep] {
					allNodes[dep] = true
				}
			}

			// Component-level dependencies
			if !packagesOnly {
				for _, component := range variant.Components {
					// Skip components without an install section
					if component.Install == nil {
						continue
					}

					componentName := fmt.Sprintf("%s.%s", psName, component.Name)
					allNodes[componentName] = true

					// Track which variants this component appears in
					if componentVariants[componentName] == nil {
						componentVariants[componentName] = make(map[string]bool)
					}
					componentVariants[componentName][variant.Name] = true

					if componentDepVariants[componentName] == nil {
						componentDepVariants[componentName] = make(map[string]map[string]bool)
					}

					for _, dep := range component.Install.DependsOn {
						// Track which variant this dependency comes from
						if componentDepVariants[componentName][dep] == nil {
							componentDepVariants[componentName][dep] = make(map[string]bool)
						}
						componentDepVariants[componentName][dep][variant.Name] = true

						// Check whether it's an external or a local component dependency
						if strings.Contains(dep, ".") {
							// External component dependency ("package.component" format).
							// Mark that this component has component dependencies of its
							// own (used for the edge-to-package logic below).
							componentHasLocalDeps[componentName] = true

							// Check whether the target component exists
							if allExistingComponents[dep] {
								// The component exists
								edgeKey := fmt.Sprintf("%s->%s", componentName, dep)
								if !existingEdges[edgeKey] {
									graph[componentName] = append(graph[componentName], dep)
									existingEdges[edgeKey] = true
								}
								allNodes[dep] = true
							} else {
								// The component doesn't exist - create a missing-component node
								edgeKey := fmt.Sprintf("%s->%s", componentName, dep)
								if !existingEdges[edgeKey] {
									graph[componentName] = append(graph[componentName], dep)
									existingEdges[edgeKey] = true
								}
								// Don't add it to allNodes - it will be shown as missing (red)

								// Add an edge from the missing component to its package
								parts := strings.SplitN(dep, ".", 2)
								if len(parts) == 2 {
									depPackageName := parts[0]
									missingEdgeKey := fmt.Sprintf("%s->%s", dep, depPackageName)
									if !existingEdges[missingEdgeKey] {
										graph[dep] = append(graph[dep], depPackageName)
										existingEdges[missingEdgeKey] = true
									}
									// Add the package to allNodes only if it exists;
									// otherwise it will be shown as missing (red)
									if packageNames[depPackageName] {
										allNodes[depPackageName] = true
									}
								}
							}
						} else {
							// Local component dependency (same package).
							// Mark that this component has local dependencies.
							componentHasLocalDeps[componentName] = true

							localDep := fmt.Sprintf("%s.%s", psName, dep)

							// Check whether the target component exists
							if allExistingComponents[localDep] {
								// The component exists
								edgeKey := fmt.Sprintf("%s->%s", componentName, localDep)
								if !existingEdges[edgeKey] {
									graph[componentName] = append(graph[componentName], localDep)
									existingEdges[edgeKey] = true
								}
								allNodes[localDep] = true
							} else {
								// The component doesn't exist - create a missing-component node
								edgeKey := fmt.Sprintf("%s->%s", componentName, localDep)
								if !existingEdges[edgeKey] {
									graph[componentName] = append(graph[componentName], localDep)
									existingEdges[edgeKey] = true
								}
								// Don't add it to allNodes - it will be shown as missing (red)

								// Add an edge from the missing component to its package
								missingEdgeKey := fmt.Sprintf("%s->%s", localDep, psName)
								if !existingEdges[missingEdgeKey] {
									graph[localDep] = append(graph[localDep], psName)
									existingEdges[missingEdgeKey] = true
								}
							}
						}
					}
				}
			}
		}

		// Store variant information for package dependencies that are not in all variants
		for dep, variants := range packageDepVariants {
			if len(variants) < len(allVariantNames) {
				var variantList []string
				for v := range variants {
					variantList = append(variantList, v)
				}
				edgeKey := fmt.Sprintf("%s->%s", psName, dep)
				edgeVariants[edgeKey] = variantList
			}
		}

		// Add component->package edges for components without component dependencies
		if !packagesOnly {
			for componentName := range componentVariants {
				// Only add an edge to the package if the component has no
				// component dependencies of its own
				if !componentHasLocalDeps[componentName] {
					edgeKey := fmt.Sprintf("%s->%s", componentName, psName)
					if !existingEdges[edgeKey] {
						graph[componentName] = append(graph[componentName], psName)
						existingEdges[edgeKey] = true
					}

					// If the component is not in all variants, store variant
					// info for the component->package edge
					componentAllVariants := componentVariants[componentName]
					if len(componentAllVariants) < len(allVariantNames) {
						var variantList []string
						for v := range componentAllVariants {
							variantList = append(variantList, v)
						}
						edgeVariants[edgeKey] = variantList
					}
				}
			}
		}

		// Store variant information for component dependencies that are not in all variants
		for componentName, deps := range componentDepVariants {
			componentAllVariants := componentVariants[componentName]
			for dep, variants := range deps {
				if len(variants) < len(componentAllVariants) {
					var variantList []string
					for v := range variants {
						variantList = append(variantList, v)
					}
					// Determine the actual target name
					var targetName string
					if strings.Contains(dep, ".") {
						targetName = dep
					} else {
						targetName = fmt.Sprintf("%s.%s", psName, dep)
					}
					edgeKey := fmt.Sprintf("%s->%s", componentName, targetName)
					edgeVariants[edgeKey] = variantList
				}
			}
		}
	}

	return graph, allNodes, edgeVariants, packageNames, nil
}
// generateDOTGraph generates a DOT graph from the dependency graph.
func generateDOTGraph(graph map[string][]string, allNodes map[string]bool, packagesOnly bool, edgeVariants map[string][]string, packageNames map[string]bool) *dot.Graph {
	g := dot.NewGraph(dot.Directed)
	g.Attr("rankdir", "RL")
	g.Attr("nodesep", "0.5")
	g.Attr("ranksep", "1.0")

	// Helper to check whether a node is a package. A node is a package if:
	//  1. it's directly in packageNames,
	//  2. it doesn't contain a dot (a simple package name), or
	//  3. it contains a dot but the part before the first dot is a package name.
	isPackage := func(nodeName string) bool {
		if packageNames[nodeName] {
			return true
		}
		if !strings.Contains(nodeName, ".") {
			return true
		}
		// If it contains a dot, check whether the part before the first dot is a package
		parts := strings.SplitN(nodeName, ".", 2)
		if len(parts) > 0 {
			return packageNames[parts[0]]
		}
		return false
	}

	// Add the nodes
	for node := range allNodes {
		if packagesOnly && !isPackage(node) {
			// Skip component nodes when packages-only is enabled
			continue
		}

		n := g.Node(node)

		// Style nodes based on their type
		if isPackage(node) {
			// Package node
			n.Attr("shape", "box")
			n.Attr("style", "rounded,filled")
			n.Attr("fillcolor", "lightblue")
			n.Attr("label", node)
		} else {
			// Component node
			n.Attr("shape", "box")
			n.Attr("style", "rounded,filled")
			n.Attr("fillcolor", "lightyellow")
			// Extract the component name (the part after the last dot)
			parts := strings.Split(node, ".")
			if len(parts) > 0 {
				n.Attr("label", parts[len(parts)-1])
			} else {
				n.Attr("label", node)
			}
		}
	}

	// Add the edges
	for source, targets := range graph {
		if packagesOnly && !isPackage(source) {
			// Skip component edges when packages-only is enabled
			continue
		}

		for _, target := range targets {
			if packagesOnly && !isPackage(target) {
				// Skip component edges when packages-only is enabled
				continue
			}

			// Check whether the target exists
			targetExists := allNodes[target]

			// Determine the edge type for coloring
			sourceIsPackage := isPackage(source)
			targetIsPackage := isPackage(target)

			// Add the edge
			edge := g.Edge(g.Node(source), g.Node(target))

			// Set the edge color based on type (if the target exists)
			if targetExists {
				if sourceIsPackage && targetIsPackage {
					// Package -> Package: black (default)
					edge.Attr("color", "black")
				} else {
					// Component -> Package or Component -> Component: green
					edge.Attr("color", "green")
				}
			}

			if !targetExists {
				// The target doesn't exist - mark it as missing (red)
				edge.Attr("color", "red")
				edge.Attr("style", "dashed")

				// Also add the missing node with red color
				missingNode := g.Node(target)
				missingNode.Attr("shape", "box")
				missingNode.Attr("style", "rounded,filled,dashed")
				missingNode.Attr("fillcolor", "lightcoral")

				// Determine the label based on the node type
				if isPackage(target) {
					// Package node
					missingNode.Attr("label", target)
				} else {
					// Component node - extract the component name
					parts := strings.Split(target, ".")
					if len(parts) > 0 {
						missingNode.Attr("label", parts[len(parts)-1])
					} else {
						missingNode.Attr("label", target)
					}
				}
			} else {
				// Check whether this edge has variant information
				// (i.e. the dependency is not present in all variants)
				edgeKey := fmt.Sprintf("%s->%s", source, target)
				if variants, hasVariants := edgeVariants[edgeKey]; hasVariants {
					// Label the edge with the variant names
					edge.Attr("label", strings.Join(variants, ","))
				}
			}
		}
	}

	return g
}
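
For reference, a minimal standalone sketch of the github.com/emicklei/dot API used above (node names and attributes are illustrative); piping its output through `dot -Tpng` renders the graph:

package main

import (
	"os"

	"github.com/emicklei/dot"
)

func main() {
	g := dot.NewGraph(dot.Directed)
	g.Attr("rankdir", "RL")

	// One package node, styled the way generateDOTGraph styles packages.
	platform := g.Node("platform")
	platform.Attr("shape", "box")
	platform.Attr("style", "rounded,filled")
	platform.Attr("fillcolor", "lightblue")

	// A dependency edge to a second node.
	g.Edge(platform, g.Node("ingress")).Attr("color", "black")

	// Writes the DOT text to stdout.
	g.Write(os.Stdout)
}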
cmd/cozypkg/cmd/list.go  (new file, 220 lines)
@@ -0,0 +1,220 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"context"
	"fmt"
	"os"
	"strings"
	"text/tabwriter"

	"github.com/spf13/cobra"
	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

var listCmdFlags struct {
	installed  bool
	components bool
	kubeconfig string
}

var listCmd = &cobra.Command{
	Use:   "list",
	Short: "List PackageSource or Package resources",
	Long: `List PackageSource or Package resources in table format.

By default, lists PackageSource resources. Use the --installed flag to list installed Package resources.
Use the --components flag to show components on separate lines.`,
	Args: cobra.NoArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		ctx := context.Background()

		// Create the Kubernetes client config
		var config *rest.Config
		var err error

		if listCmdFlags.kubeconfig != "" {
			config, err = clientcmd.BuildConfigFromFlags("", listCmdFlags.kubeconfig)
			if err != nil {
				return fmt.Errorf("failed to load kubeconfig from %s: %w", listCmdFlags.kubeconfig, err)
			}
		} else {
			config, err = ctrl.GetConfig()
			if err != nil {
				return fmt.Errorf("failed to get kubeconfig: %w", err)
			}
		}

		scheme := runtime.NewScheme()
		utilruntime.Must(clientgoscheme.AddToScheme(scheme))
		utilruntime.Must(cozyv1alpha1.AddToScheme(scheme))

		k8sClient, err := client.New(config, client.Options{Scheme: scheme})
		if err != nil {
			return fmt.Errorf("failed to create k8s client: %w", err)
		}

		if listCmdFlags.installed {
			return listPackages(ctx, k8sClient, listCmdFlags.components)
		}
		return listPackageSources(ctx, k8sClient, listCmdFlags.components)
	},
}
func listPackageSources(ctx context.Context, k8sClient client.Client, showComponents bool) error {
	var psList cozyv1alpha1.PackageSourceList
	if err := k8sClient.List(ctx, &psList); err != nil {
		return fmt.Errorf("failed to list PackageSources: %w", err)
	}

	// Use a tabwriter for column alignment
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0)
	defer w.Flush()

	// Print the header
	fmt.Fprintln(w, "NAME\tVARIANTS\tREADY\tSTATUS")

	// Print the rows
	for _, ps := range psList.Items {
		// Collect the variant names, truncating long lists
		var variants []string
		for _, variant := range ps.Spec.Variants {
			variants = append(variants, variant.Name)
		}
		variantsStr := strings.Join(variants, ",")
		if len(variantsStr) > 28 {
			variantsStr = variantsStr[:25] + "..."
		}

		// Get the Ready condition
		ready := "Unknown"
		status := ""
		for _, condition := range ps.Status.Conditions {
			if condition.Type == "Ready" {
				ready = string(condition.Status)
				status = condition.Message
				if len(status) > 48 {
					status = status[:45] + "..."
				}
				break
			}
		}

		fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", ps.Name, variantsStr, ready, status)

		// Show the components if requested
		if showComponents {
			for _, variant := range ps.Spec.Variants {
				for _, component := range variant.Components {
					fmt.Fprintf(w, "  %s\t%s\t\t\n",
						fmt.Sprintf("%s.%s", ps.Name, component.Name),
						variant.Name)
				}
			}
		}
	}

	return nil
}
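
// The column alignment in these listings comes from text/tabwriter; a
// minimal usage sketch with the same writer settings (the sample row
// values are illustrative):
//
//	w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0) // minwidth 0, tabwidth 0, padding 3, space-padded
//	fmt.Fprintln(w, "NAME\tVARIANT\tREADY\tSTATUS")
//	fmt.Fprintln(w, "monitoring\tdefault\tTrue\tReconciliation succeeded")
//	w.Flush() // columns are aligned only when Flush runs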
func listPackages(ctx context.Context, k8sClient client.Client, showComponents bool) error {
	var pkgList cozyv1alpha1.PackageList
	if err := k8sClient.List(ctx, &pkgList); err != nil {
		return fmt.Errorf("failed to list Packages: %w", err)
	}

	// Fetch all PackageSource resources once if components are requested
	var psMap map[string]*cozyv1alpha1.PackageSource
	if showComponents {
		var psList cozyv1alpha1.PackageSourceList
		if err := k8sClient.List(ctx, &psList); err != nil {
			return fmt.Errorf("failed to list PackageSources: %w", err)
		}
		psMap = make(map[string]*cozyv1alpha1.PackageSource)
		for i := range psList.Items {
			psMap[psList.Items[i].Name] = &psList.Items[i]
		}
	}

	// Use a tabwriter for column alignment
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0)
	defer w.Flush()

	// Print the header
	fmt.Fprintln(w, "NAME\tVARIANT\tREADY\tSTATUS")

	// Print the rows
	for _, pkg := range pkgList.Items {
		variant := pkg.Spec.Variant
		if variant == "" {
			variant = "default"
		}

		// Get the Ready condition
		ready := "Unknown"
		status := ""
		for _, condition := range pkg.Status.Conditions {
			if condition.Type == "Ready" {
				ready = string(condition.Status)
				status = condition.Message
				if len(status) > 48 {
					status = status[:45] + "..."
				}
				break
			}
		}

		fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", pkg.Name, variant, ready, status)

		// Show the components if requested
		if showComponents {
			// Look up the PackageSource from the map instead of making an API call
			if ps, exists := psMap[pkg.Name]; exists {
				// Find the selected variant
				for _, v := range ps.Spec.Variants {
					if v.Name == variant {
						for _, component := range v.Components {
							fmt.Fprintf(w, "  %s\t%s\t\t\n",
								fmt.Sprintf("%s.%s", pkg.Name, component.Name),
								variant)
						}
						break
					}
				}
			}
		}
	}

	return nil
}

func init() {
	rootCmd.AddCommand(listCmd)
	listCmd.Flags().BoolVarP(&listCmdFlags.installed, "installed", "i", false, "list installed Package resources instead of PackageSource resources")
	listCmd.Flags().BoolVar(&listCmdFlags.components, "components", false, "show components on separate lines")
	listCmd.Flags().StringVar(&listCmdFlags.kubeconfig, "kubeconfig", "", "Path to kubeconfig file (defaults to ~/.kube/config or KUBECONFIG env var)")
}
cmd/cozypkg/cmd/root.go  (new file, 52 lines)
@@ -0,0 +1,52 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// Version is set at build time via -ldflags.
var Version = "dev"

// rootCmd represents the base command when called without any subcommands.
var rootCmd = &cobra.Command{
	Use:               "cozypkg",
	Short:             "A CLI for managing Cozystack packages",
	Long:              ``,
	SilenceErrors:     true,
	SilenceUsage:      true,
	DisableAutoGenTag: true,
}

// Execute adds all child commands to the root command and sets flags appropriately.
// It is called by main.main() and only needs to happen once for rootCmd.
func Execute() error {
	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err.Error())
		return err
	}
	return nil
}

func init() {
	rootCmd.Version = Version
}
cmd/cozypkg/main.go  (new file, 30 lines)
@@ -0,0 +1,30 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"os"

	"github.com/cozystack/cozystack/cmd/cozypkg/cmd"
)

func main() {
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
@@ -19,15 +19,15 @@ package main

 import (
 	"os"

-	"github.com/aenix.io/cozystack/pkg/cmd/server"
+	"github.com/cozystack/cozystack/pkg/cmd/server"
 	genericapiserver "k8s.io/apiserver/pkg/server"
 	"k8s.io/component-base/cli"
 )

 func main() {
 	ctx := genericapiserver.SetupSignalContext()
-	options := server.NewAppsServerOptions(os.Stdout, os.Stderr)
-	cmd := server.NewCommandStartAppsServer(ctx, options)
+	options := server.NewCozyServerOptions(os.Stdout, os.Stderr)
+	cmd := server.NewCommandStartCozyServer(ctx, options)
 	code := cli.Run(cmd)
 	os.Exit(code)
 }
cmd/cozystack-controller/main.go  (new file, 252 lines)
@@ -0,0 +1,252 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"crypto/tls"
	"flag"
	"os"
	"time"

	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	_ "k8s.io/client-go/plugin/pkg/client/auth"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	"github.com/cozystack/cozystack/internal/controller"
	"github.com/cozystack/cozystack/internal/controller/dashboard"
	"github.com/cozystack/cozystack/internal/telemetry"

	helmv2 "github.com/fluxcd/helm-controller/api/v2"
	// +kubebuilder:scaffold:imports
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
	utilruntime.Must(dashboard.AddToScheme(scheme))
	utilruntime.Must(helmv2.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var disableTelemetry bool
	var telemetryEndpoint string
	var telemetryInterval string
	var tlsOpts []func(*tls.Config)
	flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
		"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	flag.BoolVar(&secureMetrics, "metrics-secure", true,
		"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
		"Disable telemetry collection")
	flag.StringVar(&telemetryEndpoint, "telemetry-endpoint", "https://telemetry.cozystack.io",
		"Endpoint for sending telemetry data")
	flag.StringVar(&telemetryInterval, "telemetry-interval", "15m",
		"Interval between telemetry data collection (e.g. 15m, 1h)")
	opts := zap.Options{
		Development: false,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// Parse the telemetry interval
	interval, err := time.ParseDuration(telemetryInterval)
	if err != nil {
		setupLog.Error(err, "invalid telemetry interval")
		os.Exit(1)
	}

	// Configure telemetry
	telemetryConfig := telemetry.Config{
		Disabled: disableTelemetry,
		Endpoint: telemetryEndpoint,
		Interval: interval,
	}
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
|
||||
|
||||
// if the enable-http2 flag is false (the default), http/2 should be disabled
|
||||
// due to its vulnerabilities. More specifically, disabling http/2 will
|
||||
// prevent from being vulnerable to the HTTP/2 Stream Cancellation and
|
||||
// Rapid Reset CVEs. For more information see:
|
||||
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
|
||||
// - https://github.com/advisories/GHSA-4374-p667-p6c8
|
||||
disableHTTP2 := func(c *tls.Config) {
|
||||
setupLog.Info("disabling http/2")
|
||||
c.NextProtos = []string{"http/1.1"}
|
||||
}
|
||||
|
||||
if !enableHTTP2 {
|
||||
tlsOpts = append(tlsOpts, disableHTTP2)
|
||||
}
|
||||
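To make the mechanics of the block above concrete: each entry in tlsOpts is a mutator that controller-runtime later applies to the server's *tls.Config, and restricting NextProtos to "http/1.1" removes "h2" from the ALPN offer, so clients can never negotiate HTTP/2. A minimal sketch of that application step (applyTLSOpts is a hypothetical helper shown for illustration; controller-runtime does the equivalent internally):

	// applyTLSOpts builds a TLS config and runs each mutator over it,
	// mirroring what the webhook and metrics servers do with TLSOpts.
	func applyTLSOpts(opts []func(*tls.Config)) *tls.Config {
		cfg := &tls.Config{}
		for _, o := range opts {
			o(cfg)
		}
		// With disableHTTP2 in opts, cfg.NextProtos == []string{"http/1.1"}.
		return cfg
	}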
	webhookServer := webhook.NewServer(webhook.Options{
		TLSOpts: tlsOpts,
	})

	// Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
	// More info:
	// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
	// - https://book.kubebuilder.io/reference/metrics.html
	metricsServerOptions := metricsserver.Options{
		BindAddress:   metricsAddr,
		SecureServing: secureMetrics,
		TLSOpts:       tlsOpts,
	}

	if secureMetrics {
		// FilterProvider is used to protect the metrics endpoint with authn/authz.
		// These configurations ensure that only authorized users and service accounts
		// can access the metrics endpoint. The RBAC rules are configured in 'config/rbac/kustomization.yaml'. More info:
		// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
		metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

		// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
		// generate self-signed certificates for the metrics server. While convenient for development and testing,
		// this setup is not recommended for production.
	}

	// Configure rate limiting for the Kubernetes client
	config := ctrl.GetConfigOrDie()
	config.QPS = 50.0  // Increased from the default 5.0
	config.Burst = 100 // Increased from the default 10

	mgr, err := ctrl.NewManager(config, ctrl.Options{
		Scheme:                 scheme,
		Metrics:                metricsServerOptions,
		WebhookServer:          webhookServer,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "19a0338c.cozystack.io",
		// LeaderElectionReleaseOnCancel defines whether the leader should step down voluntarily
		// when the Manager ends. This requires the binary to exit immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Enabling it significantly
		// speeds up voluntary leader transitions, as the new leader doesn't have to wait
		// the LeaseDuration time first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you are doing, or intend to do, any operation such as performing cleanups
		// after the manager stops, then its usage might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err = (&controller.WorkloadMonitorReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "WorkloadMonitor")
		os.Exit(1)
	}

	if err = (&controller.WorkloadReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "WorkloadReconciler")
		os.Exit(1)
	}

	if err = (&controller.ApplicationDefinitionReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "ApplicationDefinitionReconciler")
		os.Exit(1)
	}

	if err = (&controller.ApplicationDefinitionHelmReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "ApplicationDefinitionHelmReconciler")
		os.Exit(1)
	}

	dashboardManager := &dashboard.Manager{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}
	if err = dashboardManager.SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "DashboardReconciler")
		os.Exit(1)
	}

	// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	// Initialize the telemetry collector
	collector, err := telemetry.NewCollector(mgr.GetClient(), &telemetryConfig, mgr.GetConfig())
	if err != nil {
		setupLog.V(1).Error(err, "unable to create telemetry collector, telemetry will be disabled")
	}
	if collector != nil {
		if err := mgr.Add(collector); err != nil {
			setupLog.V(1).Error(err, "unable to set up telemetry collector, continuing without telemetry")
		}
	}
	setupLog.Info("starting manager")
	ctx := ctrl.SetupSignalHandler()
	dashboardManager.InitializeStaticResources(ctx)
	if err := mgr.Start(ctx); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
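All of the reconcilers wired up above share the same controller-runtime contract: a struct holding a client and scheme, a Reconcile method, and a SetupWithManager hook. A minimal sketch of that shape, assuming the usual imports (context, corev1 "k8s.io/api/core/v1", "sigs.k8s.io/controller-runtime/pkg/client"); ExampleReconciler and its watched type are hypothetical, shown only to make the repeated registration pattern explicit:

	// ExampleReconciler shows the minimal shape expected by SetupWithManager.
	type ExampleReconciler struct {
		Client client.Client
		Scheme *runtime.Scheme
	}

	// Reconcile is called for each event on the watched type; it should read the
	// object named by req, drive the cluster toward the desired state, and return.
	func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
		return ctrl.Result{}, nil
	}

	// SetupWithManager registers the reconciler and declares what it watches.
	func (r *ExampleReconciler) SetupWithManager(mgr ctrl.Manager) error {
		return ctrl.NewControllerManagedBy(mgr).
			For(&corev1.ConfigMap{}). // any concrete watched type
			Complete(r)
	}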
653
cmd/cozystack-operator/main.go
Normal file
@@ -0,0 +1,653 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"strings"
	"time"

	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	_ "k8s.io/client-go/plugin/pkg/client/auth"

	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	helmv2 "github.com/fluxcd/helm-controller/api/v2"
	sourcev1 "github.com/fluxcd/source-controller/api/v1"
	sourcewatcherv1beta1 "github.com/fluxcd/source-watcher/api/v2/v1beta1"
	corev1 "k8s.io/api/core/v1"
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	"github.com/cozystack/cozystack/internal/cozyvaluesreplicator"
	"github.com/cozystack/cozystack/internal/crdinstall"
	"github.com/cozystack/cozystack/internal/fluxinstall"
	"github.com/cozystack/cozystack/internal/operator"
	"github.com/cozystack/cozystack/internal/telemetry"
	// +kubebuilder:scaffold:imports
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(apiextensionsv1.AddToScheme(scheme))
	utilruntime.Must(cozyv1alpha1.AddToScheme(scheme))
	utilruntime.Must(helmv2.AddToScheme(scheme))
	utilruntime.Must(sourcev1.AddToScheme(scheme))
	utilruntime.Must(sourcewatcherv1beta1.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var installCRDs bool
	var installFlux bool
	var disableTelemetry bool
	var telemetryEndpoint string
	var telemetryInterval string
	var cozyValuesSecretName string
	var cozyValuesSecretNamespace string
	var cozyValuesNamespaceSelector string
	var platformSourceURL string
	var platformSourceName string
	var platformSourceRef string

	flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metrics endpoint binds to.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	flag.BoolVar(&secureMetrics, "metrics-secure", false,
		"If set, the metrics endpoint is served securely")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	flag.BoolVar(&installCRDs, "install-crds", false, "Install Cozystack CRDs before starting the reconcile loop")
	flag.BoolVar(&installFlux, "install-flux", false, "Install Flux components before starting the reconcile loop")
	flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
		"Disable telemetry collection")
	flag.StringVar(&telemetryEndpoint, "telemetry-endpoint", "https://telemetry.cozystack.io",
		"Endpoint for sending telemetry data")
	flag.StringVar(&telemetryInterval, "telemetry-interval", "15m",
		"Interval between telemetry data collection (e.g. 15m, 1h)")
	flag.StringVar(&platformSourceURL, "platform-source-url", "", "Platform source URL (oci:// or https://). If specified, generates an OCIRepository or GitRepository resource.")
	flag.StringVar(&platformSourceName, "platform-source-name", "cozystack-platform", "Name for the generated platform source resource and PackageSource")
	flag.StringVar(&platformSourceRef, "platform-source-ref", "", "Reference specification as key=value pairs (e.g. 'branch=main' or 'digest=sha256:...,tag=v1.0'). For OCI: digest, semver, semverFilter, tag. For Git: branch, tag, semver, name, commit.")
	flag.StringVar(&cozyValuesSecretName, "cozy-values-secret-name", "cozystack-values", "The name of the secret containing cluster-wide configuration values.")
	flag.StringVar(&cozyValuesSecretNamespace, "cozy-values-secret-namespace", "cozy-system", "The namespace of the secret containing cluster-wide configuration values.")
	flag.StringVar(&cozyValuesNamespaceSelector, "cozy-values-namespace-selector", "cozystack.io/system=true", "The label selector for namespaces where the cluster-wide configuration values must be replicated.")

	opts := zap.Options{
		Development: true,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	config := ctrl.GetConfigOrDie()

	// Create a direct client (without cache) for pre-start operations
	directClient, err := client.New(config, client.Options{Scheme: scheme})
	if err != nil {
		setupLog.Error(err, "unable to create direct client")
		os.Exit(1)
	}

	targetNSSelector, err := labels.Parse(cozyValuesNamespaceSelector)
	if err != nil {
		setupLog.Error(err, "could not parse namespace label selector")
		os.Exit(1)
	}

	// Initialize the controller manager
	mgr, err := ctrl.NewManager(config, ctrl.Options{
		Scheme: scheme,
		Cache: cache.Options{
			ByObject: map[client.Object]cache.ByObject{
				// Cache only Secrets named <secretName> (in any namespace)
				&corev1.Secret{}: {
					Field: fields.OneTermEqualSelector("metadata.name", cozyValuesSecretName),
				},

				// Cache only Namespaces that match the label selector
				&corev1.Namespace{}: {
					Label: targetNSSelector,
				},
			},
		},
		Metrics: metricsserver.Options{
			BindAddress:   metricsAddr,
			SecureServing: secureMetrics,
		},
		WebhookServer: webhook.NewServer(webhook.Options{
			Port: 9443,
		}),
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "cozystack-operator.cozystack.io",
		// LeaderElectionReleaseOnCancel defines whether the leader should step down voluntarily
		// when the Manager ends. This requires the binary to exit immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Enabling it significantly
		// speeds up voluntary leader transitions, as the new leader doesn't have to wait
		// the LeaseDuration time first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you are doing, or intend to do, any operation such as performing cleanups
		// after the manager stops, then its usage might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}
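A consequence of the ByObject filters above that is easy to miss: the manager's cached client can only see Secrets with the configured name and Namespaces matching the selector, and cached reads for anything else come back not-found even when the object exists in the cluster. That is exactly why the pre-start phases below use directClient and why telemetry later uses the API reader. A small hypothetical illustration, assuming a ctx and the variables from main:

	// Reads through the filtered cache miss objects outside the selectors:
	var s corev1.Secret
	key := client.ObjectKey{Namespace: "cozy-system", Name: "some-other-secret"}
	if err := mgr.GetClient().Get(ctx, key, &s); err != nil {
		// Expected: a not-found error, even if the Secret exists in the cluster.
	}

	// The uncached reader (or a direct client) bypasses the cache entirely:
	if err := mgr.GetAPIReader().Get(ctx, key, &s); err == nil {
		// Succeeds whenever the Secret actually exists.
	}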
	// Set up the signal handler early so the install phases respect SIGTERM
	mgrCtx := ctrl.SetupSignalHandler()

	// Install Cozystack CRDs before starting the reconcile loop
	if installCRDs {
		setupLog.Info("Installing Cozystack CRDs before starting reconcile loop")
		installCtx, installCancel := context.WithTimeout(mgrCtx, 2*time.Minute)
		defer installCancel()

		if err := crdinstall.Install(installCtx, directClient, crdinstall.WriteEmbeddedManifests); err != nil {
			setupLog.Error(err, "failed to install CRDs")
			os.Exit(1)
		}
		setupLog.Info("CRD installation completed successfully")
	}

	// Install Flux before starting the reconcile loop
	if installFlux {
		setupLog.Info("Installing Flux components before starting reconcile loop")
		installCtx, installCancel := context.WithTimeout(mgrCtx, 5*time.Minute)
		defer installCancel()

		// Use the direct client for pre-start operations (the cache is not ready yet)
		if err := fluxinstall.Install(installCtx, directClient, fluxinstall.WriteEmbeddedManifests); err != nil {
			setupLog.Error(err, "failed to install Flux")
			os.Exit(1)
		}
		setupLog.Info("Flux installation completed successfully")
	}

	// Generate and install the platform source resource if specified
	if platformSourceURL != "" {
		setupLog.Info("Generating platform source resource", "url", platformSourceURL, "name", platformSourceName, "ref", platformSourceRef)
		installCtx, installCancel := context.WithTimeout(mgrCtx, 2*time.Minute)
		defer installCancel()

		// Use the direct client for pre-start operations (the cache is not ready yet)
		if err := installPlatformSourceResource(installCtx, directClient, platformSourceURL, platformSourceName, platformSourceRef); err != nil {
			setupLog.Error(err, "failed to install platform source resource")
			os.Exit(1)
		}
		setupLog.Info("Platform source resource installation completed successfully")
	}

	// Create the platform PackageSource when CRDs are managed by the operator and
	// a platform source URL is configured. Without a URL there is no Flux source
	// resource to reference, so creating a PackageSource would leave a dangling SourceRef.
	if installCRDs && platformSourceURL != "" {
		sourceRefKind := "OCIRepository"
		sourceType, _, err := parsePlatformSourceURL(platformSourceURL)
		if err != nil {
			setupLog.Error(err, "failed to parse platform source URL for PackageSource")
			os.Exit(1)
		}
		if sourceType == "git" {
			sourceRefKind = "GitRepository"
		}
		setupLog.Info("Creating platform PackageSource", "platformSourceName", platformSourceName)
		psCtx, psCancel := context.WithTimeout(mgrCtx, 2*time.Minute)
		defer psCancel()
		if err := installPlatformPackageSource(psCtx, directClient, platformSourceName, sourceRefKind); err != nil {
			setupLog.Error(err, "failed to create platform PackageSource")
			os.Exit(1)
		}
		setupLog.Info("Platform PackageSource creation completed successfully")
	}

	// Set up the PackageSource reconciler
	if err := (&operator.PackageSourceReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "PackageSource")
		os.Exit(1)
	}

	// Set up the Package reconciler
	if err := (&operator.PackageReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Package")
		os.Exit(1)
	}

	// Set up the CozyValuesReplicator reconciler
	if err := (&cozyvaluesreplicator.SecretReplicatorReconciler{
		Client:                  mgr.GetClient(),
		Scheme:                  mgr.GetScheme(),
		SourceNamespace:         cozyValuesSecretNamespace,
		SecretName:              cozyValuesSecretName,
		TargetNamespaceSelector: targetNSSelector,
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "CozyValuesReplicator")
		os.Exit(1)
	}

	// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	// Parse the telemetry interval
	interval, err := time.ParseDuration(telemetryInterval)
	if err != nil {
		setupLog.Error(err, "invalid telemetry interval")
		os.Exit(1)
	}

	// Configure telemetry
	telemetryConfig := telemetry.Config{
		Disabled: disableTelemetry,
		Endpoint: telemetryEndpoint,
		Interval: interval,
	}

	// Initialize the telemetry collector.
	// Use APIReader (non-cached) because the manager's cache is filtered
	// and doesn't include resources needed for telemetry (e.g. the kube-system namespace, nodes, etc.)
	collector, err := telemetry.NewOperatorCollector(mgr.GetAPIReader(), &telemetryConfig, config)
	if err != nil {
		setupLog.V(1).Info("unable to create telemetry collector, telemetry will be disabled", "error", err)
	}

	if collector != nil {
		if err := mgr.Add(collector); err != nil {
			setupLog.V(1).Info("unable to set up telemetry collector, continuing without telemetry", "error", err)
		}
	}

	setupLog.Info("Starting controller manager")
	if err := mgr.Start(mgrCtx); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}

// installPlatformSourceResource generates and installs a Flux source resource (OCIRepository or GitRepository)
// based on the platform source URL.
func installPlatformSourceResource(ctx context.Context, k8sClient client.Client, sourceURL, resourceName, refSpec string) error {
	logger := log.FromContext(ctx)

	// Parse the source URL to determine its type
	sourceType, repoURL, err := parsePlatformSourceURL(sourceURL)
	if err != nil {
		return fmt.Errorf("failed to parse platform source URL: %w", err)
	}

	// Parse the reference specification
	refMap, err := parseRefSpec(refSpec)
	if err != nil {
		return fmt.Errorf("failed to parse reference specification: %w", err)
	}

	var obj client.Object
	switch sourceType {
	case "oci":
		obj, err = generateOCIRepository(resourceName, repoURL, refMap)
		if err != nil {
			return fmt.Errorf("failed to generate OCIRepository: %w", err)
		}
	case "git":
		obj, err = generateGitRepository(resourceName, repoURL, refMap)
		if err != nil {
			return fmt.Errorf("failed to generate GitRepository: %w", err)
		}
	default:
		return fmt.Errorf("unsupported source type: %s (expected oci:// or https://)", sourceType)
	}

	// Apply the resource (create or update)
	logger.Info("Applying platform source resource",
		"apiVersion", obj.GetObjectKind().GroupVersionKind().GroupVersion().String(),
		"kind", obj.GetObjectKind().GroupVersionKind().Kind,
		"name", obj.GetName(),
		"namespace", obj.GetNamespace(),
	)

	existing := obj.DeepCopyObject().(client.Object)
	key := client.ObjectKeyFromObject(obj)

	err = k8sClient.Get(ctx, key, existing)
	if err != nil {
		if client.IgnoreNotFound(err) == nil {
			// The resource doesn't exist, so create it
			if err := k8sClient.Create(ctx, obj); err != nil {
				return fmt.Errorf("failed to create resource %s/%s: %w", obj.GetObjectKind().GroupVersionKind().Kind, obj.GetName(), err)
			}
			logger.Info("Created platform source resource", "kind", obj.GetObjectKind().GroupVersionKind().Kind, "name", obj.GetName())
		} else {
			return fmt.Errorf("failed to check if resource exists: %w", err)
		}
	} else {
		// The resource exists, so update it
		obj.SetResourceVersion(existing.GetResourceVersion())
		if err := k8sClient.Update(ctx, obj); err != nil {
			return fmt.Errorf("failed to update resource %s/%s: %w", obj.GetObjectKind().GroupVersionKind().Kind, obj.GetName(), err)
		}
		logger.Info("Updated platform source resource", "kind", obj.GetObjectKind().GroupVersionKind().Kind, "name", obj.GetName())
	}

	return nil
}

// parsePlatformSourceURL parses the source URL and returns the source type and repository URL.
// Supported formats:
//   - oci://registry.example.com/repo
//   - https://github.com/user/repo
//   - http://github.com/user/repo
//   - ssh://git@github.com/user/repo
func parsePlatformSourceURL(sourceURL string) (sourceType, repoURL string, err error) {
	sourceURL = strings.TrimSpace(sourceURL)

	if strings.HasPrefix(sourceURL, "oci://") {
		return "oci", sourceURL, nil
	}

	if strings.HasPrefix(sourceURL, "https://") || strings.HasPrefix(sourceURL, "http://") || strings.HasPrefix(sourceURL, "ssh://") {
		return "git", sourceURL, nil
	}

	return "", "", fmt.Errorf("unsupported source URL scheme (expected oci://, https://, http://, or ssh://): %s", sourceURL)
}
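For illustration, here is how parsePlatformSourceURL classifies a few sample inputs (a throwaway sketch in the same package; the URLs are the ones exercised by the tests further down):

	func exampleClassifySources() {
		for _, u := range []string{
			"oci://ghcr.io/cozystack/cozystack/cozystack-packages", // -> "oci"
			"https://github.com/cozystack/cozystack",               // -> "git"
			"ftp://example.com/repo",                               // -> error
		} {
			sourceType, repoURL, err := parsePlatformSourceURL(u)
			fmt.Println(sourceType, repoURL, err)
		}
	}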
// parseRefSpec parses a reference specification string in the format "key1=value1,key2=value2".
// It returns a map of key-value pairs.
func parseRefSpec(refSpec string) (map[string]string, error) {
	result := make(map[string]string)

	refSpec = strings.TrimSpace(refSpec)
	if refSpec == "" {
		return result, nil
	}

	pairs := strings.Split(refSpec, ",")
	for _, pair := range pairs {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue
		}

		// Split on the first '=' only, to allow '=' in values (e.g. digest=sha256:...)
		idx := strings.Index(pair, "=")
		if idx == -1 {
			return nil, fmt.Errorf("invalid reference specification format: %q (expected key=value)", pair)
		}

		key := strings.TrimSpace(pair[:idx])
		value := strings.TrimSpace(pair[idx+1:])

		if key == "" {
			return nil, fmt.Errorf("empty key in reference specification: %q", pair)
		}
		if value == "" {
			return nil, fmt.Errorf("empty value for key %q in reference specification", key)
		}

		result[key] = value
	}

	return result, nil
}
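Splitting on the first '=' only is what lets digest values carry their own colons and equals signs. A quick sketch of the resulting behavior, with a hypothetical input:

	func exampleParseRefSpec() {
		refMap, err := parseRefSpec("digest=sha256:abc123,tag=v1.0")
		// refMap == map[string]string{"digest": "sha256:abc123", "tag": "v1.0"}, err == nil.
		// By contrast, "tag" (no '='), "=v1.0" (empty key), and "tag=" (empty value)
		// each return an error rather than being silently dropped.
		fmt.Println(refMap, err)
	}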
// Valid reference keys for OCI repositories
var validOCIRefKeys = map[string]bool{
	"digest":       true,
	"semver":       true,
	"semverFilter": true,
	"tag":          true,
}

// Valid reference keys for Git repositories
var validGitRefKeys = map[string]bool{
	"branch": true,
	"tag":    true,
	"semver": true,
	"name":   true,
	"commit": true,
}

// validateOCIRef validates reference keys for OCI repositories
func validateOCIRef(refMap map[string]string) error {
	for key := range refMap {
		if !validOCIRefKeys[key] {
			return fmt.Errorf("invalid OCI reference key %q (valid keys: digest, semver, semverFilter, tag)", key)
		}
	}

	// Validate the digest format if provided
	if digest, ok := refMap["digest"]; ok {
		if !strings.HasPrefix(digest, "sha256:") {
			return fmt.Errorf("digest must be in format 'sha256:<hash>', got: %s", digest)
		}
	}

	return nil
}

// validateGitRef validates reference keys for Git repositories
func validateGitRef(refMap map[string]string) error {
	for key := range refMap {
		if !validGitRefKeys[key] {
			return fmt.Errorf("invalid Git reference key %q (valid keys: branch, tag, semver, name, commit)", key)
		}
	}

	// Validate the commit format if provided (it should be a hex string)
	if commit, ok := refMap["commit"]; ok {
		if len(commit) < 7 {
			return fmt.Errorf("commit SHA should be at least 7 characters, got: %s", commit)
		}
		for _, c := range commit {
			if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
				return fmt.Errorf("commit SHA should be a hexadecimal string, got: %s", commit)
			}
		}
	}

	return nil
}
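As an aside, the length and hex checks in validateGitRef could be collapsed into a single regular expression. A minimal, behavior-equivalent sketch (isValidCommitSHA is a hypothetical helper, not part of the file above, and would need a regexp import):

	// commitSHARe accepts 7 or more hex characters, matching the manual
	// length and character checks above.
	var commitSHARe = regexp.MustCompile(`^[0-9a-fA-F]{7,}$`)

	func isValidCommitSHA(commit string) bool {
		return commitSHARe.MatchString(commit)
	}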
// generateOCIRepository creates an OCIRepository resource
func generateOCIRepository(name, repoURL string, refMap map[string]string) (*sourcev1.OCIRepository, error) {
	if err := validateOCIRef(refMap); err != nil {
		return nil, err
	}

	obj := &sourcev1.OCIRepository{
		TypeMeta: metav1.TypeMeta{
			APIVersion: sourcev1.GroupVersion.String(),
			Kind:       sourcev1.OCIRepositoryKind,
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "cozy-system",
		},
		Spec: sourcev1.OCIRepositorySpec{
			URL:      repoURL,
			Interval: metav1.Duration{Duration: 5 * time.Minute},
		},
	}

	// Set the reference if any ref options are provided
	if len(refMap) > 0 {
		obj.Spec.Reference = &sourcev1.OCIRepositoryRef{
			Digest:       refMap["digest"],
			SemVer:       refMap["semver"],
			SemverFilter: refMap["semverFilter"],
			Tag:          refMap["tag"],
		}
	}

	return obj, nil
}

// generateGitRepository creates a GitRepository resource
func generateGitRepository(name, repoURL string, refMap map[string]string) (*sourcev1.GitRepository, error) {
	if err := validateGitRef(refMap); err != nil {
		return nil, err
	}

	obj := &sourcev1.GitRepository{
		TypeMeta: metav1.TypeMeta{
			APIVersion: sourcev1.GroupVersion.String(),
			Kind:       sourcev1.GitRepositoryKind,
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "cozy-system",
		},
		Spec: sourcev1.GitRepositorySpec{
			URL:      repoURL,
			Interval: metav1.Duration{Duration: 5 * time.Minute},
		},
	}

	// Set the reference if any ref options are provided
	if len(refMap) > 0 {
		obj.Spec.Reference = &sourcev1.GitRepositoryRef{
			Branch: refMap["branch"],
			Tag:    refMap["tag"],
			SemVer: refMap["semver"],
			Name:   refMap["name"],
			Commit: refMap["commit"],
		}
	}

	return obj, nil
}
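A short usage sketch of the generator above (hypothetical values; the resulting object is what installPlatformSourceResource applies to the cluster):

	func exampleGenerateGitRepository() {
		repo, err := generateGitRepository(
			"cozystack-platform",
			"https://github.com/cozystack/cozystack",
			map[string]string{"branch": "main"},
		)
		// On success: repo.Namespace == "cozy-system", a 5m poll interval,
		// and repo.Spec.Reference.Branch == "main". err is non-nil only when
		// the ref map contains keys outside validGitRefKeys.
		fmt.Println(repo, err)
	}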
// installPlatformPackageSource creates the platform PackageSource resource
// that references the Flux source resource (OCIRepository or GitRepository).
//
// The variant list is intentionally hardcoded here. These are platform-defined
// deployment profiles (not user-extensible), matching what was previously in
// the Helm template. Changes require a new operator build and release.
func installPlatformPackageSource(ctx context.Context, k8sClient client.Client, platformSourceName, sourceRefKind string) error {
	logger := log.FromContext(ctx)

	packageSourceName := "cozystack." + platformSourceName

	ps := &cozyv1alpha1.PackageSource{
		TypeMeta: metav1.TypeMeta{
			APIVersion: cozyv1alpha1.GroupVersion.String(),
			Kind:       "PackageSource",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: packageSourceName,
			Annotations: map[string]string{
				"operator.cozystack.io/skip-cozystack-values": "true",
			},
		},
		Spec: cozyv1alpha1.PackageSourceSpec{
			SourceRef: &cozyv1alpha1.PackageSourceRef{
				Kind:      sourceRefKind,
				Name:      platformSourceName,
				Namespace: "cozy-system",
				Path:      "/",
			},
		},
	}

	variantData := []struct {
		name        string
		valuesFiles []string
	}{
		{"default", []string{"values.yaml"}},
		{"isp-full", []string{"values.yaml", "values-isp-full.yaml"}},
		{"isp-hosted", []string{"values.yaml", "values-isp-hosted.yaml"}},
		{"isp-full-generic", []string{"values.yaml", "values-isp-full-generic.yaml"}},
	}

	variants := make([]cozyv1alpha1.Variant, len(variantData))
	for i, v := range variantData {
		variants[i] = cozyv1alpha1.Variant{
			Name: v.name,
			Components: []cozyv1alpha1.Component{
				{
					Name: "platform",
					Path: "core/platform",
					Install: &cozyv1alpha1.ComponentInstall{
						Namespace:   "cozy-system",
						ReleaseName: "cozystack-platform",
					},
					ValuesFiles: v.valuesFiles,
				},
			},
		}
	}
	ps.Spec.Variants = variants

	logger.Info("Applying platform PackageSource", "name", packageSourceName)

	patchOptions := &client.PatchOptions{
		FieldManager: "cozystack-operator",
		Force:        func() *bool { b := true; return &b }(),
	}

	if err := k8sClient.Patch(ctx, ps, client.Apply, patchOptions); err != nil {
		return fmt.Errorf("failed to apply PackageSource %s: %w", packageSourceName, err)
	}

	logger.Info("Applied platform PackageSource", "name", packageSourceName)
	return nil
}
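The Patch call above uses server-side apply with a forced field manager: the operator owns exactly the fields it sets under the "cozystack-operator" manager, while fields set by other actors (such as the custom label in the update test below) are left alone. A condensed sketch of the same pattern, assuming ptr.To from k8s.io/utils/ptr in place of the inline bool-pointer closure:

	// Server-side apply: create-or-update in one request, with per-field ownership.
	patchOptions := &client.PatchOptions{
		FieldManager: "cozystack-operator",
		Force:        ptr.To(true), // take over conflicting fields we set ourselves
	}
	err := k8sClient.Patch(ctx, obj, client.Apply, patchOptions)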
574
cmd/cozystack-operator/main_test.go
Normal file
@@ -0,0 +1,574 @@
/*
Copyright 2025 The Cozystack Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"context"
	"testing"

	cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func newTestScheme() *runtime.Scheme {
	s := runtime.NewScheme()
	_ = cozyv1alpha1.AddToScheme(s)
	return s
}

func TestInstallPlatformPackageSource_Creates(t *testing.T) {
	s := newTestScheme()
	k8sClient := fake.NewClientBuilder().WithScheme(s).Build()

	err := installPlatformPackageSource(context.Background(), k8sClient, "cozystack-platform", "OCIRepository")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	ps := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(context.Background(), client.ObjectKey{Name: "cozystack.cozystack-platform"}, ps); err != nil {
		t.Fatalf("PackageSource not found: %v", err)
	}

	// Verify the name
	if ps.Name != "cozystack.cozystack-platform" {
		t.Errorf("expected name %q, got %q", "cozystack.cozystack-platform", ps.Name)
	}

	// Verify the annotation
	if ps.Annotations["operator.cozystack.io/skip-cozystack-values"] != "true" {
		t.Errorf("expected skip-cozystack-values annotation to be 'true', got %q", ps.Annotations["operator.cozystack.io/skip-cozystack-values"])
	}

	// Verify the sourceRef
	if ps.Spec.SourceRef == nil {
		t.Fatal("expected SourceRef to be set")
	}
	if ps.Spec.SourceRef.Kind != "OCIRepository" {
		t.Errorf("expected sourceRef.kind %q, got %q", "OCIRepository", ps.Spec.SourceRef.Kind)
	}
	if ps.Spec.SourceRef.Name != "cozystack-platform" {
		t.Errorf("expected sourceRef.name %q, got %q", "cozystack-platform", ps.Spec.SourceRef.Name)
	}
	if ps.Spec.SourceRef.Namespace != "cozy-system" {
		t.Errorf("expected sourceRef.namespace %q, got %q", "cozy-system", ps.Spec.SourceRef.Namespace)
	}
	if ps.Spec.SourceRef.Path != "/" {
		t.Errorf("expected sourceRef.path %q, got %q", "/", ps.Spec.SourceRef.Path)
	}

	// Verify the variants
	expectedVariants := []string{"default", "isp-full", "isp-hosted", "isp-full-generic"}
	if len(ps.Spec.Variants) != len(expectedVariants) {
		t.Fatalf("expected %d variants, got %d", len(expectedVariants), len(ps.Spec.Variants))
	}
	for i, name := range expectedVariants {
		if ps.Spec.Variants[i].Name != name {
			t.Errorf("expected variant[%d].name %q, got %q", i, name, ps.Spec.Variants[i].Name)
		}
		if len(ps.Spec.Variants[i].Components) != 1 {
			t.Errorf("expected variant[%d] to have 1 component, got %d", i, len(ps.Spec.Variants[i].Components))
		}
	}
}

func TestInstallPlatformPackageSource_Updates(t *testing.T) {
	s := newTestScheme()

	existing := &cozyv1alpha1.PackageSource{
		ObjectMeta: metav1.ObjectMeta{
			Name:            "cozystack.cozystack-platform",
			ResourceVersion: "1",
			Labels: map[string]string{
				"custom-label": "should-be-preserved",
			},
		},
		Spec: cozyv1alpha1.PackageSourceSpec{
			SourceRef: &cozyv1alpha1.PackageSourceRef{
				Kind:      "OCIRepository",
				Name:      "old-name",
				Namespace: "cozy-system",
			},
		},
	}

	k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(existing).Build()

	err := installPlatformPackageSource(context.Background(), k8sClient, "cozystack-platform", "OCIRepository")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	ps := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(context.Background(), client.ObjectKey{Name: "cozystack.cozystack-platform"}, ps); err != nil {
		t.Fatalf("PackageSource not found: %v", err)
	}

	// Verify the sourceRef was updated
	if ps.Spec.SourceRef.Name != "cozystack-platform" {
		t.Errorf("expected updated sourceRef.name %q, got %q", "cozystack-platform", ps.Spec.SourceRef.Name)
	}

	// Verify all 4 variants are present after the update
	if len(ps.Spec.Variants) != 4 {
		t.Errorf("expected 4 variants after update, got %d", len(ps.Spec.Variants))
	}

	// Verify that labels set by other controllers are preserved (SSA does not overwrite unmanaged fields)
	if ps.Labels["custom-label"] != "should-be-preserved" {
		t.Errorf("expected custom-label to be preserved, got %q", ps.Labels["custom-label"])
	}
}

func TestParsePlatformSourceURL(t *testing.T) {
	tests := []struct {
		name     string
		url      string
		wantType string
		wantURL  string
		wantErr  bool
	}{
		{
			name:     "OCI URL",
			url:      "oci://ghcr.io/cozystack/cozystack/cozystack-packages",
			wantType: "oci",
			wantURL:  "oci://ghcr.io/cozystack/cozystack/cozystack-packages",
		},
		{
			name:     "HTTPS URL",
			url:      "https://github.com/cozystack/cozystack",
			wantType: "git",
			wantURL:  "https://github.com/cozystack/cozystack",
		},
		{
			name:     "SSH URL",
			url:      "ssh://git@github.com/cozystack/cozystack",
			wantType: "git",
			wantURL:  "ssh://git@github.com/cozystack/cozystack",
		},
		{
			name:    "empty URL",
			url:     "",
			wantErr: true,
		},
		{
			name:    "unsupported scheme",
			url:     "ftp://example.com/repo",
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			sourceType, repoURL, err := parsePlatformSourceURL(tt.url)
			if tt.wantErr {
				if err == nil {
					t.Fatalf("expected error for URL %q, got nil", tt.url)
				}
				return
			}
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if sourceType != tt.wantType {
				t.Errorf("expected type %q, got %q", tt.wantType, sourceType)
			}
			if repoURL != tt.wantURL {
				t.Errorf("expected URL %q, got %q", tt.wantURL, repoURL)
			}
		})
	}
}

func TestInstallPlatformPackageSource_VariantValuesFiles(t *testing.T) {
	s := newTestScheme()
	k8sClient := fake.NewClientBuilder().WithScheme(s).Build()

	err := installPlatformPackageSource(context.Background(), k8sClient, "cozystack-platform", "OCIRepository")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	ps := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(context.Background(), client.ObjectKey{Name: "cozystack.cozystack-platform"}, ps); err != nil {
		t.Fatalf("PackageSource not found: %v", err)
	}

	expectedValuesFiles := map[string][]string{
		"default":          {"values.yaml"},
		"isp-full":         {"values.yaml", "values-isp-full.yaml"},
		"isp-hosted":       {"values.yaml", "values-isp-hosted.yaml"},
		"isp-full-generic": {"values.yaml", "values-isp-full-generic.yaml"},
	}

	for _, v := range ps.Spec.Variants {
		expected, ok := expectedValuesFiles[v.Name]
		if !ok {
			t.Errorf("unexpected variant %q", v.Name)
			continue
		}

		if len(v.Components) != 1 {
			t.Errorf("variant %q: expected 1 component, got %d", v.Name, len(v.Components))
			continue
		}

		comp := v.Components[0]
		if comp.Name != "platform" {
			t.Errorf("variant %q: expected component name %q, got %q", v.Name, "platform", comp.Name)
		}
		if comp.Path != "core/platform" {
			t.Errorf("variant %q: expected component path %q, got %q", v.Name, "core/platform", comp.Path)
		}
		if comp.Install == nil {
			t.Errorf("variant %q: expected Install to be set", v.Name)
		} else {
			if comp.Install.Namespace != "cozy-system" {
				t.Errorf("variant %q: expected install namespace %q, got %q", v.Name, "cozy-system", comp.Install.Namespace)
			}
			if comp.Install.ReleaseName != "cozystack-platform" {
				t.Errorf("variant %q: expected install releaseName %q, got %q", v.Name, "cozystack-platform", comp.Install.ReleaseName)
			}
		}

		if len(comp.ValuesFiles) != len(expected) {
			t.Errorf("variant %q: expected %d valuesFiles, got %d", v.Name, len(expected), len(comp.ValuesFiles))
			continue
		}
		for i, f := range expected {
			if comp.ValuesFiles[i] != f {
				t.Errorf("variant %q: expected valuesFiles[%d] %q, got %q", v.Name, i, f, comp.ValuesFiles[i])
			}
		}
	}
}

func TestInstallPlatformPackageSource_CustomName(t *testing.T) {
	s := newTestScheme()
	k8sClient := fake.NewClientBuilder().WithScheme(s).Build()

	err := installPlatformPackageSource(context.Background(), k8sClient, "custom-source", "OCIRepository")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	ps := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(context.Background(), client.ObjectKey{Name: "cozystack.custom-source"}, ps); err != nil {
		t.Fatalf("PackageSource not found: %v", err)
	}

	if ps.Name != "cozystack.custom-source" {
		t.Errorf("expected name %q, got %q", "cozystack.custom-source", ps.Name)
	}
	if ps.Spec.SourceRef.Name != "custom-source" {
		t.Errorf("expected sourceRef.name %q, got %q", "custom-source", ps.Spec.SourceRef.Name)
	}
}

func TestInstallPlatformPackageSource_GitRepository(t *testing.T) {
	s := newTestScheme()
	k8sClient := fake.NewClientBuilder().WithScheme(s).Build()

	err := installPlatformPackageSource(context.Background(), k8sClient, "my-source", "GitRepository")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	ps := &cozyv1alpha1.PackageSource{}
	if err := k8sClient.Get(context.Background(), client.ObjectKey{Name: "cozystack.my-source"}, ps); err != nil {
		t.Fatalf("PackageSource not found: %v", err)
	}

	if ps.Spec.SourceRef.Kind != "GitRepository" {
		t.Errorf("expected sourceRef.kind %q, got %q", "GitRepository", ps.Spec.SourceRef.Kind)
	}
	if ps.Spec.SourceRef.Name != "my-source" {
		t.Errorf("expected sourceRef.name %q, got %q", "my-source", ps.Spec.SourceRef.Name)
	}
}

func TestParseRefSpec(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		want    map[string]string
		wantErr bool
	}{
		{
			name:  "empty string",
			input: "",
			want:  map[string]string{},
		},
		{
			name:  "single key-value",
			input: "tag=v1.0",
			want:  map[string]string{"tag": "v1.0"},
		},
		{
			name:  "multiple key-values",
			input: "digest=sha256:abc123,tag=v1.0",
			want:  map[string]string{"digest": "sha256:abc123", "tag": "v1.0"},
		},
		{
			name:  "whitespace around pairs",
			input: " tag=v1.0 , branch=main ",
			want:  map[string]string{"tag": "v1.0", "branch": "main"},
		},
		{
			name:  "equals sign in value",
			input: "digest=sha256:abc=123",
			want:  map[string]string{"digest": "sha256:abc=123"},
		},
		{
			name:    "missing equals sign",
			input:   "tag",
			wantErr: true,
		},
		{
			name:    "empty key",
			input:   "=value",
			wantErr: true,
		},
		{
			name:    "empty value",
			input:   "tag=",
			wantErr: true,
		},
		{
			name:  "trailing comma",
			input: "tag=v1.0,",
			want:  map[string]string{"tag": "v1.0"},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := parseRefSpec(tt.input)
			if tt.wantErr {
				if err == nil {
					t.Fatalf("expected error for input %q, got nil", tt.input)
				}
				return
			}
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if len(got) != len(tt.want) {
				t.Fatalf("expected %d entries, got %d: %v", len(tt.want), len(got), got)
			}
			for k, v := range tt.want {
				if got[k] != v {
					t.Errorf("expected %q=%q, got %q=%q", k, v, k, got[k])
				}
			}
		})
	}
}

func TestValidateOCIRef(t *testing.T) {
	tests := []struct {
		name    string
		refMap  map[string]string
		wantErr bool
	}{
		{
			name:   "valid tag",
			refMap: map[string]string{"tag": "v1.0"},
		},
		{
			name:   "valid digest",
			refMap: map[string]string{"digest": "sha256:abc123def456"},
		},
		{
			name:   "valid semver",
			refMap: map[string]string{"semver": ">=1.0.0"},
		},
		{
			name:   "multiple valid keys",
			refMap: map[string]string{"tag": "v1.0", "digest": "sha256:abc"},
		},
		{
			name:   "empty map",
			refMap: map[string]string{},
		},
		{
			name:    "invalid key",
			refMap:  map[string]string{"branch": "main"},
			wantErr: true,
		},
		{
			name:    "invalid digest format",
			refMap:  map[string]string{"digest": "md5:abc"},
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := validateOCIRef(tt.refMap)
			if tt.wantErr && err == nil {
				t.Fatal("expected error, got nil")
			}
			if !tt.wantErr && err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
		})
	}
}

func TestValidateGitRef(t *testing.T) {
	tests := []struct {
		name    string
		refMap  map[string]string
		wantErr bool
	}{
		{
			name:   "valid branch",
			refMap: map[string]string{"branch": "main"},
		},
		{
			name:   "valid commit",
			refMap: map[string]string{"commit": "abc1234"},
		},
		{
			name:   "valid tag and branch",
			refMap: map[string]string{"tag": "v1.0", "branch": "release"},
		},
		{
			name:   "empty map",
			refMap: map[string]string{},
		},
		{
			name:    "invalid key",
			refMap:  map[string]string{"digest": "sha256:abc"},
			wantErr: true,
		},
		{
			name:    "commit too short",
			refMap:  map[string]string{"commit": "abc"},
			wantErr: true,
		},
		{
			name:    "commit not hex",
			refMap:  map[string]string{"commit": "zzzzzzz"},
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := validateGitRef(tt.refMap)
			if tt.wantErr && err == nil {
				t.Fatal("expected error, got nil")
			}
			if !tt.wantErr && err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
		})
	}
}

func TestGenerateOCIRepository(t *testing.T) {
	refMap := map[string]string{"tag": "v1.0", "digest": "sha256:abc123"}
	obj, err := generateOCIRepository("my-repo", "oci://registry.example.com/repo", refMap)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	if obj.Name != "my-repo" {
		t.Errorf("expected name %q, got %q", "my-repo", obj.Name)
	}
	if obj.Namespace != "cozy-system" {
		t.Errorf("expected namespace %q, got %q", "cozy-system", obj.Namespace)
	}
	if obj.Spec.URL != "oci://registry.example.com/repo" {
		t.Errorf("expected URL %q, got %q", "oci://registry.example.com/repo", obj.Spec.URL)
	}
	if obj.Spec.Reference == nil {
		t.Fatal("expected Reference to be set")
	}
	if obj.Spec.Reference.Tag != "v1.0" {
		t.Errorf("expected tag %q, got %q", "v1.0", obj.Spec.Reference.Tag)
	}
	if obj.Spec.Reference.Digest != "sha256:abc123" {
		t.Errorf("expected digest %q, got %q", "sha256:abc123", obj.Spec.Reference.Digest)
	}
}

func TestGenerateOCIRepository_NoRef(t *testing.T) {
	obj, err := generateOCIRepository("my-repo", "oci://registry.example.com/repo", map[string]string{})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if obj.Spec.Reference != nil {
		t.Error("expected Reference to be nil for empty refMap")
	}
}

func TestGenerateOCIRepository_InvalidRef(t *testing.T) {
	_, err := generateOCIRepository("my-repo", "oci://registry.example.com/repo", map[string]string{"branch": "main"})
	if err == nil {
		t.Fatal("expected error for invalid OCI ref key, got nil")
	}
}

func TestGenerateGitRepository(t *testing.T) {
	refMap := map[string]string{"branch": "main", "commit": "abc1234def5678"}
	obj, err := generateGitRepository("my-repo", "https://github.com/user/repo", refMap)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	if obj.Name != "my-repo" {
		t.Errorf("expected name %q, got %q", "my-repo", obj.Name)
	}
	if obj.Namespace != "cozy-system" {
		t.Errorf("expected namespace %q, got %q", "cozy-system", obj.Namespace)
	}
	if obj.Spec.URL != "https://github.com/user/repo" {
		t.Errorf("expected URL %q, got %q", "https://github.com/user/repo", obj.Spec.URL)
	}
	if obj.Spec.Reference == nil {
		t.Fatal("expected Reference to be set")
	}
	if obj.Spec.Reference.Branch != "main" {
		t.Errorf("expected branch %q, got %q", "main", obj.Spec.Reference.Branch)
	}
	if obj.Spec.Reference.Commit != "abc1234def5678" {
		t.Errorf("expected commit %q, got %q", "abc1234def5678", obj.Spec.Reference.Commit)
	}
}

func TestGenerateGitRepository_NoRef(t *testing.T) {
	obj, err := generateGitRepository("my-repo", "https://github.com/user/repo", map[string]string{})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if obj.Spec.Reference != nil {
		t.Error("expected Reference to be nil for empty refMap")
	}
}

func TestGenerateGitRepository_InvalidRef(t *testing.T) {
	_, err := generateGitRepository("my-repo", "https://github.com/user/repo", map[string]string{"digest": "sha256:abc"})
	if err == nil {
		t.Fatal("expected error for invalid Git ref key, got nil")
	}
}
151
cmd/flux-plunger/main.go
Normal file
@@ -0,0 +1,151 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
    "crypto/tls"
    "flag"
    "os"

    // Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
    // to ensure that exec-entrypoint and run can make use of them.
    _ "k8s.io/client-go/plugin/pkg/client/auth"

    helmv2 "github.com/fluxcd/helm-controller/api/v2"
    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/healthz"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
    metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"

    "github.com/cozystack/cozystack/internal/controller/fluxplunger"
    // +kubebuilder:scaffold:imports
)

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))
    utilruntime.Must(helmv2.AddToScheme(scheme))

    // +kubebuilder:scaffold:scheme
}

func main() {
    var metricsAddr string
    var enableLeaderElection bool
    var probeAddr string
    var secureMetrics bool
    var enableHTTP2 bool
    var tlsOpts []func(*tls.Config)

    flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
        "Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
    flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
    flag.BoolVar(&enableLeaderElection, "leader-elect", false,
        "Enable leader election for controller manager. "+
            "Enabling this will ensure there is only one active controller manager.")
    flag.BoolVar(&secureMetrics, "metrics-secure", true,
        "If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
    flag.BoolVar(&enableHTTP2, "enable-http2", false,
        "If set, HTTP/2 will be enabled for the metrics server")

    opts := zap.Options{
        Development: false,
    }
    opts.BindFlags(flag.CommandLine)
    flag.Parse()

    ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

    // If the enable-http2 flag is false (the default), HTTP/2 should be disabled
    // due to its vulnerabilities. More specifically, disabling HTTP/2 prevents
    // exposure to the HTTP/2 Stream Cancellation and Rapid Reset CVEs.
    // For more information see:
    // - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
    // - https://github.com/advisories/GHSA-4374-p667-p6c8
    disableHTTP2 := func(c *tls.Config) {
        setupLog.Info("disabling http/2")
        c.NextProtos = []string{"http/1.1"}
    }

    if !enableHTTP2 {
        tlsOpts = append(tlsOpts, disableHTTP2)
    }

    // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
    // More info:
    // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
    // - https://book.kubebuilder.io/reference/metrics.html
    metricsServerOptions := metricsserver.Options{
        BindAddress:   metricsAddr,
        SecureServing: secureMetrics,
        TLSOpts:       tlsOpts,
    }

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Scheme:                 scheme,
        Metrics:                metricsServerOptions,
        HealthProbeBindAddress: probeAddr,
        LeaderElection:         enableLeaderElection,
        LeaderElectionID:       "flux-plunger.cozystack.io",
        // LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
        // when the Manager ends. This requires the binary to end immediately when the
        // Manager is stopped; otherwise this setting is unsafe. Setting it significantly
        // speeds up voluntary leader transitions, as the new leader does not have to wait
        // the LeaseDuration time first.
        //
        // In the default scaffold provided, the program ends immediately after
        // the manager stops, so it would be fine to enable this option. However,
        // if you are doing, or intend to do, any operation such as performing cleanups
        // after the manager stops, then its usage might be unsafe.
        // LeaderElectionReleaseOnCancel: true,
    })
    if err != nil {
        setupLog.Error(err, "unable to create manager")
        os.Exit(1)
    }

    if err = (&fluxplunger.FluxPlunger{
        Client: mgr.GetClient(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "FluxPlunger")
        os.Exit(1)
    }

    // +kubebuilder:scaffold:builder

    if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up health check")
        os.Exit(1)
    }
    if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up ready check")
        os.Exit(1)
    }

    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}
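main() above only wires the FluxPlunger controller into a standard controller-runtime manager; the controller itself (internal/controller/fluxplunger) is not part of this hunk. A minimal skeleton of what SetupWithManager and Reconcile could look like, assuming the controller watches HelmRelease objects (suggested by the helmv2 scheme registration above) — the watch target and reconcile logic are assumptions:

// Hypothetical skeleton; the real fluxplunger package is not shown in this diff.
package fluxplunger

import (
    "context"

    helmv2 "github.com/fluxcd/helm-controller/api/v2"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// FluxPlunger reconciles HelmRelease objects (assumed watch target).
type FluxPlunger struct {
    client.Client
}

func (r *FluxPlunger) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var hr helmv2.HelmRelease
    if err := r.Get(ctx, req.NamespacedName, &hr); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // ... inspect hr.Status here and nudge stuck releases (the "plunging") ...
    return ctrl.Result{}, nil
}

func (r *FluxPlunger) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&helmv2.HelmRelease{}).
        Complete(r)
}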
cmd/kubeovn-plunger/main.go (new file, 176 lines)
@@ -0,0 +1,176 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
    "crypto/tls"
    "flag"
    "os"

    // Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
    // to ensure that exec-entrypoint and run can make use of them.
    _ "k8s.io/client-go/plugin/pkg/client/auth"

    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/healthz"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
    "sigs.k8s.io/controller-runtime/pkg/metrics"
    "sigs.k8s.io/controller-runtime/pkg/metrics/filters"
    metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
    "sigs.k8s.io/controller-runtime/pkg/webhook"

    "github.com/cozystack/cozystack/internal/controller/kubeovnplunger"
    // +kubebuilder:scaffold:imports
)

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))

    // +kubebuilder:scaffold:scheme
}

func main() {
    var metricsAddr string
    var enableLeaderElection bool
    var probeAddr string
    var kubeOVNNamespace string
    var ovnCentralName string
    var secureMetrics bool
    var enableHTTP2 bool
    var disableTelemetry bool
    var tlsOpts []func(*tls.Config)
    flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
        "Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
    flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
    flag.StringVar(&kubeOVNNamespace, "kube-ovn-namespace", "cozy-kubeovn", "Namespace where Kube-OVN is deployed.")
    flag.StringVar(&ovnCentralName, "ovn-central-name", "ovn-central", "Name of the ovn-central Deployment.")
    flag.BoolVar(&enableLeaderElection, "leader-elect", false,
        "Enable leader election for controller manager. "+
            "Enabling this will ensure there is only one active controller manager.")
    flag.BoolVar(&secureMetrics, "metrics-secure", true,
        "If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
    flag.BoolVar(&enableHTTP2, "enable-http2", false,
        "If set, HTTP/2 will be enabled for the metrics and webhook servers")
    flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
        "Disable telemetry collection")
    opts := zap.Options{
        Development: false,
    }
    opts.BindFlags(flag.CommandLine)
    flag.Parse()

    ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

    // If the enable-http2 flag is false (the default), HTTP/2 should be disabled
    // due to its vulnerabilities. More specifically, disabling HTTP/2 prevents
    // exposure to the HTTP/2 Stream Cancellation and Rapid Reset CVEs.
    // For more information see:
    // - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
    // - https://github.com/advisories/GHSA-4374-p667-p6c8
    disableHTTP2 := func(c *tls.Config) {
        setupLog.Info("disabling http/2")
        c.NextProtos = []string{"http/1.1"}
    }

    if !enableHTTP2 {
        tlsOpts = append(tlsOpts, disableHTTP2)
    }

    webhookServer := webhook.NewServer(webhook.Options{
        TLSOpts: tlsOpts,
    })

    // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
    // More info:
    // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
    // - https://book.kubebuilder.io/reference/metrics.html
    metricsServerOptions := metricsserver.Options{
        BindAddress:   metricsAddr,
        SecureServing: secureMetrics,
        TLSOpts:       tlsOpts,
    }

    if secureMetrics {
        // FilterProvider is used to protect the metrics endpoint with authn/authz.
        // These configurations ensure that only authorized users and service accounts
        // can access the metrics endpoint. The RBAC is configured in 'config/rbac/kustomization.yaml'. More info:
        // https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
        metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

        // TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
        // generate self-signed certificates for the metrics server. While convenient for development and testing,
        // this setup is not recommended for production.
    }

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Scheme:                 scheme,
        Metrics:                metricsServerOptions,
        WebhookServer:          webhookServer,
        HealthProbeBindAddress: probeAddr,
        LeaderElection:         enableLeaderElection,
        LeaderElectionID:       "29a0338b.cozystack.io",
        // LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
        // when the Manager ends. This requires the binary to end immediately when the
        // Manager is stopped; otherwise this setting is unsafe. Setting it significantly
        // speeds up voluntary leader transitions, as the new leader does not have to wait
        // the LeaseDuration time first.
        //
        // In the default scaffold provided, the program ends immediately after
        // the manager stops, so it would be fine to enable this option. However,
        // if you are doing, or intend to do, any operation such as performing cleanups
        // after the manager stops, then its usage might be unsafe.
        // LeaderElectionReleaseOnCancel: true,
    })
    if err != nil {
        setupLog.Error(err, "unable to create manager")
        os.Exit(1)
    }

    if err = (&kubeovnplunger.KubeOVNPlunger{
        Client:   mgr.GetClient(),
        Scheme:   mgr.GetScheme(),
        Registry: metrics.Registry,
    }).SetupWithManager(mgr, kubeOVNNamespace, ovnCentralName); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "KubeOVNPlunger")
        os.Exit(1)
    }

    // +kubebuilder:scaffold:builder

    if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up health check")
        os.Exit(1)
    }
    if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up ready check")
        os.Exit(1)
    }

    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}
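Unlike the flux-plunger, KubeOVNPlunger is handed controller-runtime's global metrics.Registry, presumably so it can export its own collectors alongside the manager's built-in metrics. A minimal sketch of that pattern; the gauge name, help text, and purpose below are invented for illustration, not taken from this diff:

// Hypothetical illustration of exporting a custom metric through
// controller-runtime's shared registry.
package kubeovnplunger

import (
    "github.com/prometheus/client_golang/prometheus"
    "sigs.k8s.io/controller-runtime/pkg/metrics"
)

// ovnCentralReady is an assumed example metric, not one defined in this diff.
var ovnCentralReady = prometheus.NewGauge(prometheus.GaugeOpts{
    Name: "kubeovn_plunger_ovn_central_ready",
    Help: "1 when the ovn-central deployment is considered healthy.",
})

func init() {
    // metrics.Registry implements prometheus.Registerer, so custom collectors
    // are served from the same /metrics endpoint as the manager's own metrics.
    metrics.Registry.MustRegister(ovnCentralReady)
}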
cmd/lineage-controller-webhook/main.go (new file, 179 lines)
@@ -0,0 +1,179 @@
/*
Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
    "crypto/tls"
    "flag"
    "os"

    // Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
    // to ensure that exec-entrypoint and run can make use of them.
    _ "k8s.io/client-go/plugin/pkg/client/auth"

    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/healthz"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
    "sigs.k8s.io/controller-runtime/pkg/metrics/filters"
    metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
    "sigs.k8s.io/controller-runtime/pkg/webhook"

    cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
    lcw "github.com/cozystack/cozystack/internal/lineagecontrollerwebhook"
    // +kubebuilder:scaffold:imports
)

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))

    utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
    // +kubebuilder:scaffold:scheme
}

func main() {
    var metricsAddr string
    var enableLeaderElection bool
    var probeAddr string
    var secureMetrics bool
    var enableHTTP2 bool
    var tlsOpts []func(*tls.Config)
    flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
        "Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
    flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
    flag.BoolVar(&enableLeaderElection, "leader-elect", false,
        "Enable leader election for controller manager. "+
            "Enabling this will ensure there is only one active controller manager.")
    flag.BoolVar(&secureMetrics, "metrics-secure", true,
        "If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
    flag.BoolVar(&enableHTTP2, "enable-http2", false,
        "If set, HTTP/2 will be enabled for the metrics and webhook servers")
    opts := zap.Options{
        Development: false,
    }
    opts.BindFlags(flag.CommandLine)
    flag.Parse()

    ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

    // If the enable-http2 flag is false (the default), HTTP/2 should be disabled
    // due to its vulnerabilities. More specifically, disabling HTTP/2 prevents
    // exposure to the HTTP/2 Stream Cancellation and Rapid Reset CVEs.
    // For more information see:
    // - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
    // - https://github.com/advisories/GHSA-4374-p667-p6c8
    disableHTTP2 := func(c *tls.Config) {
        setupLog.Info("disabling http/2")
        c.NextProtos = []string{"http/1.1"}
    }

    if !enableHTTP2 {
        tlsOpts = append(tlsOpts, disableHTTP2)
    }

    webhookServer := webhook.NewServer(webhook.Options{
        TLSOpts: tlsOpts,
    })

    // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
    // More info:
    // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
    // - https://book.kubebuilder.io/reference/metrics.html
    metricsServerOptions := metricsserver.Options{
        BindAddress:   metricsAddr,
        SecureServing: secureMetrics,
        TLSOpts:       tlsOpts,
    }

    if secureMetrics {
        // FilterProvider is used to protect the metrics endpoint with authn/authz.
        // These configurations ensure that only authorized users and service accounts
        // can access the metrics endpoint. The RBAC is configured in 'config/rbac/kustomization.yaml'. More info:
        // https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
        metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

        // TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
        // generate self-signed certificates for the metrics server. While convenient for development and testing,
        // this setup is not recommended for production.
    }

    // Configure rate limiting for the Kubernetes client
    config := ctrl.GetConfigOrDie()
    config.QPS = 50.0  // Increased from the default of 5.0
    config.Burst = 100 // Increased from the default of 10

    mgr, err := ctrl.NewManager(config, ctrl.Options{
        Scheme:                 scheme,
        Metrics:                metricsServerOptions,
        WebhookServer:          webhookServer,
        HealthProbeBindAddress: probeAddr,
        LeaderElection:         enableLeaderElection,
        LeaderElectionID:       "8796f12d.cozystack.io",
        // LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
        // when the Manager ends. This requires the binary to end immediately when the
        // Manager is stopped; otherwise this setting is unsafe. Setting it significantly
        // speeds up voluntary leader transitions, as the new leader does not have to wait
        // the LeaseDuration time first.
        //
        // In the default scaffold provided, the program ends immediately after
        // the manager stops, so it would be fine to enable this option. However,
        // if you are doing, or intend to do, any operation such as performing cleanups
        // after the manager stops, then its usage might be unsafe.
        // LeaderElectionReleaseOnCancel: true,
    })
    if err != nil {
        setupLog.Error(err, "unable to start manager")
        os.Exit(1)
    }

    lineageControllerWebhook := &lcw.LineageControllerWebhook{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }
    if err := lineageControllerWebhook.SetupWithManagerAsController(mgr); err != nil {
        setupLog.Error(err, "unable to setup controller", "controller", "LineageController")
        os.Exit(1)
    }
    if err := lineageControllerWebhook.SetupWithManagerAsWebhook(mgr); err != nil {
        setupLog.Error(err, "unable to setup webhook", "webhook", "LineageWebhook")
        os.Exit(1)
    }

    // +kubebuilder:scaffold:builder

    if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up health check")
        os.Exit(1)
    }
    if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up ready check")
        os.Exit(1)
    }

    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}
dashboards/clickhouse/altinity-clickhouse-operator-dashboard.json (new file, 5407 lines): diff suppressed because it is too large
dashboards/control-plane/kube-etcd.json (new file, 3611 lines): diff suppressed because it is too large
File diff suppressed because it is too large
dashboards/flux/flux-control-plane.json (new file, 1725 lines): diff suppressed because it is too large
dashboards/flux/flux-stats.json (new file, 1391 lines): diff suppressed because it is too large
dashboards/goldpinger/goldpinger.json (new file, 1219 lines): diff suppressed because it is too large
dashboards/hubble/dns-namespace.json (new file, 602 lines)
@@ -0,0 +1,602 @@
{
  "__inputs": [
    {
      "name": "DS_PROMETHEUS",
      "label": "Prometheus",
      "description": "",
      "type": "datasource",
      "pluginId": "prometheus",
      "pluginName": "Prometheus"
    }
  ],
  "__elements": {},
  "__requires": [
    {
      "type": "panel",
      "id": "bargauge",
      "name": "Bar gauge",
      "version": ""
    },
    {
      "type": "grafana",
      "id": "grafana",
      "name": "Grafana",
      "version": "9.4.7"
    },
    {
      "type": "datasource",
      "id": "prometheus",
      "name": "Prometheus",
      "version": "1.0.0"
    },
    {
      "type": "panel",
      "id": "timeseries",
      "name": "Time series",
      "version": ""
    }
  ],
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "datasource",
          "uid": "grafana"
        },
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "target": {
          "limit": 100,
          "matchAny": false,
          "tags": [],
          "type": "dashboard"
        },
        "type": "dashboard"
      }
    ]
  },
  "description": "",
  "editable": true,
  "fiscalYearStartMonth": 0,
  "gnetId": 16612,
  "graphTooltip": 0,
  "id": null,
  "links": [
    {
      "asDropdown": true,
      "icon": "external link",
      "includeVars": true,
      "keepTime": true,
      "tags": [
        "cilium-overview"
      ],
      "targetBlank": false,
      "title": "Cilium Overviews",
      "tooltip": "",
      "type": "dashboards",
      "url": ""
    },
    {
      "asDropdown": true,
      "icon": "external link",
      "includeVars": false,
      "keepTime": true,
      "tags": [
        "hubble"
      ],
      "targetBlank": false,
      "title": "Hubble",
      "tooltip": "",
      "type": "dashboards",
      "url": ""
    }
  ],
  "liveNow": false,
  "panels": [
    {
      "collapsed": false,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 2,
      "panels": [],
      "title": "DNS",
      "type": "row"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "${DS_PROMETHEUS}"
      },
      "description": "",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "normal"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "min": 0,
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unit": "reqps"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 12,
        "x": 0,
        "y": 1
      },
      "id": 37,
      "options": {
        "legend": {
          "calcs": [
            "mean",
            "lastNotNull"
          ],
          "displayMode": "table",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "${DS_PROMETHEUS}"
          },
          "editorMode": "code",
          "expr": "sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) > 0",
          "legendFormat": "{{source}}",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "DNS queries",
      "type": "timeseries"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "${DS_PROMETHEUS}"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "min": 0,
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              }
            ]
          },
          "unit": "reqps"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 12,
        "x": 12,
        "y": 1
      },
      "id": 41,
      "options": {
        "displayMode": "gradient",
        "minVizHeight": 10,
        "minVizWidth": 0,
        "orientation": "horizontal",
        "reduceOptions": {
          "calcs": [
            "lastNotNull"
          ],
          "fields": "",
          "values": false
        },
        "showUnfilled": true
      },
      "pluginVersion": "9.4.7",
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "${DS_PROMETHEUS}"
          },
          "editorMode": "code",
          "expr": "topk(10, sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])*60) by (query))",
          "legendFormat": "{{query}}",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Top 10 DNS queries",
      "type": "bargauge"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "${DS_PROMETHEUS}"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "normal"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "min": 0,
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unit": "reqps"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 12,
        "x": 0,
        "y": 10
      },
      "id": 39,
      "options": {
        "legend": {
          "calcs": [
            "mean",
            "lastNotNull"
          ],
          "displayMode": "table",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "${DS_PROMETHEUS}"
          },
          "editorMode": "code",
          "expr": "round(sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) - sum(label_replace(sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\"}[$__rate_interval])) by (destination), \"source\", \"$1\", \"destination\", \"(.*)\")) without (destination), 0.001) > 0",
          "legendFormat": "{{source}}",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Missing DNS responses",
      "type": "timeseries"
    },
    {
      "datasource": {
        "type": "prometheus",
        "uid": "${DS_PROMETHEUS}"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "normal"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "min": 0,
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unit": "reqps"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 9,
        "w": 12,
        "x": 12,
        "y": 10
      },
      "id": 43,
      "options": {
        "legend": {
          "calcs": [
            "mean",
            "lastNotNull"
          ],
          "displayMode": "table",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "${DS_PROMETHEUS}"
          },
          "editorMode": "code",
          "expr": "sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\", rcode!=\"No Error\"}[$__rate_interval])) by (destination, rcode) > 0",
          "legendFormat": "{{destination}}: {{rcode}}",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "DNS errors",
      "type": "timeseries"
    }
  ],
  "refresh": "",
  "revision": 1,
  "schemaVersion": 38,
  "style": "dark",
  "tags": [
    "kubecon-demo"
  ],
  "templating": {
    "list": [
      {
        "current": {
          "selected": false,
          "text": "default",
          "value": "default"
        },
        "hide": 0,
        "includeAll": false,
        "label": "Data Source",
        "multi": false,
        "name": "DS_PROMETHEUS",
        "options": [],
        "query": "prometheus",
        "queryValue": "",
        "refresh": 1,
        "regex": "(?!grafanacloud-usage|grafanacloud-ml-metrics).+",
        "skipUrlSync": false,
        "type": "datasource"
      },
      {
        "current": {},
        "datasource": {
          "type": "prometheus",
          "uid": "${DS_PROMETHEUS}"
        },
        "definition": "label_values(cilium_version, cluster)",
        "hide": 0,
        "includeAll": true,
        "multi": true,
        "name": "cluster",
        "options": [],
        "query": {
          "query": "label_values(cilium_version, cluster)",
          "refId": "StandardVariableQuery"
        },
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "sort": 0,
        "type": "query"
      },
      {
        "allValue": ".*",
        "current": {},
        "datasource": {
          "type": "prometheus",
          "uid": "${DS_PROMETHEUS}"
        },
        "definition": "label_values(source_namespace)",
        "hide": 0,
        "includeAll": true,
        "label": "Source Namespace",
        "multi": true,
        "name": "source_namespace",
        "options": [],
        "query": {
          "query": "label_values(source_namespace)",
          "refId": "StandardVariableQuery"
        },
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "sort": 0,
        "type": "query"
      },
      {
        "allValue": ".*",
        "current": {},
        "datasource": {
          "type": "prometheus",
          "uid": "${DS_PROMETHEUS}"
        },
        "definition": "label_values(destination_namespace)",
        "hide": 0,
        "includeAll": true,
        "label": "Destination Namespace",
        "multi": true,
        "name": "destination_namespace",
        "options": [],
        "query": {
          "query": "label_values(destination_namespace)",
          "refId": "StandardVariableQuery"
        },
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "sort": 0,
        "type": "query"
      }
    ]
  },
  "time": {
    "from": "now-1h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ],
    "time_options": [
      "5m",
      "15m",
      "1h",
      "6h",
      "12h",
      "24h",
      "2d",
      "7d",
      "30d"
    ]
  },
  "timezone": "",
  "title": "Hubble / DNS Overview (Namespace)",
  "uid": "_f0DUpY4k",
  "version": 26,
  "weekStart": ""
}
dashboards/hubble/l7-http-metrics.json (new file, 1394 lines): diff suppressed because it is too large
dashboards/hubble/network-overview.json (new file, 1001 lines): diff suppressed because it is too large
dashboards/hubble/overview.json (new file, 3357 lines): diff suppressed because it is too large
dashboards/kafka/strimzi-kafka.json (new file, 2940 lines): diff suppressed because it is too large
dashboards/kubevirt/kubevirt-control-plane.json (new file, 5196 lines): diff suppressed because it is too large
@@ -450,7 +450,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval])))\n / sum(sum by (node) (avg_over_time(kube_node_status_allocatable{resource=\"cpu\",unit=\"core\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"\",node=~\"$node\"}[$__rate_interval])))\n / sum(sum by (node) (avg_over_time(kube_node_status_allocatable{resource=\"cpu\",unit=\"core\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"hide": false,
|
||||
"legendFormat": "Total",
|
||||
"range": true,
|
||||
@@ -520,7 +520,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",node=~\"$node\"})) / sum(sum by (node) (avg_over_time(kube_node_status_allocatable{resource=\"memory\",unit=\"byte\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"expr": "sum(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"\",node=~\"$node\"})) / sum(sum by (node) (avg_over_time(kube_node_status_allocatable{resource=\"memory\",unit=\"byte\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"hide": false,
|
||||
"legendFormat": "Total",
|
||||
"range": true,
|
||||
@@ -590,7 +590,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval]))) / sum(sum by (node) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"\",node=~\"$node\"}[$__rate_interval]))) / sum(sum by (node) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"hide": false,
|
||||
"legendFormat": "Total",
|
||||
"range": true,
|
||||
@@ -660,7 +660,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",node=~\"$node\"} )) / sum(sum by (node) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"expr": "sum(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"\",node=~\"$node\"} )) / sum(sum by (node) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"}[$__rate_interval])))",
|
||||
"hide": false,
|
||||
"legendFormat": "__auto",
|
||||
"range": true,
|
||||
@@ -1128,7 +1128,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval]) - on (namespace,pod,container,node) group_left avg by (namespace,pod,container, node)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"})) * -1 > 0\n",
|
||||
"expr": "sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"\",node=~\"$node\"}[$__rate_interval]) - on (namespace,pod,container,node) group_left avg by (namespace,pod,container, node)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"})) * -1 > 0\n",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -1143,7 +1143,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval]) - on (namespace,pod,container,node) group_left avg by (namespace,pod,container, node)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"})) * -1 > 0)",
|
||||
"expr": "sum(sum by (node) (rate(container_cpu_usage_seconds_total{container!=\"\",node=~\"$node\"}[$__rate_interval]) - on (namespace,pod,container,node) group_left avg by (namespace,pod,container, node)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"})) * -1 > 0)",
|
||||
"hide": false,
|
||||
"legendFormat": "Total",
|
||||
"range": true,
|
||||
@@ -1527,7 +1527,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",node=~\"$node\"} ) - sum by (node) (kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"})) * -1 > 0\n",
|
||||
"expr": "(sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"\",node=~\"$node\"} ) - sum by (node) (kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"})) * -1 > 0\n",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -1542,7 +1542,7 @@
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum((sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",node=~\"$node\"} ) - sum by (node) (kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"})) * -1 > 0)",
|
||||
"expr": "sum((sum by (node) (container_memory_working_set_bytes:without_kmem{container!=\"\",node=~\"$node\"} ) - sum by (node) (kube_pod_container_resource_requests{resource=\"memory\",node=~\"$node\"})) * -1 > 0)",
|
||||
"hide": false,
|
||||
"legendFormat": "Total",
|
||||
"range": true,
|
||||
@@ -1909,7 +1909,7 @@
|
||||
},
|
||||
"editorMode": "code",
|
||||
"exemplar": false,
|
||||
"expr": "topk(10, (sum by (namespace,pod,container)((rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\",container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval])) - on (namespace,pod,container) group_left avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"}))) * -1 > 0)\n",
|
||||
"expr": "topk(10, (sum by (namespace,pod,container)((rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\",container!=\"\",node=~\"$node\"}[$__rate_interval])) - on (namespace,pod,container) group_left avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"}))) * -1 > 0)\n",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"range": false,
|
||||
@@ -2037,7 +2037,7 @@
|
||||
},
|
||||
"editorMode": "code",
|
||||
"exemplar": false,
|
||||
"expr": "topk(10, (sum by (namespace,container,pod) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",namespace=~\"$namespace\",node=~\"$node\"}) - on (namespace,pod,container) avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"memory\",namespace=~\"$namespace\",node=~\"$node\"})) * -1 >0)\n",
|
||||
"expr": "topk(10, (sum by (namespace,container,pod) (container_memory_working_set_bytes:without_kmem{container!=\"\",namespace=~\"$namespace\",node=~\"$node\"}) - on (namespace,pod,container) avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"memory\",namespace=~\"$namespace\",node=~\"$node\"})) * -1 >0)\n",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"range": false,
|
||||
@@ -2160,7 +2160,7 @@
|
||||
},
|
||||
"editorMode": "code",
|
||||
"exemplar": false,
|
||||
"expr": "topk(10, (sum by (namespace,pod,container)((rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\",container!=\"POD\",container!=\"\",node=~\"$node\"}[$__rate_interval])) - on (namespace,pod,container) group_left avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"}))) > 0)\n",
|
||||
"expr": "topk(10, (sum by (namespace,pod,container)((rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\",container!=\"\",node=~\"$node\"}[$__rate_interval])) - on (namespace,pod,container) group_left avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"cpu\",node=~\"$node\"}))) > 0)\n",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"range": false,
|
||||
@@ -2288,7 +2288,7 @@
|
||||
},
|
||||
"editorMode": "code",
|
||||
"exemplar": false,
|
||||
"expr": "topk(10, (sum by (namespace,container,pod) (container_memory_working_set_bytes:without_kmem{container!=\"POD\",container!=\"\",namespace=~\"$namespace\",node=~\"$node\"}) - on (namespace,pod,container) avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"memory\",namespace=~\"$namespace\",node=~\"$node\"})) >0)\n",
|
||||
"expr": "topk(10, (sum by (namespace,container,pod) (container_memory_working_set_bytes:without_kmem{container!=\"\",namespace=~\"$namespace\",node=~\"$node\"}) - on (namespace,pod,container) avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource=\"memory\",namespace=~\"$namespace\",node=~\"$node\"})) >0)\n",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"range": false,
|
||||
|
||||
@@ -684,7 +684,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -710,7 +710,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by (pod)\n(\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\", namespace=\"$namespace\", resource=\"cpu\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "sum by (pod)\n(\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\", namespace=\"$namespace\", resource=\"cpu\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -723,7 +723,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) > 0\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) > 0\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -736,7 +736,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) \n (\n (\n (\n sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) or sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) > 0\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) \n (\n (\n (\n sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) or sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) > 0\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -762,7 +762,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -786,7 +786,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by (pod)\n(\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\", namespace=\"$namespace\", resource=\"memory\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "sum by (pod)\n(\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\", namespace=\"$namespace\", resource=\"memory\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -798,7 +798,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) > 0\n )\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) > 0\n )\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -810,7 +810,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) or sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__range]))\n ) > 0\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n -\n sum by (namespace, pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) or sum by (namespace, pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__range]))\n ) > 0\n )\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -848,7 +848,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
|
||||
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -860,7 +860,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
"expr": "(\n sum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range])) \n * on (pod)\n sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\", pod=~\"$pod\"}[$__range]))\n)\nor\nsum by (pod) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}[$__range]) * 0)",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -1315,7 +1315,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"instant": false,
"intervalFactor": 1,
@@ -1488,7 +1488,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -1502,7 +1502,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -1642,7 +1642,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum by (pod)\n (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n ) > 0\n )",
"expr": "sum by (pod)\n (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n ) > 0\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -1779,7 +1779,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": " (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n )\n or\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n )\n) > 0",
"expr": " (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n )\n or\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\"}[$__rate_interval]))\n )\n) > 0",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -2095,7 +2095,7 @@
"repeatDirection": "h",
"targets": [
{
"expr": "sum by(pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"expr": "sum by(pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Usage",
@@ -2109,7 +2109,7 @@
"refId": "D"
},
{
"expr": "sum by (pod)\n(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\", namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))\n)",
"expr": "sum by (pod)\n(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\", namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "VPA Target",
@@ -2295,7 +2295,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by(pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"expr": "sum by(pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "System",
@@ -2306,7 +2306,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by(pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"expr": "sum by(pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "User",
@@ -2468,7 +2468,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -2653,7 +2653,7 @@
"uid": "${ds_prometheus}"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "RSS",
@@ -2666,7 +2666,7 @@
"uid": "${ds_prometheus}"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Cache",
@@ -2679,7 +2679,7 @@
"uid": "${ds_prometheus}"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Swap",
@@ -2692,7 +2692,7 @@
"uid": "${ds_prometheus}"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Working set bytes without kmem",
@@ -2705,7 +2705,7 @@
"uid": "${ds_prometheus}"
},
"editorMode": "code",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))\n)",
"expr": "sum (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}) by(pod)\n * on (pod)\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Kmem",
@@ -2837,7 +2837,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (pod) group_left()\n sum by (pod)\n (\n (\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n ) > 0\n )\n)",
"expr": "(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (pod) group_left()\n sum by (pod)\n (\n (\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n ) > 0\n )\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -2974,7 +2974,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (pod) group_left()\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n ) or sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n ) > 0\n )\n)",
"expr": "(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (pod) group_left()\n sum by (pod)\n (\n (\n (\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n ) or sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n ) > 0\n )\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -3290,56 +3290,56 @@
"repeatDirection": "h",
"targets": [
{
"expr": "sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "RSS",
"refId": "A"
},
{
"expr": "sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Cache",
"refId": "B"
},
{
"expr": "sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Swap",
"refId": "C"
},
{
"expr": "sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Working set bytes without kmem",
"refId": "D"
},
{
"expr": "sum by (pod)\n(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{namespace=\"$namespace\", container!=\"POD\", resource=\"memory\"}[$__rate_interval]))\n)",
"expr": "sum by (pod)\n(\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\", pod=~\"$pod\"}\n * on (controller_type, controller_name) group_left()\n sum by (controller_type, controller_name) (avg_over_time(vpa_target_recommendation{namespace=\"$namespace\", container!=\"\", resource=\"memory\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "VPA Target",
"refId": "E"
},
{
"expr": "sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Limits",
"refId": "F"
},
{
"expr": "sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Requests",
"refId": "G"
},
{
"expr": "sum by(pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"POD\"}[$__rate_interval]))",
"expr": "sum by(pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Kmem",
@@ -3834,7 +3834,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -3972,7 +3972,7 @@
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"POD\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"expr": "sum by(pod) (\n max(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}) by(pod)\n * on (pod)\n sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"\", pod=~\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))\n)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ pod }}",
@@ -656,7 +656,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -680,7 +680,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\",namespace=\"$namespace\", resource=\"cpu\"}[$__range]))\n ) \nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\",namespace=\"$namespace\", resource=\"cpu\"}[$__range]))\n ) \nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -692,7 +692,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -704,7 +704,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -728,7 +728,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -740,7 +740,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (pod) group_left()\n sum by (namespace, pod)\n (\n avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__range])\n )\n )\n or\n count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (pod) group_left()\n sum by (namespace, pod)\n (\n avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__range])\n )\n )\n or\n count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -752,7 +752,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\",namespace=\"$namespace\", resource=\"memory\"}[$__range]))\n ) \n or \ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\",namespace=\"$namespace\", resource=\"memory\"}[$__range]))\n ) \n or \ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -764,7 +764,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -776,7 +776,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller)\n (\n avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range]))\n ) > 0\n )\n )\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -814,7 +814,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -826,7 +826,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -877,7 +877,7 @@
"type": "prometheus",
"uid": "${ds_prometheus}"
},
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"expr": "sum by (controller) (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range]) * on (pod) group_left() sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}[$__range])) by (controller) * 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -1475,7 +1475,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval])))",
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval])))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ controller }}",
@@ -1646,7 +1646,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum (sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))))",
"expr": "sum (sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "System",
@@ -1657,7 +1657,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum (sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))))",
"expr": "sum (sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "User",
@@ -1798,7 +1798,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ controller }}",
@@ -1939,7 +1939,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
"format": "time_series",
"instant": false,
"intervalFactor": 1,
@@ -2257,28 +2257,28 @@
"repeatDirection": "h",
"targets": [
{
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval])))",
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval])))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Usage",
"refId": "D"
},
{
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Requests",
"refId": "C"
},
{
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Limits",
"refId": "E"
},
{
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\",namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\",namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "VPA Target",
@@ -2458,7 +2458,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval])))",
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval])))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -2470,7 +2470,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval])))",
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval])))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "User",
@@ -2622,7 +2622,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{ controller }}",
@@ -2799,14 +2799,14 @@
"pluginVersion": "8.5.13",
"targets": [
{
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "RSS",
"refId": "A"
},
{
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -2814,7 +2814,7 @@
"refId": "B"
},
{
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -2822,14 +2822,14 @@
"refId": "C"
},
{
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Working set bytes without kmem",
"refId": "D"
},
{
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Kmem",
@@ -2955,7 +2955,7 @@
"type": "prometheus",
"uid": "$ds_prometheus"
},
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
|
||||
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n ) > 0\n )\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ controller }}",
|
||||
@@ -3091,7 +3091,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n )\n or\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n +\n sum by(namespace, pod, container) (avg_over_time(container_memory:kmem{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))\n )\n ) > 0\n )\n )",
|
||||
"expr": "sum by (controller)\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"}\n * on (namespace, pod) group_left()\n sum by (namespace, pod)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n )\n or\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n +\n sum by(namespace, pod, container) (avg_over_time(container_memory:kmem{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))\n )\n ) > 0\n )\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ controller }}",
|
||||
@@ -3408,14 +3408,14 @@
|
||||
"repeatDirection": "h",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "RSS",
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} \n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} \n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -3423,7 +3423,7 @@
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left() \n sum by (pod) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -3431,35 +3431,35 @@
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Working set bytes without kmem",
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\",namespace=\"$namespace\"}[$__rate_interval]))\n ) ",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by(pod) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\",namespace=\"$namespace\"}[$__rate_interval]))\n ) ",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Requests",
|
||||
"refId": "E"
|
||||
},
|
||||
{
|
||||
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} \n * on (pod) group_left() \n sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum\n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"} \n * on (pod) group_left() \n sum by(pod) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\",namespace=\"$namespace\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Limits",
|
||||
"refId": "F"
|
||||
},
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"POD\",namespace=\"$namespace\", resource=\"memory\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (controller_type, controller_name) group_left()\n sum by(controller_type, controller_name) (avg_over_time(vpa_target_recommendation{container!=\"\",namespace=\"$namespace\", resource=\"memory\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "VPA Target",
|
||||
"refId": "G"
|
||||
},
|
||||
{
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))\n )",
|
||||
"expr": "sum \n (\n kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller=\"$controller\"}\n * on (pod) group_left()\n sum by (pod) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Kmem",
|
||||
@@ -3910,7 +3910,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval])))",
|
||||
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_fs_reads_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval])))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ controller }}",
|
||||
@@ -4049,7 +4049,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval])))",
|
||||
"expr": "sum by (controller) (kube_controller_pod{node=~\"$node\", namespace=\"$namespace\", controller_type=~\"$controller_type\", controller=~\"$controller\"} * on (pod) group_left() sum by (pod) (rate(container_fs_writes_total{node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval])))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ controller }}",
@@ -869,7 +869,7 @@
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "100 * count by (namespace) (\n sum by (namespace, verticalpodautoscaler) ( \n count by (namespace, controller_name, verticalpodautoscaler) (avg_over_time(vpa_target_recommendation{namespace=~\"$namespace\", container!=\"POD\"}[$__range]))\n / on (controller_name, namespace) group_left\n count by (namespace, controller_name) (avg_over_time(kube_controller_pod{namespace=~\"$namespace\"}[$__range]))\n )\n) \n/ count by (namespace) (sum by (namespace, controller) (avg_over_time(kube_controller_pod{namespace=~\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "100 * count by (namespace) (\n sum by (namespace, verticalpodautoscaler) ( \n count by (namespace, controller_name, verticalpodautoscaler) (avg_over_time(vpa_target_recommendation{namespace=~\"$namespace\", container!=\"\"}[$__range]))\n / on (controller_name, namespace) group_left\n count by (namespace, controller_name) (avg_over_time(kube_controller_pod{namespace=~\"$namespace\"}[$__range]))\n )\n) \n/ count by (namespace) (sum by (namespace, controller) (avg_over_time(kube_controller_pod{namespace=~\"$namespace\"}[$__range])))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -878,7 +878,7 @@
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -895,7 +895,7 @@
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n ) > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n ) > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -903,7 +903,7 @@
|
||||
"refId": "E"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n )\n > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n )\n > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -919,7 +919,7 @@
|
||||
"refId": "G"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -935,7 +935,7 @@
|
||||
"refId": "I"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n ) > 0\n )\nor\ncount(avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n ) > 0\n )\nor\ncount(avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -943,7 +943,7 @@
|
||||
"refId": "J"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__range]))\n )\n > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__range]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__range]))\n )\n > 0\n )\nor count (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -968,7 +968,7 @@
|
||||
"refId": "M"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (rate(container_fs_reads_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace) (rate(container_fs_reads_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -977,7 +977,7 @@
|
||||
"refId": "N"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (rate(container_fs_writes_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"expr": "sum by (namespace) (rate(container_fs_writes_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__range]))\nor\ncount (avg_over_time(kube_controller_pod{node=~\"$node\", namespace=~\"$namespace\"}[$__range])) by (namespace) * 0",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -1449,7 +1449,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -1616,7 +1616,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "System",
|
||||
@@ -1627,7 +1627,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "User",
|
||||
@@ -1764,7 +1764,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -1901,7 +1901,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n )\n > 0\n )",
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n )\n > 0\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -2210,7 +2210,7 @@
|
||||
"repeatDirection": "h",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_cpu_usage_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -2218,21 +2218,21 @@
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval])* on (uid) group_left(phase) kube_pod_status_phase{phase=\"Running\"})",
|
||||
"expr": "sum by (namespace) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval])* on (uid) group_left(phase) kube_pod_status_phase{phase=\"Running\"})",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Requests",
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(kube_pod_container_resource_limits{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval])* on (uid) group_left(phase) kube_pod_status_phase{phase=\"Running\"})",
|
||||
"expr": "sum by (namespace) (avg_over_time(kube_pod_container_resource_limits{resource=\"cpu\",unit=\"core\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval])* on (uid) group_left(phase) kube_pod_status_phase{phase=\"Running\"})",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Limits",
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(vpa_target_recommendation{container!=\"POD\", namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(vpa_target_recommendation{container!=\"\", namespace=\"$namespace\", resource=\"cpu\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "VPA Target",
|
||||
@@ -2407,7 +2407,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_cpu_system_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -2419,7 +2419,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_cpu_user_seconds_total{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "User",
|
||||
@@ -2572,7 +2572,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -2754,14 +2754,14 @@
|
||||
"pluginVersion": "8.5.13",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum (avg_over_time(container_memory_rss{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (avg_over_time(container_memory_rss{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "RSS",
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum (avg_over_time(container_memory_cache{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (avg_over_time(container_memory_cache{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -2769,7 +2769,7 @@
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum (avg_over_time(container_memory_swap{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (avg_over_time(container_memory_swap{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -2777,14 +2777,14 @@
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Working set bytes without kmem",
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Kmem",
|
||||
@@ -2910,7 +2910,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"expr": "sum by (namespace)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -3046,7 +3046,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"POD\", namespace=~\"$namespace\"}[$__rate_interval]))\n )\n > 0\n )",
|
||||
"expr": "sum by (namespace)\n (\n (\n (\n sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", namespace=~\"$namespace\"}[$__rate_interval]))\n ) or sum by(namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", container!=\"\", namespace=~\"$namespace\"}[$__rate_interval]))\n )\n > 0\n )",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -3370,14 +3370,14 @@
|
||||
"repeatDirection": "h",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_rss{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "RSS",
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_cache{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -3385,7 +3385,7 @@
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_swap{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -3393,35 +3393,35 @@
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory_working_set_bytes:without_kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Working set bytes without kmem",
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(namespace) (avg_over_time(vpa_target_recommendation{container!=\"POD\",namespace=\"$namespace\", resource=\"memory\"}[$__rate_interval]))",
|
||||
"expr": "sum by(namespace) (avg_over_time(vpa_target_recommendation{container!=\"\",namespace=\"$namespace\", resource=\"memory\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "VPA Target",
|
||||
"refId": "E"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(namespace) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(namespace) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Requests",
|
||||
"refId": "F"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(namespace) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"POD\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(namespace) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",node=~\"$node\", container!=\"\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Limits",
|
||||
"refId": "G"
|
||||
},
|
||||
{
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (avg_over_time(container_memory:kmem{node=~\"$node\", namespace=\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Kmem",
|
||||
@@ -3873,7 +3873,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (rate(container_fs_reads_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_fs_reads_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
|
||||
@@ -4008,7 +4008,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace) (rate(container_fs_writes_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by (namespace) (rate(container_fs_writes_total{node=~\"$node\", namespace=~\"$namespace\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ namespace }}",
@@ -686,7 +686,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by (container) (rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"expr": "sum by (container) (rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -759,7 +759,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by (container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"expr": "sum by (container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -847,7 +847,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_fs_reads_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__range]))",
|
||||
"expr": "sum by(container) (rate(container_fs_reads_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__range]))",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -860,7 +860,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_fs_writes_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__range]))",
|
||||
"expr": "sum by(container) (rate(container_fs_writes_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__range]))",
|
||||
"format": "table",
|
||||
"hide": false,
|
||||
"instant": true,
|
||||
@@ -899,7 +899,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "${ds_prometheus}"
|
||||
},
|
||||
"expr": "sum by (container) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"expr": "sum by (container) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=~\"$container\"}[$__range]))\nor\nsum by (container) (avg_over_time(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__range]) * 0)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"intervalFactor": 1,
|
||||
@@ -1503,7 +1503,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_cpu_usage_seconds_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_cpu_usage_seconds_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -1669,7 +1669,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(pod) (rate(container_cpu_system_seconds_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (rate(container_cpu_system_seconds_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -1681,7 +1681,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(pod) (rate(container_cpu_user_seconds_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (rate(container_cpu_user_seconds_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "User",
|
||||
@@ -1820,7 +1820,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (namespace, pod, container)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{container!=\"POD\", namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"expr": "sum by (namespace, pod, container)\n (\n (\n sum by(namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"cpu\",unit=\"core\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by(namespace, pod, container) (rate(container_cpu_usage_seconds_total{container!=\"\", namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -2269,7 +2269,7 @@
|
||||
"repeatDirection": "h",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by(container) (rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Usage",
|
||||
@@ -2476,7 +2476,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_cpu_system_seconds_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_cpu_system_seconds_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -2488,7 +2488,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_cpu_user_seconds_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_cpu_user_seconds_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "User",
|
||||
@@ -2639,7 +2639,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_working_set_bytes:without_kmem{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_working_set_bytes:without_kmem{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -2816,7 +2816,7 @@
|
||||
"pluginVersion": "8.5.13",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_rss{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_rss{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -2824,28 +2824,28 @@
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_cache{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_cache{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Cache",
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_swap{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_swap{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Swap",
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Working set bytes without kmem",
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\"}[$__rate_interval]))",
|
||||
"expr": "sum by(pod) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Kmem",
|
||||
@@ -2974,7 +2974,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (container)\n (\n (\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"POD\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"expr": "sum by (container)\n (\n (\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -3110,7 +3110,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by (container)\n (\n (\n (\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"POD\"}[$__rate_interval]))\n ) or sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"POD\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"expr": "sum by (container)\n (\n (\n (\n sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"}[$__rate_interval]))\n -\n sum by (namespace, pod, container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"\"}[$__rate_interval]))\n ) or sum by (namespace, pod, container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\", container!=\"\"}[$__rate_interval]))\n ) > 0\n )",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -3431,7 +3431,7 @@
|
||||
"repeatDirection": "h",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_rss{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_rss{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"instant": false,
|
||||
"intervalFactor": 1,
|
||||
@@ -3439,7 +3439,7 @@
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_cache{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_cache{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
@@ -3447,28 +3447,28 @@
|
||||
"refId": "B"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_swap{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_swap{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Swap",
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory_working_set_bytes:without_kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Working set bytes without kmem",
|
||||
"refId": "D"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(kube_pod_container_resource_limits{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Limits",
|
||||
"refId": "E"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(kube_pod_container_resource_requests{resource=\"memory\",unit=\"byte\",namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Requests",
|
||||
@@ -3482,7 +3482,7 @@
|
||||
"refId": "G"
|
||||
},
|
||||
{
|
||||
"expr": "sum by(container) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"POD\", container=\"$container\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (avg_over_time(container_memory:kmem{namespace=\"$namespace\", pod=\"$pod\", container!=\"\", container=\"$container\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Kmem",
|
||||
@@ -3930,7 +3930,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_fs_reads_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_fs_reads_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ container }}",
|
||||
@@ -4068,7 +4068,7 @@
|
||||
"type": "prometheus",
|
||||
"uid": "$ds_prometheus"
|
||||
},
|
||||
"expr": "sum by(container) (rate(container_fs_writes_total{container!=\"POD\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"expr": "sum by(container) (rate(container_fs_writes_total{container!=\"\", pod=\"$pod\", namespace=\"$namespace\"}[$__rate_interval]))",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{ container }}",
1541 dashboards/nats/nats-jetstream.json Normal file
File diff suppressed because it is too large
1463 dashboards/nats/nats-server.json Normal file
File diff suppressed because it is too large
3359 dashboards/seaweedfs/seaweedfs.json Normal file
File diff suppressed because it is too large
2193 dashboards/storage/linstor.json Normal file
File diff suppressed because it is too large
666 docs/agents/changelog.md Normal file
@@ -0,0 +1,666 @@
# Changelog Generation Instructions

This file contains detailed instructions for AI-powered IDEs on how to generate changelogs for Cozystack releases.

## When to use these instructions

Follow these instructions when the user explicitly asks to generate a changelog.

## Required Tools

Before generating changelogs, ensure you have access to the `gh` (GitHub CLI) tool. It is used to fetch commit and PR information and to correctly identify PR authors from commits and pull requests.

## Changelog Generation Process

When the user asks to generate a changelog, follow these steps in the specified order:

**CHECKLIST - All actions that must be completed:**

- [ ] Step 1: Update information from remote (git fetch)
- [ ] Step 2: Check current branch (must be main)
- [ ] Step 3: Determine release type and previous version (minor vs patch release)
- [ ] Step 4: Determine versions and analyze existing changelogs
- [ ] Step 5: Get the list of commits for the release period
- [ ] Step 6: Check additional repositories (website is REQUIRED, optional repos if tags exist)
  - [ ] **MANDATORY**: Check the website repository for documentation changes WITH authors and PR links via GitHub CLI
  - [ ] **MANDATORY**: Check ALL optional repositories (talm, boot-to-talos, cozyhr, cozy-proxy) for tags during the release period
  - [ ] **MANDATORY**: For ALL commits from additional repos, get the GitHub username via CLI, prioritizing the PR author over the commit author
- [ ] Step 7: Analyze commits (extract PR numbers, authors, user impact)
  - [ ] **MANDATORY**: For EVERY PR in the main repo, get the PR author via `gh pr view <PR_NUMBER> --json author --jq .author.login` (do NOT skip this step)
  - [ ] **MANDATORY**: Extract PR numbers from commit messages, then use `gh pr view` for each PR to get the PR author. Do NOT use the commit author. Only for commits without PR numbers (rare), fall back to `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
- [ ] Step 8: Form the new changelog (structure, format, generate contributors list)
- [ ] Step 9: Verify completeness and save

### 1. Updating information from remote

```bash
git fetch --tags --force --prune
```

This is necessary to get up-to-date information about tags and commits from the remote repository.

### 2. Checking current branch

Make sure we are on the `main` branch (a fail-fast variant is sketched below):

```bash
git branch --show-current
```
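A minimal guard sketch that makes the check fail fast; the error message is illustrative:

```bash
# Abort early if the changelog is being generated from the wrong branch
[ "$(git branch --show-current)" = "main" ] || { echo "Not on main; aborting" >&2; exit 1; }
```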
### 3. Determining release type and previous version

**Important**: Determine if you're generating a changelog for a **minor release** (vX.Y.0) or a **patch release** (vX.Y.Z where Z > 0).

**For minor releases (vX.Y.0):**
- Each minor version lives and evolves in its own branch (`release-X.Y`)
- You MUST compare with the **previous minor version** (vX.(Y-1).0), not the last patch release
- This ensures you capture all changes from the entire minor version cycle, including all patch releases
- Example: For v0.38.0, compare with v0.37.0 (not v0.37.8)
- Run a separate pass to diff against the `.0` version of the previous minor release

**For patch releases (vX.Y.Z where Z > 0):**
- Compare with the previous patch version (vX.Y.(Z-1))
- Example: For v0.37.2, compare with v0.37.1 (a sketch for deriving the comparison base follows this list)
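A minimal sketch for deriving the comparison base mechanically, assuming semver-style `vX.Y.Z` tags; the version value is a placeholder:

```bash
NEW_VERSION="v0.38.0"                       # placeholder for the release being prepared
IFS=. read -r X Y Z <<< "${NEW_VERSION#v}"  # split into major/minor/patch
if [ "$Z" -eq 0 ]; then
  PREV="v$X.$((Y-1)).0"                     # minor release: previous minor version
else
  PREV="v$X.$Y.$((Z-1))"                    # patch release: previous patch version
fi
echo "Compare $PREV..$NEW_VERSION"
```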
### 4. Determining versions and analyzing existing changelogs

**Determine the last published version:**

1. Get the list of version tags:
   ```bash
   git tag -l 'v[0-9]*.[0-9]*.[0-9]*' | sort -V
   ```

2. Get the last tag:
   ```bash
   git tag -l 'v[0-9]*.[0-9]*.[0-9]*' | sort -V | tail -1
   ```

3. Compare the tags with existing changelog files in `docs/changelogs/` to determine the last published version (the newest file `vX.Y.Z.md`; see the one-liner below)
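A one-liner sketch for step 3, assuming the `vX.Y.Z.md` naming convention used in `docs/changelogs/`:

```bash
# Newest published changelog file, by version order
ls docs/changelogs | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+\.md$' | sort -V | tail -1
```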
**Study existing changelog format:**
- Review recent changelog files to understand the format and structure
- Pay attention to:
  - **Feature Highlights format** (for minor releases): Use `## Feature Highlights` with `### Feature Name` subsections containing detailed descriptions (2-4 paragraphs each). See v0.35.0 and v0.36.0 for examples.
  - Section structure (Major Features and Improvements, Security, Fixes, Dependencies, etc.)
  - PR link format (e.g., `[**@username**](https://github.com/username) in #1234`)
  - Change description style
  - Presence of Breaking changes sections, etc.
### 5. Getting the list of commits

**Important**: As determined in Step 3, the commit range differs for a **minor release** (vX.Y.0) and a **patch release** (vX.Y.Z where Z > 0).

**For patch releases (vX.Y.Z where Z > 0):**

Get the list of commits from the previous patch version to HEAD.

**⚠️ CRITICAL: Do NOT use the --first-parent flag! It will skip merge commits, including backports!**

```bash
# Get all commits including merge commits (backports)
git log <previous_version>..HEAD --pretty=format:"%h - %s (%an, %ar)"
```

For example, if generating the changelog for `v0.37.2`:
```bash
git log v0.37.1..HEAD --pretty=format:"%h - %s (%an, %ar)"
```

**⚠️ IMPORTANT: Check for backports:**
- Look for commits with "[Backport release-X.Y]" in the commit message (a quick filter is sketched after this list)
- For backport PRs, find the original PR number mentioned in the backport commit message or PR description
- Use the original PR author (not the backport PR author) when creating changelog entries
- Include both the original PR number and the backport PR number in the changelog entry (e.g., `#1606, #1609`)
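A quick filter sketch for the backport check above (version placeholder as in the previous commands):

```bash
# List candidate backport commits in the release range
git log <previous_version>..HEAD --oneline | grep -F '[Backport release-'
```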
**For minor releases (vX.Y.0):**

Minor releases must include **all changes** from patch releases of the previous minor version. Get commits from the previous minor release.

**⚠️ CRITICAL: Do NOT use the --first-parent flag! It will skip merge commits, including backports!**

```bash
# For v0.38.0, get all commits since v0.37.0 (including all patch releases v0.37.1, v0.37.2, etc.)
git log v<previous_minor_version>..HEAD --pretty=format:"%h - %s (%an, %ar)"
```

For example, if generating the changelog for `v0.38.0`:
```bash
git log v0.37.0..HEAD --pretty=format:"%h - %s (%an, %ar)"
```

This will include all commits from v0.37.1, v0.37.2, v0.37.3, etc., up to v0.38.0.

**⚠️ IMPORTANT: Always check merge commits:**
- Merge commits may contain backports that need to be included
- Check all commits in the range, including merge commits
- For backports, always find and reference the original PR
### 6. Analyzing additional repositories

**⚠️ CRITICAL: This step is MANDATORY and must NOT be skipped!**

A Cozystack release may include changes from related repositories. Check and include commits from these repositories if tags were released during the release period:

**Required repositories:**
- **Documentation**: [https://github.com/cozystack/website](https://github.com/cozystack/website)
  - **MANDATORY**: Always check this repository for documentation changes during the release period
  - **MANDATORY**: Get the GitHub username for EVERY commit. Extract the PR number from the commit message, then use `gh pr view <PR_NUMBER> --repo cozystack/website --json author --jq .author.login` to get the PR author. Only if there is no PR number, fall back to `gh api repos/cozystack/website/commits/<hash> --jq '.author.login'`

**Optional repositories (MUST check ALL of them for tags during the release period):**
- [https://github.com/cozystack/talm](https://github.com/cozystack/talm)
- [https://github.com/cozystack/boot-to-talos](https://github.com/cozystack/boot-to-talos)
- [https://github.com/cozystack/cozyhr](https://github.com/cozystack/cozyhr)
- [https://github.com/cozystack/cozy-proxy](https://github.com/cozystack/cozy-proxy)

**⚠️ IMPORTANT**: You MUST check ALL optional repositories for tags created during the release period. Do NOT skip this step even if you think there might not be any tags. Use the process below to verify.

**Process for each repository:**

1. **Get release period dates:**
   ```bash
   # Get dates for the release period
   cd /path/to/cozystack
   RELEASE_START=$(git log -1 --format=%ai v<previous_version>)
   RELEASE_END=$(git log -1 --format=%ai HEAD)
   ```

2. **Check for commits in the website repository (always required):**
   ```bash
   # Ensure the website repository is cloned and up to date
   mkdir -p _repos
   if [ ! -d "_repos/website" ]; then
     cd _repos && git clone https://github.com/cozystack/website.git && cd ..
   fi
   cd _repos/website
   git fetch --all --tags --force
   git checkout main 2>/dev/null || git checkout master
   git pull

   # Get commits between release dates (with some buffer)
   git log --since="$RELEASE_START" --until="$RELEASE_END" --format="%H|%s|%an" | while IFS='|' read -r commit_hash subject author_name; do
     # Extract the PR number from the commit message
     PR_NUMBER=$(git log -1 --format="%B" "$commit_hash" | grep -oE '#[0-9]+' | head -1 | tr -d '#')

     # ALWAYS use the PR author if a PR number is found, not the commit author
     if [ -n "$PR_NUMBER" ]; then
       GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --repo cozystack/website --json author --jq '.author.login // empty' 2>/dev/null)
       echo "$commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/website#$PR_NUMBER"
     else
       # Only fall back to the commit author if no PR number is found (rare)
       GITHUB_USERNAME=$(gh api repos/cozystack/website/commits/$commit_hash --jq '.author.login // empty')
       echo "$commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/website@${commit_hash:0:7}"
     fi
   done

   # Look for documentation updates, new pages, or significant content changes
   # Include these in the "Documentation" section of the changelog WITH authors and PR links
   ```

3. **For optional repositories, check if tags exist during the release period:**

   **⚠️ MANDATORY: You MUST check ALL optional repositories (talm, boot-to-talos, cozyhr, cozy-proxy). Do NOT skip any repository!**

   **Use the helper script:**
   ```bash
   # Get release period dates
   RELEASE_START=$(git log -1 --format=%ai v<previous_version>)
   RELEASE_END=$(git log -1 --format=%ai HEAD)

   # Run the script to check all optional repositories
   ./docs/changelogs/hack/check-optional-repos.sh "$RELEASE_START" "$RELEASE_END"
   ```

   The script will:
   - Check ALL optional repositories (talm, boot-to-talos, cozyhr, cozy-proxy)
   - Look for tags created during the release period
   - Get commits between tags (if tags exist) or by date range (if no tags)
   - Extract PR numbers from commit messages
   - For EVERY commit with a PR number, get the PR author via CLI: `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` (ALWAYS use the PR author, not the commit author)
   - For commits without PR numbers (rare), fall back to: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
   - Output results in the format `commit_hash|subject|author_name|github_username|cozystack/repo#PR_NUMBER` or `cozystack/repo@commit_hash`

4. **Extract PR numbers and authors using GitHub CLI:**
   - **ALWAYS use the PR author, not the commit author** for commits from additional repositories
   - For each commit, extract the PR number from the commit message first: look for the `#123` pattern
   - If a PR number is found, use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get the PR author (the person who wrote the code)
   - Only if no PR number is found (rare), fall back to the commit author: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
   - **Prefer PR numbers**: Use the format `cozystack/website#123` if a PR number is found in the commit message
   - **Fall back to the commit hash**: Use the format `cozystack/website@abc1234` if there is no PR number
   - **ALWAYS include the author**: Every entry from additional repositories MUST include the author in the format `([**@username**](https://github.com/username) in cozystack/repo#123)`
   - Determine user impact and categorize appropriately
   - Format entries with a repository prefix: `[website]`, `[talm]`, etc.

**Example entry format for additional repositories:**
```markdown
# If PR number found in commit message (REQUIRED format):
* **[website] Update installation documentation**: Improved installation guide with new examples ([**@username**](https://github.com/username) in cozystack/website#123).

# If no PR number (fallback, use commit hash):
* **[website] Update installation documentation**: Improved installation guide with new examples ([**@username**](https://github.com/username) in cozystack/website@abc1234).

# For optional repositories:
* **[talm] Add new feature**: Description of the change ([**@username**](https://github.com/username) in cozystack/talm#456).
```

**CRITICAL**:
- **ALWAYS include the author** for every entry from additional repositories
- **ALWAYS include a PR link or commit hash** for every entry
- Never add entries without an author and a PR/commit reference
- **ALWAYS use the PR author, not the commit author**: Extract the PR number from the commit message, then use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get the PR author (the person who wrote the code)
- Only if no PR number is found (rare), fall back to the commit author: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
- The commit author (especially for squash/merge commits) is usually the person who merged the PR, not the person who wrote the code
### 7. Analyzing commits and PRs

**⚠️ CRITICAL: You MUST get the author from the PR, not from the commit! Always use `gh pr view` to get the PR author. Do NOT use the commit author!**

**Get all PR numbers from commits:**

**⚠️ CRITICAL: Do NOT use the --no-merges flag! It will skip merge commits, including backports!**

```bash
# Extract all PR numbers from commit messages in the release range (including merge commits)
git log <previous_version>..<new_version> --format="%s%n%b" | grep -oE '#[0-9]+' | sort -u | tr -d '#'
```

**⚠️ IMPORTANT: Handle backports correctly:**
- Backport PRs have the format `[Backport release-X.Y] <original title> (#BACKPORT_PR_NUMBER)`
- The backport commit message or PR description usually mentions the original PR number
- For backport entries in the changelog, use the original PR author (not the backport PR author)
- Include both the original and backport PR numbers in the changelog entry (e.g., `#1606, #1609`)
- To find the original PR from a backport, check the backport PR description or commit message for "Backport of #ORIGINAL_PR"

**For each PR number, get the author:**

**CRITICAL**: The commit author (especially for squash/merge commits) is usually the person who merged the PR (or a GitHub bot), NOT the person who wrote the code. **ALWAYS use the PR author**, not the commit author.

**⚠️ MANDATORY: ALWAYS use `gh pr view` to get the PR author. Do NOT use the commit author!**

**ALWAYS use GitHub CLI** to get the PR author:

```bash
# Usage: Get the PR author - MANDATORY for EVERY PR
# Loop through ALL PR numbers and get the PR author (including backports)
git log <previous_version>..<new_version> --format="%s%n%b" | grep -oE '#[0-9]+' | sort -u | tr -d '#' | while read PR_NUMBER; do
  # Check if this is a backport PR
  BACKPORT_INFO=$(gh pr view "$PR_NUMBER" --json body --jq '.body' 2>/dev/null | grep -i "backport of #" || echo "")
  if [ -n "$BACKPORT_INFO" ]; then
    # Extract the original PR number from the backport description (case-insensitive)
    ORIGINAL_PR=$(echo "$BACKPORT_INFO" | grep -oiE 'backport of #([0-9]+)' | grep -oE '[0-9]+' | head -1)
    if [ -n "$ORIGINAL_PR" ]; then
      # Use the original PR author
      GITHUB_USERNAME=$(gh pr view "$ORIGINAL_PR" --json author --jq '.author.login // empty')
      PR_TITLE=$(gh pr view "$ORIGINAL_PR" --json title --jq '.title // empty')
      echo "$PR_NUMBER|$ORIGINAL_PR|$GITHUB_USERNAME|$PR_TITLE|BACKPORT"
    else
      # Fall back to the backport PR author if the original is not found
      GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login // empty')
      PR_TITLE=$(gh pr view "$PR_NUMBER" --json title --jq '.title // empty')
      echo "$PR_NUMBER||$GITHUB_USERNAME|$PR_TITLE|BACKPORT"
    fi
  else
    # Regular PR
    GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login // empty')
    PR_TITLE=$(gh pr view "$PR_NUMBER" --json title --jq '.title // empty')
    echo "$PR_NUMBER||$GITHUB_USERNAME|$PR_TITLE|REGULAR"
  fi
done
```

**⚠️ IMPORTANT**: You must run this for EVERY PR in the release period. Do NOT skip any PRs or assume the GitHub username based on the git author name.

**CRITICAL**: Always use `gh pr view <PR_NUMBER> --json author --jq .author.login` to get the PR author. This correctly identifies the person who wrote the code, not the person who merged it (which is especially important for squash merges).

**Why this matters**: Using the wrong author in changelogs gives incorrect credit and can confuse contributors. The merge/squash commit is created by the person who clicks "Merge" in GitHub, not the PR author.

**For commits without PR numbers (rare):**
- Only if a commit has no PR number, fall back to the commit author: `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
- This should be very rare - most commits should have PR numbers

**Extract PR numbers from commit messages:**
- Check the commit message subject (`%s`) and body (`%b`) for PR references: `#1234` or `(#1234)`
- **Primary method**: Extract from commit message formats `(#PR_NUMBER)`, `in #PR_NUMBER`, or `Merge pull request #1234`
- Use the regex `grep -oE '#[0-9]+'` to find all PR numbers

**⚠️ CRITICAL: Verify PR numbers match commit messages!**
- Always verify that the PR number in the changelog matches the PR number in the commit message
- Common mistake: using the wrong PR number (e.g., #1614 instead of #1617) when multiple similar commits exist
- To verify, check the actual commit message: `git log <commit_hash> -1 --format="%s%n%b" | grep -oE '#[0-9]+'`
- If multiple PR numbers appear in a commit, use the one that matches the PR title/description
- For merge commits, check the merged branch commits, not just the merge commit message

**Understand the change:**
```bash
# Get PR details (preferred method)
gh pr view <PR_NUMBER> --json title,body,url

# Or get commit details if there is no PR number
git show <commit_hash> --stat
git show <commit_hash>
```
- Review the PR description and changed files
- Understand the functionality added/changed/fixed
- **Determine user impact**: What can users do now? What problems are fixed? What improvements do users experience?

**For release branches (backports):**
- If a commit is from a `release-X.Y` branch, check if it's a backport
- Find the original commit in `main` to get the correct PR number:
  ```bash
  git log origin/main --grep="<part of commit message>" --oneline
  ```
### 8. Forming a new changelog

Create a new changelog file in the format matching previous versions:

1. **Determine the release type:**
   - **Minor release (vX.Y.0)** - use the full format with a **Feature Highlights** section. **Must include all changes from patch releases of the previous minor version** (e.g., v0.38.0 should include changes from v0.37.1, v0.37.2, v0.37.3, etc.)
   - **Patch release (vX.Y.Z, where Z > 0)** - use the more compact format, which includes only changes since the previous patch release

   **Feature Highlights format for minor releases:**
   - Use the section header `## Feature Highlights`
   - Include 3-6 major features as subsections with `### Feature Name` headers
   - Each feature subsection should contain:
     - **Detailed description** (2-4 paragraphs) explaining:
       - What the feature is and what problem it solves
       - How it works and what users can do with it
       - How to use it (if applicable)
       - Benefits and impact for users
     - **Links to documentation** when available (use markdown links)
     - **Code examples or configuration snippets** if helpful
   - Focus on user value and practical implications, not just technical details
   - Each feature should be substantial enough to warrant its own subsection
   - Order features by importance/impact (most important first)
   - Example format:
     ```markdown
     ## Feature Highlights

     ### Feature Name

     Detailed description paragraph explaining what the feature is...

     Another paragraph explaining how it works and what users can do...

     Learn more in the [documentation](https://cozystack.io/docs/...).
     ```

   **Important for minor releases**: After collecting all commits, **systematically verify** that all PRs from patch releases are included:
   ```bash
   # Extract all PR numbers from patch release changelogs
   grep -h "#[0-9]\+" docs/changelogs/v<previous_minor>.*.md | sort -u

   # Extract all PR numbers from the new minor release changelog
   grep -h "#[0-9]\+" docs/changelogs/v<new_minor>.0.md | sort -u

   # Compare and identify missing PRs
   # Ensure every PR from patch releases appears in the minor release changelog
   ```

2. **Structure changes by categories:**

   **For minor releases (vX.Y.0):**
   - **Feature Highlights** (required) - see the format above
   - **Major Features and Improvements** - detailed list of all major features and improvements
   - **Improvements (minor)** - smaller improvements and enhancements
   - **Bug fixes** - all bug fixes
   - **Security** - security-related changes
   - **Dependencies & version updates** - dependency updates
   - **System Configuration** - system-level configuration changes
   - **Development, Testing, and CI/CD** - development and testing improvements
   - **Documentation** (include changes from the website repository here - **MUST include authors and PR links for all entries**)
   - **Breaking changes & upgrade notes** (if any)
   - **Refactors & chores** (if any)

   **For patch releases (vX.Y.Z where Z > 0):**
   - **Features and Improvements** - new features and improvements
   - **Fixes** - bug fixes
   - **Security** - security-related changes
   - **Dependencies** - dependency updates
   - **System Configuration** - system-level configuration changes
   - **Development, Testing, and CI/CD** - development and testing improvements
   - **Documentation** (include changes from the website repository here - **MUST include authors and PR links for all entries**)
   - **Migration and Upgrades** (if applicable)

   **Note**: When including changes from additional repositories, group them logically with main repository changes, or create separate subsections if there are many changes from a specific repository.

3. **Entry format:**
   - Use the format: `* **Brief description**: detailed description ([**@username**](https://github.com/username) in #PR_NUMBER)`
   - **CRITICAL - Get authorship correctly**:
     - **ALWAYS use the PR author, not the commit author**: Extract the PR number from the commit message, then use `gh pr view` to get the PR author. The commit author (especially for squash/merge commits) is usually the person who merged the PR (or a GitHub bot), NOT the person who wrote the code.
       ```bash
       # Get the PR author from GitHub CLI (correct method)
       # Step 1: Extract the PR number from the commit message
       PR_NUMBER=$(git log <commit_hash> -1 --format="%s%n%b" | grep -oE '#[0-9]+' | head -1 | tr -d '#')

       # Step 2: Get the PR author (the person who wrote the code)
       if [ -n "$PR_NUMBER" ]; then
         GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login')
       else
         # Only fall back to the commit author if no PR number is found (rare)
         GITHUB_USERNAME=$(gh api repos/cozystack/cozystack/commits/<commit_hash> --jq '.author.login')
       fi
       ```
       **Example**: For PR #1507, the squash commit has author "kvaps" (who merged), but the PR author is "lllamnyp" (who wrote the code). Using `gh pr view 1507 --json author --jq .author.login` correctly returns "lllamnyp".
     - **For regular commits**: Use the commit author directly:
       ```bash
       git log <commit_hash> -1 --format="%an|%ae"
       ```
     - **Validation**: Before adding to the changelog, verify the author by checking:
       - For merge commits: compare the merge commit author vs the PR author (they should be different)
       - Check existing changelogs for author name to GitHub username mappings
       - Verify with: `git log <merge_commit>^1..<merge_commit>^2 --format="%an" --no-merges`
   - **Map the author name to a GitHub username**: Check existing changelogs for author name mappings, or extract from PR links in commit messages
   - **Always include user impact**: Each entry must explain how the change affects users
     - For new features: explain what users can now do
     - For bug fixes: explain what problem is solved for users
     - For improvements: explain what users will experience better
     - For breaking changes: clearly state what users need to do
   - Group related changes
   - Use bold font for important components/modules
   - Focus on user value, not just technical details

4. **Add a link to the full changelog:**

   **For patch releases (vX.Y.Z where Z > 0):**
   ```markdown
   **Full Changelog**: https://github.com/cozystack/cozystack/compare/v<previous_patch_version>...v<new_version>
   ```
   Example: For v0.37.2, use `v0.37.1...v0.37.2`

   **For minor releases (vX.Y.0):**
   ```markdown
   **Full Changelog**: https://github.com/cozystack/cozystack/compare/v<previous_minor_version>...v<new_version>
   ```
   Example: For v0.38.0, use `v0.37.0...v0.38.0` (NOT `v0.37.8...v0.38.0`)

   **Important**: Minor releases must reference the previous minor release (vX.Y.0), not the last patch release, to include all changes from the entire minor version cycle.

5. **Generate contributors list:**

   **⚠️ SIMPLIFIED APPROACH: Extract contributors from the generated changelog itself!**

   Since you've already generated the changelog with all PR authors correctly identified, simply extract GitHub usernames from the changelog entries:

   ```bash
   # Extract all GitHub usernames from the current release changelog
   # This method is simpler and more reliable than extracting from git history
   # (the same command works for both patch and minor releases)
   grep -oE '\[@[a-zA-Z0-9_-]+\]' docs/changelogs/v<version>.md | \
     sed 's/\[@/@/' | sed 's/\]//' | \
     sort -u
   ```

   **Get all previous contributors (to identify new ones):**
   ```bash
   # Extract GitHub usernames from all previous changelogs,
   # excluding the changelog being generated
   grep -hE '\[@[a-zA-Z0-9_-]+\]' $(ls docs/changelogs/v*.md | grep -v "v<version>.md") | \
     grep -oE '@[a-zA-Z0-9_-]+' | \
     sort -u > /tmp/previous_contributors.txt
   ```

   **Identify new contributors (first-time contributors):**
   ```bash
   # Get current release contributors from the changelog
   grep -oE '@[a-zA-Z0-9_-]+' docs/changelogs/v<version>.md | \
     sort -u > /tmp/current_contributors.txt

   # Get all previous contributors (again excluding the new changelog,
   # otherwise every contributor would look like a returning one)
   grep -hE '@[a-zA-Z0-9_-]+' $(ls docs/changelogs/v*.md | grep -v "v<version>.md") | \
     grep -oE '@[a-zA-Z0-9_-]+' | \
     sort -u > /tmp/all_previous_contributors.txt

   # Find new contributors (those in current but not in previous)
   comm -23 <(sort /tmp/current_contributors.txt) <(sort /tmp/all_previous_contributors.txt)
   ```

   **Why this approach is better:**
   - ✅ Uses the already-verified PR authors from the changelog (no need to query the GitHub API again)
   - ✅ Automatically handles backports correctly (original PR authors are already in the changelog)
   - ✅ Simpler and faster (no git log parsing or API calls)
   - ✅ More reliable (matches exactly what's in the changelog)
   - ✅ Works for both patch and minor releases

   **Add contributors section to the changelog:**

   Place the contributors section at the end of the changelog, before the "Full Changelog" link:
   ```markdown
   ## Contributors

   We'd like to thank all contributors who made this release possible:

   * [**@username1**](https://github.com/username1)
   * [**@username2**](https://github.com/username2)
   * [**@username3**](https://github.com/username3)
   * ...

   ### New Contributors

   We're excited to welcome our first-time contributors:

   * [**@newuser1**](https://github.com/newuser1) - First contribution!
   * [**@newuser2**](https://github.com/newuser2) - First contribution!
   ```

   **Formatting guidelines:**
   - List contributors in alphabetical order by GitHub username (a conversion sketch follows this list)
   - Use the format: `* [**@username**](https://github.com/username)`
   - For new contributors, add a " - First contribution!" note
   - If the GitHub username cannot be determined, you can skip that contributor or use their git author name
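A small follow-up sketch that turns `/tmp/current_contributors.txt` (built above) into the markdown list format, already alphabetized:

```bash
# Convert "@username" lines into the contributors list format
sort -u /tmp/current_contributors.txt | \
  sed 's|^@\(.*\)$|* [**@\1**](https://github.com/\1)|'
```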
   **When to include:**
   - **For patch releases**: The contributors section is optional, but can be included for significant releases
   - **For minor releases (vX.Y.0)**: The contributors section is required - you must generate and include the contributors list
   - Always verify GitHub usernames by checking commit messages, PR links in changelog entries, or by examining PR details

6. **Add a comment with a link to the GitHub release:**
   ```markdown
   <!--
   https://github.com/cozystack/cozystack/releases/tag/v<new_version>
   -->
   ```

### 9. Verification and saving

**Before saving, verify completeness:**

**For ALL releases:**
- [ ] Step 5 completed: **ALL commits included** (including merge commits and backports) - do not skip any commits
- [ ] Step 5 completed: **Backports identified and handled correctly** - original PR author used, both original and backport PR numbers included
- [ ] Step 6 completed: Website repository checked for documentation changes WITH authors and PR links via GitHub CLI
- [ ] Step 6 completed: **ALL** optional repositories (talm, boot-to-talos, cozyhr, cozy-proxy) checked for tags during the release period
- [ ] Step 6 completed: For ALL commits from additional repos, GitHub username obtained via GitHub CLI (not skipped). For commits with PR numbers, PR author obtained via `gh pr view` (not the commit author)
- [ ] Step 7 completed: For EVERY PR in the main repo (including backports), PR author obtained via `gh pr view <PR_NUMBER> --json author --jq .author.login` (not skipped or assumed). Commit author NOT used - always use the PR author
- [ ] Step 7 completed: **Backports verified** - for each backport PR, the original PR found and the original PR author used in the changelog
- [ ] Step 8 completed: Contributors list generated
- [ ] All commits from the main repository included (including merge commits)
- [ ] User impact described for each change
- [ ] Format matches existing changelogs

**For patch releases:**
- [ ] All commits from the release period are included (including merge commits with backports)
- [ ] PR numbers match commit messages
- [ ] Backports are properly identified and linked to original PRs

**For minor releases (vX.Y.0):**
- [ ] All changes from patch releases (vX.Y.1, vX.Y.2, etc.) are included
- [ ] Contributors section is present and complete
- [ ] Full Changelog link references the previous minor version (vX.Y.0), not the last patch
- [ ] Verify all PRs from patch releases are included:
  ```bash
  # Extract and compare PR numbers
  PATCH_PRS=$(grep -hE "#[0-9]+" docs/changelogs/v<previous_minor>.*.md | grep -oE "#[0-9]+" | sort -u)
  MINOR_PRS=$(grep -hE "#[0-9]+" docs/changelogs/v<new_minor>.0.md | grep -oE "#[0-9]+" | sort -u)
  MISSING=$(comm -23 <(echo "$PATCH_PRS") <(echo "$MINOR_PRS"))

  if [ -n "$MISSING" ]; then
    echo "Missing PRs from patch releases:"
    echo "$MISSING"
    # For each missing PR, check if it's a backport and verify the change is included by description
  fi
  ```

**Only proceed to save after all checkboxes are verified!**

**Save the changelog:**
Save the changelog to the file `docs/changelogs/v<version>.md`, matching the version for which the changelog is being generated.

### Important notes

- **After fetching with --force**, local tags are up to date; use them for work
- **For release branches**, always check the original commits in `main` to get correct PR numbers
- **Preserve the format** of existing changelog files
- **Group related changes** logically
- **Be accurate** in describing changes, based on actual commit diffs
- **Check for PR numbers** and commit authors
- **CRITICAL - Get authorship from the PR, not from the commit**:
  - **ALWAYS use the PR author**: Extract the PR number from the commit message, then use `gh pr view <PR_NUMBER> --json author --jq .author.login` to get the PR author
  - Do NOT use the commit author - the commit author (especially for squash/merge commits) is usually the person who merged the PR, not the person who wrote the code
  - For commits without PR numbers (rare), fall back to the commit author: `gh api repos/cozystack/cozystack/commits/<commit_hash> --jq '.author.login'`
  - **Workflow**: Extract PR numbers from commits → use `gh pr view` for each PR → get the PR author (the person who wrote the code)
  - Example: For PR #1507, the commit author is `@kvaps` (who merged), but `gh pr view 1507 --json author --jq .author.login` correctly returns `@lllamnyp` (who wrote the code)
  - Check existing changelogs for author name to GitHub username mappings
  - **Validation**: Before adding to the changelog, always verify the author using `gh pr view` - never use the commit author for PRs
- **MANDATORY - Always describe user impact**: Every changelog entry must explain how the change affects end users, not just what was changed technically. Focus on user value and practical implications.

**Required steps:**

- **Additional repositories (Step 6) - MANDATORY**:
  - **⚠️ CRITICAL**: Always check the **website** repository for documentation changes during the release period. This is a required step and MUST NOT be skipped.
  - **⚠️ CRITICAL**: You MUST check ALL optional repositories (talm, boot-to-talos, cozyhr, cozy-proxy) for tags during the release period. Do NOT skip any repository even if you think there might not be tags.
  - **CRITICAL**: For ALL entries from additional repositories (website and optional), you MUST:
    - **MANDATORY**: Extract the PR number from the commit message first
    - **MANDATORY**: For commits with PR numbers, ALWAYS use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get the PR author (not the commit author)
    - **MANDATORY**: Only for commits without PR numbers (rare), fall back to: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
    - **MANDATORY**: Do NOT skip getting the GitHub username via CLI - do this for EVERY commit
    - **MANDATORY**: Do NOT use the commit author for PRs - always use the PR author
    - Include a PR link or commit hash reference
    - Format: `* **[repo] Description**: details ([**@username**](https://github.com/username) in cozystack/repo#123)`
  - For **optional repositories** (talm, boot-to-talos, cozyhr, cozy-proxy), you MUST check ALL of them for tags during the release period. Use the loop provided in Step 6 to check each repository systematically.
  - When including changes from additional repositories, use the format `[repo-name] Description` and link to the repository's PR/issue if available
  - **Prefer PR numbers over commit hashes**: For commits from additional repositories, extract the PR number from the commit message. Use the PR format (`cozystack/website#123`) instead of a commit hash (`cozystack/website@abc1234`) when available
  - **Never add entries without an author and a PR/commit reference**: Every entry from additional repositories must have both an author and a link
  - Group changes from additional repositories with main repository changes, or create separate subsections if there are many changes from a specific repository

- **PR author verification (Step 7) - MANDATORY**:
  - **⚠️ CRITICAL**: You MUST get the author from the PR using `gh pr view`, NOT from the commit
  - **⚠️ CRITICAL**: Extract PR numbers from commit messages, then use `gh pr view <PR_NUMBER> --json author --jq .author.login` for each PR
  - **⚠️ CRITICAL**: Do NOT use the commit author - the commit author is usually the person who merged, not the person who wrote the code
  - **⚠️ CRITICAL**: Do NOT skip this step for any PR, even if the author seems obvious
  - For commits without PR numbers (rare), fall back to: `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
  - This ensures correct attribution and prevents errors in changelog entries (especially important for squash/merge commits)

- **Contributors list (Step 8)**:
  - For minor releases (vX.Y.0): you must generate a list of all contributors and identify first-time contributors
  - For patch releases: the contributors section is optional, but recommended for significant releases
  - Extract GitHub usernames from PR links in commit messages or changelog entries
  - This helps recognize community contributions and welcome new contributors

- **Minor releases (vX.Y.0)**:
  - Must include **all changes** from patch releases of the previous minor version (e.g., v0.38.0 includes all changes from v0.37.1, v0.37.2, v0.37.3, etc.)
  - The "Full Changelog" link must reference the previous minor release (v0.37.0...v0.38.0), NOT the last patch release (v0.37.8...v0.38.0)
  - This ensures users can see the complete set of changes for the entire minor version cycle
  - **Verification step**: After creating the changelog, extract all PR numbers from patch release changelogs and verify they all appear in the minor release changelog to prevent missing entries
  - **Backport handling**: Patch releases may contain backports with different PR numbers (e.g., #1624 in a patch release vs #1622 in main). For minor releases, use the original PR numbers from main when available, but verify that all changes from patch releases are included regardless of PR number differences
  - **Content verification**: Don't rely solely on PR number matching - verify that change descriptions from patch releases appear in the minor release changelog, as backports may have different PR numbers
275  docs/agents/contributing.md  (new file)
@@ -0,0 +1,275 @@
# Instructions for AI Agents

Guidelines for AI agents contributing to Cozystack.

## Checklist for Creating a Pull Request

- [ ] Changes are made and tested
- [ ] Commit message uses the correct `[component]` prefix
- [ ] Commit is signed off with `--signoff`
- [ ] Branch is rebased on `upstream/main` (no extra commits)
- [ ] PR body includes a description and release note
- [ ] PR is pushed and created with `gh pr create`

## How to Commit and Create Pull Requests

### 1. Make Your Changes

Edit the necessary files in the codebase.

### 2. Commit with Proper Format

Use the `[component]` prefix and the `--signoff` flag:

```bash
git commit --signoff -m "[component] Brief description of changes"
```

**Component prefixes:**
- System: `[dashboard]`, `[platform]`, `[cilium]`, `[kube-ovn]`, `[linstor]`, `[fluxcd]`, `[cluster-api]`
- Apps: `[postgres]`, `[mariadb]`, `[redis]`, `[kafka]`, `[clickhouse]`, `[virtual-machine]`, `[kubernetes]`
- Other: `[tests]`, `[ci]`, `[docs]`, `[maintenance]`

**Examples:**
```bash
git commit --signoff -m "[dashboard] Add config hash annotations to restart pods on config changes"
git commit --signoff -m "[postgres] Update operator to version 1.2.3"
git commit --signoff -m "[docs] Add installation guide"
```

### 3. Rebase on upstream/main (if needed)

If your branch has extra commits, clean it up:

```bash
# Fetch latest
git fetch upstream

# Create a clean branch from upstream/main
git checkout -b my-feature upstream/main

# Cherry-pick only your commit
git cherry-pick <your-commit-hash>

# Force-push to your branch
git push -f origin my-feature:my-branch-name
```

### 4. Push Your Branch

```bash
git push origin <branch-name>
```

### 5. Create Pull Request

Write the PR body to a temporary file:

````bash
cat > /tmp/pr_body.md << 'EOF'
## What this PR does

Brief description of the changes.

Changes:
- Change 1
- Change 2

### Release note

```release-note
[component] Description for changelog
```
EOF
````

Create the PR:

```bash
gh pr create --title "[component] Brief description" --body-file /tmp/pr_body.md
```

Clean up:

```bash
rm /tmp/pr_body.md
```

## Addressing AI Bot Reviewer Comments

When the user asks to fix comments from AI bot reviewers (like Qodo, Copilot, etc.):

### 1. Get PR Comments

View all comments on the pull request:

```bash
gh pr view <PR-number> --comments
```

Or for the current branch:

```bash
gh pr view --comments
```

### 2. Review Each Comment Carefully

**Important**: Do NOT blindly apply all suggestions. Each comment should be evaluated:

- **Consider context** - Does the suggestion make sense for this specific case?
- **Check project conventions** - Does it align with Cozystack patterns?
- **Evaluate impact** - Will this improve code quality or introduce issues?
- **Question validity** - AI bots can be wrong or miss context

**When to apply:**
- ✅ Legitimate bugs or security issues
- ✅ Clear improvements to code quality
- ✅ Better error handling or edge cases
- ✅ Conformance to project conventions

**When to skip:**
- ❌ Stylistic preferences that don't match project style
- ❌ Over-engineering simple code
- ❌ Changes that break existing patterns
- ❌ Suggestions that show misunderstanding of the code

### 3. Apply Valid Fixes

Make changes addressing the valid comments. Use your judgment.

### 4. Leave Changes Uncommitted

**Critical**: Do NOT commit or push the changes automatically.

Leave the changes in the working directory so the user can:
- Review the fixes
- Decide whether to commit them
- Make additional adjustments if needed

```bash
# After making changes, show the status but DON'T commit
git status
git diff
```

The user will commit and push when ready.

## Code Review Comments

When asked to fix code review comments, **always work only with unresolved (open) comments**. Resolved comments should be ignored, as they have already been addressed.

### Getting Unresolved Review Comments

Use the GitHub GraphQL API to fetch only unresolved review comments from a pull request:

```bash
gh api graphql -F owner=cozystack -F repo=cozystack -F pr=<PR_NUMBER> -f query='
query($owner: String!, $repo: String!, $pr: Int!) {
  repository(owner: $owner, name: $repo) {
    pullRequest(number: $pr) {
      reviewThreads(first: 100) {
        nodes {
          isResolved
          comments(first: 100) {
            nodes {
              id
              path
              line
              author { login }
              bodyText
              url
              createdAt
            }
          }
        }
      }
    }
  }
}' --jq '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == false) | .comments.nodes[]'
```

### Filtering for Unresolved Comments

The key filter is `select(.isResolved == false)`, which ensures only unresolved review threads are processed. Each thread can contain multiple comments, but if the thread is resolved, all its comments should be ignored.

### Working with Review Comments

1. **Fetch unresolved comments** using the GraphQL query above
2. **Parse the results** to identify:
   - File path (`path`)
   - Line number (`line` or `originalLine`)
   - Comment text (`bodyText`)
   - Author (`author.login`)
3. **Address each unresolved comment** by:
   - Locating the relevant code section
   - Making the requested changes
   - Ensuring the fix addresses the concern raised
4. **Do NOT process resolved comments** - they have already been handled

### Example: Compact List of Unresolved Comments

For a quick overview of unresolved comments:

```bash
gh api graphql -F owner=cozystack -F repo=cozystack -F pr=<PR_NUMBER> -f query='
query($owner: String!, $repo: String!, $pr: Int!) {
  repository(owner: $owner, name: $repo) {
    pullRequest(number: $pr) {
      reviewThreads(first: 100) {
        nodes {
          isResolved
          comments(first: 100) {
            nodes {
              path
              line
              author { login }
              bodyText
            }
          }
        }
      }
    }
  }
}' --jq '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == false) | .comments.nodes[] | "\(.path):\(.line // "N/A") - \(.author.login): \(.bodyText[:150])"'
```

### Important Notes

- **REST API limitation**: The REST endpoint `/pulls/{pr}/reviews` returns review summaries, not individual review comments. Use the GraphQL API to access `reviewThreads` with the `isResolved` status.
- **Thread-based resolution**: Comments are organized in threads. If a thread is resolved (`isResolved: true`), ignore all comments in that thread.
- **Always filter**: Never process comments from resolved threads, even if they appear in the results.

### Example Workflow

```bash
# Get PR comments
gh pr view 1234 --comments

# Review comments and identify valid ones
# Make necessary changes to address valid comments
# ... edit files ...

# Show what was changed (but don't commit)
git status
git diff

# Tell the user what was fixed and what was skipped
```

## Git Permissions

Request these permissions when needed:
- `git_write` - For commit, rebase, cherry-pick, and branch operations
- `network` - For push, fetch, and pull operations

## Common Issues

**PR has extra commits?**
→ Rebase on `upstream/main` and cherry-pick only your commits

**Wrong commit message?**
→ `git commit --amend --signoff -m "[correct] message"`, then `git push -f`

**Need to update the PR?**
→ `gh pr edit <number> --body "new description"`
115  docs/agents/overview.md  (new file)
@@ -0,0 +1,115 @@
# Cozystack Project Overview

This document provides detailed information about the Cozystack project structure and conventions for AI agents.

## About Cozystack

Cozystack is an open-source Kubernetes-based platform and framework for building cloud infrastructure. It provides:

- **Managed Services**: Databases, VMs, Kubernetes clusters, object storage, and more
- **Multi-tenancy**: Full isolation and self-service for tenants
- **GitOps-driven**: FluxCD-based continuous delivery
- **Modular Architecture**: Extensible with custom packages and services
- **Developer Experience**: Simplified local development with the cozyhr tool

The platform exposes infrastructure services via the Kubernetes API with ready-made configs, built-in monitoring, and alerts.

## Code Layout

```
.
├── packages/       # Main directory for Cozystack packages
│   ├── core/       # Core platform logic charts (installer, platform)
│   ├── system/     # System charts (CSI, CNI, operators, etc.)
│   ├── apps/       # User-facing charts shown in the dashboard catalog
│   └── extra/      # Tenant-specific modules and singleton charts used as dependencies
├── dashboards/     # Grafana dashboards for monitoring
├── hack/           # Helper scripts for local development
│   └── e2e-apps/   # End-to-end application tests
├── scripts/        # Scripts used by the cozystack container
│   └── migrations/ # Version migration scripts
├── docs/           # Documentation
│   ├── agents/     # AI agent instructions
│   └── changelogs/ # Release changelogs
├── cmd/            # Go command entry points
│   ├── cozystack-api/
│   ├── cozystack-controller/
│   └── cozystack-assets-server/
├── internal/       # Internal Go packages
│   ├── controller/ # Controller implementations
│   └── lineagecontrollerwebhook/
├── pkg/            # Public Go packages
│   ├── apis/
│   ├── apiserver/
│   └── registry/
└── api/            # Kubernetes API definitions (CRDs)
    └── v1alpha1/
```

## Package Structure

Every package is a Helm chart following the umbrella chart pattern:

```
packages/<category>/<package-name>/
├── Chart.yaml          # Chart definition and parameter docs
├── Makefile            # Development workflow targets
├── charts/             # Vendored upstream charts
├── images/             # Dockerfiles and image build context
├── patches/            # Optional upstream chart patches
├── templates/          # Additional manifests
├── templates/dashboard-resourcemap.yaml # Dashboard resource mapping
├── values.yaml         # Override values for upstream
└── values.schema.json  # JSON schema for validation and UI
```

## Conventions

### Helm Charts
- Follow the **umbrella chart** pattern for system components
- Include upstream charts in the `charts/` directory (vendored, not referenced; see the sketch after this list)
- Override configuration in the root `values.yaml`
- Use `values.schema.json` for input validation and dashboard UI rendering
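A hypothetical sketch of the vendor-and-patch flow; the chart source, package path, and patch file name are illustrative assumptions, not taken from this repository:

```bash
# Vendor an upstream chart into the package, then apply a local patch
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql \
  --untar --untardir packages/apps/postgres/charts
patch -d packages/apps/postgres -p1 < packages/apps/postgres/patches/example.patch
```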
### Go Code
- Follow standard **Go conventions** and idioms
- Use **controller-runtime** patterns for Kubernetes controllers
- Prefer **kubebuilder** for API definitions and controllers
- Add proper error handling and structured logging

### Git Commits
- Use the format: `[component] Description`
- Always use the `--signoff` flag
- Reference PR numbers when available
- Keep commits atomic and focused
- Follow the conventional commit format for changelogs

### Documentation

Documentation is organized as follows:
- `docs/` - General documentation
- `docs/agents/` - Instructions for AI agents
- `docs/changelogs/` - Release changelogs
- Main website: https://github.com/cozystack/website

## Things Agents Should Not Do

### Never Edit These
- Do not modify files in `/vendor/` (Go dependencies)
- Do not edit generated files: `zz_generated.*.go`
- Do not change `go.mod`/`go.sum` manually (use `go get`; see the sketch after this list)
- Do not edit upstream charts in `packages/*/charts/` directly (use patches)
- Do not modify image digests in `values.yaml` (generated by the build)
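A minimal sketch of the intended dependency-update flow; the module path and version are placeholders:

```bash
# Let the Go toolchain rewrite go.mod/go.sum instead of editing them by hand
go get sigs.k8s.io/controller-runtime@v0.19.0
go mod tidy
```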
### Version Control
- Do not commit built artifacts from `_out`
- Do not commit test artifacts or temporary files

### Git Operations
- Do not force-push to main/master
- Do not update the git config
- Do not perform destructive operations without an explicit request

### Core Components
- Do not modify `packages/core/platform/` without understanding the migration impact

29  docs/agents/releasing.md  (new file)
@@ -0,0 +1,29 @@
# Release Process

This document provides instructions for AI agents on how to handle release-related tasks.

## When to Use

Follow these instructions when the user asks to:
- Create a new release
- Prepare a release
- Tag a release
- Perform release-related tasks

## Instructions

For detailed release process instructions, follow the steps documented in:

**[docs/release.md](../release.md)**

## Quick Reference

The release process typically involves:
1. Preparing the release branch
2. Generating the changelog
3. Updating version numbers
4. Creating git tags
5. Building and publishing artifacts

All detailed steps are documented in `docs/release.md`.

18  docs/changelogs/patch-template.md  (new file)
@@ -0,0 +1,18 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0..
-->

## Features and Improvements

## Security

## Fixes

## Dependencies

## Development, Testing, and CI/CD

---

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.36.0...main

20  docs/changelogs/template.md  (new file)
@@ -0,0 +1,20 @@

<!--
https://github.com/cozystack/cozystack/releases/tag/v0..
-->

## Major Features and Improvements

## Security

## Fixes

## Dependencies

## Documentation

## Development, Testing, and CI/CD

---

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.35.0

243 docs/changelogs/v0.31.0.md Normal file
@@ -0,0 +1,243 @@

Cozystack v0.31.0 is a significant release that brings new features, key fixes, and updates to underlying components.
This version enhances GPU support, improves many components of Cozystack, and introduces a more robust release process to improve stability.
Below, we'll go over the highlights in each area for current users, developers, and our community.

## Major Features and Improvements

### GPU support for tenant Kubernetes clusters

Cozystack now integrates NVIDIA GPU Operator support for tenant Kubernetes clusters.
This enables platform users to run GPU-powered AI/ML applications in their own clusters.
To enable the GPU Operator, set `addons.gpuOperator.enabled: true` in the cluster configuration.
(@kvaps in https://github.com/cozystack/cozystack/pull/834)
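
A minimal sketch of that setting in a tenant cluster's configuration (only the `addons.gpuOperator.enabled` key comes from the release notes; the surrounding layout is illustrative):

```yaml
addons:
  gpuOperator:
    enabled: true   # deploy the NVIDIA GPU Operator into the tenant cluster
```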

Check out Andrei Kvapil's CNCF webinar [showcasing the GPU support by running Stable Diffusion in Cozystack](https://www.youtube.com/watch?v=S__h_QaoYEk).

<!--
* [kubernetes] Introduce GPU support for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/834)
-->

### Cilium Improvements

Cozystack's Cilium integration received two significant enhancements.
First, Gateway API support in Cilium is now enabled, allowing advanced L4/L7 routing features via the Kubernetes Gateway API.
We thank Zdenek Janda (@zdenekjanda) for contributing this feature in https://github.com/cozystack/cozystack/pull/924.

Second, Cozystack now permits custom user-provided parameters in the tenant cluster's Cilium configuration.
(@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
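
For illustration, enabling Gateway API means routing can be described with standard Gateway API resources instead of Ingress; a hedged sketch (all names and the backend are hypothetical, and `gatewayClassName: cilium` assumes Cilium's default GatewayClass):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway            # hypothetical
spec:
  gatewayClassName: cilium         # assumes Cilium's GatewayClass name
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route              # hypothetical
spec:
  parentRefs:
    - name: example-gateway
  rules:
    - backendRefs:
        - name: example-service    # hypothetical backend Service
          port: 8080
```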

<!--
* [cilium] Enable Cilium Gateway API. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/924)
* [cilium] Enable user-added parameters in a tenant cluster Cilium. (@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
-->

### Cross-Architecture Builds (ARM Support Beta)

Cozystack's build system was refactored to support multi-architecture binaries and container images.
This paves the way to running Cozystack on ARM64 servers.
Changes include Makefile improvements (https://github.com/cozystack/cozystack/pull/907)
and multi-arch Docker image builds (https://github.com/cozystack/cozystack/pull/932 and https://github.com/cozystack/cozystack/pull/970).

We thank Nikita Bykov (@nbykov0) for his ongoing work on ARM support!

<!--
* Introduce support for cross-architecture builds and Cozystack on ARM:
  * [build] Refactor Makefiles introducing build variables. (@nbykov0 in https://github.com/cozystack/cozystack/pull/907)
  * [build] Add support for multi-architecture and cross-platform image builds. (@nbykov0 in https://github.com/cozystack/cozystack/pull/932 and https://github.com/cozystack/cozystack/pull/970)
-->

### VerticalPodAutoscaler (VPA) Expansion

The VerticalPodAutoscaler is now enabled for more Cozystack components to automate resource tuning.
Specifically, VPA was added for tenant Kubernetes control planes (@klinch0 in https://github.com/cozystack/cozystack/pull/806),
the Cozystack Dashboard (https://github.com/cozystack/cozystack/pull/828),
and the Cozystack etcd-operator (https://github.com/cozystack/cozystack/pull/850).
All Cozystack components that have VPA enabled can automatically adjust their CPU and memory requests based on usage, improving platform and application stability.

<!--
* Add VerticalPodAutoscaler to a few more components:
  * [kubernetes] Kubernetes clusters in user tenants. (@klinch0 in https://github.com/cozystack/cozystack/pull/806)
  * [platform] Cozystack dashboard. (@klinch0 in https://github.com/cozystack/cozystack/pull/828)
  * [platform] Cozystack etcd-operator. (@klinch0 in https://github.com/cozystack/cozystack/pull/850)
-->

### Tenant HelmRelease Reconcile Controller

A new controller was introduced to monitor and synchronize HelmRelease resources across tenants.
This controller propagates configuration changes to tenant workloads and ensures that any HelmRelease defined in a tenant
stays in sync with platform updates.
It improves the reliability of deploying managed applications in Cozystack.
(@klinch0 in https://github.com/cozystack/cozystack/pull/870)

<!--
* [platform] Introduce a new controller to synchronize tenant HelmReleases and propagate configuration changes. (@klinch0 in https://github.com/cozystack/cozystack/pull/870)
-->

### Virtual Machine Improvements

**Configurable KubeVirt CPU Overcommit**: The CPU allocation ratio in KubeVirt (how virtual CPUs are overcommitted relative to physical) is now configurable
via the `cpu-allocation-ratio` value in the Cozystack configmap.
This means Cozystack administrators can now tune CPU overcommitment for VMs to balance performance vs. density.
(@lllamnyp in https://github.com/cozystack/cozystack/pull/905)
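
A hedged sketch of that setting (the key name comes from the release notes; the ConfigMap name `cozystack` and namespace `cozy-system` are the standard Cozystack locations, and the ratio value is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack        # standard Cozystack configuration ConfigMap
  namespace: cozy-system
data:
  cpu-allocation-ratio: "10"   # one physical CPU may back up to 10 virtual CPUs (illustrative value)
```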

**KubeVirt VM Export**: Cozystack now allows exporting KubeVirt virtual machines.
This feature, enabled via KubeVirt's `VirtualMachineExport` capability, lets users snapshot or back up VM images.
(@kvaps in https://github.com/cozystack/cozystack/pull/808)
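
For illustration, upstream KubeVirt models an export as a `VirtualMachineExport` object; a minimal hedged sketch (the API version follows upstream KubeVirt and may differ between releases; the VM name is hypothetical):

```yaml
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-vm-export
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: example-vm     # hypothetical VM to export
```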

**Support for various storage classes in Virtual Machines**: The `virtual-machine` application (since version 0.9.2) lets you pick any StorageClass for a VM's
system disk instead of relying on a hard-coded PVC.
Refer to values `systemDisk.storage` and `systemDisk.storageClass` in the [application's configs](https://cozystack.io/docs/reference/applications/virtual-machine/#common-parameters).
(@kvaps in https://github.com/cozystack/cozystack/pull/974)
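
A short sketch of those values (the keys come from the application reference; the size and class name are illustrative):

```yaml
systemDisk:
  storage: 20Gi               # size of the VM's system disk (illustrative)
  storageClass: replicated    # any StorageClass available in the cluster (illustrative name)
```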

<!--
* [kubevirt] Enable exporting VMs. (@kvaps in https://github.com/cozystack/cozystack/pull/808)
* [kubevirt] Make KubeVirt's CPU allocation ratio configurable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/905)
* [virtual-machine] Add support for various storages. (@kvaps in https://github.com/cozystack/cozystack/pull/974)
-->

### Other Features and Improvements

* [platform] Introduce options `expose-services`, `expose-ingress`, and `expose-external-ips` to the ingress service. (@kvaps in https://github.com/cozystack/cozystack/pull/929)
* [cozystack-controller] Record the IP address pool and storage class in Workload objects. (@lllamnyp in https://github.com/cozystack/cozystack/pull/831)
* [apps] Remove user-facing config of limits and requests. (@lllamnyp in https://github.com/cozystack/cozystack/pull/935)

## New Release Lifecycle

The Cozystack release lifecycle is changing to provide a more stable and predictable lifecycle to customers running Cozystack in mission-critical environments.

* **Gradual Release with Alpha, Beta, and Release Candidates**: Cozystack will now publish pre-release versions (alpha, beta, release candidates) before a stable release.
For v0.31.0 itself, the team published three release candidates before the stable release.
This allows more testing and feedback before marking a release as stable.

* **Prolonged Release Support with Patch Versions**: After the initial `vX.Y.0` release, a long-lived branch `release-X.Y` will be created to backport fixes.
For example, with 0.31.0's release, a `release-0.31` branch will track patch fixes (`0.31.x`).
This strategy lets Cozystack users receive timely patch releases and updates with minimal risk.

To implement these changes, we have rebuilt our CI/CD workflows and introduced automation, enabling automatic backports.
You can read more about how it's implemented in the Development section below.

For more information, read the [Cozystack Release Workflow](https://github.com/cozystack/cozystack/blob/main/docs/release.md) documentation.

## Fixes

* [virtual-machine] Add GPU names to the virtual machine specifications. (@kvaps in https://github.com/cozystack/cozystack/pull/862)
* [virtual-machine] Count Workload resources for pods by requests, not limits. Other improvements to VM resource tracking. (@lllamnyp in https://github.com/cozystack/cozystack/pull/904)
* [virtual-machine] Set PortList method by default. (@kvaps in https://github.com/cozystack/cozystack/pull/996)
* [virtual-machine] Specify ports even for wholeIP mode. (@kvaps in https://github.com/cozystack/cozystack/pull/1000)
* [platform] Fix installing HelmReleases on initial setup. (@kvaps in https://github.com/cozystack/cozystack/pull/833)
* [platform] Migration scripts update the Kubernetes ConfigMap with the current stack version for improved version tracking. (@klinch0 in https://github.com/cozystack/cozystack/pull/840)
* [platform] Reduce requested CPU and RAM for the `kamaji` provider. (@klinch0 in https://github.com/cozystack/cozystack/pull/825)
* [platform] Improve the reconciliation loop logic for the Cozystack system HelmReleases. (@klinch0 in https://github.com/cozystack/cozystack/pull/809 and https://github.com/cozystack/cozystack/pull/810, @kvaps in https://github.com/cozystack/cozystack/pull/811)
* [platform] Remove extra dependencies for the Piraeus operator. (@klinch0 in https://github.com/cozystack/cozystack/pull/856)
* [platform] Refactor dashboard values. (@kvaps in https://github.com/cozystack/cozystack/pull/928, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/952)
* [platform] Make the FluxCD artifact disabled by default. (@klinch0 in https://github.com/cozystack/cozystack/pull/964)
* [kubernetes] Update garbage collection of HelmReleases in tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/835)
* [kubernetes] Fix merging `valuesOverride` for tenant clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/879)
* [kubernetes] Fix `ubuntu-container-disk` tag. (@kvaps in https://github.com/cozystack/cozystack/pull/887)
* [kubernetes] Refactor Helm manifests for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/866)
* [kubernetes] Fix the Ingress-NGINX dependency on Cert-Manager. (@kvaps in https://github.com/cozystack/cozystack/pull/976)
* [kubernetes, apps] Enable `topologySpreadConstraints` for tenant Kubernetes clusters and fix it for managed PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/995)
* [tenant] Fix an issue with accessing external IPs of a cluster from the cluster itself. (@kvaps in https://github.com/cozystack/cozystack/pull/854)
* [cluster-api] Remove the no longer necessary workaround for Kamaji. (@kvaps in https://github.com/cozystack/cozystack/pull/867, patched in https://github.com/cozystack/cozystack/pull/956)
* [monitoring] Remove legacy label "POD" from the exclude filter in metrics. (@xy2 in https://github.com/cozystack/cozystack/pull/826)
* [monitoring] Refactor the management etcd monitoring config. Introduce a migration script for updating monitoring resources (`kube-rbac-proxy` daemonset). (@lllamnyp in https://github.com/cozystack/cozystack/pull/799 and https://github.com/cozystack/cozystack/pull/830)
* [monitoring] Fix VerticalPodAutoscaler resource allocation for VMagent. (@klinch0 in https://github.com/cozystack/cozystack/pull/820)
* [postgres] Remove duplicated `template` entry from the backup manifest. (@etoshutka in https://github.com/cozystack/cozystack/pull/872)
* [kube-ovn] Fix versions mapping in Makefile. (@kvaps in https://github.com/cozystack/cozystack/pull/883)
* [dx] Automatically detect the version for migrations in installer.sh. (@kvaps in https://github.com/cozystack/cozystack/pull/837)
* [dx] Remove `version_map` and building for library charts. (@kvaps in https://github.com/cozystack/cozystack/pull/998)
* [docs] Review the tenant Kubernetes cluster docs. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/969)
* [docs] Explain that tenants cannot have dashes in their names. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/980)

## Dependencies

* MetalLB images are now built in-tree, based on version 0.14.9 with additional critical patches. (@lllamnyp in https://github.com/cozystack/cozystack/pull/945)
* Update Kubernetes to v1.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/949)
* Update Talos Linux to v1.10.1. (@kvaps in https://github.com/cozystack/cozystack/pull/931)
* Update Cilium to v1.17.3. (@kvaps in https://github.com/cozystack/cozystack/pull/848)
* Update LINSTOR to v1.31.0. (@kvaps in https://github.com/cozystack/cozystack/pull/846)
* Update Kube-OVN to v1.13.11. (@kvaps in https://github.com/cozystack/cozystack/pull/847, @lllamnyp in https://github.com/cozystack/cozystack/pull/922)
* Update tenant Kubernetes to v1.32. (@kvaps in https://github.com/cozystack/cozystack/pull/871)
* Update flux-operator to 0.20.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/880 and https://github.com/cozystack/cozystack/pull/934)
* Update multiple Cluster API components. (@kvaps in https://github.com/cozystack/cozystack/pull/867 and https://github.com/cozystack/cozystack/pull/947)
* Update KamajiControlPlane to edge-25.4.1. (@kvaps in https://github.com/cozystack/cozystack/pull/953, fixed by @nbykov0 in https://github.com/cozystack/cozystack/pull/983)
* Update cert-manager to v1.17.2. (@kvaps in https://github.com/cozystack/cozystack/pull/975)

## Documentation

* [Installing Talos in Air-Gapped Environment](https://cozystack.io/docs/operations/talos/configuration/air-gapped/):
a new guide for configuring and bootstrapping Talos Linux clusters in air-gapped environments.
(@klinch0 in https://github.com/cozystack/website/pull/203)

* [Cozystack Bundles](https://cozystack.io/docs/guides/bundles/): a new page in the learning section explaining how Cozystack bundles work and how to choose a bundle.
(@NickVolynkin in https://github.com/cozystack/website/pull/188, https://github.com/cozystack/website/pull/189, and others;
updated by @kvaps in https://github.com/cozystack/website/pull/192 and https://github.com/cozystack/website/pull/193)

* [Managed Application Reference](https://cozystack.io/docs/reference/applications/): a set of new pages in the docs, mirroring application docs from the Cozystack dashboard.
(@NickVolynkin in https://github.com/cozystack/website/pull/198, https://github.com/cozystack/website/pull/202, and https://github.com/cozystack/website/pull/204)

* **LINSTOR Networking**: guides on [configuring a dedicated network for LINSTOR](https://cozystack.io/docs/operations/storage/dedicated-network/)
and [configuring the network for distributed storage in a multi-datacenter setup](https://cozystack.io/docs/operations/stretched/linstor-dedicated-network/).
(@xy2, edited by @NickVolynkin in https://github.com/cozystack/website/pull/171, https://github.com/cozystack/website/pull/182, and https://github.com/cozystack/website/pull/184)

### Fixes

* Correct an error in the doc for the command to edit the configmap. (@lb0o in https://github.com/cozystack/website/pull/207)
* Fix the group name in the OIDC docs. (@kingdonb in https://github.com/cozystack/website/pull/179)
* Add a bit more explanation of Docker buildx builders. (@nbykov0 in https://github.com/cozystack/website/pull/187)

## Development, Testing, and CI/CD

### Testing

Improvements:

* Introduce `cozytest` — a new [BATS-based](https://github.com/bats-core/bats-core) testing framework. (@kvaps in https://github.com/cozystack/cozystack/pull/982)

Fixes:

* Fix the `device_ownership_from_security_context` CRI setting. (@dtrdnk in https://github.com/cozystack/cozystack/pull/896)
* Increase timeout durations for `capi` and `keycloak` to improve reliability during e2e tests. (@kvaps in https://github.com/cozystack/cozystack/pull/858)
* Return `genisoimage` to the e2e-test Dockerfile. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/962)

### CI/CD Changes

Improvements:

* Use release branches `release-X.Y` for gathering and releasing fixes after the initial `vX.Y.0` release. (@kvaps in https://github.com/cozystack/cozystack/pull/816)
* Automatically create release branches after the initial `vX.Y.0` release is published. (@kvaps in https://github.com/cozystack/cozystack/pull/886)
* Introduce Release Candidate versions. Automate patch backporting by applying patches from pull requests labeled `[backport]` to the current release branch. (@kvaps in https://github.com/cozystack/cozystack/pull/841 and https://github.com/cozystack/cozystack/pull/901, @nickvolynkin in https://github.com/cozystack/cozystack/pull/890)
* Support alpha and beta pre-releases. (@kvaps in https://github.com/cozystack/cozystack/pull/978)
* Commit changes in release pipelines under `github-actions <github-actions@github.com>`. (@kvaps in https://github.com/cozystack/cozystack/pull/823)
* Describe the Cozystack release workflow. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/817 and https://github.com/cozystack/cozystack/pull/897)

Fixes:

* Improve the check for `versions_map` running on pull requests. (@kvaps and @klinch0 in https://github.com/cozystack/cozystack/pull/836, https://github.com/cozystack/cozystack/pull/842, and https://github.com/cozystack/cozystack/pull/845)
* If the release step was skipped on a tag, skip tests as well. (@kvaps in https://github.com/cozystack/cozystack/pull/822)
* Allow CI to cancel the previous job if a new one is scheduled. (@kvaps in https://github.com/cozystack/cozystack/pull/873)
* Use the correct version name when uploading build assets to the release page. (@kvaps in https://github.com/cozystack/cozystack/pull/876)
* Stop using the `ok-to-test` label to trigger CI in pull requests. (@kvaps in https://github.com/cozystack/cozystack/pull/875)
* Do not run tests in the release building pipeline. (@kvaps in https://github.com/cozystack/cozystack/pull/882)
* Fix release branch creation. (@kvaps in https://github.com/cozystack/cozystack/pull/884)
* Reduce noise in the test logs by suppressing the `wget` progress bar. (@lllamnyp in https://github.com/cozystack/cozystack/pull/865)
* Revert "automatically trigger tests in releasing PR". (@kvaps in https://github.com/cozystack/cozystack/pull/900)
* Force-update the release branch on tagged main commits. (@kvaps in https://github.com/cozystack/cozystack/pull/977)
* Show detailed errors in the `pull-request-release` workflow. (@lllamnyp in https://github.com/cozystack/cozystack/pull/992)

## Community and Maintenance

### Repository Maintenance

Added @klinch0 to CODEOWNERS. (@kvaps in https://github.com/cozystack/cozystack/pull/838)

### New Contributors

* @etoshutka made their first contribution in https://github.com/cozystack/cozystack/pull/872
* @dtrdnk made their first contribution in https://github.com/cozystack/cozystack/pull/896
* @zdenekjanda made their first contribution in https://github.com/cozystack/cozystack/pull/924
* @gwynbleidd2106 made their first contribution in https://github.com/cozystack/cozystack/pull/962

## Full Changelog

See https://github.com/cozystack/cozystack/compare/v0.30.0...v0.31.0
8 docs/changelogs/v0.31.1.md Normal file
@@ -0,0 +1,8 @@

## Fixes

* [build] Update Talos Linux to v1.10.3 and fix assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* [ci] Fix uploading released artifacts to GitHub. (@kvaps in https://github.com/cozystack/cozystack/pull/1009)
* [ci] Separate build and testing jobs. (@kvaps in https://github.com/cozystack/cozystack/pull/1005)
* [docs] Write a full release post for v0.31.1. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/999)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.31.1

12 docs/changelogs/v0.31.2.md Normal file
@@ -0,0 +1,12 @@

## Security

* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062, backported in https://github.com/cozystack/cozystack/pull/1066)

## Fixes

* [platform] Fix dependencies in the `distro-full` bundle. (@klinch0 in https://github.com/cozystack/cozystack/pull/1056, backported in https://github.com/cozystack/cozystack/pull/1064)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031, backported in https://github.com/cozystack/cozystack/pull/1037)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054, backported in https://github.com/cozystack/cozystack/pull/1058)
* [dashboard] Fix a number of issues in the Cozystack Dashboard. (@kvaps in https://github.com/cozystack/cozystack/pull/1042, backported in https://github.com/cozystack/cozystack/pull/1066)
* [apps] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040, backported in https://github.com/cozystack/cozystack/pull/1041)
* [apps] Update built-in documentation and configuration reference for the managed Clickhouse application. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1059, backported in https://github.com/cozystack/cozystack/pull/1065)
71 docs/changelogs/v0.32.0.md Normal file
@@ -0,0 +1,71 @@

Cozystack v0.32.0 is a significant release that brings new features, key fixes, and updates to underlying components.

## Major Features and Improvements

* [platform] Use `cozypkg` instead of Helm. (@kvaps in https://github.com/cozystack/cozystack/pull/1057)
* [platform] Introduce the HelmRelease reconciler for system components. (@kvaps in https://github.com/cozystack/cozystack/pull/1033)
* [kubernetes] Enable using container registry mirrors by tenant Kubernetes clusters. Configure containerd for tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/979, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/1032)
* [platform] Allow users to specify CPU requests in vCPUs. Use a library chart for resource management. (@lllamnyp in https://github.com/cozystack/cozystack/pull/972 and https://github.com/cozystack/cozystack/pull/1025)
* [platform] Annotate all child objects of apps with uniform labels for tracking by WorkloadMonitors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1018 and https://github.com/cozystack/cozystack/pull/1024)
* [platform] Introduce the `cluster-domain` option and un-hardcode `cozy.local`. (@kvaps in https://github.com/cozystack/cozystack/pull/1039)
* [platform] Get the instance type when reconciling a WorkloadMonitor. (https://github.com/cozystack/cozystack/pull/1030)
* [virtual-machine] Add RBAC rules to allow port forwarding in KubeVirt for SSH via `virtctl`. (@mattia-eleuteri in https://github.com/cozystack/cozystack/pull/1027, patched by @klinch0 in https://github.com/cozystack/cozystack/pull/1028)
* [monitoring] Add events and audit inputs. (@kevin880202 in https://github.com/cozystack/cozystack/pull/948)

## Security

* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062)

## Fixes

* [dashboard] Fix a number of issues in the Cozystack Dashboard. (@kvaps in https://github.com/cozystack/cozystack/pull/1042)
* [kafka] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040)
* [cilium] Fix the Gateway API manifest. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/1016)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031)
* [platform] Fix dependencies for the paas-hosted bundle. (@kvaps in https://github.com/cozystack/cozystack/pull/1034)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054)
* [virtual-machine] Fix handling of cloudinit and ssh-key input for the `virtual-machine` and `vm-instance` applications. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1019 and https://github.com/cozystack/cozystack/pull/1020)
* [apps] Fix Clickhouse version parsing. (@kvaps in https://github.com/cozystack/cozystack/commit/28302e776e9d2bb8f424cf467619fa61d71ac49a)
* [apps] Add resource quotas for PostgreSQL jobs and fix the application readme generation check in CI. (@klinch0 in https://github.com/cozystack/cozystack/pull/1051)
* [kube-ovn] Enable database health check. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
* [kubernetes] Fix an upstream issue by updating KubeVirt-CCM. (@kvaps in https://github.com/cozystack/cozystack/pull/1052)
* [kubernetes] Fix resources and introduce a migration when upgrading tenant Kubernetes to v0.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1073)
* [cluster-api] Add a missing migration for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1072)

## Dependencies

* Introduce `cozypkg`, update to v1.1.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1057 and https://github.com/cozystack/cozystack/pull/1063)
* Update flux-operator to 0.22.0 and Flux to 2.6.x. (@kingdonb in https://github.com/cozystack/cozystack/pull/1035)
* Update Talos Linux to v1.10.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* Update Cilium to v1.17.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1046)
* Update MetalLB to v0.15.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1045)
* Update Kube-OVN to v1.13.13. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)

## Documentation

* [Oracle Cloud Infrastructure installation guide](https://cozystack.io/docs/operations/talos/installation/oracle-cloud/). (@kvaps, @lllamnyp, and @NickVolynkin in https://github.com/cozystack/website/pull/168)
* [Cluster configuration with `talosctl`](https://cozystack.io/docs/operations/talos/configuration/talosctl/). (@NickVolynkin in https://github.com/cozystack/website/pull/211)
* [Configuring container registry mirrors for tenant Kubernetes clusters](https://cozystack.io/docs/operations/talos/configuration/air-gapped/#5-configure-container-registry-mirrors-for-tenant-kubernetes). (@klinch0 in https://github.com/cozystack/website/pull/210)
* [Explain application management strategies and available versions for managed applications](https://cozystack.io/docs/guides/applications/). (@NickVolynkin in https://github.com/cozystack/website/pull/219)
* [How to clean up etcd state](https://cozystack.io/docs/operations/faq/#how-to-clean-up-etcd-state). (@gwynbleidd2106 in https://github.com/cozystack/website/pull/214)
* [State that Cozystack is a CNCF Sandbox project](https://github.com/cozystack/cozystack?tab=readme-ov-file#cozystack). (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1055)

## Development, Testing, and CI/CD

* [tests] Add tests for applications `virtual-machine`, `vm-disk`, `vm-instance`, `postgresql`, `mysql`, and `clickhouse`. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1048, patched by @kvaps in https://github.com/cozystack/cozystack/pull/1074)
* [tests] Fix concurrency for the `docker login` action. (@kvaps in https://github.com/cozystack/cozystack/pull/1014)
* [tests] Increase QEMU system disk size in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1011)
* [tests] Increase the waiting timeout for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1038)
* [ci] Separate build and testing jobs in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1005 and https://github.com/cozystack/cozystack/pull/1010)
* [ci] Fix the release assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006 and https://github.com/cozystack/cozystack/pull/1009)

## New Contributors

* @kevin880202 made their first contribution in https://github.com/cozystack/cozystack/pull/948
* @mattia-eleuteri made their first contribution in https://github.com/cozystack/cozystack/pull/1027

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.32.0

<!--
HEAD https://github.com/cozystack/cozystack/commit/3ce6dbe8
-->
38 docs/changelogs/v0.32.1.md Normal file
@@ -0,0 +1,38 @@

## Major Features and Improvements

* [postgres] Introduce new functionality for backup and restore in PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/1086)
* [apps] Refactor resources in managed applications. (@kvaps in https://github.com/cozystack/cozystack/pull/1106)
* [system] Make VMAgent's `extraArgs` tunable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1091)

## Fixes

* [postgres] Escape user and database names. (@kvaps in https://github.com/cozystack/cozystack/pull/1087)
* [tenant] Fix monitoring agents' HelmReleases for tenant clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1079)
* [kubernetes] Wrap cert-manager CRDs in a conditional. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1076)
* [kubernetes] Remove the `useCustomSecretForPatchContainerd` option and enable the behavior by default. (@kvaps in https://github.com/cozystack/cozystack/pull/1104)
* [apps] Increase default resource presets for Clickhouse and Kafka from `nano` to `small`. Update OpenAPI specs and README's. (@kvaps in https://github.com/cozystack/cozystack/pull/1103 and https://github.com/cozystack/cozystack/pull/1105)
* [linstor] Add configurable DRBD network options for connection and timeout settings, replacing scripted logic for detecting devices that lost connection. (@kvaps in https://github.com/cozystack/cozystack/pull/1094)

## Dependencies

* Update cozy-proxy to v0.2.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1081)
* Update Kafka Operator to 0.45.1-rc1. (@kvaps in https://github.com/cozystack/cozystack/pull/1082 and https://github.com/cozystack/cozystack/pull/1102)
* Update Flux Operator to 0.23.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/1078)

## Documentation

* [docs] Release notes for v0.32.0 and two beta versions. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1043)

## Development, Testing, and CI/CD

* [tests] Add tests for Kafka and Redis. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1077)
* [tests] Increase disk space for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1097)
* [tests] Update to Kubernetes v1.33. (@kvaps in https://github.com/cozystack/cozystack/pull/1083)
* [tests] Increase PostgreSQL timeouts. (@kvaps in https://github.com/cozystack/cozystack/pull/1108)
* [tests] Don't wait for the PostgreSQL read-only (`ro`) service. (@kvaps in https://github.com/cozystack/cozystack/pull/1109)
* [ci] Set up a systemd timer to tear down the sandbox. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1092)
* [ci] Split the testing job into several jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1075)
* [ci] Run E2E tests as separate parallel jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1093)
* [ci] Refactor GitHub workflows. (@kvaps in https://github.com/cozystack/cozystack/pull/1107)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.0...v0.32.1
91 docs/changelogs/v0.33.0.md Normal file
@@ -0,0 +1,91 @@

> [!WARNING]
> A patch release [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2), fixing a regression in 0.33.0, has been released.
> It is recommended to skip this version and upgrade to [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2) instead.

## Feature Highlights

### Unified CPU and Memory Allocation Management

In version 0.31.0, Cozystack introduced a single-point-of-truth configuration variable, `cpu-allocation-ratio`,
making CPU resource requests and limits uniform in virtual machines managed by KubeVirt.
The new release 0.33.0 introduces `memory-allocation-ratio` and expands both variables to all managed applications and tenant resource quotas.

Resource presets also respect the allocation ratios and behave in the same way as explicit resource definitions.
The new resource definition format is concise and simple for platform users.

```yaml
# resource definition in the configuration
resources:
  cpu: <defined cpu value>
  memory: <defined memory value>
```

It results in Kubernetes resource requests and limits, based on the defined values and the universal allocation ratios:

```yaml
# actual requests and limits, provided to the application
resources:
  limits:
    cpu: <defined cpu value>
    memory: <defined memory value>
  requests:
    cpu: <defined cpu value / cpu-allocation-ratio>
    memory: <defined memory value / memory-allocation-ratio>
```
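
A worked example (the ratio values here are illustrative, not platform defaults): with `cpu-allocation-ratio: 10` and `memory-allocation-ratio: 2`, defining `cpu: 2` and `memory: 4Gi` yields full limits and proportionally reduced requests:

```yaml
# defined by the user
resources:
  cpu: 2
  memory: 4Gi

# resulting requests and limits
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 200m     # 2 / 10
    memory: 2Gi   # 4Gi / 2
```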

When updating from earlier Cozystack versions, resource configuration in managed applications will be automatically migrated to the new format.

### Backing up and Restoring Data in Tenant Kubernetes

One of the main features of this release is backup capability for PVCs in tenant Kubernetes clusters.
It enables platform and tenant administrators to back up and restore data used by services in the tenant clusters.

This new functionality in Cozystack is powered by [Velero](https://velero.io/) and requires external S3-compatible storage.
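
For illustration, once Velero is configured with an S3 backup location, a backup of a namespace's PVC data can be requested with a standard Velero `Backup` object (namespace and names here are hypothetical):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: velero          # Velero's install namespace (may differ per setup)
spec:
  includedNamespaces:
    - example-app            # namespace whose resources and PVC data to back up
  snapshotVolumes: true      # include persistent volume data in the backup
```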

## Support for NFS Storage

Cozystack now supports NFS shared storage through a new optional system module.
See the documentation: https://cozystack.io/docs/operations/storage/nfs/.

## Features and Improvements

* [kubernetes] Enable PVC backups in tenant Kubernetes clusters, powered by [Velero](https://velero.io/). (@klinch0 in https://github.com/cozystack/cozystack/pull/1132)
* [nfs-driver] Enable NFS support by introducing a new optional system module, `nfs-driver`. (@kvaps in https://github.com/cozystack/cozystack/pull/1133)
* [virtual-machine] Configure CPU sockets available to VMs with the `resources.cpu.sockets` configuration value. (@klinch0 in https://github.com/cozystack/cozystack/pull/1131)
* [virtual-machine] Add support for using pre-imported "golden image" disks for virtual machines, enabling faster provisioning by referencing existing images instead of downloading via HTTP. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1112)
* [kubernetes] Add an option to expose the Ingress-NGINX controller in a tenant Kubernetes cluster via LoadBalancer. The new configuration value `exposeMethod` offers a choice of `Proxied` and `LoadBalancer` (see the sketch after this list). (@kvaps in https://github.com/cozystack/cozystack/pull/1114)
* [apps] When updating from earlier Cozystack versions, automatically migrate to the new resource definition format: from `resources.requests.[cpu,memory]` and `resources.limits.[cpu,memory]` to `resources.[cpu,memory]`. (@kvaps in https://github.com/cozystack/cozystack/pull/1127)
* [apps] Give examples of new resource definitions in the managed app README's. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1120)
* [tenant] Respect `cpu-allocation-ratio` in the tenant's `resourceQuotas`. (@kvaps in https://github.com/cozystack/cozystack/pull/1119)
* [cozy-lib] Introduce a helper function to calculate Java heap params based on memory requests and limits. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)
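
A hedged sketch of the `exposeMethod` option mentioned above, as it might appear in a tenant Kubernetes cluster's values (the exact placement of the key is not specified in these notes; see the application reference for the authoritative layout):

```yaml
ingress:
  exposeMethod: LoadBalancer   # or Proxied; the placement of this key is illustrative
```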

## Security

* [monitoring] Disable sign-up in Alerta. (@klinch0 in https://github.com/cozystack/cozystack/pull/1129)

## Fixes

* [platform] Always set resources for managed apps. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1156)
* [platform] Remove the memory limit for the Keycloak deployment. (@klinch0 in https://github.com/cozystack/cozystack/pull/1122)
* [kubernetes] Fix a condition in the ingress template for tenant Kubernetes. (@kvaps in https://github.com/cozystack/cozystack/pull/1143)
* [kubernetes] Fix a deadlock on reattaching a KubeVirt-CSI volume. (@kvaps in https://github.com/cozystack/cozystack/pull/1135)
* [mysql] MySQL applications with a single replica now correctly create a `LoadBalancer` service. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1113)
* [etcd] Fix resources and headless services in the etcd application. (@kvaps in https://github.com/cozystack/cozystack/pull/1128)
* [apps] Enable selecting `resourcePreset` from a drop-down list for all applications by adding an enum of allowed values in the config schema. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1117)
* [apps] Refactor resource presets provided to managed apps by `cozy-lib`. (@kvaps in https://github.com/cozystack/cozystack/pull/1155)
* [keycloak] Calculate and pass Java heap parameters explicitly to prevent OOM errors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)

## Development, Testing, and CI/CD

* [dx] Introduce the cozyreport tool and gather reports in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1139)
* [ci] Use Nexus as a pull-through cache for CI. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1124)
* [ci] Save a list of observed images after each workflow run. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1089)
* [ci] Skip Cozystack tests on PRs that only change the docs. Don't restart CI when a PR is labeled. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1136)
* [dx] Fix Makefile variables for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1115)
* [tests] Introduce self-destructing testing environments. (@kvaps in https://github.com/cozystack/cozystack/pull/1138, https://github.com/cozystack/cozystack/pull/1140, https://github.com/cozystack/cozystack/pull/1141, and https://github.com/cozystack/cozystack/pull/1142)
* [e2e] Retry flaky application tests to improve total test time. (@kvaps in https://github.com/cozystack/cozystack/pull/1123)
* [maintenance] Add a PR template. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1121)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.1...v0.33.0

3 docs/changelogs/v0.33.1.md Normal file
@@ -0,0 +1,3 @@

## Fixes

* [kubevirt-csi] Fix a regression by updating the role of the CSI controller. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1165)
19 docs/changelogs/v0.33.2.md Normal file
@@ -0,0 +1,19 @@

## Features and Improvements

* [vm-instance] Enable running [Windows](https://cozystack.io/docs/operations/virtualization/windows/) and [MikroTik RouterOS](https://cozystack.io/docs/operations/virtualization/mikrotik/) in Cozystack. Add a `bus` option and always specify `bootOrder` for all disks. (@kvaps in https://github.com/cozystack/cozystack/pull/1168)
* [cozystack-api] Refactor the OpenAPI schema and support reading it from config. (@kvaps in https://github.com/cozystack/cozystack/pull/1173)
* [cozystack-api] Enable using singular resource names in the Cozystack API. For example, `kubectl get tenant` is now a valid command, in addition to `kubectl get tenants`. (@kvaps in https://github.com/cozystack/cozystack/pull/1169)
* [postgres] Explain how to back up and restore PostgreSQL using Velero backups. (@klinch0 and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1141)

## Fixes

* [virtual-machine,vm-instance] Adjust the RBAC role to let users read the service associated with the VMs they create. Consequently, users can now see details of the service in the dashboard and therefore read the IP address of the VM. (@klinch0 in https://github.com/cozystack/cozystack/pull/1161)
* [cozystack-api] Fix an error with `resourceVersion` which resulted in the message 'failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid...'. (@kvaps in https://github.com/cozystack/cozystack/pull/1170)
* [cozystack-api] Fix an error in updating lists in Cozystack objects, which resulted in the message "Warning: resource ... is missing the kubectl.kubernetes.io/last-applied-configuration annotation". (@kvaps in https://github.com/cozystack/cozystack/pull/1171)
* [cozystack-api] Disable `strategic-json-patch` support. (@kvaps in https://github.com/cozystack/cozystack/pull/1179)
* [dashboard] Fix the code for removing dashboard comments, which used to mistakenly remove the shebang from cloudInit scripts. (@kvaps in https://github.com/cozystack/cozystack/pull/1175)
* [virtual-machine] Fix cloudInit and sshKeys processing. (@kvaps in https://github.com/cozystack/cozystack/pull/1175 and https://github.com/cozystack/cozystack/commit/da3ee5d0ea9e87529c8adc4fcccffabe8782292e)
* [applications] Fix a typo in preset resource tables in the built-in documentation of managed applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1172)
* [kubernetes] Enable deleting the Velero component from a tenant Kubernetes cluster. (@klinch0 in https://github.com/cozystack/cozystack/pull/1176)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.33.1...v0.33.2
87 docs/changelogs/v0.34.0.md Normal file
@@ -0,0 +1,87 @@

Cozystack v0.34.0 is a stable release.
It focuses on cluster reliability, virtualization capabilities, and enhancements to the Cozystack API.

<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.0
-->

> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).

## Major Features and Improvements

* [kubernetes] Enable users to select Kubernetes versions in tenant clusters. Supported versions range from 1.28 to 1.33, updated to the latest patches. (@lllamnyp and @IvanHunters in https://github.com/cozystack/cozystack/pull/1202)
* [kubernetes] Enable PVC snapshot capability in tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1203)
* [vpa] Implement autoscaling for the Vertical Pod Autoscaler itself, ensuring that VPA has sufficient resources and reducing the number of configuration parameters that platform administrators have to manage. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1198)
* [vm-instance] Enable running [Windows](https://cozystack.io/docs/operations/virtualization/windows/) and [MikroTik RouterOS](https://cozystack.io/docs/operations/virtualization/mikrotik/) in Cozystack. Add a `bus` option and always specify `bootOrder` for all disks. (@kvaps in https://github.com/cozystack/cozystack/pull/1168)
* [cozystack-api] Specify the OpenAPI schema for apps. (@kvaps in https://github.com/cozystack/cozystack/pull/1174)
* [cozystack-api] Refactor the OpenAPI schema and support reading it from config. (@kvaps in https://github.com/cozystack/cozystack/pull/1173)
* [cozystack-api] Enable using singular resource names in the Cozystack API. For example, `kubectl get tenant` is now a valid command, in addition to `kubectl get tenants`. (@kvaps in https://github.com/cozystack/cozystack/pull/1169)
* [postgres] Explain how to back up and restore PostgreSQL using Velero backups. (@klinch0 and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1141)
* [seaweedfs] Support multi-zone configuration for S3 storage. (@kvaps in https://github.com/cozystack/cozystack/pull/1194)
* [dashboard] Put the YAML editor first when deploying and upgrading applications, as the more powerful option. Fix handling of multiline strings. (@kvaps in https://github.com/cozystack/cozystack/pull/1227)

## Security

* [seaweedfs] Ensure that JWT signing keys in the SeaweedFS security configuration remain consistent across Helm upgrades. Resolve an upstream issue. (@kvaps in https://github.com/cozystack/cozystack/pull/1193 and https://github.com/seaweedfs/seaweedfs/pull/6967)

## Fixes

* [cozystack-controller] Fix stale workloads not being deleted when marked for deletion. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1210, @kvaps in https://github.com/cozystack/cozystack/pull/1229)
* [cozystack-controller] Improve reliability when updating HelmRelease objects to prevent unintended changes during reconciliation. (@klinch0 in https://github.com/cozystack/cozystack/pull/1205)
* [kubevirt-csi] Fix a regression by updating the role of the CSI controller. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1165)
* [virtual-machine,vm-instance] Adjust the RBAC role to let users read the service associated with the VMs they create. Consequently, users can now see details of the service in the dashboard and therefore read the IP address of the VM. (@klinch0 in https://github.com/cozystack/cozystack/pull/1161)
* [virtual-machine] Fix cloudInit and sshKeys processing. (@kvaps in https://github.com/cozystack/cozystack/pull/1175 and https://github.com/cozystack/cozystack/commit/da3ee5d0ea9e87529c8adc4fcccffabe8782292e)
* [cozystack-api] Fix an error with `resourceVersion` which resulted in the message 'failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid...'. (@kvaps in https://github.com/cozystack/cozystack/pull/1170)
* [cozystack-api] Fix an error in updating lists in Cozystack objects, which resulted in the message "Warning: resource ... is missing the kubectl.kubernetes.io/last-applied-configuration annotation". (@kvaps in https://github.com/cozystack/cozystack/pull/1171)
* [cozystack-api] Disable `strategic-json-patch` support. (@kvaps in https://github.com/cozystack/cozystack/pull/1179)
* [cozystack-api] Fix non-existing OpenAPI references. (@kvaps in https://github.com/cozystack/cozystack/pull/1208)
* [dashboard] Fix the code for removing dashboard comments, which used to mistakenly remove the shebang from `cloudInit` scripts. (@kvaps in https://github.com/cozystack/cozystack/pull/1175)
* [applications] Reorder configuration values in application README's for better readability. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1214)
* [applications] Disallow selecting `resourcePreset = none` in the visual editor when deploying and upgrading applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1196)
* [applications] Fix a typo in preset resource tables in the built-in documentation of managed applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1172)
* [kubernetes] Enable deleting the Velero component from a tenant Kubernetes cluster. (@klinch0 in https://github.com/cozystack/cozystack/pull/1176)
* [kubernetes] Explicitly mention available K8s versions for tenant clusters in the README. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1212)
* [oidc] Enable deleting the Keycloak service. (@klinch0 in https://github.com/cozystack/cozystack/pull/1178)
* [tenant] Enable deleting extra applications from a tenant. (@klinch0 and @kvaps in https://github.com/cozystack/cozystack/pull/1162)
* [nats] Fix a typo in the application template. (@klinch0 in https://github.com/cozystack/cozystack/pull/1195)
* [postgres] Resolve an issue with the visibility of the PostgreSQL load balancer on the dashboard. (@klinch0 in https://github.com/cozystack/cozystack/pull/1204)
* [objectstorage] Update the COSI controller and sidecar, including fixes from upstream. (@kvaps in https://github.com/cozystack/cozystack/pull/1209, https://github.com/kubernetes-sigs/container-object-storage-interface/pull/89, and https://github.com/kubernetes-sigs/container-object-storage-interface/pull/90)

## Dependencies

* Update FerretDB from v1 to v2.4.0.<br>**Breaking change:** before upgrading FerretDB instances, back up and restore the data following the [migration guide](https://docs.ferretdb.io/migration/migrating-from-v1/). (@kvaps in https://github.com/cozystack/cozystack/pull/1206)
* Update Talos Linux to v1.10.5. (@kvaps in https://github.com/cozystack/cozystack/pull/1186)
* Update LINSTOR to v1.31.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1180)
* Update KubeVirt to v1.5.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1183)
* Update CDI to v1.62.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1183)
* Update Flux Operator to 0.24.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/1167)
* Update Kamaji to edge-25.7.1. (@kvaps in https://github.com/cozystack/cozystack/pull/1184)
* Update Kube-OVN to v1.13.14. (@kvaps in https://github.com/cozystack/cozystack/pull/1182)
* Update Cilium to v1.17.5. (@kvaps in https://github.com/cozystack/cozystack/pull/1181)
* Update MariaDB Operator to v0.38.1. (@kvaps in https://github.com/cozystack/cozystack/pull/1188)
* Update SeaweedFS to v3.94. (@kvaps in https://github.com/cozystack/cozystack/pull/1194)

## Documentation

* [Updated Cozystack Roadmap and Backlog for 2024-2026](https://cozystack.io/docs/roadmap/). (@tym83 and @kvapsova in https://github.com/cozystack/website/pull/249)
* [Running Windows VMs](https://cozystack.io/docs/operations/virtualization/windows/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/246)
* [Running MikroTik RouterOS VMs](https://cozystack.io/docs/operations/virtualization/mikrotik/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/247)
* [Public-network Kubernetes Deployment](https://cozystack.io/docs/operations/faq/#public-network-kubernetes-deployment). (@klinch0 and @NickVolynkin in https://github.com/cozystack/website/pull/242)
* [How to allocate space on the system disk for user storage](https://cozystack.io/docs/operations/faq/#how-to-allocate-space-on-system-disk-for-user-storage). (@klinch0 and @NickVolynkin in https://github.com/cozystack/website/pull/242)
* [Resource Management in Cozystack](https://cozystack.io/docs/guides/resource-management/). (@NickVolynkin in https://github.com/cozystack/website/pull/233)
* [Key Concepts of Cozystack](https://cozystack.io/docs/guides/concepts/). (@NickVolynkin in https://github.com/cozystack/website/pull/254)
* [Cozystack Architecture and Platform Stack](https://cozystack.io/docs/guides/platform-stack/). (@NickVolynkin in https://github.com/cozystack/website/pull/252)
* Fixed a parameter in the KubeSpan docs: `cluster.discovery.enabled = true`. (@lb0o in https://github.com/cozystack/website/pull/241)
* Updated the Linux Foundation trademark text on the Cozystack website. (@krook in https://github.com/cozystack/website/pull/251)
* Auto-update the managed applications reference pages. (@NickVolynkin in https://github.com/cozystack/website/pull/243 and https://github.com/cozystack/website/pull/245)

## Development, Testing, and CI/CD

* [ci] Improve the workflow for contributors submitting PRs from forks. Use the Oracle Cloud Infrastructure Registry for non-release PRs, bypassing restrictions that prevent pushing to ghcr.io with the default GitHub token. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1226)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.33.0...v0.34.0
15 docs/changelogs/v0.34.1.md Normal file
@@ -0,0 +1,15 @@

<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.1
-->

> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).

## Fixes

* [kubernetes] Fix a regression in `volumesnapshotclass` installation from https://github.com/cozystack/cozystack/pull/1203. (@kvaps in https://github.com/cozystack/cozystack/pull/1238)
* [objectstorage] Fix building objectstorage images. (@kvaps in https://github.com/cozystack/cozystack/commit/a9e9dfca1fadde1bf2b4e100753e0731bbcfe923)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.34.1
14 docs/changelogs/v0.34.2.md Normal file
@@ -0,0 +1,14 @@

<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.2
-->

> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).

## Fixes

* [objectstorage] Fix recording the image in objectstorage. (@kvaps in https://github.com/cozystack/cozystack/commit/4d9a8389d6bc7e86d63dd976ec853b374a91a637)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.1...v0.34.2
13 docs/changelogs/v0.34.3.md Normal file
@@ -0,0 +1,13 @@

<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.3
-->

## Fixes

* [tenant] Fix the tenant network policy to allow traffic to additional tenant-related services across namespace hierarchies. (@klinch0 in https://github.com/cozystack/cozystack/pull/1232, backported in https://github.com/cozystack/cozystack/pull/1272)
* [kubernetes] Add a dependency for the snapshot CRD and a migration to the latest version. (@kvaps in https://github.com/cozystack/cozystack/pull/1275, backported in https://github.com/cozystack/cozystack/pull/1279)
* [seaweedfs] Add support for whitelisting and exporting via nginx-ingress. Update cosi-driver. (@kvaps in https://github.com/cozystack/cozystack/pull/1277)
* [kubevirt] Fix building the KubeVirt CCM. (@kvaps in commit 3c7e256906e1dbb0f957dc3a205fa77a147d419d)
* [virtual-machine] Fix a regression with the field `optional=true`. (@kvaps in https://github.com/cozystack/cozystack/commit/01053f7c3180d1bd045d7c5fb949984c2bdaf19d)

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.2...v0.34.3
Some files were not shown because too many files have changed in this diff.