Mirror of https://github.com/ccfos/nightingale.git, synced 2026-03-03 06:29:16 +00:00

Compare commits: notify_plu...dashboard (361 commits)
Commit range: 8358ab4b81 … 17c7361620
.gitignore (vendored, 4 changes)

```diff
@@ -41,6 +41,8 @@ _test
 /docker/pub
 /docker/n9e
 /docker/mysqldata
+/docker/experience_pg_vm/pgdata
+/etc.local
 
 .alerts
 .idea
@@ -52,4 +54,4 @@ _test
 queries.active
 
 /n9e-*
-
+n9e.sql
```
.goreleaser.yml

```diff
@@ -15,7 +15,7 @@ builds:
     hooks:
       pre:
         - ./fe.sh
-    main: ./src/
+    main: ./cmd/center/
     binary: n9e
     env:
       - CGO_ENABLED=0
@@ -26,12 +26,54 @@ builds:
       - arm64
     ldflags:
       - -s -w
-      - -X github.com/didi/nightingale/v5/src/pkg/version.VERSION={{ .Tag }}-{{.Commit}}
+      - -X github.com/ccfos/nightingale/v6/pkg/version.Version={{ .Tag }}-{{.Commit}}
+  - id: build-cli
+    main: ./cmd/cli/
+    binary: n9e-cli
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - amd64
+      - arm64
+    ldflags:
+      - -s -w
+      - -X github.com/ccfos/nightingale/v6/pkg/version.Version={{ .Tag }}-{{.Commit}}
+  - id: build-alert
+    main: ./cmd/alert/
+    binary: n9e-alert
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - amd64
+      - arm64
+    ldflags:
+      - -s -w
+      - -X github.com/ccfos/nightingale/v6/pkg/version.Version={{ .Tag }}-{{.Commit}}
+  - id: build-pushgw
+    main: ./cmd/pushgw/
+    binary: n9e-pushgw
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - amd64
+      - arm64
+    ldflags:
+      - -s -w
+      - -X github.com/ccfos/nightingale/v6/pkg/version.Version={{ .Tag }}-{{.Commit}}
 
 archives:
   - id: n9e
+    builds:
+      - build
+      - build-cli
+      - build-alert
+      - build-pushgw
     format: tar.gz
     format_overrides:
       - goos: windows
@@ -42,6 +84,9 @@ archives:
       - docker/*
       - etc/*
       - pub/*
+      - integrations/*
+      - cli/*
+      - n9e.sql
 
 release:
   github:
@@ -59,6 +104,8 @@ dockers:
     dockerfile: docker/Dockerfile.goreleaser
     extra_files:
      - pub
      - etc
+     - integrations
+    use: buildx
     build_flag_templates:
       - "--platform=linux/amd64"
@@ -68,9 +115,11 @@ dockers:
     goarch: arm64
     ids:
       - build
-    dockerfile: docker/Dockerfile.goreleaser
+    dockerfile: docker/Dockerfile.goreleaser.arm64
     extra_files:
      - pub
      - etc
+     - integrations
+    use: buildx
     build_flag_templates:
       - "--platform=linux/arm64/v8"
```
LICENSE (2 changes)

```diff
@@ -430,4 +430,4 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 
 See the License for the specific language governing permissions and
-limitations under the License.
+limitations under the License.
```
Makefile (47 changes)

```diff
@@ -1,50 +1,33 @@
 .PHONY: start build
 
 NOW = $(shell date -u '+%Y%m%d%I%M%S')
 
 APP = n9e
 SERVER_BIN = $(APP)
 ROOT:=$(shell pwd -P)
 GIT_COMMIT:=$(shell git --work-tree ${ROOT} rev-parse 'HEAD^{commit}')
 _GIT_VERSION:=$(shell git --work-tree ${ROOT} describe --tags --abbrev=14 "${GIT_COMMIT}^{commit}" 2>/dev/null)
 TAG=$(shell echo "${_GIT_VERSION}" | awk -F"-" '{print $$1}')
 RELEASE_VERSION:="$(TAG)-$(GIT_COMMIT)"
 
-# RELEASE_ROOT = release
-# RELEASE_SERVER = release/${APP}
-# GIT_COUNT = $(shell git rev-list --all --count)
-# GIT_HASH = $(shell git rev-parse --short HEAD)
-# RELEASE_TAG = $(RELEASE_VERSION).$(GIT_COUNT).$(GIT_HASH)
-
 all: build
 
 build:
-	go build -ldflags "-w -s -X github.com/didi/nightingale/v5/src/pkg/version.VERSION=$(RELEASE_VERSION)" -o $(SERVER_BIN) ./src
+	go build -ldflags "-w -s -X github.com/ccfos/nightingale/v6/pkg/version.Version=$(RELEASE_VERSION)" -o n9e ./cmd/center/main.go
 
-build-linux:
-	GOOS=linux GOARCH=amd64 go build -ldflags "-w -s -X github.com/didi/nightingale/v5/src/pkg/version.VERSION=$(RELEASE_VERSION)" -o $(SERVER_BIN) ./src
+build-alert:
+	go build -ldflags "-w -s -X github.com/ccfos/nightingale/v6/pkg/version.Version=$(RELEASE_VERSION)" -o n9e-alert ./cmd/alert/main.go
 
-# start:
-# 	@go run -ldflags "-X main.VERSION=$(RELEASE_TAG)" ./cmd/${APP}/main.go web -c ./configs/config.toml -m ./configs/model.conf --menu ./configs/menu.yaml
-run_webapi:
-	nohup ./n9e webapi > webapi.log 2>&1 &
+build-pushgw:
+	go build -ldflags "-w -s -X github.com/ccfos/nightingale/v6/pkg/version.Version=$(RELEASE_VERSION)" -o n9e-pushgw ./cmd/pushgw/main.go
 
-run_server:
-	nohup ./n9e server > server.log 2>&1 &
+build-cli:
+	go build -ldflags "-w -s -X github.com/ccfos/nightingale/v6/pkg/version.Version=$(RELEASE_VERSION)" -o n9e-cli ./cmd/cli/main.go
 
-# swagger:
-# 	@swag init --parseDependency --generalInfo ./cmd/${APP}/main.go --output ./internal/app/swagger
+run:
+	nohup ./n9e > n9e.log 2>&1 &
 
-# wire:
-# 	@wire gen ./internal/app
+run_alert:
+	nohup ./n9e-alert > n9e-alert.log 2>&1 &
 
-# test:
-# 	cd ./internal/app/test && go test -v
+run_pushgw:
+	nohup ./n9e-pushgw > n9e-pushgw.log 2>&1 &
 
-# clean:
-# 	rm -rf data release $(SERVER_BIN) internal/app/test/data cmd/${APP}/data
-
 pack: build
 	rm -rf $(APP)-$(RELEASE_VERSION).tar.gz
 	tar -zcvf $(APP)-$(RELEASE_VERSION).tar.gz docker etc $(SERVER_BIN) pub/font pub/index.html pub/assets pub/image
+
+release:
+	goreleaser --skip-validate --skip-publish --snapshot
```
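Both the Makefile targets and the goreleaser `ldflags` use Go's `-X` linker flag to stamp the version string into the binary at build time. A minimal sketch of the receiving side; in the repo the real variable is `github.com/ccfos/nightingale/v6/pkg/version.Version`, while `main.Version` below is a stand-in so the example is self-contained:

```go
// Stand-in for pkg/version: -X can only override a package-level
// string *var* (not a const, and not a computed value).
package main

import "fmt"

// Version defaults to "unknown" and is overwritten at link time with:
//   go build -ldflags "-X main.Version=v6.0.0-abc123" .
var Version = "unknown"

func versionString() string { return "version: " + Version }

func main() {
	fmt.Println(versionString())
}
```

Built without flags this prints `version: unknown`; built with the `-X` flag it prints the injected tag-commit string, which is exactly what `$(RELEASE_VERSION)` and `{{ .Tag }}-{{.Commit}}` supply in the Makefile and goreleaser config.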
README.md (148 changes)

````diff
@@ -1,7 +1,6 @@
 <p align="center">
   <a href="https://github.com/ccfos/nightingale">
-  <img src="doc/img/ccf-n9e.png" alt="nightingale - cloud native monitoring" width="240" /></a>
-  <p align="center">夜莺是一款开源的云原生监控系统,采用 all-in-one 的设计,提供企业级的功能特性,开箱即用的产品体验。推荐升级您的 Prometheus + AlertManager + Grafana 组合方案到夜莺</p>
+  <img src="doc/img/nightingale_logo_h.png" alt="nightingale - cloud native monitoring" width="240" /></a>
 </p>
@@ -11,101 +10,129 @@
   <a href="https://hub.docker.com/u/flashcatcloud">
     <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
   <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
   <img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
   <img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
   <img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
   <a href="https://github.com/ccfos/nightingale/graphs/contributors">
     <img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
   <a href="https://n9e-talk.slack.com/">
     <img alt="GitHub contributors" src="https://img.shields.io/badge/join%20slack-%23n9e-brightgreen.svg"/></a>
   <img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
 </p>
+<p align="center">
+  <b>All-in-one</b> 的开源观测平台 <br/>
+  <b>开箱即用</b>,集数据采集、可视化、监控告警于一体 <br/>
+  推荐升级您的 <b>Prometheus + AlertManager + Grafana + ELK + Jaeger</b> 组合方案到夜莺!
+</p>
 
-[English](./README_EN.md) | [中文](./README.md)
+[English](./README_en.md) | [中文](./README.md)
 
-## Highlighted Features
+## 功能和特点
 
 - **开箱即用**
-  - 支持 Docker、Helm Chart 等多种部署方式,内置多种监控大盘、快捷视图、告警规则模板,导入即可快速使用,活跃、专业的社区用户也在持续迭代和沉淀更多的最佳实践于产品中;
-- **兼容并包**
-  - 支持 [Categraf](https://github.com/flashcatcloud/categraf)、Telegraf、Grafana-agent 等多种采集器,支持 Prometheus、VictoriaMetrics、M3DB 等各种时序数据库,支持对接 Grafana,与云原生生态无缝集成;
-  - 集数据采集、可视化、监控告警、数据分析于一体,与云原生生态紧密集成,提供开箱即用的企业级监控分析和告警能力;
-- **开放社区**
-  - 托管于[中国计算机学会开源发展委员会](https://www.ccf.org.cn/kyfzwyh/),有[快猫星云](https://flashcat.cloud)的持续投入,和数千名社区用户的积极参与,以及夜莺监控项目清晰明确的定位,都保证了夜莺开源社区健康、长久的发展;
-- **高性能**
+  - 支持 Docker、Helm Chart、云服务等多种部署方式,集数据采集、监控告警、可视化为一体,内置多种监控仪表盘、快捷视图、告警规则模板,导入即可快速使用,**大幅降低云原生监控系统的建设成本、学习成本、使用成本**;
+- **专业告警**
+  - 可视化的告警配置和管理,支持丰富的告警规则,提供屏蔽规则、订阅规则的配置能力,支持告警多种送达渠道,支持告警自愈、告警事件管理等;
+  - **推荐您使用夜莺的同时,无缝搭配[FlashDuty](https://flashcat.cloud/product/flashcat-duty/),实现告警聚合收敛、认领、升级、排班、协同,让告警的触达既高效,又确保告警处理不遗漏、做到件件有回响**。
+- **云原生**
+  - 以交钥匙的方式快速构建企业级的云原生监控体系,支持 [Categraf](https://github.com/flashcatcloud/categraf)、Telegraf、Grafana-agent 等多种采集器,支持 Prometheus、VictoriaMetrics、M3DB、ElasticSearch、Jaeger 等多种数据源,兼容支持导入 Grafana 仪表盘,**与云原生生态无缝集成**;
+- **高性能 高可用**
+  - 得益于夜莺的多数据源管理引擎,和夜莺引擎侧优秀的架构设计,借助于高性能时序库,可以满足数亿时间线的采集、存储、告警分析场景,节省大量成本;
-- **高可用**
-  - 夜莺监控组件均可水平扩展,无单点,已在上千家企业部署落地,经受了严苛的生产实践检验。众多互联网头部公司,夜莺集群机器达百台,处理十亿级时间线,重度使用夜莺监控;
-- **灵活扩展**
-  - 夜莺监控,可部署在1核1G的云主机,可在上百台机器部署集群,可运行在K8s中;也可将时序库、告警引擎等组件下沉到各机房、各region,兼顾边缘部署和中心化管理;
+  - 夜莺监控组件均可水平扩展,无单点,已在上千家企业部署落地,经受了严苛的生产实践检验。众多互联网头部公司,夜莺集群机器达百台,处理数亿级时间线,重度使用夜莺监控;
+- **灵活扩展 中心化管理**
+  - 夜莺监控,可部署在 1 核 1G 的云主机,可在上百台机器集群化部署,可运行在 K8s 中;也可将时序库、告警引擎等组件下沉到各机房、各 Region,兼顾边缘部署和中心化统一管理,**解决数据割裂,缺乏统一视图的难题**;
+- **开放社区**
+  - 托管于[中国计算机学会开源发展委员会](https://www.ccf.org.cn/kyfzwyh/),有[快猫星云](https://flashcat.cloud)和众多公司的持续投入,和数千名社区用户的积极参与,以及夜莺监控项目清晰明确的定位,都保证了夜莺开源社区健康、长久的发展。活跃、专业的社区用户也在持续迭代和沉淀更多的最佳实践于产品中;
 
-> 如果您在使用 Prometheus 过程中,有以下的一个或者多个需求场景,推荐您升级到夜莺:
+## 使用场景
+1. **如果您希望在一个平台中,统一管理和查看 Metrics、Logging、Tracing 数据,推荐你使用夜莺**:
+   - 请参考阅读:[不止于监控,夜莺 V6 全新升级为开源观测平台](http://flashcat.cloud/blog/nightingale-v6-release/)
+2. **如果您在使用 Prometheus 过程中,有以下的一个或者多个需求场景,推荐您无缝升级到夜莺**:
+   - Prometheus、Alertmanager、Grafana 等多个系统较为割裂,缺乏统一视图,无法开箱即用;
+   - 通过修改配置文件来管理 Prometheus、Alertmanager 的方式,学习曲线大,协同有难度;
+   - 数据量过大而无法扩展您的 Prometheus 集群;
+   - 生产环境运行多套 Prometheus 集群,面临管理和使用成本高的问题;
+3. **如果您在使用 Zabbix,有以下的场景,推荐您升级到夜莺**:
+   - 监控的数据量太大,希望有更好的扩展解决方案;
+   - 学习曲线高,多人多团队模式下,希望有更好的协同使用效率;
+   - 微服务和云原生架构下,监控数据的生命周期多变、监控数据维度基数高,Zabbix 数据模型不易适配;
+   - 了解更多Zabbix和夜莺监控的对比,推荐您进一步阅读[Zabbix 和夜莺监控选型对比](https://flashcat.cloud/blog/zabbx-vs-nightingale/)
+4. **如果您在使用 [Open-Falcon](https://github.com/open-falcon/falcon-plus),我们推荐您升级到夜莺:**
+   - 关于 Open-Falcon 和夜莺的详细介绍,请参考阅读:[云原生监控的十个特点和趋势](http://flashcat.cloud/blog/10-trends-of-cloudnative-monitoring/)
+   - 监控系统和可观测平台的区别,请参考阅读:[从监控系统到可观测平台,Gap有多大](https://flashcat.cloud/blog/gap-of-monitoring-to-o11y/)
+5. **我们推荐您使用 [Categraf](https://github.com/flashcatcloud/categraf) 作为首选的监控数据采集器**:
+   - [Categraf](https://github.com/flashcatcloud/categraf) 是夜莺监控的默认采集器,采用开放插件机制和 All-in-one 的设计理念,同时支持 metric、log、trace、event 的采集。Categraf 不仅可以采集 CPU、内存、网络等系统层面的指标,也集成了众多开源组件的采集能力,支持K8s生态。Categraf 内置了对应的仪表盘和告警规则,开箱即用。
 
-- Prometheus、Alertmanager、Grafana 等多个系统较为割裂,缺乏统一视图,无法开箱即用;
-- 通过修改配置文件来管理 Prometheus、Alertmanager 的方式,学习曲线大,协同有难度;
-- 数据量过大而无法扩展您的 Prometheus 集群;
-- 生产环境运行多套 Prometheus 集群,面临管理和使用成本高的问题;
+## 文档
 
-> 如果您在使用 Zabbix,有以下的场景,推荐您升级到夜莺:
+[English Doc](https://n9e.github.io/) | [中文文档](https://flashcat.cloud/docs/)
 
-- 监控的数据量太大,希望有更好的扩展解决方案;
-- 学习曲线高,多人多团队模式下,希望有更好的协同使用效率;
-- 微服务和云原生架构下,监控数据的生命周期多变、监控数据维度基数高,Zabbix 数据模型不易适配;
+## 产品示意图
 
-> 如果您在使用 [Open-Falcon](https://github.com/open-falcon/falcon-plus),我们更推荐您升级到夜莺:
+https://user-images.githubusercontent.com/792850/216888712-2565fcea-9df5-47bd-a49e-d60af9bd76e8.mp4
 
-- 关于 Open-Falcon 和夜莺的详细介绍,请参考阅读[《云原生监控的十个特点和趋势》](https://mp.weixin.qq.com/s?__biz=MzkzNjI5OTM5Nw==&mid=2247483738&idx=1&sn=e8bdbb974a2cd003c1abcc2b5405dd18&chksm=c2a19fb0f5d616a63185cd79277a79a6b80118ef2185890d0683d2bb20451bd9303c78d083c5#rd)。
-
-> 我们推荐您使用 [Categraf](https://github.com/flashcatcloud/categraf) 作为首选的监控数据采集器:
-
-- [Categraf](https://github.com/flashcatcloud/categraf) 是夜莺监控的默认采集器,采用开放插件机制和 all-in-one 的设计,同时支持 metric、log、trace、event 的采集。Categraf 不仅可以采集 CPU、内存、网络等系统层面的指标,也集成了众多开源组件的采集能力,支持K8s生态。Categraf 内置了对应的仪表盘和告警规则,开箱即用。
-
-## Getting Started
-
-- [快速安装](https://mp.weixin.qq.com/s/iEC4pfL1TgjMDOWYh8H-FA)
-- [详细文档](https://n9e.github.io/)
-- [社区分享](https://n9e.github.io/docs/prologue/share/)
-
-## Screenshots
-
-<img src="doc/img/intro.gif" width="680">
-
-## Architecture
-
-<img src="doc/img/arch-product.png" width="680">
-
-夜莺监控可以接收各种采集器上报的监控数据(比如 [Categraf](https://github.com/flashcatcloud/categraf)、telegraf、grafana-agent、Prometheus),并写入多种流行的时序数据库中(可以支持Prometheus、M3DB、VictoriaMetrics、Thanos、TDEngine等),提供告警规则、屏蔽规则、订阅规则的配置能力,提供监控数据的查看能力,提供告警自愈机制(告警触发之后自动回调某个webhook地址或者执行某个脚本),提供历史告警事件的存储管理、分组查看的能力。
-
-<img src="doc/img/arch-system.png" width="680">
-
-夜莺 v5 版本的设计非常简单,核心是 server 和 webapi 两个模块,webapi 无状态,放到中心端,承接前端请求,将用户配置写入数据库;server 是告警引擎和数据转发模块,一般随着时序库走,一个时序库就对应一套 server,每套 server 可以只用一个实例,也可以多个实例组成集群,server 可以接收 Categraf、Telegraf、Grafana-Agent、Datadog-Agent、Falcon-Plugins 上报的数据,写入后端时序库,周期性从数据库同步告警规则,然后查询时序库做告警判断。每套 server 依赖一个 redis。
-
-<img src="doc/img/install-vm.png" width="680">
-
-如果单机版本的 Prometheus 性能不够或容灾较差,我们推荐使用 [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics),VictoriaMetrics 架构较为简单,性能优异,易于部署和运维,架构图如上。VictoriaMetrics 更详尽的文档,还请参考其[官网](https://victoriametrics.com/)。
+## 夜莺架构
+
+### 中心汇聚式部署方案
+
+
+
+夜莺只有一个模块,就是 n9e,可以部署多个 n9e 实例组成集群,n9e 依赖 2 个存储,数据库、Redis,数据库可以使用 MySQL 或 Postgres,自己按需选用。
+
+n9e 提供的是 HTTP 接口,前面负载均衡可以是 4 层的,也可以是 7 层的。一般就选用 Nginx 就可以了。
+
+n9e 这个模块接收到数据之后,需要转发给后端的时序库,相关配置是:
+
+```toml
+[Pushgw]
+LabelRewrite = true
+[[Pushgw.Writers]]
+Url = "http://127.0.0.1:9090/api/v1/write"
+```
````
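The architecture section above notes that n9e exposes a plain HTTP interface and that an L4 or L7 load balancer, typically Nginx, sits in front of the cluster. A minimal sketch, assuming two n9e instances on the default port 17000 (upstream addresses, port, and server name are placeholders to adjust for your deployment):

```nginx
upstream n9e {
    # the n9e instances forming the cluster; all are stateless HTTP servers
    server 10.0.0.1:17000;
    server 10.0.0.2:17000;
}

server {
    listen 80;

    location / {
        proxy_pass http://n9e;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```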
```diff
-## Community
+> 注意:虽然数据源可以在页面配置了,但是上报转发链路,还是需要在配置文件指定。
 
-开源项目要更有生命力,离不开开放的治理架构和源源不断的开发者和用户共同参与,我们致力于建立开放、中立的开源治理架构,吸纳更多来自企业、高校等各方面对云原生监控感兴趣、有热情的开发者,一起打造有活力的夜莺开源社区。关于《夜莺开源项目和社区治理架构(草案)》,请查阅 **[COMMUNITY GOVERNANCE](./doc/community-governance.md)**.
+所有机房的 agent( 比如 Categraf、Telegraf、 Grafana-agent、Datadog-agent ),都直接推数据给 n9e,这个架构最为简单,维护成本最低。当然,前提是要求机房之间网络链路比较好,一般有专线。如果网络链路不好,则要使用下面的部署方式了。
+
+### 边缘下沉式混杂部署方案
+
+
+
+这个图尝试解释 3 种不同的情形,比如 A 机房和中心网络链路很好,Categraf 可以直接汇报数据给中心 n9e 模块,另一个机房网络链路不好,就需要把时序库下沉部署,时序库下沉了,对应的告警引擎和转发网关也都要跟随下沉,这样数据不会跨机房传输,比较稳定。但是心跳还是需要往中心心跳,要不然在对象列表里看不到机器的 CPU、内存使用率。还有的时候,可能是接入的一个已有的 Prometheus,数据采集没有走 Categraf,那此时只需要把 Prometheus 作为数据源接入夜莺即可,可以在夜莺里看图、配告警规则,但是就是在对象列表里看不到,也不能使用告警自愈的功能,问题也不大,核心功能都不受影响。
+
+边缘机房,下沉部署时序库、告警引擎、转发网关的时候,要注意,告警引擎需要依赖数据库,因为要同步告警规则,转发网关也要依赖数据库,因为要注册对象到数据库里去,需要打通相关网络,告警引擎和转发网关都不用Redis,所以无需为 Redis 打通网络。
+
+### VictoriaMetrics 集群架构
+<img src="doc/img/install-vm.png" width="600">
+
+如果单机版本的时序数据库(比如 Prometheus) 性能有瓶颈或容灾较差,我们推荐使用 [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics),VictoriaMetrics 架构较为简单,性能优异,易于部署和运维,架构图如上。VictoriaMetrics 更详尽的文档,还请参考其[官网](https://victoriametrics.com/)。
+
+## 夜莺社区
+
+开源项目要更有生命力,离不开开放的治理架构和源源不断的开发者和用户共同参与,我们致力于建立开放、中立的开源治理架构,吸纳更多来自企业、高校等各方面对云原生监控感兴趣、有热情的开发者,一起打造有活力的夜莺开源社区。关于《夜莺开源项目和社区治理架构(草案)》,请查阅 [COMMUNITY GOVERNANCE](./doc/community-governance.md).
 
 **我们欢迎您以各种方式参与到夜莺开源项目和开源社区中来,工作包括不限于**:
 - 补充和完善文档 => [n9e.github.io](https://n9e.github.io/)
-- 分享您在使用夜莺监控过程中的最佳实践和经验心得 => [文章分享](https://n9e.github.io/docs/prologue/share/)
+- 分享您在使用夜莺监控过程中的最佳实践和经验心得 => [文章分享](https://flashcat.cloud/docs/content/flashcat-monitor/nightingale/share/)
 - 提交产品建议 =》 [github issue](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
 - 提交代码,让夜莺监控更快、更稳、更好用 => [github pull request](https://github.com/didi/nightingale/pulls)
 
 **尊重、认可和记录每一位贡献者的工作**是夜莺开源社区的第一指导原则,我们提倡**高效的提问**,这既是对开发者时间的尊重,也是对整个社区知识沉淀的贡献:
-- 提问之前请先查阅 [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq)
-- 提问之前请先搜索 [github issue](https://github.com/ccfos/nightingale/issues)
-- 我们优先推荐通过提交 github issue 来提问,如果[有问题点击这里](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml) | [有需求建议点击这里](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
-- 最后,我们推荐你加入微信群,针对相关开放式问题,相互交流咨询 (请先加好友:[UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) 备注:夜莺加群+姓名+公司,交流群里会有开发者团队和专业、热心的群友回答问题)
+- 我们使用[论坛](https://answer.flashcat.cloud/)进行交流,有问题可以到这里搜索、提问
+- 我们也推荐你加入微信群,和其他夜莺用户交流经验 (请先加好友:[picobyte](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) 备注:夜莺加群+姓名+公司)
 
-## Who is using
+## Who is using Nightingale
 
 您可以通过在 **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)** 登记您的使用情况,分享您的使用经验。
 
-## Stargazers
+## Stargazers over time
 [](https://starchart.cc/ccfos/nightingale)
 
 ## Contributors
@@ -116,7 +143,6 @@
 ## License
 [Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
 
-## Contact Us
-推荐您关注夜莺监控公众号,及时获取相关产品和社区动态:
+## 加入交流群
 
-<img src="doc/img/n9e-vx-new.png" width="180">
+<img src="doc/img/wecom.png" width="120">
```
README_EN.md (68 changes, deleted)

```diff
@@ -1,68 +0,0 @@
-<img src="doc/img/ccf-n9e.png" width="240">
-
-Nightingale is an enterprise-level cloud-native monitoring system, which can be used as drop-in replacement of Prometheus for alerting and management.
-
-[English](./README_EN.md) | [中文](./README.md)
-
-## Introduction
-Nightingale is an cloud-native monitoring system by All-In-On design, support enterprise-class functional features with an out-of-the-box experience. We recommend upgrading your `Prometheus` + `AlertManager` + `Grafana` combo solution to Nightingale.
-
-- **Multiple prometheus data sources management**: manage all alerts and dashboards in one centralized visually view;
-- **Out-of-the-box alert rule**: built-in multiple alert rules, reuse alert rules template by one-click import with detailed explanation of metrics;
-- **Multiple modes for visualizing data**: out-of-the-box dashboards, instance customize views, expression browser and Grafana integration;
-- **Multiple collection clients**: support using Promethues Exporter、Telegraf、Datadog Agent to collecting metrics;
-- **Integration of multiple storage**: support Prometheus, M3DB, VictoriaMetrics, Influxdb, TDEngine as storage solutions, and original support for PromQL;
-- **Fault self-healing**: support the ability to self-heal from failures by configuring webhook;
-
-#### If you are using Prometheus and have one or more of the following requirement scenarios, it is recommended that you upgrade to Nightingale:
-
-- Multiple systems such as Prometheus, Alertmanager, Grafana, etc. are fragmented and lack a unified view and cannot be used out of the box;
-- The way to manage Prometheus and Alertmanager by modifying configuration files has a big learning curve and is difficult to collaborate;
-- Too much data to scale-up your Prometheus cluster;
-- Multiple Prometheus clusters running in production environments, which faced high management and usage costs;
-
-#### If you are using Zabbix and have the following scenarios, it is recommended that you upgrade to Nightingale:
-
-- Monitoring too much data and wanting a better scalable solution;
-- A high learning curve and a desire for better efficiency of collaborative use in a multi-person, multi-team model;
-- Microservice and cloud-native architectures with variable monitoring data lifecycles and high monitoring data dimension bases, which are not easily adaptable to the Zabbix data model;
-
-#### If you are using [open-falcon](https://github.com/open-falcon/falcon-plus), we recommend you to upgrade to Nightingale:
-- For more information about open-falcon and Nightingale, please refer to read [Ten features and trends of cloud-native monitoring](https://mp.weixin.qq.com/s?__biz=MzkzNjI5OTM5Nw==&mid=2247483738&idx=1&sn=e8bdbb974a2cd003c1abcc2b5405dd18&chksm=c2a19fb0f5d616a63185cd79277a79a6b80118ef2185890d0683d2bb20451bd9303c78d083c5#rd)。
-
-## Quickstart
-- [n9e.github.io/quickstart](https://n9e.github.io/docs/install/compose/)
-
-## Documentation
-- [n9e.github.io](https://n9e.github.io/)
-
-## Example of use
-
-<img src="doc/img/intro.gif" width="680">
-
-## System Architecture
-#### A typical Nightingale deployment architecture:
-<img src="doc/img/arch-system.png" width="680">
-
-#### Typical deployment architecture using VictoriaMetrics as storage:
-<img src="doc/img/install-vm.png" width="680">
-
-## Contact us and feedback questions
-- We recommend that you use [github issue](https://github.com/didi/nightingale/issues) as the preferred channel for issue feedback and requirement submission;
-- You can join our WeChat group
-
-<img src="doc/img/n9e-vx-new.png" width="180">
-
-## Contributing
-We welcome your participation in the Nightingale open source project and open source community in a variety of ways:
-- Feedback on problems and bugs => [github issue](https://github.com/didi/nightingale/issues)
-- Additional and improved documentation => [n9e.github.io](https://n9e.github.io/)
-- Share your best practices and insights on using Nightingale => [User Story](https://github.com/didi/nightingale/issues/897)
-- Join our community events => [Nightingale wechat group](https://s3-gz01.didistatic.com/n9e-pub/image/n9e-wx.png)
-- Submit code to make Nightingale better =>[github PR](https://github.com/didi/nightingale/pulls)
-
-## License
-Nightingale with [Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE) open source license.
```
README_en.md (104 changes, new file)

```diff
@@ -0,0 +1,104 @@
+<p align="center">
+  <a href="https://github.com/ccfos/nightingale">
+  <img src="doc/img/nightingale_logo_h.png" alt="nightingale - cloud native monitoring" width="240" /></a>
+</p>
+
+<p align="center">
+  <img alt="GitHub latest release" src="https://img.shields.io/github/v/release/ccfos/nightingale"/>
+  <a href="https://n9e.github.io">
+    <img alt="Docs" src="https://img.shields.io/badge/docs-get%20started-brightgreen"/></a>
+  <a href="https://hub.docker.com/u/flashcatcloud">
+    <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
+  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
+  <img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
+  <img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
+  <img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
+  <a href="https://github.com/ccfos/nightingale/graphs/contributors">
+    <img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
+  <a href="https://n9e-talk.slack.com/">
+    <img alt="GitHub contributors" src="https://img.shields.io/badge/join%20slack-%23n9e-brightgreen.svg"/></a>
+  <img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
+</p>
+
+<p align="center">
+  An open-source cloud-native monitoring system that is <b>all-in-one</b> <br/>
+  <b>Out-of-the-box</b>, it integrates data collection, visualization, and monitoring alert <br/>
+  We recommend upgrading your <b>Prometheus + AlertManager + Grafana</b> combination to Nightingale!
+</p>
+
+[English](./README.md) | [中文](./README_ZH.md)
+
+## Highlighted Features
+
+- **Out-of-the-box**
+  - Supports multiple deployment methods such as **Docker, Helm Chart, and cloud services**, integrates data collection, monitoring, and alerting into one system, and comes with various monitoring dashboards, quick views, and alert rule templates. **It greatly reduces the construction cost, learning cost, and usage cost of cloud-native monitoring systems**.
+- **Professional Alerting**
+  - Provides visual alert configuration and management, supports various alert rules, offers the ability to configure silence and subscription rules, supports multiple alert delivery channels, and has features such as alert self-healing and event management.
+- **Cloud-Native**
+  - Quickly builds an enterprise-level cloud-native monitoring system through a turnkey approach, supports multiple collectors such as [Categraf](https://github.com/flashcatcloud/categraf), Telegraf, and Grafana-agent, supports multiple data sources such as Prometheus, VictoriaMetrics, M3DB, ElasticSearch, and Jaeger, and is compatible with importing Grafana dashboards. **It seamlessly integrates with the cloud-native ecosystem**.
+- **High Performance and High Availability**
+  - Due to the multi-data-source management engine of Nightingale and its excellent architecture design, and utilizing a high-performance time-series database, it can handle data collection, storage, and alert analysis scenarios with billions of time-series data, saving a lot of costs.
+  - Nightingale components can be horizontally scaled with no single point of failure. It has been deployed in thousands of enterprises and tested in harsh production practices. Many leading Internet companies have used Nightingale for cluster machines with hundreds of nodes, processing billions of time-series data.
+- **Flexible Extension and Centralized Management**
+  - Nightingale can be deployed on a 1-core 1G cloud host, deployed in a cluster of hundreds of machines, or run in Kubernetes. Time-series databases, alert engines, and other components can also be decentralized to various data centers and regions, balancing edge deployment with centralized management. **It solves the problem of data fragmentation and lack of unified views**.
```
- Nightingale can be deployed on a 1-core 1G cloud host, deployed in a cluster of hundreds of machines, or run in Kubernetes. Time-series databases, alert engines, and other components can also be decentralized to various data centers and regions, balancing edge deployment with centralized management. **It solves the problem of data fragmentation and lack of unified views**.
|
||||
|
||||
|
||||
#### If you are using Prometheus and have one or more of the following requirement scenarios, it is recommended that you upgrade to Nightingale:
|
||||
|
||||
- Multiple systems such as Prometheus, Alertmanager, Grafana, etc. are fragmented and lack a unified view and cannot be used out of the box;
|
||||
- The way to manage Prometheus and Alertmanager by modifying configuration files has a big learning curve and is difficult to collaborate;
|
||||
- Too much data to scale-up your Prometheus cluster;
|
||||
- Multiple Prometheus clusters running in production environments, which faced high management and usage costs;
|
||||
|
||||
#### If you are using Zabbix and have the following scenarios, it is recommended that you upgrade to Nightingale:
|
||||
|
||||
- Monitoring too much data and wanting a better scalable solution;
|
||||
- A high learning curve and a desire for better efficiency of collaborative use in a multi-person, multi-team model;
|
||||
- Microservice and cloud-native architectures with variable monitoring data lifecycles and high monitoring data dimension bases, which are not easily adaptable to the Zabbix data model;
|
||||
|
||||
|
||||
#### If you are using [open-falcon](https://github.com/open-falcon/falcon-plus), we recommend you to upgrade to Nightingale:
|
||||
- For more information about open-falcon and Nightingale, please refer to read [Ten features and trends of cloud-native monitoring](https://mp.weixin.qq.com/s?__biz=MzkzNjI5OTM5Nw==&mid=2247483738&idx=1&sn=e8bdbb974a2cd003c1abcc2b5405dd18&chksm=c2a19fb0f5d616a63185cd79277a79a6b80118ef2185890d0683d2bb20451bd9303c78d083c5#rd)。
|
||||
|
||||
## Getting Started
|
||||
|
||||
[English Doc](https://n9e.github.io/) | [中文文档](http://n9e.flashcat.cloud/)
|
||||
|
||||
## Screenshots
|
||||
|
||||
https://user-images.githubusercontent.com/792850/216888712-2565fcea-9df5-47bd-a49e-d60af9bd76e8.mp4
|
||||
|
||||
## Architecture
|
||||
|
||||
<img src="doc/img/arch-product.png" width="600">
|
||||
|
||||
Nightingale monitoring can receive monitoring data reported by various collectors (such as [Categraf](https://github.com/flashcatcloud/categraf) , telegraf, grafana-agent, Prometheus, etc.) and write them to various popular time-series databases (such as Prometheus, M3DB, VictoriaMetrics, Thanos, TDEngine, etc.). It provides configuration capabilities for alert rules, silence rules, and subscription rules, as well as the ability to view monitoring data. It also provides automatic alarm self-healing mechanisms (such as automatically calling back to a webhook address or executing a script after an alarm is triggered), and the ability to store and manage historical alarm events and view them in groups.
|
||||
|
||||
If the performance of a standalone time-series database (such as Prometheus) has bottlenecks or poor disaster recovery, we recommend using [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics). The VictoriaMetrics architecture is relatively simple, has excellent performance, and is easy to deploy and maintain. The architecture diagram is as shown above. For more detailed documentation on VictoriaMetrics, please refer to its [official website](https://victoriametrics.com/).
|
||||
|
||||
**We welcome you to participate in the Nightingale open-source project and community in various ways, including but not limited to**:
|
||||
- Adding and improving documentation => [n9e.github.io](https://n9e.github.io/)
|
||||
- Sharing your best practices and experience in using Nightingale monitoring => [Article sharing]((https://n9e.github.io/docs/prologue/share/))
|
||||
- Submitting product suggestions => [github issue](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
|
||||
- Submitting code to make Nightingale monitoring faster, more stable, and easier to use => [github pull request](https://github.com/didi/nightingale/pulls)
|
||||
|
||||
|
||||
**Respecting, recognizing, and recording the work of every contributor** is the first guiding principle of the Nightingale open-source community. We advocate effective questioning, which not only respects the developer's time but also contributes to the accumulation of knowledge in the entire community
|
||||
- Before asking a question, please first refer to the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq)
|
||||
- We use [GitHub Discussions](https://github.com/ccfos/nightingale/discussions) as the communication forum. You can search and ask questions here.
|
||||
- We also recommend that you join ours [Slack channel](https://n9e-talk.slack.com/) to exchange experiences with other Nightingale users.
|
||||
|
||||
|
||||
## Who is using Nightingale
|
||||
You can register your usage and share your experience by posting on **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)**.
|
||||
|
||||
## Stargazers over time
|
||||
[](https://starchart.cc/ccfos/nightingale)
|
||||
|
||||
## Contributors
|
||||
<a href="https://github.com/ccfos/nightingale/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
|
||||
</a>
|
||||
|
||||
## License
|
||||
[Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
|
||||
73
alert/aconf/conf.go
Normal file
@@ -0,0 +1,73 @@
package aconf

import (
	"path"

	"github.com/toolkits/pkg/runner"
)

type Alert struct {
	EngineDelay int64
	Heartbeat   HeartbeatConfig
	Alerting    Alerting
}

type SMTPConfig struct {
	Host               string
	Port               int
	User               string
	Pass               string
	From               string
	InsecureSkipVerify bool
	Batch              int
}

type HeartbeatConfig struct {
	IP         string
	Interval   int64
	Endpoint   string
	EngineName string
}

type Alerting struct {
	Timeout           int64
	TemplatesDir      string
	NotifyConcurrency int
}

type CallPlugin struct {
	Enable     bool
	PluginPath string
	Caller     string
}

type RedisPub struct {
	Enable        bool
	ChannelPrefix string
	ChannelKey    string
}

type Ibex struct {
	Address       string
	BasicAuthUser string
	BasicAuthPass string
	Timeout       int64
}

func (a *Alert) PreCheck() {
	if a.Alerting.TemplatesDir == "" {
		a.Alerting.TemplatesDir = path.Join(runner.Cwd, "etc", "template")
	}

	if a.Alerting.NotifyConcurrency == 0 {
		a.Alerting.NotifyConcurrency = 10
	}

	if a.Heartbeat.Interval == 0 {
		a.Heartbeat.Interval = 1000
	}

	if a.Heartbeat.EngineName == "" {
		a.Heartbeat.EngineName = "default"
	}
}
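The zero-value defaulting in `PreCheck` can be shown in isolation. The sketch below uses trimmed copies of the `aconf` structs (not the real package, which also depends on `toolkits/pkg/runner`) to demonstrate how config fields left empty fall back to the defaults above:

```go
package main

import "fmt"

// Trimmed stand-ins for the aconf structs; fields omitted in the config file
// arrive as Go zero values and are replaced with defaults by PreCheck.
type Alerting struct {
	TemplatesDir      string
	NotifyConcurrency int
}

type HeartbeatConfig struct {
	Interval   int64
	EngineName string
}

type Alert struct {
	Heartbeat HeartbeatConfig
	Alerting  Alerting
}

func (a *Alert) PreCheck() {
	if a.Alerting.NotifyConcurrency == 0 {
		a.Alerting.NotifyConcurrency = 10 // default: 10 concurrent notifiers
	}
	if a.Heartbeat.Interval == 0 {
		a.Heartbeat.Interval = 1000 // default heartbeat interval (ms)
	}
	if a.Heartbeat.EngineName == "" {
		a.Heartbeat.EngineName = "default"
	}
}

func main() {
	var a Alert // all zero values, as if the config omitted every field
	a.PreCheck()
	fmt.Println(a.Alerting.NotifyConcurrency, a.Heartbeat.Interval, a.Heartbeat.EngineName)
	// prints: 10 1000 default
}
```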
103
alert/alert.go
Normal file
@@ -0,0 +1,103 @@
package alert

import (
	"context"
	"fmt"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/alert/dispatch"
	"github.com/ccfos/nightingale/v6/alert/eval"
	"github.com/ccfos/nightingale/v6/alert/naming"
	"github.com/ccfos/nightingale/v6/alert/process"
	"github.com/ccfos/nightingale/v6/alert/queue"
	"github.com/ccfos/nightingale/v6/alert/record"
	"github.com/ccfos/nightingale/v6/alert/router"
	"github.com/ccfos/nightingale/v6/alert/sender"
	"github.com/ccfos/nightingale/v6/conf"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/httpx"
	"github.com/ccfos/nightingale/v6/pkg/logx"
	"github.com/ccfos/nightingale/v6/prom"
	"github.com/ccfos/nightingale/v6/pushgw/pconf"
	"github.com/ccfos/nightingale/v6/pushgw/writer"
	"github.com/ccfos/nightingale/v6/storage"
)

func Initialize(configDir string, cryptoKey string) (func(), error) {
	config, err := conf.InitConfig(configDir, cryptoKey)
	if err != nil {
		return nil, fmt.Errorf("failed to init config: %v", err)
	}

	logxClean, err := logx.Init(config.Log)
	if err != nil {
		return nil, err
	}

	db, err := storage.New(config.DB)
	if err != nil {
		return nil, err
	}
	ctx := ctx.NewContext(context.Background(), db)

	redis, err := storage.NewRedis(config.Redis)
	if err != nil {
		return nil, err
	}

	syncStats := memsto.NewSyncStats()
	alertStats := astats.NewSyncStats()

	targetCache := memsto.NewTargetCache(ctx, syncStats, redis)
	busiGroupCache := memsto.NewBusiGroupCache(ctx, syncStats)
	alertMuteCache := memsto.NewAlertMuteCache(ctx, syncStats)
	alertRuleCache := memsto.NewAlertRuleCache(ctx, syncStats)
	notifyConfigCache := memsto.NewNotifyConfigCache(ctx)
	dsCache := memsto.NewDatasourceCache(ctx, syncStats)

	promClients := prom.NewPromClient(ctx, config.Alert.Heartbeat)

	externalProcessors := process.NewExternalProcessors()

	Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, dsCache, ctx, promClients, false)

	r := httpx.GinEngine(config.Global.RunMode, config.HTTP)
	rt := router.New(config.HTTP, config.Alert, alertMuteCache, targetCache, busiGroupCache, alertStats, ctx, externalProcessors)
	rt.Config(r)

	httpClean := httpx.Init(config.HTTP, r)

	return func() {
		logxClean()
		httpClean()
	}, nil
}

func Start(alertc aconf.Alert, pushgwc pconf.Pushgw, syncStats *memsto.Stats, alertStats *astats.Stats, externalProcessors *process.ExternalProcessorsType, targetCache *memsto.TargetCacheType, busiGroupCache *memsto.BusiGroupCacheType,
	alertMuteCache *memsto.AlertMuteCacheType, alertRuleCache *memsto.AlertRuleCacheType, notifyConfigCache *memsto.NotifyConfigCacheType, datasourceCache *memsto.DatasourceCacheType, ctx *ctx.Context, promClients *prom.PromClientMap, isCenter bool) {
	userCache := memsto.NewUserCache(ctx, syncStats)
	userGroupCache := memsto.NewUserGroupCache(ctx, syncStats)
	alertSubscribeCache := memsto.NewAlertSubscribeCache(ctx, syncStats)
	recordingRuleCache := memsto.NewRecordingRuleCache(ctx, syncStats)

	go models.InitNotifyConfig(ctx, alertc.Alerting.TemplatesDir)

	naming := naming.NewNaming(ctx, alertc.Heartbeat, isCenter)

	writers := writer.NewWriters(pushgwc)
	record.NewScheduler(alertc, recordingRuleCache, promClients, writers, alertStats)

	eval.NewScheduler(isCenter, alertc, externalProcessors, alertRuleCache, targetCache, busiGroupCache, alertMuteCache, datasourceCache, promClients, naming, ctx, alertStats)

	dp := dispatch.NewDispatch(alertRuleCache, userCache, userGroupCache, alertSubscribeCache, targetCache, notifyConfigCache, alertc.Alerting, ctx)
	consumer := dispatch.NewConsumer(alertc.Alerting, ctx, dp)

	go dp.ReloadTpls()
	go consumer.LoopConsume()

	go queue.ReportQueueSize(alertStats)
	go sender.StartEmailSender(notifyConfigCache.GetSMTP()) // todo
}
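`Initialize` bundles the teardown of each subsystem (here `logxClean` and `httpClean`) into one closure that the caller defers. A minimal stand-alone sketch of that pattern, with illustrative `initLogger`/`initHTTP` helpers that are not part of the real codebase:

```go
package main

import "fmt"

// initLogger mimics logx.Init: it returns its own cleanup func plus an error.
func initLogger() (func(), error) {
	fmt.Println("logger up")
	return func() { fmt.Println("logger down") }, nil
}

// initHTTP mimics httpx.Init: it returns only a cleanup func.
func initHTTP() func() {
	fmt.Println("http up")
	return func() { fmt.Println("http down") }
}

// Initialize wires the subsystems and returns one closure that tears
// everything down, in the same shape as alert.Initialize.
func Initialize() (func(), error) {
	logClean, err := initLogger()
	if err != nil {
		return nil, err
	}
	httpClean := initHTTP()
	return func() {
		logClean()
		httpClean()
	}, nil
}

func main() {
	clean, err := Initialize()
	if err != nil {
		panic(err)
	}
	defer clean()
	fmt.Println("running")
}
```

The caller only ever holds a single `func()`, so adding a new subsystem never changes the shutdown call sites.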
@@ -1,4 +1,4 @@
-package stat
+package astats
 
 import (
 	"github.com/prometheus/client_golang/prometheus"
@@ -6,28 +6,21 @@ import (
 
 const (
 	namespace = "n9e"
-	subsystem = "server"
+	subsystem = "alert"
 )
 
-var (
-	// execution time of each periodic task
-	GaugeCronDuration = prometheus.NewGaugeVec(prometheus.GaugeOpts{
-		Namespace: namespace,
-		Subsystem: subsystem,
-		Name:      "cron_duration",
-		Help:      "Cron method use duration, unit: ms.",
-	}, []string{"cluster", "name"})
-
-	// number of records synced from the database
-	GaugeSyncNumber = prometheus.NewGaugeVec(prometheus.GaugeOpts{
-		Namespace: namespace,
-		Subsystem: subsystem,
-		Name:      "cron_sync_number",
-		Help:      "Cron sync number.",
-	}, []string{"cluster", "name"})
+type Stats struct {
+	CounterSampleTotal   *prometheus.CounterVec
+	CounterAlertsTotal   *prometheus.CounterVec
+	GaugeAlertQueueSize  prometheus.Gauge
+	GaugeSampleQueueSize *prometheus.GaugeVec
+	RequestDuration      *prometheus.HistogramVec
+	ForwardDuration      *prometheus.HistogramVec
+}
 
+func NewSyncStats() *Stats {
 	// total samples received from each ingestion endpoint
-	CounterSampleTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
+	CounterSampleTotal := prometheus.NewCounterVec(prometheus.CounterOpts{
 		Namespace: namespace,
 		Subsystem: subsystem,
 		Name:      "samples_received_total",
@@ -35,7 +28,7 @@ var (
 	}, []string{"cluster", "channel"})
 
 	// total alerts generated
-	CounterAlertsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
+	CounterAlertsTotal := prometheus.NewCounterVec(prometheus.CounterOpts{
 		Namespace: namespace,
 		Subsystem: subsystem,
 		Name:      "alerts_total",
@@ -43,15 +36,15 @@ var (
 	}, []string{"cluster"})
 
 	// length of the in-memory alert event queue
-	GaugeAlertQueueSize = prometheus.NewGaugeVec(prometheus.GaugeOpts{
+	GaugeAlertQueueSize := prometheus.NewGauge(prometheus.GaugeOpts{
 		Namespace: namespace,
 		Subsystem: subsystem,
 		Name:      "alert_queue_size",
 		Help:      "The size of alert queue.",
-	}, []string{"cluster"})
+	})
 
 	// length of each sample forwarding queue
-	GaugeSampleQueueSize = prometheus.NewGaugeVec(prometheus.GaugeOpts{
+	GaugeSampleQueueSize := prometheus.NewGaugeVec(prometheus.GaugeOpts{
 		Namespace: namespace,
 		Subsystem: subsystem,
 		Name:      "sample_queue_size",
@@ -59,7 +52,7 @@ var (
 	}, []string{"cluster", "channel_number"})
 
 	// latency of important requests, e.g. data ingestion, should be tracked
-	RequestDuration = prometheus.NewHistogramVec(
+	RequestDuration := prometheus.NewHistogramVec(
 		prometheus.HistogramOpts{
 			Namespace: namespace,
 			Subsystem: subsystem,
@@ -70,7 +63,7 @@ var (
 	)
 
 	// latency of forwarding samples to the backend TSDB
-	ForwardDuration = prometheus.NewHistogramVec(
+	ForwardDuration := prometheus.NewHistogramVec(
 		prometheus.HistogramOpts{
 			Namespace: namespace,
 			Subsystem: subsystem,
@@ -79,13 +72,8 @@ var (
 		Help: "Forward samples to TSDB. latencies in seconds.",
 	}, []string{"cluster", "channel_number"},
 	)
-)
 
-func Init() {
 	// Register the summary and the histogram with Prometheus's default registry.
 	prometheus.MustRegister(
-		GaugeCronDuration,
-		GaugeSyncNumber,
 		CounterSampleTotal,
 		CounterAlertsTotal,
 		GaugeAlertQueueSize,
@@ -93,4 +81,13 @@ func Init() {
 		RequestDuration,
 		ForwardDuration,
 	)
+
+	return &Stats{
+		CounterSampleTotal:   CounterSampleTotal,
+		CounterAlertsTotal:   CounterAlertsTotal,
+		GaugeAlertQueueSize:  GaugeAlertQueueSize,
+		GaugeSampleQueueSize: GaugeSampleQueueSize,
+		RequestDuration:      RequestDuration,
+		ForwardDuration:      ForwardDuration,
+	}
 }
@@ -1,19 +1,44 @@
-package conv
+package common
 
 import (
 	"fmt"
 	"math"
 	"strings"
 
 	"github.com/prometheus/common/model"
 )
 
-type Vector struct {
+type AnomalyPoint struct {
 	Key       string       `json:"key"`
 	Labels    model.Metric `json:"labels"`
 	Timestamp int64        `json:"timestamp"`
 	Value     float64      `json:"value"`
 	Severity  int          `json:"severity"`
 	Triggered bool         `json:"triggered"`
 }
 
-func ConvertVectors(value model.Value) (lst []Vector) {
+func NewAnomalyPoint(key string, labels map[string]string, ts int64, value float64, severity int) AnomalyPoint {
+	anomalyPointLabels := make(model.Metric)
+	for k, v := range labels {
+		anomalyPointLabels[model.LabelName(k)] = model.LabelValue(v)
+	}
+	anomalyPointLabels[model.MetricNameLabel] = model.LabelValue(key)
+	return AnomalyPoint{
+		Key:       key,
+		Labels:    anomalyPointLabels,
+		Timestamp: ts,
+		Value:     value,
+		Severity:  severity,
+	}
+}
+
+func (v *AnomalyPoint) ReadableValue() string {
+	ret := fmt.Sprintf("%.5f", v.Value)
+	ret = strings.TrimRight(ret, "0")
+	return strings.TrimRight(ret, ".")
+}
+
+func ConvertAnomalyPoints(value model.Value) (lst []AnomalyPoint) {
 	if value == nil {
 		return
 	}
@@ -30,7 +55,7 @@ func ConvertVectors(value model.Value) (lst []Vector) {
 			continue
 		}
 
-		lst = append(lst, Vector{
+		lst = append(lst, AnomalyPoint{
 			Key:       item.Metric.String(),
 			Timestamp: item.Timestamp.Unix(),
 			Value:     float64(item.Value),
@@ -54,7 +79,7 @@ func ConvertVectors(value model.Value) (lst []Vector) {
 			continue
 		}
 
-		lst = append(lst, Vector{
+		lst = append(lst, AnomalyPoint{
 			Key:       item.Metric.String(),
 			Labels:    item.Metric,
 			Timestamp: last.Timestamp.Unix(),
@@ -71,7 +96,7 @@ func ConvertVectors(value model.Value) (lst []Vector) {
 			return
 		}
 
-		lst = append(lst, Vector{
+		lst = append(lst, AnomalyPoint{
 			Key:       "{}",
 			Timestamp: item.Timestamp.Unix(),
 			Value:     float64(item.Value),
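The new `AnomalyPoint.ReadableValue` formats a float with five decimal places, then strips trailing zeros and a trailing dot. The self-contained sketch below reproduces that formatting logic as a free function, outside the real package:

```go
package main

import (
	"fmt"
	"strings"
)

// readableValue mirrors AnomalyPoint.ReadableValue: fixed %.5f rendering,
// then trailing zeros and any leftover dot are trimmed, so 12.50000 becomes
// "12.5" and 3.00000 becomes "3".
func readableValue(v float64) string {
	ret := fmt.Sprintf("%.5f", v)
	ret = strings.TrimRight(ret, "0")
	return strings.TrimRight(ret, ".")
}

func main() {
	fmt.Println(readableValue(12.5))    // 12.5
	fmt.Println(readableValue(3))       // 3
	fmt.Println(readableValue(0.12345)) // 0.12345
}
```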
45
alert/common/key.go
Normal file
@@ -0,0 +1,45 @@
package common

import (
	"fmt"

	"github.com/ccfos/nightingale/v6/models"
)

func RuleKey(datasourceId, id int64) string {
	return fmt.Sprintf("alert-%d-%d", datasourceId, id)
}

func MatchTags(eventTagsMap map[string]string, itags []models.TagFilter) bool {
	for _, filter := range itags {
		value, has := eventTagsMap[filter.Key]
		if !has {
			return false
		}
		if !matchTag(value, filter) {
			return false
		}
	}
	return true
}

func matchTag(value string, filter models.TagFilter) bool {
	switch filter.Func {
	case "==":
		return filter.Value == value
	case "!=":
		return filter.Value != value
	case "in":
		_, has := filter.Vset[value]
		return has
	case "not in":
		_, has := filter.Vset[value]
		return !has
	case "=~":
		return filter.Regexp.MatchString(value)
	case "!~":
		return !filter.Regexp.MatchString(value)
	}
	// unexpected func
	return false
}
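The filter semantics of `MatchTags`/`matchTag` can be exercised stand-alone. The sketch below substitutes a trimmed `TagFilter` for `models.TagFilter` (an assumption; only the `Key`, `Func`, `Value`, `Vset`, and `Regexp` fields used above are reproduced):

```go
package main

import (
	"fmt"
	"regexp"
)

// TagFilter is a trimmed stand-in for models.TagFilter: Func selects the
// operator, Vset backs "in"/"not in", Regexp backs "=~"/"!~".
type TagFilter struct {
	Key    string
	Func   string
	Value  string
	Vset   map[string]struct{}
	Regexp *regexp.Regexp
}

func matchTag(value string, filter TagFilter) bool {
	switch filter.Func {
	case "==":
		return filter.Value == value
	case "!=":
		return filter.Value != value
	case "in":
		_, has := filter.Vset[value]
		return has
	case "not in":
		_, has := filter.Vset[value]
		return !has
	case "=~":
		return filter.Regexp.MatchString(value)
	case "!~":
		return !filter.Regexp.MatchString(value)
	}
	return false // unexpected operator
}

// matchTags requires every filter to match; a missing tag key fails the event.
func matchTags(tags map[string]string, filters []TagFilter) bool {
	for _, f := range filters {
		v, ok := tags[f.Key]
		if !ok || !matchTag(v, f) {
			return false
		}
	}
	return true
}

func main() {
	tags := map[string]string{"service": "mysql", "env": "prod"}
	filters := []TagFilter{
		{Key: "env", Func: "==", Value: "prod"},
		{Key: "service", Func: "=~", Regexp: regexp.MustCompile("^(mysql|redis)$")},
	}
	fmt.Println(matchTags(tags, filters)) // true
}
```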
159
alert/dispatch/consume.go
Normal file
@@ -0,0 +1,159 @@
package dispatch

import (
	"fmt"
	"time"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/queue"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/concurrent/semaphore"
	"github.com/toolkits/pkg/logger"
)

type Consumer struct {
	alerting aconf.Alerting
	ctx      *ctx.Context

	dispatch *Dispatch
}

// NewConsumer creates a Consumer instance
func NewConsumer(alerting aconf.Alerting, ctx *ctx.Context, dispatch *Dispatch) *Consumer {
	return &Consumer{
		alerting: alerting,
		ctx:      ctx,
		dispatch: dispatch,
	}
}

func (e *Consumer) LoopConsume() {
	sema := semaphore.NewSemaphore(e.alerting.NotifyConcurrency)
	duration := time.Duration(100) * time.Millisecond
	for {
		events := queue.EventQueue.PopBackBy(100)
		if len(events) == 0 {
			time.Sleep(duration)
			continue
		}
		e.consume(events, sema)
	}
}

func (e *Consumer) consume(events []interface{}, sema *semaphore.Semaphore) {
	for i := range events {
		if events[i] == nil {
			continue
		}

		event := events[i].(*models.AlertCurEvent)
		sema.Acquire()
		go func(event *models.AlertCurEvent) {
			defer sema.Release()
			e.consumeOne(event)
		}(event)
	}
}

func (e *Consumer) consumeOne(event *models.AlertCurEvent) {
	LogEvent(event, "consume")

	if err := event.ParseRule("rule_name"); err != nil {
		event.RuleName = fmt.Sprintf("failed to parse rule name: %v", err)
	}

	if err := event.ParseRule("rule_note"); err != nil {
		event.RuleNote = fmt.Sprintf("failed to parse rule note: %v", err)
	}

	if err := event.ParseRule("annotations"); err != nil {
		event.Annotations = fmt.Sprintf("failed to parse annotations: %v", err)
	}

	e.persist(event)

	if event.IsRecovered && event.NotifyRecovered == 0 {
		return
	}

	e.dispatch.HandleEventNotify(event, false)
}

func (e *Consumer) persist(event *models.AlertCurEvent) {
	has, err := models.AlertCurEventExists(e.ctx, "hash=?", event.Hash)
	if err != nil {
		logger.Errorf("event_persist_check_exists_fail: %v rule_id=%d hash=%s", err, event.RuleId, event.Hash)
		return
	}

	his := event.ToHis(e.ctx)

	// both alerts and recoveries are recorded in the full history table
	if err := his.Add(e.ctx); err != nil {
		logger.Errorf(
			"event_persist_his_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
			err,
			event.RuleId,
			event.Cluster,
			event.Hash,
			event.TagsJSON,
			event.TriggerTime,
			event.TriggerValue,
		)
	}

	if has {
		// a record exists in the active-alert table; delete it
		err = models.AlertCurEventDelByHash(e.ctx, event.Hash)
		if err != nil {
			logger.Errorf("event_del_cur_fail: %v hash=%s", err, event.Hash)
			return
		}

		if !event.IsRecovered {
			// a recovery is removed from the active list for good; an alert is re-inserted as a new event
			// use his id as cur id
			event.Id = his.Id
			if event.Id > 0 {
				if err := event.Add(e.ctx); err != nil {
					logger.Errorf(
						"event_persist_cur_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
						err,
						event.RuleId,
						event.Cluster,
						event.Hash,
						event.TagsJSON,
						event.TriggerTime,
						event.TriggerValue,
					)
				}
			}
		}

		return
	}

	if event.IsRecovered {
		// no record in alert_cur_event means there was no prior alert, yet a recovery arrived;
		// strange, and in theory should not happen
		return
	}

	// use his id as cur id
	event.Id = his.Id
	if event.Id > 0 {
		if err := event.Add(e.ctx); err != nil {
			logger.Errorf(
				"event_persist_cur_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
				err,
				event.RuleId,
				event.Cluster,
				event.Hash,
				event.TagsJSON,
				event.TriggerTime,
				event.TriggerValue,
			)
		}
	}
}
263
alert/dispatch/dispatch.go
Normal file
@@ -0,0 +1,263 @@
package dispatch

import (
	"bytes"
	"encoding/json"
	"html/template"
	"strconv"
	"sync"
	"time"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/common"
	"github.com/ccfos/nightingale/v6/alert/sender"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"
)

type Dispatch struct {
	alertRuleCache      *memsto.AlertRuleCacheType
	userCache           *memsto.UserCacheType
	userGroupCache      *memsto.UserGroupCacheType
	alertSubscribeCache *memsto.AlertSubscribeCacheType
	targetCache         *memsto.TargetCacheType
	notifyConfigCache   *memsto.NotifyConfigCacheType

	alerting aconf.Alerting

	senders map[string]sender.Sender
	tpls    map[string]*template.Template

	ctx *ctx.Context

	RwLock sync.RWMutex
}

// NewDispatch creates a Dispatch instance
func NewDispatch(alertRuleCache *memsto.AlertRuleCacheType, userCache *memsto.UserCacheType, userGroupCache *memsto.UserGroupCacheType,
	alertSubscribeCache *memsto.AlertSubscribeCacheType, targetCache *memsto.TargetCacheType, notifyConfigCache *memsto.NotifyConfigCacheType,
	alerting aconf.Alerting, ctx *ctx.Context) *Dispatch {
	notify := &Dispatch{
		alertRuleCache:      alertRuleCache,
		userCache:           userCache,
		userGroupCache:      userGroupCache,
		alertSubscribeCache: alertSubscribeCache,
		targetCache:         targetCache,
		notifyConfigCache:   notifyConfigCache,

		alerting: alerting,

		senders: make(map[string]sender.Sender),
		tpls:    make(map[string]*template.Template),

		ctx: ctx,
	}
	return notify
}

func (e *Dispatch) ReloadTpls() error {
	err := e.reloadTpls()
	if err != nil {
		logger.Errorf("failed to reload tpls: %v", err)
	}

	duration := time.Duration(9000) * time.Millisecond
	for {
		time.Sleep(duration)
		if err := e.reloadTpls(); err != nil {
			logger.Warning("failed to reload tpls:", err)
		}
	}
}

func (e *Dispatch) reloadTpls() error {
	tmpTpls, err := models.ListTpls(e.ctx)
	if err != nil {
		return err
	}
	smtp := e.notifyConfigCache.GetSMTP()

	senders := map[string]sender.Sender{
		models.Email:    sender.NewSender(models.Email, tmpTpls, smtp),
		models.Dingtalk: sender.NewSender(models.Dingtalk, tmpTpls, smtp),
		models.Wecom:    sender.NewSender(models.Wecom, tmpTpls, smtp),
		models.Feishu:   sender.NewSender(models.Feishu, tmpTpls, smtp),
		models.Mm:       sender.NewSender(models.Mm, tmpTpls, smtp),
		models.Telegram: sender.NewSender(models.Telegram, tmpTpls, smtp),
	}

	e.RwLock.Lock()
	e.tpls = tmpTpls
	e.senders = senders
	e.RwLock.Unlock()
	return nil
}

// HandleEventNotify is the main logic for handling an event
// event: the alert/recovery event
// isSubscribe: whether this event was produced by a subscription rule
func (e *Dispatch) HandleEventNotify(event *models.AlertCurEvent, isSubscribe bool) {
	rule := e.alertRuleCache.Get(event.RuleId)
	if rule == nil {
		return
	}
	fillUsers(event, e.userCache, e.userGroupCache)

	var (
		// dispatchers mapping the event to notify targets; results are combined with OrMerge
		handlers []NotifyTargetDispatch

		// dispatchers that strip some targets; results are combined with AndMerge,
		// e.g. setting channel=false means nothing is sent through that channel after merging.
		// implement the corresponding Dispatch and add it to interceptors to extend this
		interceptorHandlers []NotifyTargetDispatch
	)
	if isSubscribe {
		handlers = []NotifyTargetDispatch{NotifyGroupDispatch, EventCallbacksDispatch}
	} else {
		handlers = []NotifyTargetDispatch{NotifyGroupDispatch, GlobalWebhookDispatch, EventCallbacksDispatch}
	}

	notifyTarget := NewNotifyTarget()
	// merge subscription targets with OrMerge
	for _, handler := range handlers {
		notifyTarget.OrMerge(handler(rule, event, notifyTarget, e))
	}

	// apply interceptors that remove targets, e.g. departed employees or a temporarily muted channel
	for _, handler := range interceptorHandlers {
		notifyTarget.AndMerge(handler(rule, event, notifyTarget, e))
	}

	// sending: one goroutine handles all deliveries for one event
	go e.Send(rule, event, notifyTarget, isSubscribe)

	// if this event did not come from a subscription rule, evaluate the subscription rules against it
	if !isSubscribe {
		e.handleSubs(event)
	}
}

func (e *Dispatch) handleSubs(event *models.AlertCurEvent) {
	// handle alert subscribes
	subscribes := make([]*models.AlertSubscribe, 0)
	// rule specific subscribes
	if subs, has := e.alertSubscribeCache.Get(event.RuleId); has {
		subscribes = append(subscribes, subs...)
	}
	// global subscribes
	if subs, has := e.alertSubscribeCache.Get(0); has {
		subscribes = append(subscribes, subs...)
	}

	for _, sub := range subscribes {
		e.handleSub(sub, *event)
	}
}

// handleSub handles the event for one subscription rule; note the event is passed
// by value here, because its state is modified further down
func (e *Dispatch) handleSub(sub *models.AlertSubscribe, event models.AlertCurEvent) {
	if sub.IsDisabled() || !sub.MatchCluster(event.DatasourceId) {
		return
	}
	if !common.MatchTags(event.TagsMap, sub.ITags) {
		return
	}
	if sub.ForDuration > (event.TriggerTime - event.FirstTriggerTime) {
		return
	}
	sub.ModifyEvent(&event)
	LogEvent(&event, "subscribe")
	e.HandleEventNotify(&event, true)
}

func (e *Dispatch) Send(rule *models.AlertRule, event *models.AlertCurEvent, notifyTarget *NotifyTarget, isSubscribe bool) {
	for channel, uids := range notifyTarget.ToChannelUserMap() {
		ctx := sender.BuildMessageContext(rule, event, uids, e.userCache)
		e.RwLock.RLock()
		s := e.senders[channel]
		e.RwLock.RUnlock()
		if s == nil {
			logger.Warningf("no sender for channel: %s", channel)
			continue
		}
		logger.Debugf("send event: %s, channel: %s", event.Hash, channel)
		for i := 0; i < len(ctx.Users); i++ {
			logger.Debug("send event to user: ", ctx.Users[i])
		}
		s.Send(ctx)
	}

	// handle event callbacks
	sender.SendCallbacks(e.ctx, notifyTarget.ToCallbackList(), event, e.targetCache, e.notifyConfigCache.GetIbex())

	// handle global webhooks
	sender.SendWebhooks(notifyTarget.ToWebhookList(), event)

	// handle plugin call
	go sender.MayPluginNotify(e.genNoticeBytes(event), e.notifyConfigCache.GetNotifyScript())
}

type Notice struct {
	Event *models.AlertCurEvent `json:"event"`
	Tpls  map[string]string     `json:"tpls"`
}

func (e *Dispatch) genNoticeBytes(event *models.AlertCurEvent) []byte {
	// build notice body with templates
	ntpls := make(map[string]string)

	e.RwLock.RLock()
	defer e.RwLock.RUnlock()
	for filename, tpl := range e.tpls {
		var body bytes.Buffer
		if err := tpl.Execute(&body, event); err != nil {
			ntpls[filename] = err.Error()
		} else {
			ntpls[filename] = body.String()
		}
	}

	notice := Notice{Event: event, Tpls: ntpls}
	stdinBytes, err := json.Marshal(notice)
	if err != nil {
		logger.Errorf("event_notify: failed to marshal notice: %v", err)
		return nil
	}

	return stdinBytes
}

// for alerting
func fillUsers(ce *models.AlertCurEvent, uc *memsto.UserCacheType, ugc *memsto.UserGroupCacheType) {
	gids := make([]int64, 0, len(ce.NotifyGroupsJSON))
	for i := 0; i < len(ce.NotifyGroupsJSON); i++ {
		gid, err := strconv.ParseInt(ce.NotifyGroupsJSON[i], 10, 64)
		if err != nil {
			continue
		}
		gids = append(gids, gid)
|
||||
}
|
||||
|
||||
ce.NotifyGroupsObj = ugc.GetByUserGroupIds(gids)
|
||||
|
||||
uids := make(map[int64]struct{})
|
||||
for i := 0; i < len(ce.NotifyGroupsObj); i++ {
|
||||
ug := ce.NotifyGroupsObj[i]
|
||||
for j := 0; j < len(ug.UserIds); j++ {
|
||||
uids[ug.UserIds[j]] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
ce.NotifyUsersObj = uc.GetByUserIds(mapKeys(uids))
|
||||
}
|
||||
|
||||
func mapKeys(m map[int64]struct{}) []int64 {
|
||||
lst := make([]int64, 0, len(m))
|
||||
for k := range m {
|
||||
lst = append(lst, k)
|
||||
}
|
||||
return lst
|
||||
}
|
||||
@@ -1,7 +1,8 @@
-package engine
+package dispatch

 import (
-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/ccfos/nightingale/v6/models"

 	"github.com/toolkits/pkg/logger"
 )
@@ -17,11 +18,12 @@ func LogEvent(event *models.AlertCurEvent, location string, err ...error) {
 	}

 	logger.Infof(
-		"event(%s %s) %s: rule_id=%d %v%s@%d %s",
+		"event(%s %s) %s: rule_id=%d cluster:%s %v%s@%d %s",
 		event.Hash,
 		status,
 		location,
 		event.RuleId,
+		event.Cluster,
 		event.TagsJSON,
 		event.TriggerValue,
 		event.TriggerTime,
33
alert/dispatch/notify_channel.go
Normal file
@@ -0,0 +1,33 @@
package dispatch

// NotifyChannels channelKey -> bool
type NotifyChannels map[string]bool

func NewNotifyChannels(channels []string) NotifyChannels {
	nc := make(NotifyChannels)
	for _, ch := range channels {
		nc[ch] = true
	}
	return nc
}

func (nc NotifyChannels) OrMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a || b })
}

func (nc NotifyChannels) AndMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a && b })
}

func (nc NotifyChannels) merge(other NotifyChannels, f func(bool, bool) bool) {
	if other == nil {
		return
	}
	for k, v := range other {
		if curV, has := nc[k]; has {
			nc[k] = f(curV, v)
		} else {
			nc[k] = v
		}
	}
}
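For intuition, the merge semantics can be exercised in isolation. This is a minimal standalone sketch: the type and merge logic match notify_channel.go above, but the channel names and values are made up.

```go
package main

import "fmt"

// NotifyChannels maps channelKey -> enabled, as in notify_channel.go.
type NotifyChannels map[string]bool

func (nc NotifyChannels) merge(other NotifyChannels, f func(bool, bool) bool) {
	for k, v := range other {
		if cur, has := nc[k]; has {
			nc[k] = f(cur, v)
		} else {
			nc[k] = v
		}
	}
}

func (nc NotifyChannels) OrMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a || b })
}

func (nc NotifyChannels) AndMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a && b })
}

func main() {
	nc := NotifyChannels{"email": true, "sms": false}
	nc.OrMerge(NotifyChannels{"sms": true, "wecom": true}) // or-join: any true wins, new keys are added
	nc.AndMerge(NotifyChannels{"email": false})            // and-join: a false silences that channel
	fmt.Println(nc["email"], nc["sms"], nc["wecom"])       // false true true
}
```

Note that keys absent from `other` are left untouched, so an AndMerge interceptor only affects the channels it explicitly mentions.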
134
alert/dispatch/notify_target.go
Normal file
@@ -0,0 +1,134 @@
package dispatch

import (
	"strconv"

	"github.com/ccfos/nightingale/v6/models"
)

// NotifyTarget holds every send target: user -> channel flags, webhooks and callbacks;
// backing them with maps also deduplicates entries
type NotifyTarget struct {
	userMap   map[int64]NotifyChannels
	webhooks  map[string]*models.Webhook
	callbacks map[string]struct{}
}

func NewNotifyTarget() *NotifyTarget {
	return &NotifyTarget{
		userMap:   make(map[int64]NotifyChannels),
		webhooks:  make(map[string]*models.Webhook),
		callbacks: make(map[string]struct{}),
	}
}

// OrMerge or-joins the channel maps, which makes composite strategies easy to build,
// e.g. routing on a specific tag
func (s *NotifyTarget) OrMerge(other *NotifyTarget) {
	s.merge(other, NotifyChannels.OrMerge)
}

// AndMerge and-joins the bool values in the channel maps, so notifications can be
// removed per user or per channel. Typical scenarios:
// 1. a user has left the company and should no longer receive alerts
// 2. a notify channel is under maintenance and is temporarily silenced
// 3. on-call redirection, e.g. additionally sending high-severity alerts to responders
// Custom routers can be implemented as business needs dictate
func (s *NotifyTarget) AndMerge(other *NotifyTarget) {
	s.merge(other, NotifyChannels.AndMerge)
}

func (s *NotifyTarget) merge(other *NotifyTarget, f func(NotifyChannels, NotifyChannels)) {
	if other == nil {
		return
	}
	for k, v := range other.userMap {
		if curV, has := s.userMap[k]; has {
			f(curV, v)
		} else {
			s.userMap[k] = v
		}
	}
	for k, v := range other.webhooks {
		s.webhooks[k] = v
	}
	for k, v := range other.callbacks {
		s.callbacks[k] = v
	}
}

// ToChannelUserMap converts userMap (map[uid][channel]bool) into a map[channel][]uid
func (s *NotifyTarget) ToChannelUserMap() map[string][]int64 {
	m := make(map[string][]int64)
	for uid, nc := range s.userMap {
		for ch, send := range nc {
			if send {
				m[ch] = append(m[ch], uid)
			}
		}
	}
	return m
}

func (s *NotifyTarget) ToCallbackList() []string {
	callbacks := make([]string, 0, len(s.callbacks))
	for cb := range s.callbacks {
		callbacks = append(callbacks, cb)
	}
	return callbacks
}

func (s *NotifyTarget) ToWebhookList() []*models.Webhook {
	webhooks := make([]*models.Webhook, 0, len(s.webhooks))
	for _, wh := range s.webhooks {
		webhooks = append(webhooks, wh)
	}
	return webhooks
}

// NotifyTargetDispatch abstracts the routing strategy from an alert event to its receivers.
// rule: the alert rule
// event: the alert event
// prev: the previous routing result; an implementation may modify prev directly, or return
// a new NotifyTarget to be combined via AndMerge/OrMerge
type NotifyTargetDispatch func(rule *models.AlertRule, event *models.AlertCurEvent, prev *NotifyTarget, dispatch *Dispatch) *NotifyTarget

// NotifyGroupDispatch handles the group subscriptions of an alert rule
func NotifyGroupDispatch(rule *models.AlertRule, event *models.AlertCurEvent, prev *NotifyTarget, dispatch *Dispatch) *NotifyTarget {
	groupIds := make([]int64, 0, len(event.NotifyGroupsJSON))
	for _, groupId := range event.NotifyGroupsJSON {
		gid, err := strconv.ParseInt(groupId, 10, 64)
		if err != nil {
			continue
		}
		groupIds = append(groupIds, gid)
	}

	groups := dispatch.userGroupCache.GetByUserGroupIds(groupIds)
	target := NewNotifyTarget()
	for _, group := range groups {
		for _, userId := range group.UserIds {
			target.userMap[userId] = NewNotifyChannels(event.NotifyChannelsJSON)
		}
	}
	return target
}

func GlobalWebhookDispatch(rule *models.AlertRule, event *models.AlertCurEvent, prev *NotifyTarget, dispatch *Dispatch) *NotifyTarget {
	webhooks := dispatch.notifyConfigCache.GetWebhooks()
	target := NewNotifyTarget()
	for _, webhook := range webhooks {
		if !webhook.Enable {
			continue
		}
		target.webhooks[webhook.Url] = webhook
	}
	return target
}

func EventCallbacksDispatch(rule *models.AlertRule, event *models.AlertCurEvent, prev *NotifyTarget, dispatch *Dispatch) *NotifyTarget {
	for _, c := range event.CallbacksJSON {
		if c == "" {
			continue
		}
		prev.callbacks[c] = struct{}{}
	}
	return nil
}
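The routing pipeline (routers or-merged, interceptors and-merged) can be sketched with simplified stand-in types. Everything here is illustrative: the real dispatch functions take `*models.AlertRule`, `*models.AlertCurEvent` and `*NotifyTarget`, while this sketch reduces a target to a uid -> notify? map and uses invented uids.

```go
package main

import "fmt"

// Target is a stand-in for NotifyTarget: uid -> should this user be notified?
type Target map[int64]bool

// Router is a stand-in for NotifyTargetDispatch.
type Router func(prev Target) Target

func main() {
	target := Target{}

	// routers add recipients; their results are or-merged
	routers := []Router{
		func(prev Target) Target { return Target{1: true, 2: true} }, // e.g. group members
		func(prev Target) Target { return Target{3: true} },          // e.g. a tag-based route
	}
	for _, r := range routers {
		for uid, ok := range r(target) {
			target[uid] = target[uid] || ok
		}
	}

	// interceptors remove recipients; their results are and-merged
	// (here: user 2 has left the company)
	interceptors := []Router{
		func(prev Target) Target {
			out := Target{}
			for uid := range prev {
				out[uid] = uid != 2
			}
			return out
		},
	}
	for _, ic := range interceptors {
		for uid, ok := range ic(target) {
			target[uid] = target[uid] && ok
		}
	}

	fmt.Println(target[1], target[2], target[3]) // true false true
}
```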
174
alert/eval/alert_rule.go
Normal file
@@ -0,0 +1,174 @@
package eval

import (
	"context"
	"fmt"
	"time"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/alert/naming"
	"github.com/ccfos/nightingale/v6/alert/process"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/prom"

	"github.com/toolkits/pkg/logger"
)

type Scheduler struct {
	isCenter bool
	// key: hash
	alertRules map[string]*AlertRuleWorker

	ExternalProcessors *process.ExternalProcessorsType

	aconf aconf.Alert

	alertRuleCache  *memsto.AlertRuleCacheType
	targetCache     *memsto.TargetCacheType
	busiGroupCache  *memsto.BusiGroupCacheType
	alertMuteCache  *memsto.AlertMuteCacheType
	datasourceCache *memsto.DatasourceCacheType

	promClients *prom.PromClientMap

	naming *naming.Naming

	ctx   *ctx.Context
	stats *astats.Stats
}

func NewScheduler(isCenter bool, aconf aconf.Alert, externalProcessors *process.ExternalProcessorsType, arc *memsto.AlertRuleCacheType, targetCache *memsto.TargetCacheType,
	busiGroupCache *memsto.BusiGroupCacheType, alertMuteCache *memsto.AlertMuteCacheType, datasourceCache *memsto.DatasourceCacheType, promClients *prom.PromClientMap, naming *naming.Naming,
	ctx *ctx.Context, stats *astats.Stats) *Scheduler {
	scheduler := &Scheduler{
		isCenter:   isCenter,
		aconf:      aconf,
		alertRules: make(map[string]*AlertRuleWorker),

		ExternalProcessors: externalProcessors,

		alertRuleCache:  arc,
		targetCache:     targetCache,
		busiGroupCache:  busiGroupCache,
		alertMuteCache:  alertMuteCache,
		datasourceCache: datasourceCache,

		promClients: promClients,
		naming:      naming,

		ctx:   ctx,
		stats: stats,
	}

	go scheduler.LoopSyncRules(context.Background())
	return scheduler
}

func (s *Scheduler) LoopSyncRules(ctx context.Context) {
	time.Sleep(time.Duration(s.aconf.EngineDelay) * time.Second)
	duration := 9000 * time.Millisecond
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(duration):
			s.syncAlertRules()
		}
	}
}

func (s *Scheduler) syncAlertRules() {
	ids := s.alertRuleCache.GetRuleIds()
	alertRuleWorkers := make(map[string]*AlertRuleWorker)
	externalRuleWorkers := make(map[string]*process.Processor)
	for _, id := range ids {
		rule := s.alertRuleCache.Get(id)
		if rule == nil {
			continue
		}
		if rule.IsPrometheusRule() {
			datasourceIds := s.promClients.Hit(rule.DatasourceIdsJson)
			for _, dsId := range datasourceIds {
				if !naming.DatasourceHashRing.IsHit(dsId, fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
					continue
				}
				ds := s.datasourceCache.GetById(dsId)
				if ds == nil {
					logger.Debugf("datasource %d not found", dsId)
					continue
				}

				if ds.Status != "enabled" {
					logger.Debugf("datasource %d status is %s", dsId, ds.Status)
					continue
				}
				processor := process.NewProcessor(rule, dsId, s.alertRuleCache, s.targetCache, s.busiGroupCache, s.alertMuteCache, s.datasourceCache, s.promClients, s.ctx, s.stats)

				alertRule := NewAlertRuleWorker(rule, dsId, processor, s.promClients, s.ctx)
				alertRuleWorkers[alertRule.Hash()] = alertRule
			}
		} else if rule.IsHostRule() && s.isCenter {
			// all host rules are processed by the center instance
			if !naming.DatasourceHashRing.IsHit(naming.HostDatasource, fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
				continue
			}
			processor := process.NewProcessor(rule, 0, s.alertRuleCache, s.targetCache, s.busiGroupCache, s.alertMuteCache, s.datasourceCache, s.promClients, s.ctx, s.stats)
			alertRule := NewAlertRuleWorker(rule, 0, processor, s.promClients, s.ctx)
			alertRuleWorkers[alertRule.Hash()] = alertRule
		} else {
			// if the rule is not evaluated by the prometheus engine, create it as an externalRule
			for _, dsId := range rule.DatasourceIdsJson {
				ds := s.datasourceCache.GetById(dsId)
				if ds == nil {
					logger.Debugf("datasource %d not found", dsId)
					continue
				}

				if ds.Status != "enabled" {
					logger.Debugf("datasource %d status is %s", dsId, ds.Status)
					continue
				}
				processor := process.NewProcessor(rule, dsId, s.alertRuleCache, s.targetCache, s.busiGroupCache, s.alertMuteCache, s.datasourceCache, s.promClients, s.ctx, s.stats)
				externalRuleWorkers[processor.Key()] = processor
			}
		}
	}

	for hash, rule := range alertRuleWorkers {
		if _, has := s.alertRules[hash]; !has {
			rule.Prepare()
			rule.Start()
			s.alertRules[hash] = rule
		}
	}

	for hash, rule := range s.alertRules {
		if _, has := alertRuleWorkers[hash]; !has {
			rule.Stop()
			delete(s.alertRules, hash)
		}
	}

	s.ExternalProcessors.ExternalLock.Lock()
	for key, processor := range externalRuleWorkers {
		if curProcessor, has := s.ExternalProcessors.Processors[key]; has {
			// the rule exists and the hash is unchanged, so treat it as unmodified;
			// a hash covering more associated data can be implemented here if needed
			if processor.Hash() == curProcessor.Hash() {
				continue
			}
		}

		// the rule is new, or exists with a different hash: trigger the rule update
		processor.RecoverAlertCurEventFromDb()
		s.ExternalProcessors.Processors[key] = processor
	}

	for key := range s.ExternalProcessors.Processors {
		if _, has := externalRuleWorkers[key]; !has {
			delete(s.ExternalProcessors.Processors, key)
		}
	}
	s.ExternalProcessors.ExternalLock.Unlock()
}
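syncAlertRules reconciles the desired worker set against the running one: start what is newly desired, stop what is no longer desired. The diff itself can be sketched generically (rule names here are made up):

```go
package main

import "fmt"

// reconcile returns which keys to start and which to stop when moving the
// running set toward the desired set -- the same diff syncAlertRules applies
// to its alertRules map of workers.
func reconcile(running, desired map[string]bool) (start, stop []string) {
	for k := range desired {
		if !running[k] {
			start = append(start, k)
		}
	}
	for k := range running {
		if !desired[k] {
			stop = append(stop, k)
		}
	}
	return
}

func main() {
	running := map[string]bool{"ruleA": true, "ruleB": true}
	desired := map[string]bool{"ruleB": true, "ruleC": true}
	start, stop := reconcile(running, desired)
	fmt.Println(start, stop) // ruleC gets started, ruleA gets stopped
}
```

Because the map key is the worker's hash (rule id, eval interval, config, datasource), an edited rule shows up as "stop old hash, start new hash" rather than an in-place update.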
268
alert/eval/eval.go
Normal file
@@ -0,0 +1,268 @@
package eval

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/alert/common"
	"github.com/ccfos/nightingale/v6/alert/process"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	promsdk "github.com/ccfos/nightingale/v6/pkg/prom"
	"github.com/ccfos/nightingale/v6/prom"

	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/str"
)

type AlertRuleWorker struct {
	datasourceId int64
	quit         chan struct{}
	inhibit      bool
	severity     int

	rule *models.AlertRule

	processor *process.Processor

	promClients *prom.PromClientMap
	ctx         *ctx.Context
}

func NewAlertRuleWorker(rule *models.AlertRule, datasourceId int64, processor *process.Processor, promClients *prom.PromClientMap, ctx *ctx.Context) *AlertRuleWorker {
	arw := &AlertRuleWorker{
		datasourceId: datasourceId,
		quit:         make(chan struct{}),
		rule:         rule,
		processor:    processor,

		promClients: promClients,
		ctx:         ctx,
	}

	return arw
}

func (arw *AlertRuleWorker) Key() string {
	return common.RuleKey(arw.datasourceId, arw.rule.Id)
}

func (arw *AlertRuleWorker) Hash() string {
	return str.MD5(fmt.Sprintf("%d_%d_%s_%d",
		arw.rule.Id,
		arw.rule.PromEvalInterval,
		arw.rule.RuleConfig,
		arw.datasourceId,
	))
}

func (arw *AlertRuleWorker) Prepare() {
	arw.processor.RecoverAlertCurEventFromDb()
}

func (arw *AlertRuleWorker) Start() {
	logger.Infof("eval:%s started", arw.Key())
	interval := arw.rule.PromEvalInterval
	if interval <= 0 {
		interval = 10
	}
	go func() {
		for {
			select {
			case <-arw.quit:
				return
			default:
				arw.Eval()
				time.Sleep(time.Duration(interval) * time.Second)
			}
		}
	}()
}

func (arw *AlertRuleWorker) Eval() {
	cachedRule := arw.rule
	if cachedRule == nil {
		//logger.Errorf("rule_eval:%s rule not found", arw.Key())
		return
	}

	typ := cachedRule.GetRuleType()
	var lst []common.AnomalyPoint
	switch typ {
	case models.PROMETHEUS:
		lst = arw.GetPromAnomalyPoint(cachedRule.RuleConfig)
	case models.HOST:
		lst = arw.GetHostAnomalyPoint(cachedRule.RuleConfig)
	default:
		return
	}

	if arw.processor == nil {
		logger.Warningf("rule_eval:%s processor is nil", arw.Key())
		return
	}

	arw.processor.Handle(lst, "inner", arw.inhibit)
}

func (arw *AlertRuleWorker) Stop() {
	logger.Infof("%s stopped", arw.Key())
	close(arw.quit)
}

func (arw *AlertRuleWorker) GetPromAnomalyPoint(ruleConfig string) []common.AnomalyPoint {
	var lst []common.AnomalyPoint
	var severity int

	var rule *models.PromRuleConfig
	if err := json.Unmarshal([]byte(ruleConfig), &rule); err != nil {
		logger.Errorf("rule_eval:%s rule_config:%s, error:%v", arw.Key(), ruleConfig, err)
		return lst
	}

	if rule == nil {
		logger.Errorf("rule_eval:%s rule_config:%s, error:rule is nil", arw.Key(), ruleConfig)
		return lst
	}

	arw.inhibit = rule.Inhibit
	for _, query := range rule.Queries {
		if query.Severity < severity {
			arw.severity = query.Severity
		}

		promql := strings.TrimSpace(query.PromQl)
		if promql == "" {
			logger.Errorf("rule_eval:%s promql is blank", arw.Key())
			continue
		}

		if arw.promClients.IsNil(arw.datasourceId) {
			logger.Errorf("rule_eval:%s error reader client is nil", arw.Key())
			continue
		}

		readerClient := arw.promClients.GetCli(arw.datasourceId)

		var warnings promsdk.Warnings
		value, warnings, err := readerClient.Query(context.Background(), promql, time.Now())
		if err != nil {
			logger.Errorf("rule_eval:%s promql:%s, error:%v", arw.Key(), promql, err)
			continue
		}

		if len(warnings) > 0 {
			logger.Errorf("rule_eval:%s promql:%s, warnings:%v", arw.Key(), promql, warnings)
			continue
		}

		logger.Debugf("rule_eval:%s query:%+v, value:%v", arw.Key(), query, value)
		points := common.ConvertAnomalyPoints(value)
		for i := 0; i < len(points); i++ {
			points[i].Severity = query.Severity
		}
		lst = append(lst, points...)
	}
	return lst
}

func (arw *AlertRuleWorker) GetHostAnomalyPoint(ruleConfig string) []common.AnomalyPoint {
	var lst []common.AnomalyPoint
	var severity int

	var rule *models.HostRuleConfig
	if err := json.Unmarshal([]byte(ruleConfig), &rule); err != nil {
		logger.Errorf("rule_eval:%s rule_config:%s, error:%v", arw.Key(), ruleConfig, err)
		return lst
	}

	if rule == nil {
		logger.Errorf("rule_eval:%s rule_config:%s, error:rule is nil", arw.Key(), ruleConfig)
		return lst
	}

	arw.inhibit = rule.Inhibit
	now := time.Now().Unix()
	for _, trigger := range rule.Triggers {
		if trigger.Severity < severity {
			arw.severity = trigger.Severity
		}

		query := models.GetHostsQuery(rule.Queries)
		switch trigger.Type {
		case "target_miss":
			t := now - int64(trigger.Duration)
			targets, err := models.MissTargetGetsByFilter(arw.ctx, query, t)
			if err != nil {
				logger.Errorf("rule_eval:%s query:%v, error:%v", arw.Key(), query, err)
				continue
			}
			for _, target := range targets {
				m := make(map[string]string)
				target.FillTagsMap()
				for k, v := range target.TagsMap {
					m[k] = v
				}
				m["ident"] = target.Ident

				bg := arw.processor.BusiGroupCache.GetByBusiGroupId(target.GroupId)
				if bg != nil && bg.LabelEnable == 1 {
					m["busigroup"] = bg.LabelValue
				}

				lst = append(lst, common.NewAnomalyPoint(trigger.Type, m, now, float64(now-target.UpdateAt), trigger.Severity))
			}
		case "offset":
			targets, err := models.TargetGetsByFilter(arw.ctx, query, 0, 0)
			if err != nil {
				logger.Errorf("rule_eval:%s query:%v, error:%v", arw.Key(), query, err)
				continue
			}
			var targetMap = make(map[string]*models.Target)
			for _, target := range targets {
				targetMap[target.Ident] = target
			}

			hostOffsetMap := arw.processor.TargetCache.GetOffsetHost(targets, now, int64(trigger.Duration))
			for host, offset := range hostOffsetMap {
				m := make(map[string]string)
				target, exists := targetMap[host]
				if exists {
					target.FillTagsMap()
					for k, v := range target.TagsMap {
						m[k] = v
					}

					// the busi-group lookup must also stay behind the exists check:
					// target is nil when the host is missing from targetMap
					bg := arw.processor.BusiGroupCache.GetByBusiGroupId(target.GroupId)
					if bg != nil && bg.LabelEnable == 1 {
						m["busigroup"] = bg.LabelValue
					}
				}
				m["ident"] = host

				lst = append(lst, common.NewAnomalyPoint(trigger.Type, m, now, float64(offset), trigger.Severity))
			}
		case "pct_target_miss":
			t := now - int64(trigger.Duration)
			count, err := models.MissTargetCountByFilter(arw.ctx, query, t)
			if err != nil {
				logger.Errorf("rule_eval:%s query:%v, error:%v", arw.Key(), query, err)
				continue
			}

			total, err := models.TargetCountByFilter(arw.ctx, query)
			if err != nil {
				logger.Errorf("rule_eval:%s query:%v, error:%v", arw.Key(), query, err)
				continue
			}
			pct := float64(count) / float64(total) * 100
			if pct >= float64(trigger.Percent) {
				lst = append(lst, common.NewAnomalyPoint(trigger.Type, nil, now, pct, trigger.Severity))
			}
		}
	}
	return lst
}
178
alert/mute/mute.go
Normal file
@@ -0,0 +1,178 @@
package mute

import (
	"strconv"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/alert/common"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"

	"github.com/toolkits/pkg/logger"
)

func IsMuted(rule *models.AlertRule, event *models.AlertCurEvent, targetCache *memsto.TargetCacheType, alertMuteCache *memsto.AlertMuteCacheType) bool {
	if TimeNonEffectiveMuteStrategy(rule, event) {
		return true
	}

	if IdentNotExistsMuteStrategy(rule, event, targetCache) {
		return true
	}

	if BgNotMatchMuteStrategy(rule, event, targetCache) {
		return true
	}

	if EventMuteStrategy(event, alertMuteCache) {
		return true
	}

	return false
}

// TimeNonEffectiveMuteStrategy filters by the rule's configured effective time:
// if the alert fires outside that window, it is muted
func TimeNonEffectiveMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent) bool {
	if rule.Disabled == 1 {
		return true
	}

	tm := time.Unix(event.TriggerTime, 0)
	triggerTime := tm.Format("15:04")
	triggerWeek := strconv.Itoa(int(tm.Weekday()))

	enableStime := strings.Fields(rule.EnableStime)
	enableEtime := strings.Fields(rule.EnableEtime)
	enableDaysOfWeek := strings.Split(rule.EnableDaysOfWeek, ";")
	length := len(enableDaysOfWeek)
	// enableStime, enableEtime and enableDaysOfWeek always have the same length,
	// so iterating over one of them is enough
	for i := 0; i < length; i++ {
		enableDaysOfWeek[i] = strings.Replace(enableDaysOfWeek[i], "7", "0", 1)
		if !strings.Contains(enableDaysOfWeek[i], triggerWeek) {
			continue
		}
		if enableStime[i] <= enableEtime[i] {
			if triggerTime < enableStime[i] || triggerTime > enableEtime[i] {
				continue
			}
		} else {
			if triggerTime < enableStime[i] && triggerTime > enableEtime[i] {
				continue
			}
		}
		// reaching here means the current moment falls inside one of the rule's
		// effective windows, so do not mute
		return false
	}
	return true
}

// IdentNotExistsMuteStrategy filters on whether the ident still exists:
// if it does not, target_up alerts are dropped directly
func IdentNotExistsMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent, targetCache *memsto.TargetCacheType) bool {
	ident, has := event.TagsMap["ident"]
	if !has {
		return false
	}
	_, exists := targetCache.Get(ident)
	// for target_up alerts whose ident no longer exists, mute directly;
	// this check is rather crude, but there is no better option at the moment
	if !exists && strings.Contains(rule.PromQl, "target_up") {
		logger.Debugf("[%s] mute: rule_eval:%d cluster:%s ident:%s", "IdentNotExistsMuteStrategy", rule.Id, event.Cluster, ident)
		return true
	}
	return false
}

// BgNotMatchMuteStrategy filters out machines outside the rule's busi-group
// when the rule is configured to only alert inside its own busi-group
func BgNotMatchMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent, targetCache *memsto.TargetCacheType) bool {
	// in-busi-group alerting is not enabled, so nothing to filter
	if rule.EnableInBG == 0 {
		return false
	}

	ident, has := event.TagsMap["ident"]
	if !has {
		return false
	}

	target, exists := targetCache.Get(ident)
	// for events that carry an ident, check that the ident's busi-group matches the rule's;
	// if the rule is scoped to its own busi-group, machines from other groups must not
	// produce alerts from it
	if exists && target.GroupId != rule.GroupId {
		logger.Debugf("[%s] mute: rule_eval:%d cluster:%s", "BgNotMatchMuteStrategy", rule.Id, event.Cluster)
		return true
	}
	return false
}

func EventMuteStrategy(event *models.AlertCurEvent, alertMuteCache *memsto.AlertMuteCacheType) bool {
	mutes, has := alertMuteCache.Gets(event.GroupId)
	if !has || len(mutes) == 0 {
		return false
	}

	for i := 0; i < len(mutes); i++ {
		if matchMute(event, mutes[i]) {
			return true
		}
	}

	return false
}

// matchMute uses the optional clock parameter as the timestamp to test when supplied,
// otherwise it falls back to the event's TriggerTime
func matchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int64) bool {
	if mute.Disabled == 1 {
		return false
	}
	ts := event.TriggerTime
	if len(clock) > 0 {
		ts = clock[0]
	}

	// unless the mute is global, check the matching datasource id
	if !(len(mute.DatasourceIdsJson) != 0 && mute.DatasourceIdsJson[0] == 0) && event.DatasourceId != 0 {
		idm := make(map[int64]struct{}, len(mute.DatasourceIdsJson))
		for i := 0; i < len(mute.DatasourceIdsJson); i++ {
			idm[mute.DatasourceIdsJson[i]] = struct{}{}
		}

		// check whether event.DatasourceId is contained in idm
		if _, has := idm[event.DatasourceId]; !has {
			return false
		}
	}

	var matchTime bool
	if mute.MuteTimeType == models.TimeRange {
		if ts < mute.Btime || ts > mute.Etime {
			return false
		}
		matchTime = true
	} else if mute.MuteTimeType == models.Periodic {
		tm := time.Unix(event.TriggerTime, 0)
		triggerTime := tm.Format("15:04")
		triggerWeek := strconv.Itoa(int(tm.Weekday()))

		for i := 0; i < len(mute.PeriodicMutesJson); i++ {
			if strings.Contains(mute.PeriodicMutesJson[i].EnableDaysOfWeek, triggerWeek) {
				if mute.PeriodicMutesJson[i].EnableStime <= mute.PeriodicMutesJson[i].EnableEtime {
					if triggerTime >= mute.PeriodicMutesJson[i].EnableStime && triggerTime < mute.PeriodicMutesJson[i].EnableEtime {
						matchTime = true
						break
					}
				} else {
					// note: when EnableStime > EnableEtime, no time can be both below
					// EnableStime and at or above EnableEtime, so this condition holds
					// for any triggerTime and a wrap-around window mutes the whole day
					if triggerTime < mute.PeriodicMutesJson[i].EnableStime || triggerTime >= mute.PeriodicMutesJson[i].EnableEtime {
						matchTime = true
						break
					}
				}
			}
		}
	}
	if !matchTime {
		return false
	}

	return common.MatchTags(event.TagsMap, mute.ITags)
}
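The windowed comparisons above rely on lexicographic ordering of "15:04"-formatted strings. The conventional in-window test, including windows that wrap past midnight, can be checked in isolation (a minimal sketch with made-up times; the function name is illustrative):

```go
package main

import "fmt"

// inWindow reports whether hhmm (formatted "15:04") falls inside the
// half-open window [stime, etime), handling windows that wrap past
// midnight, e.g. 22:00-06:00. Lexicographic string comparison works
// because the format is fixed-width and zero-padded.
func inWindow(hhmm, stime, etime string) bool {
	if stime <= etime {
		return hhmm >= stime && hhmm < etime
	}
	// wrap-around window: in the window if after the start OR before the end
	return hhmm >= stime || hhmm < etime
}

func main() {
	fmt.Println(inWindow("23:30", "22:00", "06:00")) // true
	fmt.Println(inWindow("12:00", "22:00", "06:00")) // false
	fmt.Println(inWindow("09:15", "09:00", "18:00")) // true
}
```

Note the wrap-around branch here joins with OR against the window bounds; it does not reproduce the `Periodic` branch of matchMute verbatim, whose wrap-around condition matches every time of day.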
65
alert/naming/hashring.go
Normal file
@@ -0,0 +1,65 @@
package naming

import (
	"sync"

	"github.com/toolkits/pkg/consistent"
	"github.com/toolkits/pkg/logger"
)

const NodeReplicas = 500

type DatasourceHashRingType struct {
	sync.RWMutex
	Rings map[int64]*consistent.Consistent
}

// for alert_rule sharding
var HostDatasource int64 = 100000
var DatasourceHashRing = DatasourceHashRingType{Rings: make(map[int64]*consistent.Consistent)}

func NewConsistentHashRing(replicas int32, nodes []string) *consistent.Consistent {
	ret := consistent.New()
	ret.NumberOfReplicas = int(replicas)
	for i := 0; i < len(nodes); i++ {
		ret.Add(nodes[i])
	}
	return ret
}

func RebuildConsistentHashRing(datasourceId int64, nodes []string) {
	r := consistent.New()
	r.NumberOfReplicas = NodeReplicas
	for i := 0; i < len(nodes); i++ {
		r.Add(nodes[i])
	}

	DatasourceHashRing.Set(datasourceId, r)
	logger.Infof("hash ring %d rebuild %+v", datasourceId, r.Members())
}

func (chr *DatasourceHashRingType) GetNode(datasourceId int64, pk string) (string, error) {
	// take the write lock, not RLock: a ring may be lazily created here for a
	// datasource that has not been seen before, which mutates the map
	chr.Lock()
	defer chr.Unlock()
	_, exists := chr.Rings[datasourceId]
	if !exists {
		chr.Rings[datasourceId] = NewConsistentHashRing(int32(NodeReplicas), []string{})
	}

	return chr.Rings[datasourceId].Get(pk)
}

func (chr *DatasourceHashRingType) IsHit(datasourceId int64, pk string, currentNode string) bool {
	node, err := chr.GetNode(datasourceId, pk)
	if err != nil {
		logger.Debugf("datasource id:%d pk:%s failed to get node from hashring:%v", datasourceId, pk, err)
		return false
	}
	return node == currentNode
}

func (chr *DatasourceHashRingType) Set(datasourceId int64, r *consistent.Consistent) {
	// map write, so the write lock is required rather than RLock
	chr.Lock()
	defer chr.Unlock()
	chr.Rings[datasourceId] = r
}
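The sharding idea behind DatasourceHashRing can be illustrated without the external `toolkits/pkg/consistent` package. This is a minimal self-contained sketch of a consistent hash ring with virtual replicas; the node names and replica count are made up, and the hashing details differ from the real library:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// Ring is a miniature consistent hash ring: each node is inserted as many
// virtual replicas, and a key is assigned to the first replica at or after
// its hash position (wrapping around).
type Ring struct {
	keys  []uint32          // sorted replica positions
	nodes map[uint32]string // replica position -> node
}

func NewRing(replicas int, nodes []string) *Ring {
	r := &Ring{nodes: map[uint32]string{}}
	for _, n := range nodes {
		for i := 0; i < replicas; i++ {
			h := crc32.ChecksumIEEE([]byte(n + "#" + strconv.Itoa(i)))
			r.keys = append(r.keys, h)
			r.nodes[h] = n
		}
	}
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
	return r
}

func (r *Ring) Get(pk string) string {
	h := crc32.ChecksumIEEE([]byte(pk))
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) {
		i = 0 // wrap around to the first replica
	}
	return r.nodes[r.keys[i]]
}

func main() {
	ring := NewRing(500, []string{"n9e-01:19000", "n9e-02:19000"})
	// every instance computes the same assignment from the same member list,
	// so exactly one of them evaluates rule 42 (IsHit == assigned node is me)
	fmt.Println(ring.Get("42"))
}
```

Because the assignment is a pure function of the member list and the rule id, each alerting engine can decide locally whether a rule belongs to it, with no coordination beyond the shared heartbeat table.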
151
alert/naming/heartbeat.go
Normal file
@@ -0,0 +1,151 @@
package naming

import (
	"fmt"
	"sort"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"
)

type Naming struct {
	ctx             *ctx.Context
	heartbeatConfig aconf.HeartbeatConfig
	isCenter        bool
}

func NewNaming(ctx *ctx.Context, heartbeat aconf.HeartbeatConfig, isCenter bool) *Naming {
	naming := &Naming{
		ctx:             ctx,
		heartbeatConfig: heartbeat,
		isCenter:        isCenter,
	}
	naming.Heartbeats()
	return naming
}

// local servers
var localss map[int64]string

func (n *Naming) Heartbeats() error {
	localss = make(map[int64]string)
	if err := n.heartbeat(); err != nil {
		fmt.Println("failed to heartbeat:", err)
		return err
	}

	go n.loopHeartbeat()
	go n.loopDeleteInactiveInstances()
	return nil
}

func (n *Naming) loopDeleteInactiveInstances() {
	interval := time.Duration(10) * time.Minute
	for {
		time.Sleep(interval)
		n.DeleteInactiveInstances()
	}
}

func (n *Naming) DeleteInactiveInstances() {
	err := models.DB(n.ctx).Where("clock < ?", time.Now().Unix()-600).Delete(new(models.AlertingEngines)).Error
	if err != nil {
		logger.Errorf("delete inactive instances err:%v", err)
	}
}

func (n *Naming) loopHeartbeat() {
	interval := time.Duration(n.heartbeatConfig.Interval) * time.Millisecond
	for {
		time.Sleep(interval)
		if err := n.heartbeat(); err != nil {
			logger.Warning(err)
		}
	}
}

func (n *Naming) heartbeat() error {
	var datasourceIds []int64
	var err error

	// the instance-to-cluster mapping is maintained on the web UI
	datasourceIds, err = models.GetDatasourceIdsByClusterName(n.ctx, n.heartbeatConfig.EngineName)
	if err != nil {
		return err
	}

	if len(datasourceIds) == 0 {
		err := models.AlertingEngineHeartbeatWithCluster(n.ctx, n.heartbeatConfig.Endpoint, n.heartbeatConfig.EngineName, 0)
		if err != nil {
			logger.Warningf("heartbeat with cluster %s err:%v", "", err)
		}
	} else {
		for i := 0; i < len(datasourceIds); i++ {
			err := models.AlertingEngineHeartbeatWithCluster(n.ctx, n.heartbeatConfig.Endpoint, n.heartbeatConfig.EngineName, datasourceIds[i])
			if err != nil {
				logger.Warningf("heartbeat with cluster %d err:%v", datasourceIds[i], err)
			}
		}
	}

	for i := 0; i < len(datasourceIds); i++ {
		servers, err := n.ActiveServers(datasourceIds[i])
		if err != nil {
			logger.Warningf("heartbeat %d get active server err:%v", datasourceIds[i], err)
			continue
		}

		sort.Strings(servers)
		newss := strings.Join(servers, " ")

		oldss, exists := localss[datasourceIds[i]]
		if exists && oldss == newss {
			continue
		}

		RebuildConsistentHashRing(datasourceIds[i], servers)
		localss[datasourceIds[i]] = newss
	}

	if n.isCenter {
		// if this is the center node, it also has to handle host-type alerting rules;
		// those rules are not tied to any datasource, so we reuse the datasource hash
		// ring by registering them under a fake datasource id (HostDatasource)
		err := models.AlertingEngineHeartbeatWithCluster(n.ctx, n.heartbeatConfig.Endpoint, n.heartbeatConfig.EngineName, HostDatasource)
		if err != nil {
			logger.Warningf("heartbeat with cluster %s err:%v", "", err)
		}

		servers, err := n.ActiveServers(HostDatasource)
		if err != nil {
			logger.Warningf("heartbeat %d get active server err:%v", HostDatasource, err)
			return nil
		}

		sort.Strings(servers)
		newss := strings.Join(servers, " ")

		oldss, exists := localss[HostDatasource]
		if exists && oldss == newss {
			return nil
		}

		RebuildConsistentHashRing(HostDatasource, servers)
		localss[HostDatasource] = newss
	}

	return nil
}

func (n *Naming) ActiveServers(datasourceId int64) ([]string, error) {
	if datasourceId == -1 {
		return nil, fmt.Errorf("cluster is empty")
	}

	// an instance is considered alive if it heartbeated within the last 30 seconds
	return models.AlertingEngineGetsInstances(n.ctx, "datasource_id = ? and clock > ?", datasourceId, time.Now().Unix()-30)
}
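The liveness rule in `ActiveServers` is a simple freshness window over heartbeat timestamps. A minimal sketch of that predicate, with illustrative names (`engineHeartbeat` and `activeServers` are ours, not the real `models.AlertingEngines` API):

```go
package main

import (
	"fmt"
	"time"
)

// engineHeartbeat mirrors one row of the alerting-engines table:
// the instance endpoint and the unix timestamp of its last heartbeat.
type engineHeartbeat struct {
	Instance string
	Clock    int64
}

// activeServers keeps only instances that heartbeated within the last
// 30 seconds, matching the `clock > now-30` predicate in ActiveServers.
func activeServers(rows []engineHeartbeat, now int64) []string {
	var alive []string
	for _, r := range rows {
		if r.Clock > now-30 {
			alive = append(alive, r.Instance)
		}
	}
	return alive
}

func main() {
	now := time.Now().Unix()
	rows := []engineHeartbeat{
		{"n9e-01:17000", now - 5},   // fresh, kept
		{"n9e-02:17000", now - 120}, // stale, dropped here; deleted from the DB after 600s
	}
	fmt.Println(activeServers(rows, now))
}
```

Note the two thresholds cooperate: 30s decides ring membership, while `DeleteInactiveInstances` only garbage-collects rows older than 600s.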
74
alert/process/alert_cur_event.go
Normal file
@@ -0,0 +1,74 @@
package process

import (
	"sync"

	"github.com/ccfos/nightingale/v6/models"
)

type AlertCurEventMap struct {
	sync.RWMutex
	Data map[string]*models.AlertCurEvent
}

func NewAlertCurEventMap(data map[string]*models.AlertCurEvent) *AlertCurEventMap {
	if data == nil {
		return &AlertCurEventMap{
			Data: make(map[string]*models.AlertCurEvent),
		}
	}
	return &AlertCurEventMap{
		Data: data,
	}
}

func (a *AlertCurEventMap) SetAll(data map[string]*models.AlertCurEvent) {
	a.Lock()
	defer a.Unlock()
	a.Data = data
}

func (a *AlertCurEventMap) Set(key string, value *models.AlertCurEvent) {
	a.Lock()
	defer a.Unlock()
	a.Data[key] = value
}

func (a *AlertCurEventMap) Get(key string) (*models.AlertCurEvent, bool) {
	a.RLock()
	defer a.RUnlock()
	event, exists := a.Data[key]
	return event, exists
}

func (a *AlertCurEventMap) UpdateLastEvalTime(key string, lastEvalTime int64) {
	a.Lock()
	defer a.Unlock()
	event, exists := a.Data[key]
	if !exists {
		return
	}
	event.LastEvalTime = lastEvalTime
}

func (a *AlertCurEventMap) Delete(key string) {
	a.Lock()
	defer a.Unlock()
	delete(a.Data, key)
}

func (a *AlertCurEventMap) Keys() []string {
	a.RLock()
	defer a.RUnlock()
	keys := make([]string, 0, len(a.Data))
	for k := range a.Data {
		keys = append(keys, k)
	}
	return keys
}

// GetAll returns the internal map; callers must treat it as read-only,
// since mutating it would bypass the lock.
func (a *AlertCurEventMap) GetAll() map[string]*models.AlertCurEvent {
	a.RLock()
	defer a.RUnlock()
	return a.Data
}
441
alert/process/process.go
Normal file
@@ -0,0 +1,441 @@
package process

import (
	"bytes"
	"fmt"
	"html/template"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/alert/common"
	"github.com/ccfos/nightingale/v6/alert/dispatch"
	"github.com/ccfos/nightingale/v6/alert/mute"
	"github.com/ccfos/nightingale/v6/alert/queue"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/tplx"
	"github.com/ccfos/nightingale/v6/prom"
	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/str"
)

type ExternalProcessorsType struct {
	ExternalLock sync.RWMutex
	Processors   map[string]*Processor
}

var ExternalProcessors ExternalProcessorsType

func NewExternalProcessors() *ExternalProcessorsType {
	return &ExternalProcessorsType{
		Processors: make(map[string]*Processor),
	}
}

func (e *ExternalProcessorsType) GetExternalAlertRule(datasourceId, id int64) (*Processor, bool) {
	e.ExternalLock.RLock()
	defer e.ExternalLock.RUnlock()
	processor, has := e.Processors[common.RuleKey(datasourceId, id)]
	return processor, has
}

type Processor struct {
	datasourceId int64

	rule     *models.AlertRule
	fires    *AlertCurEventMap
	pendings *AlertCurEventMap
	inhibit  bool

	tagsMap    map[string]string
	tagsArr    []string
	target     string
	targetNote string
	groupName  string

	atertRuleCache  *memsto.AlertRuleCacheType
	TargetCache     *memsto.TargetCacheType
	BusiGroupCache  *memsto.BusiGroupCacheType
	alertMuteCache  *memsto.AlertMuteCacheType
	datasourceCache *memsto.DatasourceCacheType

	promClients *prom.PromClientMap
	ctx         *ctx.Context
	stats       *astats.Stats
}

func (p *Processor) Key() string {
	return common.RuleKey(p.datasourceId, p.rule.Id)
}

func (p *Processor) DatasourceId() int64 {
	return p.datasourceId
}

func (p *Processor) Hash() string {
	return str.MD5(fmt.Sprintf("%d_%d_%s_%d",
		p.rule.Id,
		p.rule.PromEvalInterval,
		p.rule.RuleConfig,
		p.datasourceId,
	))
}

func NewProcessor(rule *models.AlertRule, datasourceId int64, atertRuleCache *memsto.AlertRuleCacheType, targetCache *memsto.TargetCacheType,
	busiGroupCache *memsto.BusiGroupCacheType, alertMuteCache *memsto.AlertMuteCacheType, datasourceCache *memsto.DatasourceCacheType, promClients *prom.PromClientMap, ctx *ctx.Context,
	stats *astats.Stats) *Processor {

	p := &Processor{
		datasourceId: datasourceId,
		rule:         rule,

		TargetCache:     targetCache,
		BusiGroupCache:  busiGroupCache,
		alertMuteCache:  alertMuteCache,
		atertRuleCache:  atertRuleCache,
		datasourceCache: datasourceCache,

		promClients: promClients,
		ctx:         ctx,
		stats:       stats,
	}

	p.mayHandleGroup()
	return p
}
func (p *Processor) Handle(anomalyPoints []common.AnomalyPoint, from string, inhibit bool) {
	// some rule fields (receivers, callbacks, ...) may have changed since the
	// worker started; such changes do not restart the worker, yet they do affect
	// event handling, so re-fetch the rule from memsto.AlertRuleCache and overwrite
	p.inhibit = inhibit
	p.rule = p.atertRuleCache.Get(p.rule.Id)
	cachedRule := p.rule
	if cachedRule == nil {
		logger.Errorf("rule not found %+v", anomalyPoints)
		return
	}

	now := time.Now().Unix()
	alertingKeys := map[string]struct{}{}

	// group events by their tags to handle alert inhibition
	eventsMap := make(map[string][]*models.AlertCurEvent)
	for _, anomalyPoint := range anomalyPoints {
		event := p.BuildEvent(anomalyPoint, from, now)
		// a muted event is still essentially firing, so always record its hash in
		// alertingKeys; otherwise the firing event would auto-recover
		hash := event.Hash
		alertingKeys[hash] = struct{}{}
		if mute.IsMuted(cachedRule, event, p.TargetCache, p.alertMuteCache) {
			logger.Debugf("rule_eval:%s event:%v is muted", p.Key(), event)
			continue
		}
		tagHash := TagHash(anomalyPoint)
		eventsMap[tagHash] = append(eventsMap[tagHash], event)
	}

	for _, events := range eventsMap {
		p.handleEvent(events)
	}

	p.HandleRecover(alertingKeys, now)
}
func (p *Processor) BuildEvent(anomalyPoint common.AnomalyPoint, from string, now int64) *models.AlertCurEvent {
	p.fillTags(anomalyPoint)
	p.mayHandleIdent()
	hash := Hash(p.rule.Id, p.datasourceId, anomalyPoint)
	ds := p.datasourceCache.GetById(p.datasourceId)
	var dsName string
	if ds != nil {
		dsName = ds.Name
	}

	event := p.rule.GenerateNewEvent(p.ctx)
	event.TriggerTime = anomalyPoint.Timestamp
	event.TagsMap = p.tagsMap
	event.DatasourceId = p.datasourceId
	event.Cluster = dsName
	event.Hash = hash
	event.TargetIdent = p.target
	event.TargetNote = p.targetNote
	event.TriggerValue = anomalyPoint.ReadableValue()
	event.TagsJSON = p.tagsArr
	event.GroupName = p.groupName
	event.Tags = strings.Join(p.tagsArr, ",,")
	event.IsRecovered = false
	event.Callbacks = p.rule.Callbacks
	event.CallbacksJSON = p.rule.CallbacksJSON
	event.Annotations = p.rule.Annotations
	event.AnnotationsJSON = make(map[string]string)
	event.RuleConfig = p.rule.RuleConfig
	event.RuleConfigJson = p.rule.RuleConfigJson
	event.Severity = anomalyPoint.Severity

	if from == "inner" {
		event.LastEvalTime = now
	} else {
		event.LastEvalTime = event.TriggerTime
	}
	return event
}
func (p *Processor) HandleRecover(alertingKeys map[string]struct{}, now int64) {
	for _, hash := range p.pendings.Keys() {
		if _, has := alertingKeys[hash]; has {
			continue
		}
		p.pendings.Delete(hash)
	}

	for hash := range p.fires.GetAll() {
		if _, has := alertingKeys[hash]; has {
			continue
		}
		p.RecoverSingle(hash, now, nil)
	}
}

func (p *Processor) RecoverSingle(hash string, now int64, value *string) {
	cachedRule := p.rule
	if cachedRule == nil {
		return
	}
	event, has := p.fires.Get(hash)
	if !has {
		return
	}
	// if a recover-observation duration is configured, do not recover right away
	if cachedRule.RecoverDuration > 0 && now-event.LastEvalTime < cachedRule.RecoverDuration {
		logger.Debugf("rule_eval:%s event:%v not recover", p.Key(), event)
		return
	}
	if value != nil {
		event.TriggerValue = *value
	}

	// no vector crossed the threshold, so assume this one has recovered; we cannot
	// tell whether prom still has data that simply no longer matches the threshold,
	// or whether some points were lost and nothing could be returned
	p.fires.Delete(hash)
	p.pendings.Delete(hash)

	// the recovery may be due to an edited promql, so the event should carry the
	// latest promql, otherwise users get confused; in fact any rule field may have
	// changed, so refresh them all
	cachedRule.UpdateEvent(event)
	event.IsRecovered = true
	event.LastEvalTime = now
	p.pushEventToQueue(event)
}
func (p *Processor) handleEvent(events []*models.AlertCurEvent) {
	var fireEvents []*models.AlertCurEvent
	// severity starts at 4, so any real event (severity 1~3) ranks as higher priority
	severity := 4
	for _, event := range events {
		if event == nil {
			continue
		}
		if p.rule.PromForDuration == 0 {
			fireEvents = append(fireEvents, event)
			if severity > event.Severity {
				severity = event.Severity
			}
			continue
		}

		var preTriggerTime int64
		preEvent, has := p.pendings.Get(event.Hash)
		if has {
			p.pendings.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
			preTriggerTime = preEvent.TriggerTime
		} else {
			p.pendings.Set(event.Hash, event)
			preTriggerTime = event.TriggerTime
		}

		if event.LastEvalTime-preTriggerTime+int64(event.PromEvalInterval) >= int64(p.rule.PromForDuration) {
			fireEvents = append(fireEvents, event)
			if severity > event.Severity {
				severity = event.Severity
			}
			continue
		}
	}

	p.inhibitEvent(fireEvents, severity)
}

func (p *Processor) inhibitEvent(events []*models.AlertCurEvent, highSeverity int) {
	for _, event := range events {
		if p.inhibit && event.Severity > highSeverity {
			logger.Debugf("rule_eval:%s event:%+v inhibit highSeverity:%d", p.Key(), event, highSeverity)
			continue
		}
		p.fireEvent(event)
	}
}
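The pending-to-firing transition in `handleEvent` is a single arithmetic check. A minimal sketch of that condition (the `shouldFire` function is ours, for illustration; the processor does this inline):

```go
package main

import "fmt"

// shouldFire mirrors the pending check in handleEvent: an event fires once
// lastEvalTime - firstTriggerTime + evalInterval reaches the rule's
// for-duration (PromForDuration); a zero for-duration fires immediately.
// All values are in seconds.
func shouldFire(firstTriggerTime, lastEvalTime, evalInterval, forDuration int64) bool {
	if forDuration == 0 {
		return true
	}
	return lastEvalTime-firstTriggerTime+evalInterval >= forDuration
}

func main() {
	// abnormal for 50s, eval every 15s, for-duration 60s: 50+15 >= 60, fires
	fmt.Println(shouldFire(100, 150, 15, 60))
	// abnormal for only 30s: 30+15 < 60, stays pending
	fmt.Println(shouldFire(100, 130, 15, 60))
}
```

Adding `evalInterval` to the elapsed time means an event fires as soon as the *next* evaluation could not complete before the for-duration expires, rather than one interval late.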
func (p *Processor) fireEvent(event *models.AlertCurEvent) {
	// p.rule may be outdated, so use the rule refreshed from cache
	cachedRule := p.rule
	if cachedRule == nil {
		return
	}
	logger.Debugf("rule_eval:%s event:%+v fire", p.Key(), event)
	if fired, has := p.fires.Get(event.Hash); has {
		p.fires.UpdateLastEvalTime(event.Hash, event.LastEvalTime)

		if cachedRule.NotifyRepeatStep == 0 {
			logger.Debugf("rule_eval:%s event:%+v repeat is zero nothing to do", p.Key(), event)
			// repeating is disabled, do not send the alert again
			return
		}

		// an alert was sent before; send again only if the repeat-silence window has passed
		if event.LastEvalTime > fired.LastSentTime+int64(cachedRule.NotifyRepeatStep)*60 {
			if cachedRule.NotifyMaxNumber == 0 {
				// a max number of 0 means no cap on notifications, keep sending
				event.NotifyCurNumber = fired.NotifyCurNumber + 1
				event.FirstTriggerTime = fired.FirstTriggerTime
				p.pushEventToQueue(event)
			} else {
				// a cap is configured, check whether it has already been reached
				if fired.NotifyCurNumber >= cachedRule.NotifyMaxNumber {
					logger.Debugf("rule_eval:%s event:%+v reach max number", p.Key(), event)
					return
				} else {
					event.NotifyCurNumber = fired.NotifyCurNumber + 1
					event.FirstTriggerTime = fired.FirstTriggerTime
					p.pushEventToQueue(event)
				}
			}
		}
	} else {
		event.NotifyCurNumber = 1
		event.FirstTriggerTime = event.TriggerTime
		p.pushEventToQueue(event)
	}
}

func (p *Processor) pushEventToQueue(e *models.AlertCurEvent) {
	if !e.IsRecovered {
		e.LastSentTime = e.LastEvalTime
		p.fires.Set(e.Hash, e)
	}

	p.stats.CounterAlertsTotal.WithLabelValues(fmt.Sprintf("%d", e.DatasourceId)).Inc()
	dispatch.LogEvent(e, "push_queue")
	if !queue.EventQueue.PushFront(e) {
		logger.Warningf("event_push_queue: queue is full, event:%+v", e)
	}
}
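The repeat-notification gating in `fireEvent` can be condensed into one predicate. This is a sketch under our own flattened signature (`shouldRenotify` is not a function in the codebase):

```go
package main

import "fmt"

// shouldRenotify condenses fireEvent's repeat logic: re-send only after
// NotifyRepeatStep minutes of silence since the last send, and only while
// the number of sends is still below NotifyMaxNumber (0 = unlimited).
func shouldRenotify(lastEvalTime, lastSentTime, repeatStepMin int64, sentSoFar, maxNumber int) bool {
	if repeatStepMin == 0 {
		return false // repeating disabled: only the first notification goes out
	}
	if lastEvalTime <= lastSentTime+repeatStepMin*60 {
		return false // still inside the repeat-silence window
	}
	return maxNumber == 0 || sentSoFar < maxNumber
}

func main() {
	// 11 minutes after the last send, 10-minute repeat step, no cap: resend
	fmt.Println(shouldRenotify(660, 0, 10, 1, 0))
	// silence window passed, but the cap of 3 is already reached: stay quiet
	fmt.Println(shouldRenotify(660, 0, 10, 3, 3))
}
```

Keeping `FirstTriggerTime` and incrementing `NotifyCurNumber` on each resend (as `fireEvent` does) is what makes the cap and "first seen" timestamp survive across repeats.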
func (p *Processor) RecoverAlertCurEventFromDb() {
	p.pendings = NewAlertCurEventMap(nil)

	curEvents, err := models.AlertCurEventGetByRuleIdAndCluster(p.ctx, p.rule.Id, p.datasourceId)
	if err != nil {
		logger.Errorf("recover event from db for rule:%s failed, err:%s", p.Key(), err)
		p.fires = NewAlertCurEventMap(nil)
		return
	}

	fireMap := make(map[string]*models.AlertCurEvent)
	for _, event := range curEvents {
		event.DB2Mem()
		fireMap[event.Hash] = event
	}

	p.fires = NewAlertCurEventMap(fireMap)
}
func (p *Processor) fillTags(anomalyPoint common.AnomalyPoint) {
	// handle series tags
	tagsMap := make(map[string]string)
	for label, value := range anomalyPoint.Labels {
		tagsMap[string(label)] = string(value)
	}

	var e = &models.AlertCurEvent{
		TagsMap: tagsMap,
	}

	// handle rule tags
	for _, tag := range p.rule.AppendTagsJSON {
		arr := strings.SplitN(tag, "=", 2)
		if len(arr) != 2 {
			// skip malformed append-tags that carry no '='
			continue
		}

		var defs = []string{
			"{{$labels := .TagsMap}}",
			"{{$value := .TriggerValue}}",
		}
		tagValue := arr[1]
		text := strings.Join(append(defs, tagValue), "")
		t, err := template.New(fmt.Sprint(p.rule.Id)).Funcs(template.FuncMap(tplx.TemplateFuncMap)).Parse(text)
		if err != nil {
			tagValue = fmt.Sprintf("parse tag value failed, err:%s", err)
			tagsMap[arr[0]] = tagValue
			continue
		}

		var body bytes.Buffer
		err = t.Execute(&body, e)
		if err != nil {
			tagValue = fmt.Sprintf("parse tag value failed, err:%s", err)
			tagsMap[arr[0]] = tagValue
			continue
		}

		tagsMap[arr[0]] = body.String()
	}

	tagsMap["rulename"] = p.rule.Name
	p.tagsMap = tagsMap

	// handle tagsArr
	p.tagsArr = labelMapToArr(tagsMap)
}

func (p *Processor) mayHandleIdent() {
	// handle ident
	if ident, has := p.tagsMap["ident"]; has {
		if target, exists := p.TargetCache.Get(ident); exists {
			p.target = target.Ident
			p.targetNote = target.Note
		}
	}
}
func (p *Processor) mayHandleGroup() {
	// handle busi group
	bg := p.BusiGroupCache.GetByBusiGroupId(p.rule.GroupId)
	if bg != nil {
		p.groupName = bg.Name
	}
}

func labelMapToArr(m map[string]string) []string {
	numLabels := len(m)

	labelStrings := make([]string, 0, numLabels)
	for label, value := range m {
		labelStrings = append(labelStrings, fmt.Sprintf("%s=%s", label, value))
	}

	if numLabels > 1 {
		sort.Strings(labelStrings)
	}
	return labelStrings
}

func Hash(ruleId, datasourceId int64, vector common.AnomalyPoint) string {
	return str.MD5(fmt.Sprintf("%d_%s_%d_%d", ruleId, vector.Labels.String(), datasourceId, vector.Severity))
}

func TagHash(vector common.AnomalyPoint) string {
	return str.MD5(vector.Labels.String())
}
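Event hashes are MD5 digests of the joined label set, so the join must be order-stable: `labelMapToArr` sorts the `k=v` pairs because Go map iteration order is randomized. A self-contained sketch of why the sort matters (`sortedLabels` is our illustrative stand-in):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"sort"
)

// sortedLabels renders a label map as sorted "k=v" strings so that the
// joined result, and therefore any hash of it, is deterministic even
// though Go randomizes map iteration order between runs.
func sortedLabels(m map[string]string) []string {
	arr := make([]string, 0, len(m))
	for k, v := range m {
		arr = append(arr, fmt.Sprintf("%s=%s", k, v))
	}
	sort.Strings(arr)
	return arr
}

func main() {
	labels := map[string]string{"ident": "host01", "rulename": "cpu_high"}
	a := fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprint(sortedLabels(labels)))))
	b := fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprint(sortedLabels(labels)))))
	fmt.Println(a == b) // stable across calls and across process restarts
}
```

Without the sort, the same firing series could hash differently on each evaluation, breaking the pending/fires bookkeeping that is keyed by `event.Hash`.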
18
alert/queue/queue.go
Normal file
@@ -0,0 +1,18 @@
package queue

import (
	"time"

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/toolkits/pkg/container/list"
)

var EventQueue = list.NewSafeListLimited(10000000)

func ReportQueueSize(stats *astats.Stats) {
	for {
		time.Sleep(time.Second)
		stats.GaugeAlertQueueSize.Set(float64(EventQueue.Len()))
	}
}
102
alert/record/prom_rule.go
Normal file
@@ -0,0 +1,102 @@
package record

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/prom"
	"github.com/ccfos/nightingale/v6/pushgw/writer"

	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/str"
)

type RecordRuleContext struct {
	datasourceId int64
	quit         chan struct{}

	rule *models.RecordingRule
	// writers *writer.WritersType
	promClients *prom.PromClientMap
}

func NewRecordRuleContext(rule *models.RecordingRule, datasourceId int64, promClients *prom.PromClientMap, writers *writer.WritersType) *RecordRuleContext {
	return &RecordRuleContext{
		datasourceId: datasourceId,
		quit:         make(chan struct{}),
		rule:         rule,
		promClients:  promClients,
		//writers: writers,
	}
}

func (rrc *RecordRuleContext) Key() string {
	return fmt.Sprintf("record-%d-%d", rrc.datasourceId, rrc.rule.Id)
}

func (rrc *RecordRuleContext) Hash() string {
	return str.MD5(fmt.Sprintf("%d_%d_%s_%d",
		rrc.rule.Id,
		rrc.rule.PromEvalInterval,
		rrc.rule.PromQl,
		rrc.datasourceId,
	))
}

func (rrc *RecordRuleContext) Prepare() {}

func (rrc *RecordRuleContext) Start() {
	logger.Infof("eval:%s started", rrc.Key())
	interval := rrc.rule.PromEvalInterval
	if interval <= 0 {
		interval = 10
	}
	go func() {
		for {
			select {
			case <-rrc.quit:
				return
			default:
				rrc.Eval()
				time.Sleep(time.Duration(interval) * time.Second)
			}
		}
	}()
}

func (rrc *RecordRuleContext) Eval() {
	promql := strings.TrimSpace(rrc.rule.PromQl)
	if promql == "" {
		logger.Errorf("eval:%s promql is blank", rrc.Key())
		return
	}

	if rrc.promClients.IsNil(rrc.datasourceId) {
		logger.Errorf("eval:%s reader client is nil", rrc.Key())
		return
	}

	value, warnings, err := rrc.promClients.GetCli(rrc.datasourceId).Query(context.Background(), promql, time.Now())
	if err != nil {
		logger.Errorf("eval:%s promql:%s, error:%v", rrc.Key(), promql, err)
		return
	}

	if len(warnings) > 0 {
		logger.Errorf("eval:%s promql:%s, warnings:%v", rrc.Key(), promql, warnings)
		return
	}

	ts := ConvertToTimeSeries(value, rrc.rule)
	if len(ts) != 0 {
		rrc.promClients.GetWriterCli(rrc.datasourceId).Write(ts)
	}
}

func (rrc *RecordRuleContext) Stop() {
	logger.Infof("%s stopped", rrc.Key())
	close(rrc.quit)
}
@@ -1,11 +1,12 @@
-package conv
+package record

 import (
 	"math"
 	"strings"
 	"time"

-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/ccfos/nightingale/v6/models"

 	"github.com/prometheus/common/model"
 	"github.com/prometheus/prometheus/prompb"
 )
@@ -96,6 +97,9 @@ func labelsToLabelsProto(labels model.Metric, rule *models.RecordingRule) (resul
 	}
 	result = append(result, nameLs)
 	for k, v := range labels {
+		if k == LabelName {
+			continue
+		}
 		if model.LabelNameRE.MatchString(string(k)) {
 			result = append(result, &prompb.Label{
 				Name: string(k),
94
alert/record/scheduler.go
Normal file
@@ -0,0 +1,94 @@
package record

import (
	"context"
	"fmt"
	"time"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/alert/naming"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/prom"
	"github.com/ccfos/nightingale/v6/pushgw/writer"
)

type Scheduler struct {
	// key: hash
	recordRules map[string]*RecordRuleContext

	aconf aconf.Alert

	recordingRuleCache *memsto.RecordingRuleCacheType

	promClients *prom.PromClientMap
	writers     *writer.WritersType

	stats *astats.Stats
}

func NewScheduler(aconf aconf.Alert, rrc *memsto.RecordingRuleCacheType, promClients *prom.PromClientMap, writers *writer.WritersType, stats *astats.Stats) *Scheduler {
	scheduler := &Scheduler{
		aconf:       aconf,
		recordRules: make(map[string]*RecordRuleContext),

		recordingRuleCache: rrc,

		promClients: promClients,
		writers:     writers,

		stats: stats,
	}

	go scheduler.LoopSyncRules(context.Background())
	return scheduler
}

func (s *Scheduler) LoopSyncRules(ctx context.Context) {
	time.Sleep(time.Duration(s.aconf.EngineDelay) * time.Second)
	duration := 9000 * time.Millisecond
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(duration):
			s.syncRecordRules()
		}
	}
}

func (s *Scheduler) syncRecordRules() {
	ids := s.recordingRuleCache.GetRuleIds()
	recordRules := make(map[string]*RecordRuleContext)
	for _, id := range ids {
		rule := s.recordingRuleCache.Get(id)
		if rule == nil {
			continue
		}

		datasourceIds := s.promClients.Hit(rule.DatasourceIdsJson)
		for _, dsId := range datasourceIds {
			if !naming.DatasourceHashRing.IsHit(dsId, fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
				continue
			}

			recordRule := NewRecordRuleContext(rule, dsId, s.promClients, s.writers)
			recordRules[recordRule.Hash()] = recordRule
		}
	}

	// start contexts that are new in this round
	for hash, rule := range recordRules {
		if _, has := s.recordRules[hash]; !has {
			rule.Prepare()
			rule.Start()
			s.recordRules[hash] = rule
		}
	}

	// stop contexts whose rule disappeared or changed
	for hash, rule := range s.recordRules {
		if _, has := recordRules[hash]; !has {
			rule.Stop()
			delete(s.recordRules, hash)
		}
	}
}
78
alert/router/router.go
Normal file
@@ -0,0 +1,78 @@
package router

import (
	"net/http"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/alert/process"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/httpx"

	"github.com/gin-gonic/gin"
)

type Router struct {
	HTTP               httpx.Config
	Alert              aconf.Alert
	AlertMuteCache     *memsto.AlertMuteCacheType
	TargetCache        *memsto.TargetCacheType
	BusiGroupCache     *memsto.BusiGroupCacheType
	AlertStats         *astats.Stats
	Ctx                *ctx.Context
	ExternalProcessors *process.ExternalProcessorsType
}

func New(httpConfig httpx.Config, alert aconf.Alert, amc *memsto.AlertMuteCacheType, tc *memsto.TargetCacheType, bgc *memsto.BusiGroupCacheType,
	astats *astats.Stats, ctx *ctx.Context, externalProcessors *process.ExternalProcessorsType) *Router {
	return &Router{
		HTTP:               httpConfig,
		Alert:              alert,
		AlertMuteCache:     amc,
		TargetCache:        tc,
		BusiGroupCache:     bgc,
		AlertStats:         astats,
		Ctx:                ctx,
		ExternalProcessors: externalProcessors,
	}
}

func (rt *Router) Config(r *gin.Engine) {
	if !rt.HTTP.Alert.Enable {
		return
	}

	service := r.Group("/v1/n9e")
	if len(rt.HTTP.Alert.BasicAuth) > 0 {
		service.Use(gin.BasicAuth(rt.HTTP.Alert.BasicAuth))
	}
	service.POST("/event", rt.pushEventToQueue)
	service.POST("/make-event", rt.makeEvent)
}

func Render(c *gin.Context, data, msg interface{}) {
	if msg == nil {
		if data == nil {
			data = struct{}{}
		}
		c.JSON(http.StatusOK, gin.H{"data": data, "error": ""})
	} else {
		c.JSON(http.StatusOK, gin.H{"error": gin.H{"message": msg}})
	}
}

func Dangerous(c *gin.Context, v interface{}, code ...int) {
	if v == nil {
		return
	}

	switch t := v.(type) {
	case string:
		if t != "" {
			c.JSON(http.StatusOK, gin.H{"error": gin.H{"message": v}})
		}
	case error:
		c.JSON(http.StatusOK, gin.H{"error": gin.H{"message": t.Error()}})
	}
}
141
alert/router/router_event.go
Normal file
@@ -0,0 +1,141 @@
package router

import (
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/alert/common"
	"github.com/ccfos/nightingale/v6/alert/dispatch"
	"github.com/ccfos/nightingale/v6/alert/mute"
	"github.com/ccfos/nightingale/v6/alert/naming"
	"github.com/ccfos/nightingale/v6/alert/process"
	"github.com/ccfos/nightingale/v6/alert/queue"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/logger"
)

func (rt *Router) pushEventToQueue(c *gin.Context) {
	var event *models.AlertCurEvent
	ginx.BindJSON(c, &event)
	if event.RuleId == 0 {
		ginx.Bomb(200, "event is illegal")
	}

	event.TagsMap = make(map[string]string)
	for i := 0; i < len(event.TagsJSON); i++ {
		pair := strings.TrimSpace(event.TagsJSON[i])
		if pair == "" {
			continue
		}

		arr := strings.Split(pair, "=")
		if len(arr) != 2 {
			continue
		}

		event.TagsMap[arr[0]] = arr[1]
	}

	if mute.EventMuteStrategy(event, rt.AlertMuteCache) {
		logger.Infof("event_muted: rule_id=%d %s", event.RuleId, event.Hash)
		ginx.NewRender(c).Message(nil)
		return
	}

	if err := event.ParseRule("rule_name"); err != nil {
		event.RuleName = fmt.Sprintf("failed to parse rule name: %v", err)
	}

	if err := event.ParseRule("rule_note"); err != nil {
		event.RuleNote = fmt.Sprintf("failed to parse rule note: %v", err)
	}

	if err := event.ParseRule("annotations"); err != nil {
		event.RuleNote = fmt.Sprintf("failed to parse rule note: %v", err)
	}

	// if rule_note carries a ';' prefix, rule_note replaces the tags content
	if strings.HasPrefix(event.RuleNote, ";") {
		event.RuleNote = strings.TrimPrefix(event.RuleNote, ";")
		event.Tags = strings.ReplaceAll(event.RuleNote, " ", ",,")
		event.TagsJSON = strings.Split(event.Tags, ",,")
	} else {
		event.Tags = strings.Join(event.TagsJSON, ",,")
	}

	event.Callbacks = strings.Join(event.CallbacksJSON, " ")
	event.NotifyChannels = strings.Join(event.NotifyChannelsJSON, " ")
	event.NotifyGroups = strings.Join(event.NotifyGroupsJSON, " ")

	rt.AlertStats.CounterAlertsTotal.WithLabelValues(event.Cluster).Inc()

	dispatch.LogEvent(event, "http_push_queue")
	if !queue.EventQueue.PushFront(event) {
		msg := fmt.Sprintf("event:%+v push_queue err: queue is full", event)
		// log before Bomb: Bomb panics to render the error, which would skip the log line
		logger.Warningf(msg)
		ginx.Bomb(200, msg)
	}
	ginx.NewRender(c).Message(nil)
}

type eventForm struct {
	Alert         bool                  `json:"alert"`
	AnomalyPoints []common.AnomalyPoint `json:"vectors"`
	RuleId        int64                 `json:"rule_id"`
	DatasourceId  int64                 `json:"datasource_id"`
	Inhibit       bool                  `json:"inhibit"`
}

func (rt *Router) makeEvent(c *gin.Context) {
	var events []*eventForm
	ginx.BindJSON(c, &events)
	//now := time.Now().Unix()
	for i := 0; i < len(events); i++ {
		node, err := naming.DatasourceHashRing.GetNode(events[i].DatasourceId, fmt.Sprintf("%d", events[i].RuleId))
		if err != nil {
			logger.Warningf("event:%+v get node err:%v", events[i], err)
			ginx.Bomb(200, "event node not exists")
		}

		if node != rt.Alert.Heartbeat.Endpoint {
			err := forwardEvent(events[i], node)
|
||||
if err != nil {
|
||||
logger.Warningf("event:%+v forward err:%v", events[i], err)
|
||||
ginx.Bomb(200, "event forward error")
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
ruleWorker, exists := rt.ExternalProcessors.GetExternalAlertRule(events[i].DatasourceId, events[i].RuleId)
|
||||
logger.Debugf("handle event:%+v exists:%v", events[i], exists)
|
||||
if !exists {
|
||||
ginx.Bomb(200, "rule not exists")
|
||||
}
|
||||
|
||||
if events[i].Alert {
|
||||
go ruleWorker.Handle(events[i].AnomalyPoints, "http", events[i].Inhibit)
|
||||
} else {
|
||||
for _, vector := range events[i].AnomalyPoints {
|
||||
readableString := vector.ReadableValue()
|
||||
go ruleWorker.RecoverSingle(process.Hash(events[i].RuleId, events[i].DatasourceId, vector), vector.Timestamp, &readableString)
|
||||
}
|
||||
}
|
||||
}
|
||||
ginx.NewRender(c).Message(nil)
|
||||
}
|
||||
|
||||
// event 不归本实例处理,转发给对应的实例
|
||||
func forwardEvent(event *eventForm, instance string) error {
|
||||
ur := fmt.Sprintf("http://%s/v1/n9e/make-event", instance)
|
||||
res, code, err := poster.PostJSON(ur, time.Second*5, []*eventForm{event}, 3)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
logger.Infof("forward event: result=succ url=%s code=%d event:%v response=%s", ur, code, event, string(res))
|
||||
return nil
|
||||
}
|
||||
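The tag-parsing loop in pushEventToQueue above turns each `k=v` entry of the event's tag list into a map entry, skipping blank and malformed pairs. A minimal standalone sketch of that logic (the function name `parseTags` is mine, not from the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags mirrors the loop in pushEventToQueue: each "k=v" entry becomes a
// map key/value; blank entries and entries without exactly one "=" are skipped.
func parseTags(tagsJSON []string) map[string]string {
	tagsMap := make(map[string]string)
	for _, raw := range tagsJSON {
		pair := strings.TrimSpace(raw)
		if pair == "" {
			continue
		}
		arr := strings.Split(pair, "=")
		if len(arr) != 2 {
			continue
		}
		tagsMap[arr[0]] = arr[1]
	}
	return tagsMap
}

func main() {
	fmt.Println(parseTags([]string{"env=prod", " region=bj ", "bad", ""})) // map[env:prod region:bj]
}
```

Note that a value containing a second "=" is dropped entirely, exactly as in the handler above.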
@@ -1,21 +1,21 @@
-package engine
+package sender

 import (
 	"strconv"
 	"strings"
 	"time"

+	"github.com/toolkits/pkg/logger"
+	"github.com/ccfos/nightingale/v6/alert/aconf"
+	"github.com/ccfos/nightingale/v6/memsto"
+	"github.com/ccfos/nightingale/v6/models"
+	"github.com/ccfos/nightingale/v6/pkg/ctx"
+	"github.com/ccfos/nightingale/v6/pkg/ibex"
+	"github.com/ccfos/nightingale/v6/pkg/poster"
+
-	"github.com/didi/nightingale/v5/src/models"
-	"github.com/didi/nightingale/v5/src/pkg/ibex"
-	"github.com/didi/nightingale/v5/src/pkg/poster"
-	"github.com/didi/nightingale/v5/src/server/config"
-	"github.com/didi/nightingale/v5/src/server/memsto"
-	"github.com/toolkits/pkg/logger"
 )

-func callback(event *models.AlertCurEvent) {
-	urls := strings.Fields(event.Callbacks)
+func SendCallbacks(ctx *ctx.Context, urls []string, event *models.AlertCurEvent, targetCache *memsto.TargetCacheType, ibexConf aconf.Ibex) {
 	for _, url := range urls {
 		if url == "" {
 			continue
@@ -23,7 +23,7 @@ func callback(event *models.AlertCurEvent) {

 		if strings.HasPrefix(url, "${ibex}") {
 			if !event.IsRecovered {
-				handleIbex(url, event)
+				handleIbex(ctx, url, event, targetCache, ibexConf)
 			}
 			continue
 		}
@@ -32,7 +32,7 @@ func callback(event *models.AlertCurEvent) {
 			url = "http://" + url
 		}

-		resp, code, err := poster.PostJSON(url, 5*time.Second, event)
+		resp, code, err := poster.PostJSON(url, 5*time.Second, event, 3)
 		if err != nil {
 			logger.Errorf("event_callback(rule_id=%d url=%s) fail, resp: %s, err: %v, code: %d", event.RuleId, url, string(resp), err, code)
 		} else {
@@ -60,7 +60,7 @@ type TaskCreateReply struct {
 	Dat int64 `json:"dat"` // task.id
 }

-func handleIbex(url string, event *models.AlertCurEvent) {
+func handleIbex(ctx *ctx.Context, url string, event *models.AlertCurEvent, targetCache *memsto.TargetCacheType, ibexConf aconf.Ibex) {
 	arr := strings.Split(url, "/")

 	var idstr string
@@ -90,7 +90,7 @@ func handleIbex(url string, event *models.AlertCurEvent) {
 		return
 	}

-	tpl, err := models.TaskTplGet("id = ?", id)
+	tpl, err := models.TaskTplGet(ctx, "id = ?", id)
 	if err != nil {
 		logger.Errorf("event_callback_ibex: failed to get tpl: %v", err)
 		return
@@ -103,7 +103,7 @@ func handleIbex(url string, event *models.AlertCurEvent) {

 	// check perm
 	// verify permission with the (tpl.GroupId, host, account) triple
-	can, err := canDoIbex(tpl.UpdateBy, tpl, host)
+	can, err := canDoIbex(ctx, tpl.UpdateBy, tpl, host, targetCache)
 	if err != nil {
 		logger.Errorf("event_callback_ibex: check perm fail: %v", err)
 		return
@@ -131,10 +131,10 @@ func handleIbex(url string, event *models.AlertCurEvent) {

 	var res TaskCreateReply
 	err = ibex.New(
-		config.C.Ibex.Address,
-		config.C.Ibex.BasicAuthUser,
-		config.C.Ibex.BasicAuthPass,
-		config.C.Ibex.Timeout,
+		ibexConf.Address,
+		ibexConf.BasicAuthUser,
+		ibexConf.BasicAuthPass,
+		ibexConf.Timeout,
 	).
 		Path("/ibex/v1/tasks").
 		In(in).
@@ -154,10 +154,11 @@ func handleIbex(url string, event *models.AlertCurEvent) {
 	// write db
 	record := models.TaskRecord{
 		Id:           res.Dat,
+		EventId:      event.Id,
 		GroupId:      tpl.GroupId,
-		IbexAddress:  config.C.Ibex.Address,
-		IbexAuthUser: config.C.Ibex.BasicAuthUser,
-		IbexAuthPass: config.C.Ibex.BasicAuthPass,
+		IbexAddress:  ibexConf.Address,
+		IbexAuthUser: ibexConf.BasicAuthUser,
+		IbexAuthPass: ibexConf.BasicAuthPass,
 		Title:        in.Title,
 		Account:      in.Account,
 		Batch:        in.Batch,
@@ -170,13 +171,13 @@ func handleIbex(url string, event *models.AlertCurEvent) {
 		CreateBy: in.Creator,
 	}

-	if err = record.Add(); err != nil {
+	if err = record.Add(ctx); err != nil {
 		logger.Errorf("event_callback_ibex: persist task_record fail: %v", err)
 	}
 }

-func canDoIbex(username string, tpl *models.TaskTpl, host string) (bool, error) {
-	user, err := models.UserGetByUsername(username)
+func canDoIbex(ctx *ctx.Context, username string, tpl *models.TaskTpl, host string, targetCache *memsto.TargetCacheType) (bool, error) {
+	user, err := models.UserGetByUsername(ctx, username)
 	if err != nil {
 		return false, err
 	}
@@ -185,7 +186,7 @@ func canDoIbex(username string, tpl *models.TaskTpl, host string) (bool, error)
 		return true, nil
 	}

-	target, has := memsto.TargetCache.Get(host)
+	target, has := targetCache.Get(host)
 	if !has {
 		return false, nil
 	}
100	alert/sender/dingtalk.go	Normal file
@@ -0,0 +1,100 @@
package sender

import (
	"html/template"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/toolkits/pkg/logger"
)

type dingtalkMarkdown struct {
	Title string `json:"title"`
	Text  string `json:"text"`
}

type dingtalkAt struct {
	AtMobiles []string `json:"atMobiles"`
	IsAtAll   bool     `json:"isAtAll"`
}

type dingtalk struct {
	Msgtype  string           `json:"msgtype"`
	Markdown dingtalkMarkdown `json:"markdown"`
	At       dingtalkAt       `json:"at"`
}

type DingtalkSender struct {
	tpl *template.Template
}

func (ds *DingtalkSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
		return
	}

	urls, ats := ds.extract(ctx.Users)
	if len(urls) == 0 {
		return
	}
	message := BuildTplMessage(ds.tpl, ctx.Event)

	for _, url := range urls {
		var body dingtalk
		// NoAt in url
		if strings.Contains(url, "noat=1") {
			body = dingtalk{
				Msgtype: "markdown",
				Markdown: dingtalkMarkdown{
					Title: ctx.Event.RuleName,
					Text:  message,
				},
			}
		} else {
			body = dingtalk{
				Msgtype: "markdown",
				Markdown: dingtalkMarkdown{
					Title: ctx.Event.RuleName,
					Text:  message + "\n" + strings.Join(ats, " "),
				},
				At: dingtalkAt{
					AtMobiles: ats,
					IsAtAll:   false,
				},
			}
		}
		ds.doSend(url, body)
	}
}

// extract urls and ats from Users
func (ds *DingtalkSender) extract(users []*models.User) ([]string, []string) {
	urls := make([]string, 0, len(users))
	ats := make([]string, 0, len(users))

	for _, user := range users {
		if user.Phone != "" {
			ats = append(ats, "@"+user.Phone)
		}
		if token, has := user.ExtractToken(models.Dingtalk); has {
			url := token
			if !strings.HasPrefix(token, "https://") {
				url = "https://oapi.dingtalk.com/robot/send?access_token=" + token
			}
			urls = append(urls, url)
		}
	}
	return urls, ats
}

func (ds *DingtalkSender) doSend(url string, body dingtalk) {
	res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
	if err != nil {
		logger.Errorf("dingtalk_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
	} else {
		logger.Infof("dingtalk_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
	}
}
@@ -2,17 +2,53 @@ package sender

 import (
 	"crypto/tls"
+	"html/template"
 	"time"

-	"github.com/didi/nightingale/v5/src/server/config"
+	"github.com/ccfos/nightingale/v6/alert/aconf"
+	"github.com/ccfos/nightingale/v6/models"

 	"github.com/toolkits/pkg/logger"

 	"gopkg.in/gomail.v2"
 )

 var mailch chan *gomail.Message

-func SendEmail(subject, content string, tos []string) {
-	conf := config.C.SMTP
+type EmailSender struct {
+	subjectTpl *template.Template
+	contentTpl *template.Template
+	smtp       aconf.SMTPConfig
+}
+
+func (es *EmailSender) Send(ctx MessageContext) {
+	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
+		return
+	}
+	tos := extract(ctx.Users)
+	var subject string
+
+	if es.subjectTpl != nil {
+		subject = BuildTplMessage(es.subjectTpl, ctx.Event)
+	} else {
+		subject = ctx.Event.RuleName
+	}
+	content := BuildTplMessage(es.contentTpl, ctx.Event)
+	es.WriteEmail(subject, content, tos)
+}
+
+func extract(users []*models.User) []string {
+	tos := make([]string, 0, len(users))
+	for _, u := range users {
+		if u.Email != "" {
+			tos = append(tos, u.Email)
+		}
+	}
+	return tos
+}
+
+func (es *EmailSender) SendEmail(subject, content string, tos []string, stmp aconf.SMTPConfig) {
+	conf := stmp

 	d := gomail.NewDialer(conf.Host, conf.Port, conf.User, conf.Pass)
 	if conf.InsecureSkipVerify {
@@ -21,7 +57,7 @@ func SendEmail(subject, content string, tos []string) {

 	m := gomail.NewMessage()

-	m.SetHeader("From", config.C.SMTP.From)
+	m.SetHeader("From", stmp.From)
 	m.SetHeader("To", tos...)
 	m.SetHeader("Subject", subject)
 	m.SetBody("text/html", content)
@@ -32,10 +68,10 @@ func SendEmail(subject, content string, tos []string) {
 	}
 }

-func WriteEmail(subject, content string, tos []string) {
+func (es *EmailSender) WriteEmail(subject, content string, tos []string) {
 	m := gomail.NewMessage()

-	m.SetHeader("From", config.C.SMTP.From)
+	m.SetHeader("From", es.smtp.From)
 	m.SetHeader("To", tos...)
 	m.SetHeader("Subject", subject)
 	m.SetBody("text/html", content)
@@ -55,15 +91,24 @@ func dialSmtp(d *gomail.Dialer) gomail.SendCloser {
 	}
 }

-func StartEmailSender() {
+var mailQuit = make(chan struct{})
+
+func RestartEmailSender(smtp aconf.SMTPConfig) {
+	close(mailQuit)
+	mailQuit = make(chan struct{})
+	StartEmailSender(smtp)
+}
+
+func StartEmailSender(smtp aconf.SMTPConfig) {
 	mailch = make(chan *gomail.Message, 100000)

-	conf := config.C.SMTP
+	conf := smtp

 	if conf.Host == "" || conf.Port == 0 {
 		logger.Warning("SMTP configurations invalid")
 		return
 	}
+	logger.Infof("start email sender... %+v", conf)

 	d := gomail.NewDialer(conf.Host, conf.Port, conf.User, conf.Pass)
 	if conf.InsecureSkipVerify {
@@ -75,6 +120,8 @@ func StartEmailSender() {
 	var size int
 	for {
 		select {
+		case <-mailQuit:
+			return
 		case m, ok := <-mailch:
 			if !ok {
 				return
@@ -84,7 +131,6 @@ func StartEmailSender() {
 			s = dialSmtp(d)
 			open = true
 		}
-
 		if err := gomail.Send(s, m); err != nil {
 			logger.Errorf("email_sender: failed to send: %s", err)
82	alert/sender/feishu.go	Normal file
@@ -0,0 +1,82 @@
package sender

import (
	"html/template"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/toolkits/pkg/logger"
)

type feishuContent struct {
	Text string `json:"text"`
}

type feishuAt struct {
	AtMobiles []string `json:"atMobiles"`
	IsAtAll   bool     `json:"isAtAll"`
}

type feishu struct {
	Msgtype string        `json:"msg_type"`
	Content feishuContent `json:"content"`
	At      feishuAt      `json:"at"`
}

type FeishuSender struct {
	tpl *template.Template
}

func (fs *FeishuSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
		return
	}
	urls, ats := fs.extract(ctx.Users)
	message := BuildTplMessage(fs.tpl, ctx.Event)
	for _, url := range urls {
		body := feishu{
			Msgtype: "text",
			Content: feishuContent{
				Text: message,
			},
		}
		if !strings.Contains(url, "noat=1") {
			body.At = feishuAt{
				AtMobiles: ats,
				IsAtAll:   false,
			}
		}
		fs.doSend(url, body)
	}
}

func (fs *FeishuSender) extract(users []*models.User) ([]string, []string) {
	urls := make([]string, 0, len(users))
	ats := make([]string, 0, len(users))

	for _, user := range users {
		if user.Phone != "" {
			ats = append(ats, user.Phone)
		}
		if token, has := user.ExtractToken(models.Feishu); has {
			url := token
			if !strings.HasPrefix(token, "https://") {
				url = "https://open.feishu.cn/open-apis/bot/v2/hook/" + token
			}
			urls = append(urls, url)
		}
	}
	return urls, ats
}

func (fs *FeishuSender) doSend(url string, body feishu) {
	res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
	if err != nil {
		logger.Errorf("feishu_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
	} else {
		logger.Infof("feishu_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
	}
}
107	alert/sender/mm.go	Normal file
@@ -0,0 +1,107 @@
package sender

import (
	"html/template"
	"net/url"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/toolkits/pkg/logger"
)

type MatterMostMessage struct {
	Text   string
	Tokens []string
}

type mm struct {
	Channel  string `json:"channel"`
	Username string `json:"username"`
	Text     string `json:"text"`
}

type MmSender struct {
	tpl *template.Template
}

func (ms *MmSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
		return
	}

	urls := ms.extract(ctx.Users)
	if len(urls) == 0 {
		return
	}
	message := BuildTplMessage(ms.tpl, ctx.Event)

	SendMM(MatterMostMessage{
		Text:   message,
		Tokens: urls,
	})
}

func (ms *MmSender) extract(users []*models.User) []string {
	tokens := make([]string, 0, len(users))
	for _, user := range users {
		if token, has := user.ExtractToken(models.Mm); has {
			tokens = append(tokens, token)
		}
	}
	return tokens
}

func SendMM(message MatterMostMessage) {
	for i := 0; i < len(message.Tokens); i++ {
		u, err := url.Parse(message.Tokens[i])
		if err != nil {
			logger.Errorf("mm_sender: failed to parse url error=%v", err)
			continue
		}

		v, err := url.ParseQuery(u.RawQuery)
		if err != nil {
			logger.Errorf("mm_sender: failed to parse query error=%v", err)
			continue
		}

		channels := v["channel"]
		txt := ""
		atuser := v["atuser"]
		if len(atuser) != 0 {
			txt = strings.Join(MapStrToStr(atuser, func(u string) string {
				return "@" + u
			}), ",") + "\n"
		}
		username := v.Get("username")

		// simple concatenation: rebuild the webhook url without its query string
		ur := u.Scheme + "://" + u.Host + u.Path
		for _, channel := range channels {
			body := mm{
				Channel:  channel,
				Username: username,
				Text:     txt + message.Text,
			}

			res, code, err := poster.PostJSON(ur, time.Second*5, body, 3)
			if err != nil {
				logger.Errorf("mm_sender: result=fail url=%s code=%d error=%v response=%s", ur, code, err, string(res))
			} else {
				logger.Infof("mm_sender: result=succ url=%s code=%d response=%s", ur, code, string(res))
			}
		}
	}
}

func MapStrToStr(arr []string, fn func(s string) string) []string {
	newArray := make([]string, 0, len(arr))
	for _, it := range arr {
		newArray = append(newArray, fn(it))
	}
	return newArray
}
97	alert/sender/plugin.go	Normal file
@@ -0,0 +1,97 @@
package sender

import (
	"bytes"
	"os"
	"os/exec"
	"time"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/toolkits/pkg/file"
	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/sys"
)

func MayPluginNotify(noticeBytes []byte, notifyScript models.NotifyScript) {
	if len(noticeBytes) == 0 {
		return
	}
	alertingCallScript(noticeBytes, notifyScript)
}

func alertingCallScript(stdinBytes []byte, notifyScript models.NotifyScript) {
	// not enabled or no script content? do nothing
	config := notifyScript
	if !config.Enable || config.Content == "" {
		return
	}

	fpath := ".notify_scriptt"
	if config.Type == 1 {
		fpath = config.Content
	} else {
		rewrite := true
		if file.IsExist(fpath) {
			oldContent, err := file.ToString(fpath)
			if err != nil {
				logger.Errorf("event_notify: read script file err: %v", err)
				return
			}

			if oldContent == config.Content {
				rewrite = false
			}
		}

		if rewrite {
			_, err := file.WriteString(fpath, config.Content)
			if err != nil {
				logger.Errorf("event_notify: write script file err: %v", err)
				return
			}

			err = os.Chmod(fpath, 0777)
			if err != nil {
				logger.Errorf("event_notify: chmod script file err: %v", err)
				return
			}
		}
		fpath = "./" + fpath
	}

	cmd := exec.Command(fpath)
	cmd.Stdin = bytes.NewReader(stdinBytes)

	// combine stdout and stderr
	var buf bytes.Buffer
	cmd.Stdout = &buf
	cmd.Stderr = &buf

	err := startCmd(cmd)
	if err != nil {
		logger.Errorf("event_notify: run cmd err: %v", err)
		return
	}

	err, isTimeout := sys.WrapTimeout(cmd, time.Duration(config.Timeout)*time.Second)

	if isTimeout {
		if err == nil {
			logger.Errorf("event_notify: timeout and killed process %s", fpath)
		} else {
			logger.Errorf("event_notify: kill process %s occur error %v", fpath, err)
		}

		return
	}

	if err != nil {
		logger.Errorf("event_notify: exec script %s occur error: %v, output: %s", fpath, err, buf.String())
		return
	}

	logger.Infof("event_notify: exec %s output: %s", fpath, buf.String())
}
@@ -1,7 +1,7 @@
 //go:build !windows
 // +build !windows

-package engine
+package sender

 import (
 	"os/exec"
@@ -1,4 +1,4 @@
-package engine
+package sender

 import "os/exec"
62	alert/sender/sender.go	Normal file
@@ -0,0 +1,62 @@
package sender

import (
	"bytes"
	"html/template"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
)

type (
	// Sender is the interface for sending notification messages
	Sender interface {
		Send(ctx MessageContext)
	}

	// MessageContext is the context of the alert notifications generated by one event
	MessageContext struct {
		Users []*models.User
		Rule  *models.AlertRule
		Event *models.AlertCurEvent
	}
)

func NewSender(key string, tpls map[string]*template.Template, smtp aconf.SMTPConfig) Sender {
	switch key {
	case models.Dingtalk:
		return &DingtalkSender{tpl: tpls[models.Dingtalk]}
	case models.Wecom:
		return &WecomSender{tpl: tpls[models.Wecom]}
	case models.Feishu:
		return &FeishuSender{tpl: tpls[models.Feishu]}
	case models.Email:
		return &EmailSender{subjectTpl: tpls["mailsubject"], contentTpl: tpls[models.Email], smtp: smtp}
	case models.Mm:
		return &MmSender{tpl: tpls[models.Mm]}
	case models.Telegram:
		return &TelegramSender{tpl: tpls[models.Telegram]}
	}
	return nil
}

func BuildMessageContext(rule *models.AlertRule, event *models.AlertCurEvent, uids []int64, userCache *memsto.UserCacheType) MessageContext {
	users := userCache.GetByUserIds(uids)
	return MessageContext{
		Rule:  rule,
		Event: event,
		Users: users,
	}
}

func BuildTplMessage(tpl *template.Template, event *models.AlertCurEvent) string {
	if tpl == nil {
		return "tpl for current sender not found, please check configuration"
	}
	var body bytes.Buffer
	if err := tpl.Execute(&body, event); err != nil {
		return err.Error()
	}
	return body.String()
}
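NewSender above is a plain factory: each notify-channel key maps to a concrete implementation of the Sender interface, and an unknown key yields nil. A trimmed-down sketch of the pattern (the types here are stand-ins, not the real nightingale ones):

```go
package main

import "fmt"

// Sender is a stand-in for the notification interface: one Send per channel.
type Sender interface {
	Send(msg string) string
}

// consoleSender is a fake implementation used only for this sketch.
type consoleSender struct{ channel string }

func (c consoleSender) Send(msg string) string {
	return fmt.Sprintf("[%s] %s", c.channel, msg)
}

// NewSender mirrors the factory switch: channel key in, implementation out,
// nil for unknown channels (callers must check).
func NewSender(key string) Sender {
	switch key {
	case "dingtalk", "wecom", "feishu", "email", "mm", "telegram":
		return consoleSender{channel: key}
	}
	return nil
}

func main() {
	s := NewSender("dingtalk")
	fmt.Println(s.Send("cpu usage high")) // [dingtalk] cpu usage high
}
```

Because every channel satisfies the same one-method interface, the dispatch code can hold a `map[string]Sender` and stay oblivious to channel-specific payload formats.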
82	alert/sender/telegram.go	Normal file
@@ -0,0 +1,82 @@
package sender

import (
	"html/template"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/toolkits/pkg/logger"
)

type TelegramMessage struct {
	Text   string
	Tokens []string
}

type telegram struct {
	ParseMode string `json:"parse_mode"`
	Text      string `json:"text"`
}

type TelegramSender struct {
	tpl *template.Template
}

func (ts *TelegramSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
		return
	}
	tokens := ts.extract(ctx.Users)
	message := BuildTplMessage(ts.tpl, ctx.Event)

	SendTelegram(TelegramMessage{
		Text:   message,
		Tokens: tokens,
	})
}

func (ts *TelegramSender) extract(users []*models.User) []string {
	tokens := make([]string, 0, len(users))
	for _, user := range users {
		if token, has := user.ExtractToken(models.Telegram); has {
			tokens = append(tokens, token)
		}
	}
	return tokens
}

func SendTelegram(message TelegramMessage) {
	for i := 0; i < len(message.Tokens); i++ {
		if !strings.Contains(message.Tokens[i], "/") && !strings.HasPrefix(message.Tokens[i], "https://") {
			logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
			continue
		}
		var url string
		if strings.HasPrefix(message.Tokens[i], "https://") {
			url = message.Tokens[i]
		} else {
			array := strings.Split(message.Tokens[i], "/")
			if len(array) != 2 {
				logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
				continue
			}
			botToken := array[0]
			chatId := array[1]
			url = "https://api.telegram.org/bot" + botToken + "/sendMessage?chat_id=" + chatId
		}
		body := telegram{
			ParseMode: "markdown",
			Text:      message.Text,
		}

		res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
		if err != nil {
			logger.Errorf("telegram_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
		} else {
			logger.Infof("telegram_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
		}
	}
}
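SendTelegram above accepts a token in two shapes: a complete `https://` webhook URL used as-is, or a `botToken/chatId` pair that is expanded into the Bot API sendMessage endpoint. A standalone sketch of that URL derivation (the helper name `telegramURL` is mine, not from the diff):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// telegramURL mirrors the token handling in SendTelegram: a token is either a
// complete https webhook URL, or "botToken/chatId", which is expanded into the
// Telegram Bot API sendMessage endpoint. Anything else is rejected.
func telegramURL(token string) (string, error) {
	if strings.HasPrefix(token, "https://") {
		return token, nil
	}
	array := strings.Split(token, "/")
	if len(array) != 2 {
		return "", errors.New("invalid token: " + token)
	}
	botToken, chatId := array[0], array[1]
	return "https://api.telegram.org/bot" + botToken + "/sendMessage?chat_id=" + chatId, nil
}

func main() {
	u, _ := telegramURL("123:abc/456")
	fmt.Println(u) // https://api.telegram.org/bot123:abc/sendMessage?chat_id=456
}
```

Note the split is on every `/`, so a bot token that itself contained a slash would fail the `len(array) != 2` check, matching the behavior in the file above.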
68	alert/sender/webhook.go	Normal file
@@ -0,0 +1,68 @@
package sender

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"net/http"
	"time"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/toolkits/pkg/logger"
)

func SendWebhooks(webhooks []*models.Webhook, event *models.AlertCurEvent) {
	for _, conf := range webhooks {
		if conf.Url == "" || !conf.Enable {
			continue
		}
		bs, err := json.Marshal(event)
		if err != nil {
			continue
		}

		bf := bytes.NewBuffer(bs)

		req, err := http.NewRequest("POST", conf.Url, bf)
		if err != nil {
			logger.Warning("alertingWebhook failed to new request", err)
			continue
		}

		req.Header.Set("Content-Type", "application/json")
		if conf.BasicAuthUser != "" && conf.BasicAuthPass != "" {
			req.SetBasicAuth(conf.BasicAuthUser, conf.BasicAuthPass)
		}

		// headers are stored as a flat key/value slice: [k1, v1, k2, v2, ...]
		if len(conf.Headers) > 0 && len(conf.Headers)%2 == 0 {
			for i := 0; i < len(conf.Headers); i += 2 {
				if conf.Headers[i] == "host" {
					req.Host = conf.Headers[i+1]
					continue
				}
				req.Header.Set(conf.Headers[i], conf.Headers[i+1])
			}
		}

		// todo add skip verify
		client := http.Client{
			Timeout: time.Duration(conf.Timeout) * time.Second,
		}

		resp, err := client.Do(req)
		if err != nil {
			logger.Warningf("WebhookCallError, ruleId: [%d], eventId: [%d], url: [%s], error: [%s]", event.RuleId, event.Id, conf.Url, err)
			continue
		}

		var body []byte
		if resp.Body != nil {
			body, _ = ioutil.ReadAll(resp.Body)
			resp.Body.Close() // close right away: a defer inside the loop would pile up until the function returns
		}

		logger.Debugf("alertingWebhook done, url: %s, response code: %d, body: %s", conf.Url, resp.StatusCode, string(body))
	}
}
65	alert/sender/wecom.go	Normal file
@@ -0,0 +1,65 @@
package sender

import (
	"html/template"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/poster"

	"github.com/toolkits/pkg/logger"
)

type wecomMarkdown struct {
	Content string `json:"content"`
}

type wecom struct {
	Msgtype  string        `json:"msgtype"`
	Markdown wecomMarkdown `json:"markdown"`
}

type WecomSender struct {
	tpl *template.Template
}

func (ws *WecomSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
		return
	}
	urls := ws.extract(ctx.Users)
	message := BuildTplMessage(ws.tpl, ctx.Event)
	for _, url := range urls {
		body := wecom{
			Msgtype: "markdown",
			Markdown: wecomMarkdown{
				Content: message,
			},
		}
		ws.doSend(url, body)
	}
}

func (ws *WecomSender) extract(users []*models.User) []string {
	urls := make([]string, 0, len(users))
	for _, user := range users {
		if token, has := user.ExtractToken(models.Wecom); has {
			url := token
			if !strings.HasPrefix(token, "https://") {
				url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=" + token
			}
			urls = append(urls, url)
		}
	}
	return urls
}

func (ws *WecomSender) doSend(url string, body wecom) {
	res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
	if err != nil {
		logger.Errorf("wecom_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
	} else {
		logger.Infof("wecom_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
	}
}
35	center/cconf/conf.go	Normal file
@@ -0,0 +1,35 @@
|
||||
package cconf
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Center struct {
|
||||
Plugins []Plugin
|
||||
BasicAuth gin.Accounts
|
||||
MetricsYamlFile string
|
||||
OpsYamlFile string
|
||||
BuiltinIntegrationsDir string
|
||||
I18NHeaderKey string
|
||||
MetricDesc MetricDescType
|
||||
TargetMetrics map[string]string
|
||||
AnonymousAccess AnonymousAccess
|
||||
}
|
||||
|
||||
type Plugin struct {
|
||||
Id int64 `json:"id"`
|
||||
Category string `json:"category"`
|
||||
Type string `json:"plugin_type"`
|
||||
TypeName string `json:"plugin_type_name"`
|
||||
}
|
||||
|
||||
type AnonymousAccess struct {
|
||||
PromQuerier bool
|
||||
AlertDetail bool
|
||||
}
|
||||
|
||||
func (c *Center) PreCheck() {
|
||||
if len(c.Plugins) == 0 {
|
||||
c.Plugins = Plugins
|
||||
}
|
||||
}
|
||||
center/cconf/event_example.go (new file, 60 lines)
@@ -0,0 +1,60 @@
package cconf

const EVENT_EXAMPLE = `
{
    "id": 1000000,
    "cate": "prometheus",
    "datasource_id": 1,
    "group_id": 1,
    "group_name": "Default Busi Group",
    "hash": "2cb966f9ba1cdc7af94c3796e855955a",
    "rule_id": 23,
    "rule_name": "测试告警",
    "rule_note": "测试告警",
    "rule_prod": "metric",
    "rule_config": {
        "queries": [
            {
                "key": "all_hosts",
                "op": "==",
                "values": []
            }
        ],
        "triggers": [
            {
                "duration": 3,
                "percent": 10,
                "severity": 3,
                "type": "pct_target_miss"
            }
        ]
    },
    "prom_for_duration": 60,
    "prom_eval_interval": 30,
    "callbacks": ["https://n9e.github.io"],
    "notify_recovered": 1,
    "notify_channels": ["dingtalk"],
    "notify_groups": [],
    "notify_groups_obj": null,
    "target_ident": "host01",
    "target_note": "机器备注",
    "trigger_time": 1677229517,
    "trigger_value": "2273533952",
    "tags": [
        "__name__=disk_free",
        "dc=qcloud-dev",
        "device=vda1",
        "fstype=ext4",
        "ident=tt-fc-dev00.nj"
    ],
    "is_recovered": false,
    "notify_users_obj": null,
    "last_eval_time": 1677229517,
    "last_sent_time": 1677229517,
    "notify_cur_number": 1,
    "first_trigger_time": 1677229517,
    "annotations": {
        "summary": "测试告警"
    }
}
`
center/cconf/metric.go (new file, 45 lines)
@@ -0,0 +1,45 @@
package cconf

import (
    "path"

    "github.com/toolkits/pkg/file"
    "github.com/toolkits/pkg/runner"
)

// MetricDescType holds metric descriptions. The maps are fully loaded before
// any reads happen, so a concurrent map is not necessary for this store.
type MetricDescType struct {
    CommonDesc map[string]string `yaml:",inline" json:"common"`
    Zh         map[string]string `yaml:"zh" json:"zh"`
    En         map[string]string `yaml:"en" json:"en"`
}

var MetricDesc MetricDescType

// GetMetricDesc returns the description of the metric in the given language.
// If the metric is not registered, an empty string is returned.
func GetMetricDesc(lang, metric string) string {
    var m map[string]string
    if lang == "zh" {
        m = MetricDesc.Zh
    } else {
        m = MetricDesc.En
    }
    if m != nil {
        if desc, has := m[metric]; has {
            return desc
        }
    }

    return MetricDesc.CommonDesc[metric]
}

func LoadMetricsYaml(metricsYamlFile string) error {
    fp := metricsYamlFile
    if fp == "" {
        fp = path.Join(runner.Cwd, "etc", "metrics.yaml")
    }
    if !file.IsExist(fp) {
        return nil
    }
    return file.ReadYaml(fp, &MetricDesc)
}
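The lookup order in `GetMetricDesc` is: language-specific map first, then the common (inline) map, then an empty string. A small sketch of that fallback, with illustrative names (`lookupDesc` and the sample maps are not part of the source):

```go
package main

import "fmt"

// lookupDesc mirrors GetMetricDesc's fallback chain: consult the
// language-specific map first, then the common map; an unregistered
// metric yields "".
func lookupDesc(langMap, common map[string]string, metric string) string {
    if langMap != nil {
        if d, ok := langMap[metric]; ok {
            return d
        }
    }
    return common[metric]
}

func main() {
    zh := map[string]string{"cpu_usage": "CPU使用率"}
    common := map[string]string{"mem_used": "memory used"}
    fmt.Println(lookupDesc(zh, common, "cpu_usage"))
    fmt.Println(lookupDesc(zh, common, "mem_used"))
    fmt.Println(lookupDesc(zh, common, "unknown"))
}
```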
center/cconf/ops.go (new file, 39 lines)
@@ -0,0 +1,39 @@
package cconf

import (
    "path"

    "github.com/toolkits/pkg/file"
    "github.com/toolkits/pkg/runner"
)

var Operations = Operation{}

type Operation struct {
    Ops []Ops `yaml:"ops"`
}

type Ops struct {
    Name  string   `yaml:"name" json:"name"`
    Cname string   `yaml:"cname" json:"cname"`
    Ops   []string `yaml:"ops" json:"ops"`
}

func LoadOpsYaml(opsYamlFile string) error {
    fp := opsYamlFile
    if fp == "" {
        fp = path.Join(runner.Cwd, "etc", "ops.yaml")
    }
    if !file.IsExist(fp) {
        return nil
    }
    return file.ReadYaml(fp, &Operations)
}

func GetAllOps(ops []Ops) []string {
    var ret []string
    for _, op := range ops {
        ret = append(ret, op.Ops...)
    }
    return ret
}
center/cconf/plugin.go (new file, 22 lines)
@@ -0,0 +1,22 @@
package cconf

var Plugins = []Plugin{
    {
        Id:       1,
        Category: "timeseries",
        Type:     "prometheus",
        TypeName: "Prometheus Like",
    },
    {
        Id:       2,
        Category: "logging",
        Type:     "elasticsearch",
        TypeName: "Elasticsearch",
    },
    {
        Id:       3,
        Category: "logging",
        Type:     "jaeger",
        TypeName: "Jaeger",
    },
}
center/center.go (new file, 96 lines)
@@ -0,0 +1,96 @@
package center

import (
    "context"
    "fmt"

    "github.com/ccfos/nightingale/v6/alert"
    "github.com/ccfos/nightingale/v6/alert/astats"
    "github.com/ccfos/nightingale/v6/alert/process"
    "github.com/ccfos/nightingale/v6/center/cconf"
    "github.com/ccfos/nightingale/v6/center/metas"
    "github.com/ccfos/nightingale/v6/center/sso"
    "github.com/ccfos/nightingale/v6/conf"
    "github.com/ccfos/nightingale/v6/memsto"
    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/ctx"
    "github.com/ccfos/nightingale/v6/pkg/httpx"
    "github.com/ccfos/nightingale/v6/pkg/i18nx"
    "github.com/ccfos/nightingale/v6/pkg/logx"
    "github.com/ccfos/nightingale/v6/prom"
    "github.com/ccfos/nightingale/v6/pushgw/idents"
    "github.com/ccfos/nightingale/v6/pushgw/writer"
    "github.com/ccfos/nightingale/v6/storage"

    alertrt "github.com/ccfos/nightingale/v6/alert/router"
    centerrt "github.com/ccfos/nightingale/v6/center/router"
    pushgwrt "github.com/ccfos/nightingale/v6/pushgw/router"
)

func Initialize(configDir string, cryptoKey string) (func(), error) {
    config, err := conf.InitConfig(configDir, cryptoKey)
    if err != nil {
        return nil, fmt.Errorf("failed to init config: %v", err)
    }

    cconf.LoadMetricsYaml(config.Center.MetricsYamlFile)
    cconf.LoadOpsYaml(config.Center.OpsYamlFile)

    logxClean, err := logx.Init(config.Log)
    if err != nil {
        return nil, err
    }

    i18nx.Init()

    db, err := storage.New(config.DB)
    if err != nil {
        return nil, err
    }
    ctx := ctx.NewContext(context.Background(), db)
    models.InitRoot(ctx)

    redis, err := storage.NewRedis(config.Redis)
    if err != nil {
        return nil, err
    }

    metas := metas.New(redis)
    idents := idents.New(db)

    syncStats := memsto.NewSyncStats()
    alertStats := astats.NewSyncStats()

    sso := sso.Init(config.Center, ctx)

    busiGroupCache := memsto.NewBusiGroupCache(ctx, syncStats)
    targetCache := memsto.NewTargetCache(ctx, syncStats, redis)
    dsCache := memsto.NewDatasourceCache(ctx, syncStats)
    alertMuteCache := memsto.NewAlertMuteCache(ctx, syncStats)
    alertRuleCache := memsto.NewAlertRuleCache(ctx, syncStats)
    notifyConfigCache := memsto.NewNotifyConfigCache(ctx)

    promClients := prom.NewPromClient(ctx, config.Alert.Heartbeat)

    externalProcessors := process.NewExternalProcessors()
    alert.Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, dsCache, ctx, promClients, true)

    writers := writer.NewWriters(config.Pushgw)

    alertrtRouter := alertrt.New(config.HTTP, config.Alert, alertMuteCache, targetCache, busiGroupCache, alertStats, ctx, externalProcessors)
    centerRouter := centerrt.New(config.HTTP, config.Center, cconf.Operations, dsCache, notifyConfigCache, promClients, redis, sso, ctx, metas, targetCache)
    pushgwRouter := pushgwrt.New(config.HTTP, config.Pushgw, targetCache, busiGroupCache, idents, writers, ctx)

    r := httpx.GinEngine(config.Global.RunMode, config.HTTP)

    centerRouter.Config(r)
    alertrtRouter.Config(r)
    pushgwRouter.Config(r)

    httpClean := httpx.Init(config.HTTP, r)

    return func() {
        logxClean()
        httpClean()
    }, nil
}
@@ -1,4 +1,4 @@
-package stat
+package cstats

 import (
 	"time"
@@ -6,7 +6,7 @@ import (
 	"github.com/prometheus/client_golang/prometheus"
 )

-const Service = "n9e-webapi"
+const Service = "n9e-center"

 var (
 	labels = []string{"service", "code", "path", "method"}
center/metas/metas.go (new file, 104 lines)
@@ -0,0 +1,104 @@
package metas

import (
    "context"
    "sync"
    "time"

    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/storage"

    "github.com/toolkits/pkg/logger"
)

type Set struct {
    sync.RWMutex
    items map[string]models.HostMeta
    redis storage.Redis
}

func New(redis storage.Redis) *Set {
    set := &Set{
        items: make(map[string]models.HostMeta),
        redis: redis,
    }

    set.Init()
    return set
}

func (s *Set) Init() {
    go s.LoopPersist()
}

func (s *Set) MSet(items map[string]models.HostMeta) {
    s.Lock()
    defer s.Unlock()
    for ident, meta := range items {
        s.items[ident] = meta
    }
}

func (s *Set) Set(ident string, meta models.HostMeta) {
    s.Lock()
    defer s.Unlock()
    s.items[ident] = meta
}

func (s *Set) LoopPersist() {
    for {
        time.Sleep(time.Second)
        s.persist()
    }
}

func (s *Set) persist() {
    var items map[string]models.HostMeta

    s.Lock()
    if len(s.items) == 0 {
        s.Unlock()
        return
    }

    items = s.items
    s.items = make(map[string]models.HostMeta)
    s.Unlock()

    s.updateMeta(items)
}

func (s *Set) updateMeta(items map[string]models.HostMeta) {
    m := make(map[string]models.HostMeta, 100)
    num := 0

    for _, meta := range items {
        m[meta.Hostname] = meta
        num++
        if num == 100 {
            if err := s.updateTargets(m); err != nil {
                logger.Errorf("failed to update targets: %v", err)
            }
            m = make(map[string]models.HostMeta, 100)
            num = 0
        }
    }

    if err := s.updateTargets(m); err != nil {
        logger.Errorf("failed to update targets: %v", err)
    }
}

func (s *Set) updateTargets(m map[string]models.HostMeta) error {
    count := int64(len(m))
    if count == 0 {
        return nil
    }

    newMap := make(map[string]interface{}, count)
    for ident, meta := range m {
        newMap[models.WrapIdent(ident)] = meta
    }
    err := storage.MSet(context.Background(), s.redis, newMap)
    return err
}
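`updateMeta` flushes host metadata to Redis in batches of at most 100, with a trailing flush for the remainder (the final `updateTargets` call is safe for an empty batch because it returns early on zero items). The batching pattern can be sketched in isolation (`chunkIdents` is an illustrative stand-in, not code from the diff):

```go
package main

import "fmt"

// chunkIdents splits idents into groups of at most size, mirroring the
// flush-every-100 pattern in Set.updateMeta: full batches are emitted as
// the loop runs, then one final batch holds whatever is left over.
func chunkIdents(idents []string, size int) [][]string {
    var batches [][]string
    for len(idents) > size {
        batches = append(batches, idents[:size])
        idents = idents[size:]
    }
    if len(idents) > 0 {
        batches = append(batches, idents)
    }
    return batches
}

func main() {
    b := chunkIdents(make([]string, 250), 100)
    fmt.Println(len(b)) // 250 idents -> batches of 100, 100, 50
}
```

Keeping each Redis MSET bounded avoids building one oversized command when many hosts report at once.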
410
center/router/router.go
Normal file
410
center/router/router.go
Normal file
@@ -0,0 +1,410 @@
|
||||
package router
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"path"
|
||||
"runtime"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/ccfos/nightingale/v6/center/cconf"
|
||||
"github.com/ccfos/nightingale/v6/center/cstats"
|
||||
"github.com/ccfos/nightingale/v6/center/metas"
|
||||
"github.com/ccfos/nightingale/v6/center/sso"
|
||||
"github.com/ccfos/nightingale/v6/memsto"
|
||||
"github.com/ccfos/nightingale/v6/pkg/aop"
|
||||
"github.com/ccfos/nightingale/v6/pkg/ctx"
|
||||
"github.com/ccfos/nightingale/v6/pkg/httpx"
|
||||
"github.com/ccfos/nightingale/v6/prom"
|
||||
"github.com/ccfos/nightingale/v6/storage"
|
||||
"github.com/toolkits/pkg/runner"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Router struct {
|
||||
HTTP httpx.Config
|
||||
Center cconf.Center
|
||||
Operations cconf.Operation
|
||||
DatasourceCache *memsto.DatasourceCacheType
|
||||
NotifyConfigCache *memsto.NotifyConfigCacheType
|
||||
PromClients *prom.PromClientMap
|
||||
Redis storage.Redis
|
||||
MetaSet *metas.Set
|
||||
TargetCache *memsto.TargetCacheType
|
||||
Sso *sso.SsoClient
|
||||
Ctx *ctx.Context
|
||||
}
|
||||
|
||||
func New(httpConfig httpx.Config, center cconf.Center, operations cconf.Operation, ds *memsto.DatasourceCacheType, ncc *memsto.NotifyConfigCacheType,
|
||||
pc *prom.PromClientMap, redis storage.Redis, sso *sso.SsoClient, ctx *ctx.Context, metaSet *metas.Set, tc *memsto.TargetCacheType) *Router {
|
||||
return &Router{
|
||||
HTTP: httpConfig,
|
||||
Center: center,
|
||||
Operations: operations,
|
||||
DatasourceCache: ds,
|
||||
NotifyConfigCache: ncc,
|
||||
PromClients: pc,
|
||||
Redis: redis,
|
||||
MetaSet: metaSet,
|
||||
TargetCache: tc,
|
||||
Sso: sso,
|
||||
Ctx: ctx,
|
||||
}
|
||||
}
|
||||
|
||||
func stat() gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
start := time.Now()
|
||||
c.Next()
|
||||
|
||||
code := fmt.Sprintf("%d", c.Writer.Status())
|
||||
method := c.Request.Method
|
||||
labels := []string{cstats.Service, code, c.FullPath(), method}
|
||||
|
||||
cstats.RequestCounter.WithLabelValues(labels...).Inc()
|
||||
cstats.RequestDuration.WithLabelValues(labels...).Observe(float64(time.Since(start).Seconds()))
|
||||
}
|
||||
}
|
||||
|
||||
func languageDetector(i18NHeaderKey string) gin.HandlerFunc {
|
||||
headerKey := i18NHeaderKey
|
||||
return func(c *gin.Context) {
|
||||
if headerKey != "" {
|
||||
lang := c.GetHeader(headerKey)
|
||||
if lang != "" {
|
||||
if strings.HasPrefix(lang, "zh") {
|
||||
c.Request.Header.Set("X-Language", "zh")
|
||||
} else if strings.HasPrefix(lang, "en") {
|
||||
c.Request.Header.Set("X-Language", "en")
|
||||
} else {
|
||||
c.Request.Header.Set("X-Language", lang)
|
||||
}
|
||||
} else {
|
||||
c.Request.Header.Set("X-Language", "en")
|
||||
}
|
||||
}
|
||||
c.Next()
|
||||
}
|
||||
}
|
||||
|
||||
func (rt *Router) configNoRoute(r *gin.Engine) {
|
||||
r.NoRoute(func(c *gin.Context) {
|
||||
arr := strings.Split(c.Request.URL.Path, ".")
|
||||
suffix := arr[len(arr)-1]
|
||||
switch suffix {
|
||||
case "png", "jpeg", "jpg", "svg", "ico", "gif", "css", "js", "html", "htm", "gz", "zip", "map":
|
||||
cwdarr := []string{"/"}
|
||||
if runtime.GOOS == "windows" {
|
||||
cwdarr[0] = ""
|
||||
}
|
||||
cwdarr = append(cwdarr, strings.Split(runner.Cwd, "/")...)
|
||||
cwdarr = append(cwdarr, "pub")
|
||||
cwdarr = append(cwdarr, strings.Split(c.Request.URL.Path, "/")...)
|
||||
c.File(path.Join(cwdarr...))
|
||||
default:
|
||||
cwdarr := []string{"/"}
|
||||
if runtime.GOOS == "windows" {
|
||||
cwdarr[0] = ""
|
||||
}
|
||||
cwdarr = append(cwdarr, strings.Split(runner.Cwd, "/")...)
|
||||
cwdarr = append(cwdarr, "pub")
|
||||
cwdarr = append(cwdarr, "index.html")
|
||||
c.File(path.Join(cwdarr...))
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func (rt *Router) Config(r *gin.Engine) {
|
||||
r.Use(stat())
|
||||
r.Use(languageDetector(rt.Center.I18NHeaderKey))
|
||||
r.Use(aop.Recovery())
|
||||
|
||||
pagesPrefix := "/api/n9e"
|
||||
pages := r.Group(pagesPrefix)
|
||||
{
|
||||
|
||||
if rt.Center.AnonymousAccess.PromQuerier {
|
||||
pages.Any("/proxy/:id/*url", rt.dsProxy)
|
||||
pages.POST("/query-range-batch", rt.promBatchQueryRange)
|
||||
pages.POST("/query-instant-batch", rt.promBatchQueryInstant)
|
||||
pages.GET("/datasource/brief", rt.datasourceBriefs)
|
||||
} else {
|
||||
pages.Any("/proxy/:id/*url", rt.auth(), rt.dsProxy)
|
||||
pages.POST("/query-range-batch", rt.auth(), rt.promBatchQueryRange)
|
||||
pages.POST("/query-instant-batch", rt.auth(), rt.promBatchQueryInstant)
|
||||
pages.GET("/datasource/brief", rt.auth(), rt.datasourceBriefs)
|
||||
}
|
||||
|
||||
pages.POST("/auth/login", rt.jwtMock(), rt.loginPost)
|
||||
pages.POST("/auth/logout", rt.jwtMock(), rt.logoutPost)
|
||||
pages.POST("/auth/refresh", rt.jwtMock(), rt.refreshPost)
|
||||
|
||||
pages.GET("/auth/sso-config", rt.ssoConfigNameGet)
|
||||
pages.GET("/auth/redirect", rt.loginRedirect)
|
||||
pages.GET("/auth/redirect/cas", rt.loginRedirectCas)
|
||||
pages.GET("/auth/redirect/oauth", rt.loginRedirectOAuth)
|
||||
pages.GET("/auth/callback", rt.loginCallback)
|
||||
pages.GET("/auth/callback/cas", rt.loginCallbackCas)
|
||||
pages.GET("/auth/callback/oauth", rt.loginCallbackOAuth)
|
||||
|
||||
pages.GET("/metrics/desc", rt.metricsDescGetFile)
|
||||
pages.POST("/metrics/desc", rt.metricsDescGetMap)
|
||||
|
||||
pages.GET("/notify-channels", rt.notifyChannelsGets)
|
||||
pages.GET("/contact-keys", rt.contactKeysGets)
|
||||
|
||||
pages.GET("/self/perms", rt.auth(), rt.user(), rt.permsGets)
|
||||
pages.GET("/self/profile", rt.auth(), rt.user(), rt.selfProfileGet)
|
||||
pages.PUT("/self/profile", rt.auth(), rt.user(), rt.selfProfilePut)
|
||||
pages.PUT("/self/password", rt.auth(), rt.user(), rt.selfPasswordPut)
|
||||
|
||||
pages.GET("/users", rt.auth(), rt.user(), rt.perm("/users"), rt.userGets)
|
||||
pages.POST("/users", rt.auth(), rt.admin(), rt.userAddPost)
|
||||
pages.GET("/user/:id/profile", rt.auth(), rt.userProfileGet)
|
||||
pages.PUT("/user/:id/profile", rt.auth(), rt.admin(), rt.userProfilePut)
|
||||
pages.PUT("/user/:id/password", rt.auth(), rt.admin(), rt.userPasswordPut)
|
||||
pages.DELETE("/user/:id", rt.auth(), rt.admin(), rt.userDel)
|
||||
|
||||
pages.GET("/metric-views", rt.auth(), rt.metricViewGets)
|
||||
pages.DELETE("/metric-views", rt.auth(), rt.user(), rt.metricViewDel)
|
||||
pages.POST("/metric-views", rt.auth(), rt.user(), rt.metricViewAdd)
|
||||
pages.PUT("/metric-views", rt.auth(), rt.user(), rt.metricViewPut)
|
||||
|
||||
pages.GET("/user-groups", rt.auth(), rt.user(), rt.userGroupGets)
|
||||
pages.POST("/user-groups", rt.auth(), rt.user(), rt.perm("/user-groups/add"), rt.userGroupAdd)
|
||||
pages.GET("/user-group/:id", rt.auth(), rt.user(), rt.userGroupGet)
|
||||
pages.PUT("/user-group/:id", rt.auth(), rt.user(), rt.perm("/user-groups/put"), rt.userGroupWrite(), rt.userGroupPut)
|
||||
pages.DELETE("/user-group/:id", rt.auth(), rt.user(), rt.perm("/user-groups/del"), rt.userGroupWrite(), rt.userGroupDel)
|
||||
pages.POST("/user-group/:id/members", rt.auth(), rt.user(), rt.perm("/user-groups/put"), rt.userGroupWrite(), rt.userGroupMemberAdd)
|
||||
pages.DELETE("/user-group/:id/members", rt.auth(), rt.user(), rt.perm("/user-groups/put"), rt.userGroupWrite(), rt.userGroupMemberDel)
|
||||
|
||||
pages.GET("/busi-groups", rt.auth(), rt.user(), rt.busiGroupGets)
|
||||
pages.POST("/busi-groups", rt.auth(), rt.user(), rt.perm("/busi-groups/add"), rt.busiGroupAdd)
|
||||
pages.GET("/busi-groups/alertings", rt.auth(), rt.busiGroupAlertingsGets)
|
||||
pages.GET("/busi-group/:id", rt.auth(), rt.user(), rt.bgro(), rt.busiGroupGet)
|
||||
pages.PUT("/busi-group/:id", rt.auth(), rt.user(), rt.perm("/busi-groups/put"), rt.bgrw(), rt.busiGroupPut)
|
||||
pages.POST("/busi-group/:id/members", rt.auth(), rt.user(), rt.perm("/busi-groups/put"), rt.bgrw(), rt.busiGroupMemberAdd)
|
||||
pages.DELETE("/busi-group/:id/members", rt.auth(), rt.user(), rt.perm("/busi-groups/put"), rt.bgrw(), rt.busiGroupMemberDel)
|
||||
pages.DELETE("/busi-group/:id", rt.auth(), rt.user(), rt.perm("/busi-groups/del"), rt.bgrw(), rt.busiGroupDel)
|
||||
pages.GET("/busi-group/:id/perm/:perm", rt.auth(), rt.user(), rt.checkBusiGroupPerm)
|
||||
|
||||
pages.GET("/targets", rt.auth(), rt.user(), rt.targetGets)
|
||||
pages.POST("/target/list", rt.auth(), rt.user(), rt.targetGetsByHostFilter)
|
||||
pages.DELETE("/targets", rt.auth(), rt.user(), rt.perm("/targets/del"), rt.targetDel)
|
||||
pages.GET("/targets/tags", rt.auth(), rt.user(), rt.targetGetTags)
|
||||
pages.POST("/targets/tags", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetBindTagsByFE)
|
||||
pages.DELETE("/targets/tags", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUnbindTagsByFE)
|
||||
pages.PUT("/targets/note", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUpdateNote)
|
||||
pages.PUT("/targets/bgid", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUpdateBgid)
|
||||
|
||||
pages.POST("/builtin-cate-favorite", rt.auth(), rt.user(), rt.builtinCateFavoriteAdd)
|
||||
pages.DELETE("/builtin-cate-favorite/:name", rt.auth(), rt.user(), rt.builtinCateFavoriteDel)
|
||||
|
||||
pages.GET("/builtin-boards", rt.builtinBoardGets)
|
||||
pages.GET("/builtin-board/:name", rt.builtinBoardGet)
|
||||
pages.GET("/dashboards/builtin/list", rt.builtinBoardGets)
|
||||
pages.GET("/builtin-boards-cates", rt.auth(), rt.user(), rt.builtinBoardCateGets)
|
||||
pages.POST("/builtin-boards-detail", rt.auth(), rt.user(), rt.builtinBoardDetailGets)
|
||||
pages.GET("/integrations/icon/:cate/:name", rt.builtinIcon)
|
||||
|
||||
pages.GET("/busi-group/:id/boards", rt.auth(), rt.user(), rt.perm("/dashboards"), rt.bgro(), rt.boardGets)
|
||||
pages.POST("/busi-group/:id/boards", rt.auth(), rt.user(), rt.perm("/dashboards/add"), rt.bgrw(), rt.boardAdd)
|
||||
pages.POST("/busi-group/:id/board/:bid/clone", rt.auth(), rt.user(), rt.perm("/dashboards/add"), rt.bgrw(), rt.boardClone)
|
||||
|
||||
pages.GET("/board/:bid", rt.boardGet)
|
||||
pages.GET("/board/:bid/pure", rt.boardPureGet)
|
||||
pages.PUT("/board/:bid", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.boardPut)
|
||||
pages.PUT("/board/:bid/configs", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.boardPutConfigs)
|
||||
pages.PUT("/board/:bid/public", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.boardPutPublic)
|
||||
pages.DELETE("/boards", rt.auth(), rt.user(), rt.perm("/dashboards/del"), rt.boardDel)
|
||||
|
||||
pages.GET("/share-charts", rt.chartShareGets)
|
||||
pages.POST("/share-charts", rt.auth(), rt.chartShareAdd)
|
||||
|
||||
pages.GET("/alert-rules/builtin/alerts-cates", rt.auth(), rt.user(), rt.builtinAlertCateGets)
|
||||
pages.GET("/alert-rules/builtin/list", rt.auth(), rt.user(), rt.builtinAlertRules)
|
||||
|
||||
pages.GET("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules"), rt.alertRuleGets)
|
||||
pages.POST("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.alertRuleAddByFE)
|
||||
pages.POST("/busi-group/:id/alert-rules/import", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.alertRuleAddByImport)
|
||||
pages.DELETE("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules/del"), rt.bgrw(), rt.alertRuleDel)
|
||||
pages.PUT("/busi-group/:id/alert-rules/fields", rt.auth(), rt.user(), rt.perm("/alert-rules/put"), rt.bgrw(), rt.alertRulePutFields)
|
||||
pages.PUT("/busi-group/:id/alert-rule/:arid", rt.auth(), rt.user(), rt.perm("/alert-rules/put"), rt.alertRulePutByFE)
|
||||
pages.GET("/alert-rule/:arid", rt.auth(), rt.user(), rt.perm("/alert-rules"), rt.alertRuleGet)
|
||||
|
||||
pages.GET("/busi-group/:id/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGets)
|
||||
pages.POST("/busi-group/:id/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules/add"), rt.bgrw(), rt.recordingRuleAddByFE)
|
||||
pages.DELETE("/busi-group/:id/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules/del"), rt.bgrw(), rt.recordingRuleDel)
|
||||
pages.PUT("/busi-group/:id/recording-rule/:rrid", rt.auth(), rt.user(), rt.perm("/recording-rules/put"), rt.bgrw(), rt.recordingRulePutByFE)
|
||||
pages.GET("/recording-rule/:rrid", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGet)
|
||||
pages.PUT("/busi-group/:id/recording-rules/fields", rt.auth(), rt.user(), rt.perm("/recording-rules/put"), rt.recordingRulePutFields)
|
||||
|
||||
pages.GET("/busi-group/:id/alert-mutes", rt.auth(), rt.user(), rt.perm("/alert-mutes"), rt.bgro(), rt.alertMuteGetsByBG)
|
||||
pages.POST("/busi-group/:id/alert-mutes", rt.auth(), rt.user(), rt.perm("/alert-mutes/add"), rt.bgrw(), rt.alertMuteAdd)
|
||||
pages.DELETE("/busi-group/:id/alert-mutes", rt.auth(), rt.user(), rt.perm("/alert-mutes/del"), rt.bgrw(), rt.alertMuteDel)
|
||||
pages.PUT("/busi-group/:id/alert-mute/:amid", rt.auth(), rt.user(), rt.perm("/alert-mutes/put"), rt.alertMutePutByFE)
|
||||
pages.PUT("/busi-group/:id/alert-mutes/fields", rt.auth(), rt.user(), rt.perm("/alert-mutes/put"), rt.bgrw(), rt.alertMutePutFields)
|
||||
|
||||
pages.GET("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes"), rt.bgro(), rt.alertSubscribeGets)
|
||||
pages.GET("/alert-subscribe/:sid", rt.auth(), rt.user(), rt.perm("/alert-subscribes"), rt.alertSubscribeGet)
|
||||
pages.POST("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/add"), rt.bgrw(), rt.alertSubscribeAdd)
|
||||
pages.PUT("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/put"), rt.bgrw(), rt.alertSubscribePut)
|
||||
pages.DELETE("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/del"), rt.bgrw(), rt.alertSubscribeDel)
|
||||
|
||||
if rt.Center.AnonymousAccess.AlertDetail {
|
||||
pages.GET("/alert-cur-event/:eid", rt.alertCurEventGet)
|
||||
pages.GET("/alert-his-event/:eid", rt.alertHisEventGet)
|
||||
} else {
|
||||
pages.GET("/alert-cur-event/:eid", rt.auth(), rt.alertCurEventGet)
|
||||
pages.GET("/alert-his-event/:eid", rt.auth(), rt.alertHisEventGet)
|
||||
}
|
||||
|
||||
// card logic
|
||||
pages.GET("/alert-cur-events/list", rt.auth(), rt.alertCurEventsList)
|
||||
pages.GET("/alert-cur-events/card", rt.auth(), rt.alertCurEventsCard)
|
||||
pages.POST("/alert-cur-events/card/details", rt.auth(), rt.alertCurEventsCardDetails)
|
||||
pages.GET("/alert-his-events/list", rt.auth(), rt.alertHisEventsList)
|
||||
pages.DELETE("/alert-cur-events", rt.auth(), rt.user(), rt.perm("/alert-cur-events/del"), rt.alertCurEventDel)
|
||||
|
||||
pages.GET("/alert-aggr-views", rt.auth(), rt.alertAggrViewGets)
|
||||
pages.DELETE("/alert-aggr-views", rt.auth(), rt.user(), rt.alertAggrViewDel)
|
||||
pages.POST("/alert-aggr-views", rt.auth(), rt.user(), rt.alertAggrViewAdd)
|
||||
pages.PUT("/alert-aggr-views", rt.auth(), rt.user(), rt.alertAggrViewPut)
|
||||
|
||||
pages.GET("/busi-group/:id/task-tpls", rt.auth(), rt.user(), rt.perm("/job-tpls"), rt.bgro(), rt.taskTplGets)
|
||||
pages.POST("/busi-group/:id/task-tpls", rt.auth(), rt.user(), rt.perm("/job-tpls/add"), rt.bgrw(), rt.taskTplAdd)
|
||||
pages.DELETE("/busi-group/:id/task-tpl/:tid", rt.auth(), rt.user(), rt.perm("/job-tpls/del"), rt.bgrw(), rt.taskTplDel)
|
||||
pages.POST("/busi-group/:id/task-tpls/tags", rt.auth(), rt.user(), rt.perm("/job-tpls/put"), rt.bgrw(), rt.taskTplBindTags)
|
||||
pages.DELETE("/busi-group/:id/task-tpls/tags", rt.auth(), rt.user(), rt.perm("/job-tpls/put"), rt.bgrw(), rt.taskTplUnbindTags)
|
||||
pages.GET("/busi-group/:id/task-tpl/:tid", rt.auth(), rt.user(), rt.perm("/job-tpls"), rt.bgro(), rt.taskTplGet)
|
||||
pages.PUT("/busi-group/:id/task-tpl/:tid", rt.auth(), rt.user(), rt.perm("/job-tpls/put"), rt.bgrw(), rt.taskTplPut)
|
||||
|
||||
pages.GET("/busi-group/:id/tasks", rt.auth(), rt.user(), rt.perm("/job-tasks"), rt.bgro(), rt.taskGets)
|
||||
pages.POST("/busi-group/:id/tasks", rt.auth(), rt.user(), rt.perm("/job-tasks/add"), rt.bgrw(), rt.taskAdd)
|
||||
pages.GET("/busi-group/:id/task/*url", rt.auth(), rt.user(), rt.perm("/job-tasks"), rt.taskProxy)
|
||||
pages.PUT("/busi-group/:id/task/*url", rt.auth(), rt.user(), rt.perm("/job-tasks/put"), rt.bgrw(), rt.taskProxy)
|
||||
|
||||
pages.GET("/servers", rt.auth(), rt.admin(), rt.serversGet)
|
||||
pages.GET("/server-clusters", rt.auth(), rt.admin(), rt.serverClustersGet)
|
||||
|
||||
pages.POST("/datasource/list", rt.auth(), rt.datasourceList)
|
||||
pages.POST("/datasource/plugin/list", rt.auth(), rt.pluginList)
|
||||
pages.POST("/datasource/upsert", rt.auth(), rt.admin(), rt.datasourceUpsert)
|
||||
pages.POST("/datasource/desc", rt.auth(), rt.admin(), rt.datasourceGet)
|
||||
pages.POST("/datasource/status/update", rt.auth(), rt.admin(), rt.datasourceUpdataStatus)
|
||||
pages.DELETE("/datasource/", rt.auth(), rt.admin(), rt.datasourceDel)
|
||||
|
||||
pages.GET("/roles", rt.auth(), rt.admin(), rt.roleGets)
|
||||
pages.POST("/roles", rt.auth(), rt.admin(), rt.roleAdd)
|
||||
pages.PUT("/roles", rt.auth(), rt.admin(), rt.rolePut)
|
||||
pages.DELETE("/role/:id", rt.auth(), rt.admin(), rt.roleDel)
|
||||
|
||||
pages.GET("/role/:id/ops", rt.auth(), rt.admin(), rt.operationOfRole)
|
||||
pages.PUT("/role/:id/ops", rt.auth(), rt.admin(), rt.roleBindOperation)
|
||||
pages.GET("operation", rt.operations)
|
||||
|
||||
pages.GET("/notify-tpls", rt.auth(), rt.admin(), rt.notifyTplGets)
|
||||
pages.PUT("/notify-tpl/content", rt.auth(), rt.admin(), rt.notifyTplUpdateContent)
|
||||
pages.PUT("/notify-tpl", rt.auth(), rt.admin(), rt.notifyTplUpdate)
|
||||
pages.POST("/notify-tpl/preview", rt.auth(), rt.admin(), rt.notifyTplPreview)

	pages.GET("/sso-configs", rt.auth(), rt.admin(), rt.ssoConfigGets)
	pages.PUT("/sso-config", rt.auth(), rt.admin(), rt.ssoConfigUpdate)

	pages.GET("/webhooks", rt.auth(), rt.admin(), rt.webhookGets)
	pages.PUT("/webhooks", rt.auth(), rt.admin(), rt.webhookPuts)

	pages.GET("/notify-script", rt.auth(), rt.admin(), rt.notifyScriptGet)
	pages.PUT("/notify-script", rt.auth(), rt.admin(), rt.notifyScriptPut)

	pages.GET("/notify-channel", rt.auth(), rt.admin(), rt.notifyChannelGets)
	pages.PUT("/notify-channel", rt.auth(), rt.admin(), rt.notifyChannelPuts)

	pages.GET("/notify-contact", rt.auth(), rt.admin(), rt.notifyContactGets)
	pages.PUT("/notify-contact", rt.auth(), rt.admin(), rt.notifyContactPuts)

	pages.GET("/notify-config", rt.auth(), rt.admin(), rt.notifyConfigGet)
	pages.PUT("/notify-config", rt.auth(), rt.admin(), rt.notifyConfigPut)
}

if rt.HTTP.Service.Enable {
	service := r.Group("/v1/n9e")
	if len(rt.HTTP.Service.BasicAuth) > 0 {
		service.Use(gin.BasicAuth(rt.HTTP.Service.BasicAuth))
	}
	{
		service.Any("/prometheus/*url", rt.dsProxy)
		service.POST("/users", rt.userAddPost)
		service.GET("/users", rt.userFindAll)

		service.GET("/targets", rt.targetGets)
		service.GET("/targets/tags", rt.targetGetTags)
		service.POST("/targets/tags", rt.targetBindTagsByService)
		service.DELETE("/targets/tags", rt.targetUnbindTagsByService)
		service.PUT("/targets/note", rt.targetUpdateNoteByService)

		service.POST("/alert-rules", rt.alertRuleAddByService)
		service.DELETE("/alert-rules", rt.alertRuleDelByService)
		service.PUT("/alert-rule/:arid", rt.alertRulePutByService)
		service.GET("/alert-rule/:arid", rt.alertRuleGet)
		service.GET("/alert-rules", rt.alertRulesGetByService)

		service.GET("/alert-mutes", rt.alertMuteGets)
		service.POST("/alert-mutes", rt.alertMuteAddByService)
		service.DELETE("/alert-mutes", rt.alertMuteDel)

		service.GET("/alert-cur-events", rt.alertCurEventsList)
		service.GET("/alert-his-events", rt.alertHisEventsList)
		service.GET("/alert-his-event/:eid", rt.alertHisEventGet)

		service.GET("/config/:id", rt.configGet)
		service.GET("/configs", rt.configsGet)
		service.PUT("/configs", rt.configsPut)
		service.POST("/configs", rt.configsPost)
		service.DELETE("/configs", rt.configsDel)

		service.POST("/conf-prop/encrypt", rt.confPropEncrypt)
		service.POST("/conf-prop/decrypt", rt.confPropDecrypt)
	}
}

if rt.HTTP.Heartbeat.Enable {
	heartbeat := r.Group("/v1/n9e")
	{
		if len(rt.HTTP.Heartbeat.BasicAuth) > 0 {
			heartbeat.Use(gin.BasicAuth(rt.HTTP.Heartbeat.BasicAuth))
		}
		heartbeat.POST("/heartbeat", rt.heartbeat)
	}
}

rt.configNoRoute(r)
}

func Render(c *gin.Context, data, msg interface{}) {
	if msg == nil {
		if data == nil {
			data = struct{}{}
		}
		c.JSON(http.StatusOK, gin.H{"data": data, "error": ""})
	} else {
		c.JSON(http.StatusOK, gin.H{"error": gin.H{"message": msg}})
	}
}

func Dangerous(c *gin.Context, v interface{}, code ...int) {
	if v == nil {
		return
	}

	switch t := v.(type) {
	case string:
		if t != "" {
			c.JSON(http.StatusOK, gin.H{"error": v})
		}
	case error:
		c.JSON(http.StatusOK, gin.H{"error": t.Error()})
	}
}
@@ -3,19 +3,20 @@ package router

 import (
 	"net/http"

-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/ccfos/nightingale/v6/models"

 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 )

 // no param
-func alertAggrViewGets(c *gin.Context) {
-	lst, err := models.AlertAggrViewGets(c.MustGet("userid"))
+func (rt *Router) alertAggrViewGets(c *gin.Context) {
+	lst, err := models.AlertAggrViewGets(rt.Ctx, c.MustGet("userid"))
 	ginx.NewRender(c).Data(lst, err)
 }

 // body: name, rule, cate
-func alertAggrViewAdd(c *gin.Context) {
+func (rt *Router) alertAggrViewAdd(c *gin.Context) {
 	var f models.AlertAggrView
 	ginx.BindJSON(c, &f)

@@ -27,31 +28,31 @@ func alertAggrViewAdd(c *gin.Context) {

 	f.Id = 0
 	f.CreateBy = me.Id
-	ginx.Dangerous(f.Add())
+	ginx.Dangerous(f.Add(rt.Ctx))

 	ginx.NewRender(c).Data(f, nil)
 }

 // body: ids
-func alertAggrViewDel(c *gin.Context) {
+func (rt *Router) alertAggrViewDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()

 	me := c.MustGet("user").(*models.User)
 	if me.IsAdmin() {
-		ginx.NewRender(c).Message(models.AlertAggrViewDel(f.Ids))
+		ginx.NewRender(c).Message(models.AlertAggrViewDel(rt.Ctx, f.Ids))
 	} else {
-		ginx.NewRender(c).Message(models.AlertAggrViewDel(f.Ids, me.Id))
+		ginx.NewRender(c).Message(models.AlertAggrViewDel(rt.Ctx, f.Ids, me.Id))
 	}
 }

 // body: id, name, rule, cate
-func alertAggrViewPut(c *gin.Context) {
+func (rt *Router) alertAggrViewPut(c *gin.Context) {
 	var f models.AlertAggrView
 	ginx.BindJSON(c, &f)

-	view, err := models.AlertAggrViewGet("id = ?", f.Id)
+	view, err := models.AlertAggrViewGet(rt.Ctx, "id = ?", f.Id)
 	ginx.Dangerous(err)

 	if view == nil {
@@ -69,5 +70,5 @@ func alertAggrViewPut(c *gin.Context) {
 	}
 }

-	ginx.NewRender(c).Message(view.Update(f.Name, f.Rule, f.Cate, me.Id))
+	ginx.NewRender(c).Message(view.Update(rt.Ctx, f.Name, f.Rule, f.Cate, me.Id))
 }
@@ -5,10 +5,10 @@ import (
 	"sort"
 	"strings"

+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
-
-	"github.com/didi/nightingale/v5/src/models"
 )

 func parseAggrRules(c *gin.Context) []*models.AggrRule {
@@ -38,17 +38,31 @@ func parseAggrRules(c *gin.Context) []*models.AggrRule {
 	return rules
 }

-func alertCurEventsCard(c *gin.Context) {
+func (rt *Router) alertCurEventsCard(c *gin.Context) {
 	stime, etime := getTimeRange(c)
 	severity := ginx.QueryInt(c, "severity", -1)
 	query := ginx.QueryStr(c, "query", "")
 	busiGroupId := ginx.QueryInt64(c, "bgid", 0)
-	clusters := queryClusters(c)
+	dsIds := queryDatasourceIds(c)
 	rules := parseAggrRules(c)
-	prod := ginx.QueryStr(c, "prod", "")

+	prod := ginx.QueryStr(c, "prods", "")
+	if prod == "" {
+		prod = ginx.QueryStr(c, "rule_prods", "")
+	}
+	prods := []string{}
+	if prod != "" {
+		prods = strings.Split(prod, ",")
+	}
+
+	cate := ginx.QueryStr(c, "cate", "$all")
+	cates := []string{}
+	if cate != "$all" {
+		cates = strings.Split(cate, ",")
+	}

 	// fetch at most 50000; any more is pointless
-	list, err := models.AlertCurEventGets(prod, busiGroupId, stime, etime, severity, clusters, query, 50000, 0)
+	list, err := models.AlertCurEventGets(rt.Ctx, prods, busiGroupId, stime, etime, severity, dsIds, cates, query, 50000, 0)
 	ginx.Dangerous(err)

 	cardmap := make(map[string]*AlertCard)
@@ -99,15 +113,15 @@ type AlertCard struct {
 	Severity int `json:"severity"`
 }

-func alertCurEventsCardDetails(c *gin.Context) {
+func (rt *Router) alertCurEventsCardDetails(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)

-	list, err := models.AlertCurEventGetByIds(f.Ids)
+	list, err := models.AlertCurEventGetByIds(rt.Ctx, f.Ids)
 	if err == nil {
 		cache := make(map[int64]*models.UserGroup)
 		for i := 0; i < len(list); i++ {
-			list[i].FillNotifyGroups(cache)
+			list[i].FillNotifyGroups(rt.Ctx, cache)
 		}
 	}

@@ -115,24 +129,39 @@ func alertCurEventsCardDetails(c *gin.Context) {
 }

 // list view: pull active alerts
-func alertCurEventsList(c *gin.Context) {
+func (rt *Router) alertCurEventsList(c *gin.Context) {
 	stime, etime := getTimeRange(c)
 	severity := ginx.QueryInt(c, "severity", -1)
 	query := ginx.QueryStr(c, "query", "")
 	limit := ginx.QueryInt(c, "limit", 20)
 	busiGroupId := ginx.QueryInt64(c, "bgid", 0)
-	clusters := queryClusters(c)
-	prod := ginx.QueryStr(c, "prod", "")
+	dsIds := queryDatasourceIds(c)

-	total, err := models.AlertCurEventTotal(prod, busiGroupId, stime, etime, severity, clusters, query)
+	prod := ginx.QueryStr(c, "prods", "")
+	if prod == "" {
+		prod = ginx.QueryStr(c, "rule_prods", "")
+	}
+
+	prods := []string{}
+	if prod != "" {
+		prods = strings.Split(prod, ",")
+	}
+
+	cate := ginx.QueryStr(c, "cate", "$all")
+	cates := []string{}
+	if cate != "$all" {
+		cates = strings.Split(cate, ",")
+	}
+
+	total, err := models.AlertCurEventTotal(rt.Ctx, prods, busiGroupId, stime, etime, severity, dsIds, cates, query)
 	ginx.Dangerous(err)

-	list, err := models.AlertCurEventGets(prod, busiGroupId, stime, etime, severity, clusters, query, limit, ginx.Offset(c, limit))
+	list, err := models.AlertCurEventGets(rt.Ctx, prods, busiGroupId, stime, etime, severity, dsIds, cates, query, limit, ginx.Offset(c, limit))
 	ginx.Dangerous(err)

 	cache := make(map[int64]*models.UserGroup)
 	for i := 0; i < len(list); i++ {
-		list[i].FillNotifyGroups(cache)
+		list[i].FillNotifyGroups(rt.Ctx, cache)
 	}

 	ginx.NewRender(c).Data(gin.H{
@@ -141,7 +170,7 @@ func alertCurEventsList(c *gin.Context) {
 	}, nil)
 }

-func alertCurEventDel(c *gin.Context) {
+func (rt *Router) alertCurEventDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()
@@ -149,21 +178,21 @@ func alertCurEventDel(c *gin.Context) {
 	set := make(map[int64]struct{})

 	for i := 0; i < len(f.Ids); i++ {
-		event, err := models.AlertCurEventGetById(f.Ids[i])
+		event, err := models.AlertCurEventGetById(rt.Ctx, f.Ids[i])
 		ginx.Dangerous(err)

 		if _, has := set[event.GroupId]; !has {
-			bgrwCheck(c, event.GroupId)
+			rt.bgrwCheck(c, event.GroupId)
 			set[event.GroupId] = struct{}{}
 		}
 	}

-	ginx.NewRender(c).Message(models.AlertCurEventDel(f.Ids))
+	ginx.NewRender(c).Message(models.AlertCurEventDel(rt.Ctx, f.Ids))
 }

-func alertCurEventGet(c *gin.Context) {
+func (rt *Router) alertCurEventGet(c *gin.Context) {
 	eid := ginx.UrlParamInt64(c, "eid")
-	event, err := models.AlertCurEventGetById(eid)
+	event, err := models.AlertCurEventGetById(rt.Ctx, eid)
 	ginx.Dangerous(err)

 	if event == nil {
@@ -1,12 +1,13 @@
 package router

 import (
+	"strings"
 	"time"

+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
-
-	"github.com/didi/nightingale/v5/src/models"
 )

 func getTimeRange(c *gin.Context) (stime, etime int64) {
@@ -25,7 +26,7 @@ func getTimeRange(c *gin.Context) (stime, etime int64) {
 	return
 }

-func alertHisEventsList(c *gin.Context) {
+func (rt *Router) alertHisEventsList(c *gin.Context) {
 	stime, etime := getTimeRange(c)

 	severity := ginx.QueryInt(c, "severity", -1)
@@ -33,18 +34,33 @@ func alertHisEventsList(c *gin.Context) {
 	query := ginx.QueryStr(c, "query", "")
 	limit := ginx.QueryInt(c, "limit", 20)
 	busiGroupId := ginx.QueryInt64(c, "bgid", 0)
-	clusters := queryClusters(c)
-	prod := ginx.QueryStr(c, "prod", "")
+	dsIds := queryDatasourceIds(c)

-	total, err := models.AlertHisEventTotal(prod, busiGroupId, stime, etime, severity, recovered, clusters, query)
+	prod := ginx.QueryStr(c, "prods", "")
+	if prod == "" {
+		prod = ginx.QueryStr(c, "rule_prods", "")
+	}
+
+	prods := []string{}
+	if prod != "" {
+		prods = strings.Split(prod, ",")
+	}
+
+	cate := ginx.QueryStr(c, "cate", "$all")
+	cates := []string{}
+	if cate != "$all" {
+		cates = strings.Split(cate, ",")
+	}
+
+	total, err := models.AlertHisEventTotal(rt.Ctx, prods, busiGroupId, stime, etime, severity, recovered, dsIds, cates, query)
 	ginx.Dangerous(err)

-	list, err := models.AlertHisEventGets(prod, busiGroupId, stime, etime, severity, recovered, clusters, query, limit, ginx.Offset(c, limit))
+	list, err := models.AlertHisEventGets(rt.Ctx, prods, busiGroupId, stime, etime, severity, recovered, dsIds, cates, query, limit, ginx.Offset(c, limit))
 	ginx.Dangerous(err)

 	cache := make(map[int64]*models.UserGroup)
 	for i := 0; i < len(list); i++ {
-		list[i].FillNotifyGroups(cache)
+		list[i].FillNotifyGroups(rt.Ctx, cache)
 	}

 	ginx.NewRender(c).Data(gin.H{
@@ -53,9 +69,9 @@ func alertHisEventsList(c *gin.Context) {
 	}, nil)
 }

-func alertHisEventGet(c *gin.Context) {
+func (rt *Router) alertHisEventGet(c *gin.Context) {
 	eid := ginx.UrlParamInt64(c, "eid")
-	event, err := models.AlertHisEventGetById(eid)
+	event, err := models.AlertHisEventGetById(rt.Ctx, eid)
 	ginx.Dangerous(err)

 	if event == nil {
@@ -5,42 +5,51 @@ import (
 	"strings"
 	"time"

+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 	"github.com/toolkits/pkg/i18n"
-
-	"github.com/didi/nightingale/v5/src/models"
 )

 // Return all, front-end search and paging
-func alertRuleGets(c *gin.Context) {
+func (rt *Router) alertRuleGets(c *gin.Context) {
 	busiGroupId := ginx.UrlParamInt64(c, "id")
-	ars, err := models.AlertRuleGets(busiGroupId)
+	ars, err := models.AlertRuleGets(rt.Ctx, busiGroupId)
 	if err == nil {
 		cache := make(map[int64]*models.UserGroup)
 		for i := 0; i < len(ars); i++ {
-			ars[i].FillNotifyGroups(cache)
+			ars[i].FillNotifyGroups(rt.Ctx, cache)
 			ars[i].FillSeverities()
 		}
 	}
 	ginx.NewRender(c).Data(ars, err)
 }

-func alertRulesGetByService(c *gin.Context) {
-	prods := strings.Fields(ginx.QueryStr(c, "prods", ""))
+func (rt *Router) alertRulesGetByService(c *gin.Context) {
+	prods := strings.Split(ginx.QueryStr(c, "prods", ""), ",")
 	query := ginx.QueryStr(c, "query", "")
+	algorithm := ginx.QueryStr(c, "algorithm", "")
+	cluster := ginx.QueryStr(c, "cluster", "")
+	cate := ginx.QueryStr(c, "cate", "$all")
+	cates := []string{}
+	if cate != "$all" {
+		cates = strings.Split(cate, ",")
+	}

-	ars, err := models.AlertRulesGetsBy(prods, query)
+	disabled := ginx.QueryInt(c, "disabled", -1)
+	ars, err := models.AlertRulesGetsBy(rt.Ctx, prods, query, algorithm, cluster, cates, disabled)
 	if err == nil {
 		cache := make(map[int64]*models.UserGroup)
 		for i := 0; i < len(ars); i++ {
-			ars[i].FillNotifyGroups(cache)
+			ars[i].FillNotifyGroups(rt.Ctx, cache)
 		}
 	}
 	ginx.NewRender(c).Data(ars, err)
}

 // single or import
-func alertRuleAddByFE(c *gin.Context) {
+func (rt *Router) alertRuleAddByFE(c *gin.Context) {
 	username := c.MustGet("username").(string)

 	var lst []models.AlertRule
@@ -52,12 +61,14 @@ func alertRuleAddByFE(c *gin.Context) {
 	}

 	bgid := ginx.UrlParamInt64(c, "id")
-	reterr := alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language"))
+	reterr := rt.alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language"))

 	ginx.NewRender(c).Data(reterr, nil)
 }

-func alertRuleAddByService(c *gin.Context) {
+func (rt *Router) alertRuleAddByImport(c *gin.Context) {
 	username := c.MustGet("username").(string)

 	var lst []models.AlertRule
 	ginx.BindJSON(c, &lst)

@@ -65,11 +76,26 @@ func alertRuleAddByService(c *gin.Context) {
 	if count == 0 {
 		ginx.Bomb(http.StatusBadRequest, "input json is empty")
 	}
-	reterr := alertRuleAddForService(lst, "")
+
+	bgid := ginx.UrlParamInt64(c, "id")
+	reterr := rt.alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language"))
+
 	ginx.NewRender(c).Data(reterr, nil)
 }

-func alertRuleAddForService(lst []models.AlertRule, username string) map[string]string {
+func (rt *Router) alertRuleAddByService(c *gin.Context) {
+	var lst []models.AlertRule
+	ginx.BindJSON(c, &lst)
+
+	count := len(lst)
+	if count == 0 {
+		ginx.Bomb(http.StatusBadRequest, "input json is empty")
+	}
+	reterr := rt.alertRuleAddForService(lst, "")
+	ginx.NewRender(c).Data(reterr, nil)
+}
+
+func (rt *Router) alertRuleAddForService(lst []models.AlertRule, username string) map[string]string {
 	count := len(lst)
 	// alert rule name -> error string
 	reterr := make(map[string]string)
@@ -85,7 +111,7 @@ func alertRuleAddForService(lst []models.AlertRule, username string) map[string]
 			continue
 		}

-		if err := lst[i].Add(); err != nil {
+		if err := lst[i].Add(rt.Ctx); err != nil {
 			reterr[lst[i].Name] = err.Error()
 		} else {
 			reterr[lst[i].Name] = ""
@@ -94,7 +120,7 @@ func alertRuleAddForService(lst []models.AlertRule, username string) map[string]
 	return reterr
 }

-func alertRuleAdd(lst []models.AlertRule, username string, bgid int64, lang string) map[string]string {
+func (rt *Router) alertRuleAdd(lst []models.AlertRule, username string, bgid int64, lang string) map[string]string {
 	count := len(lst)
 	// alert rule name -> error string
 	reterr := make(map[string]string)
@@ -111,7 +137,7 @@ func alertRuleAdd(lst []models.AlertRule, username string, bgid int64, lang stri
 			continue
 		}

-		if err := lst[i].Add(); err != nil {
+		if err := lst[i].Add(rt.Ctx); err != nil {
 			reterr[lst[i].Name] = i18n.Sprintf(lang, err.Error())
 		} else {
 			reterr[lst[i].Name] = ""
@@ -120,28 +146,28 @@ func alertRuleAdd(lst []models.AlertRule, username string, bgid int64, lang stri
 	return reterr
 }

-func alertRuleDel(c *gin.Context) {
+func (rt *Router) alertRuleDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()

 	// param(busiGroupId) for protect
-	ginx.NewRender(c).Message(models.AlertRuleDels(f.Ids, ginx.UrlParamInt64(c, "id")))
+	ginx.NewRender(c).Message(models.AlertRuleDels(rt.Ctx, f.Ids, ginx.UrlParamInt64(c, "id")))
 }

-func alertRuleDelByService(c *gin.Context) {
+func (rt *Router) alertRuleDelByService(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()
-	ginx.NewRender(c).Message(models.AlertRuleDels(f.Ids))
+	ginx.NewRender(c).Message(models.AlertRuleDels(rt.Ctx, f.Ids))
 }

-func alertRulePutByFE(c *gin.Context) {
+func (rt *Router) alertRulePutByFE(c *gin.Context) {
 	var f models.AlertRule
 	ginx.BindJSON(c, &f)

 	arid := ginx.UrlParamInt64(c, "arid")
-	ar, err := models.AlertRuleGetById(arid)
+	ar, err := models.AlertRuleGetById(rt.Ctx, arid)
 	ginx.Dangerous(err)

 	if ar == nil {
@@ -149,34 +175,35 @@ func alertRulePutByFE(c *gin.Context) {
 		return
 	}

-	bgrwCheck(c, ar.GroupId)
+	rt.bgrwCheck(c, ar.GroupId)

 	f.UpdateBy = c.MustGet("username").(string)
-	ginx.NewRender(c).Message(ar.Update(f))
+	ginx.NewRender(c).Message(ar.Update(rt.Ctx, f))
 }

-func alertRulePutByService(c *gin.Context) {
+func (rt *Router) alertRulePutByService(c *gin.Context) {
 	var f models.AlertRule
 	ginx.BindJSON(c, &f)

 	arid := ginx.UrlParamInt64(c, "arid")
-	ar, err := models.AlertRuleGetById(arid)
+	ar, err := models.AlertRuleGetById(rt.Ctx, arid)
 	ginx.Dangerous(err)

 	if ar == nil {
 		ginx.NewRender(c, http.StatusNotFound).Message("No such AlertRule")
 		return
 	}
-	ginx.NewRender(c).Message(ar.Update(f))
+	ginx.NewRender(c).Message(ar.Update(rt.Ctx, f))
 }

 type alertRuleFieldForm struct {
 	Ids    []int64                `json:"ids"`
 	Fields map[string]interface{} `json:"fields"`
+	Action string                 `json:"action"`
 }

 // update one field: cluster note severity disabled prom_eval_interval prom_for_duration notify_channels notify_groups notify_recovered notify_repeat_step callbacks runbook_url append_tags
-func alertRulePutFields(c *gin.Context) {
+func (rt *Router) alertRulePutFields(c *gin.Context) {
 	var f alertRuleFieldForm
 	ginx.BindJSON(c, &f)

@@ -188,23 +215,45 @@ func alertRulePutFields(c *gin.Context) {
 	f.Fields["update_at"] = time.Now().Unix()

 	for i := 0; i < len(f.Ids); i++ {
-		ar, err := models.AlertRuleGetById(f.Ids[i])
+		ar, err := models.AlertRuleGetById(rt.Ctx, f.Ids[i])
 		ginx.Dangerous(err)

 		if ar == nil {
 			continue
 		}

-		ginx.Dangerous(ar.UpdateFieldsMap(f.Fields))
+		if f.Action == "callback_add" {
+			// append a callback address
+			if callbacks, has := f.Fields["callbacks"]; has {
+				callback := callbacks.(string)
+				if !strings.Contains(ar.Callbacks, callback) {
+					ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"callbacks": ar.Callbacks + " " + callback}))
+					continue
+				}
+			}
+		}
+
+		if f.Action == "callback_del" {
+			// remove a callback address
+			if callbacks, has := f.Fields["callbacks"]; has {
+				callback := callbacks.(string)
+				ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"callbacks": strings.ReplaceAll(ar.Callbacks, callback, "")}))
+				continue
+			}
+		}
+
+		for k, v := range f.Fields {
+			ginx.Dangerous(ar.UpdateColumn(rt.Ctx, k, v))
+		}
 	}

 	ginx.NewRender(c).Message(nil)
}

-func alertRuleGet(c *gin.Context) {
+func (rt *Router) alertRuleGet(c *gin.Context) {
 	arid := ginx.UrlParamInt64(c, "arid")

-	ar, err := models.AlertRuleGetById(arid)
+	ar, err := models.AlertRuleGetById(rt.Ctx, arid)
 	ginx.Dangerous(err)

 	if ar == nil {
@@ -212,6 +261,8 @@ func alertRuleGet(c *gin.Context) {
 		return
 	}

-	err = ar.FillNotifyGroups(make(map[int64]*models.UserGroup))
+	err = ar.FillNotifyGroups(rt.Ctx, make(map[int64]*models.UserGroup))
 	ginx.Dangerous(err)

 	ginx.NewRender(c).Data(ar, err)
 }
@@ -4,34 +4,38 @@ import (
 	"net/http"
 	"time"

+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
-
-	"github.com/didi/nightingale/v5/src/models"
 )

 // Return all, front-end search and paging
-func alertSubscribeGets(c *gin.Context) {
+func (rt *Router) alertSubscribeGets(c *gin.Context) {
 	bgid := ginx.UrlParamInt64(c, "id")
-	lst, err := models.AlertSubscribeGets(bgid)
+	lst, err := models.AlertSubscribeGets(rt.Ctx, bgid)
 	if err == nil {
 		ugcache := make(map[int64]*models.UserGroup)
 		for i := 0; i < len(lst); i++ {
-			ginx.Dangerous(lst[i].FillUserGroups(ugcache))
+			ginx.Dangerous(lst[i].FillUserGroups(rt.Ctx, ugcache))
 		}

 		rulecache := make(map[int64]string)
 		for i := 0; i < len(lst); i++ {
-			ginx.Dangerous(lst[i].FillRuleName(rulecache))
+			ginx.Dangerous(lst[i].FillRuleName(rt.Ctx, rulecache))
 		}

+		for i := 0; i < len(lst); i++ {
+			ginx.Dangerous(lst[i].FillDatasourceIds(rt.Ctx))
+		}
 	}
 	ginx.NewRender(c).Data(lst, err)
 }

-func alertSubscribeGet(c *gin.Context) {
+func (rt *Router) alertSubscribeGet(c *gin.Context) {
 	subid := ginx.UrlParamInt64(c, "sid")

-	sub, err := models.AlertSubscribeGet("id=?", subid)
+	sub, err := models.AlertSubscribeGet(rt.Ctx, "id=?", subid)
 	ginx.Dangerous(err)

 	if sub == nil {
@@ -40,15 +44,17 @@ func alertSubscribeGet(c *gin.Context) {
 	}

 	ugcache := make(map[int64]*models.UserGroup)
-	ginx.Dangerous(sub.FillUserGroups(ugcache))
+	ginx.Dangerous(sub.FillUserGroups(rt.Ctx, ugcache))

 	rulecache := make(map[int64]string)
-	ginx.Dangerous(sub.FillRuleName(rulecache))
+	ginx.Dangerous(sub.FillRuleName(rt.Ctx, rulecache))
+	ginx.Dangerous(sub.FillDatasourceIds(rt.Ctx))
+	ginx.Dangerous(sub.DB2FE())

 	ginx.NewRender(c).Data(sub, nil)
 }

-func alertSubscribeAdd(c *gin.Context) {
+func (rt *Router) alertSubscribeAdd(c *gin.Context) {
 	var f models.AlertSubscribe
 	ginx.BindJSON(c, &f)

@@ -61,10 +67,10 @@ func alertSubscribeAdd(c *gin.Context) {
 		ginx.Bomb(http.StatusBadRequest, "group_id invalid")
 	}

-	ginx.NewRender(c).Message(f.Add())
+	ginx.NewRender(c).Message(f.Add(rt.Ctx))
 }

-func alertSubscribePut(c *gin.Context) {
+func (rt *Router) alertSubscribePut(c *gin.Context) {
 	var fs []models.AlertSubscribe
 	ginx.BindJSON(c, &fs)

@@ -74,6 +80,9 @@ func alertSubscribePut(c *gin.Context) {
 		fs[i].UpdateBy = username
 		fs[i].UpdateAt = timestamp
 		ginx.Dangerous(fs[i].Update(
+			rt.Ctx,
 			"name",
 			"disabled",
 			"cluster",
 			"rule_id",
 			"tags",
@@ -84,16 +93,20 @@ func alertSubscribePut(c *gin.Context) {
 			"user_group_ids",
 			"update_at",
 			"update_by",
 			"webhooks",
+			"for_duration",
+			"redefine_webhooks",
+			"datasource_ids",
 		))
 	}

 	ginx.NewRender(c).Message(nil)
 }

-func alertSubscribeDel(c *gin.Context) {
+func (rt *Router) alertSubscribeDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()

-	ginx.NewRender(c).Message(models.AlertSubscribeDel(f.Ids))
+	ginx.NewRender(c).Message(models.AlertSubscribeDel(rt.Ctx, f.Ids))
 }

237
center/router/router_board.go
Normal file
@@ -0,0 +1,237 @@
package router

import (
	"net/http"
	"time"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/google/uuid"
	"github.com/toolkits/pkg/ginx"
)

type boardForm struct {
	Name    string `json:"name"`
	Ident   string `json:"ident"`
	Tags    string `json:"tags"`
	Configs string `json:"configs"`
	Public  int    `json:"public"`
}

func (rt *Router) boardAdd(c *gin.Context) {
	var f boardForm
	ginx.BindJSON(c, &f)

	me := c.MustGet("user").(*models.User)

	board := &models.Board{
		GroupId:  ginx.UrlParamInt64(c, "id"),
		Name:     f.Name,
		Ident:    f.Ident,
		Tags:     f.Tags,
		Configs:  f.Configs,
		CreateBy: me.Username,
		UpdateBy: me.Username,
	}

	err := board.Add(rt.Ctx)
	ginx.Dangerous(err)

	if f.Configs != "" {
		ginx.Dangerous(models.BoardPayloadSave(rt.Ctx, board.Id, f.Configs))
	}

	ginx.NewRender(c).Data(board, nil)
}

func (rt *Router) boardGet(c *gin.Context) {
	bid := ginx.UrlParamStr(c, "bid")
	board, err := models.BoardGet(rt.Ctx, "id = ? or ident = ?", bid, bid)
	ginx.Dangerous(err)

	if board == nil {
		ginx.Bomb(http.StatusNotFound, "No such dashboard")
	}

	if board.Public == 0 {
		rt.auth()(c)
		rt.user()(c)

		me := c.MustGet("user").(*models.User)
		if !me.IsAdmin() {
			// check permission
			rt.bgroCheck(c, board.GroupId)
		}
	}

	ginx.NewRender(c).Data(board, nil)
}

func (rt *Router) boardPureGet(c *gin.Context) {
	board, err := models.BoardGetByID(rt.Ctx, ginx.UrlParamInt64(c, "bid"))
	ginx.Dangerous(err)

	if board == nil {
		ginx.Bomb(http.StatusNotFound, "No such dashboard")
	}

	ginx.NewRender(c).Data(board, nil)
}

// bgrwCheck
func (rt *Router) boardDel(c *gin.Context) {
	var f idsForm
	ginx.BindJSON(c, &f)
	f.Verify()

	for i := 0; i < len(f.Ids); i++ {
		bid := f.Ids[i]

		board, err := models.BoardGet(rt.Ctx, "id = ?", bid)
		ginx.Dangerous(err)

		if board == nil {
			continue
		}

		me := c.MustGet("user").(*models.User)
		if !me.IsAdmin() {
			// check permission
			rt.bgrwCheck(c, board.GroupId)
		}

		ginx.Dangerous(board.Del(rt.Ctx))
	}

	ginx.NewRender(c).Message(nil)
}

func (rt *Router) Board(id int64) *models.Board {
	obj, err := models.BoardGet(rt.Ctx, "id=?", id)
	ginx.Dangerous(err)

	if obj == nil {
		ginx.Bomb(http.StatusNotFound, "No such dashboard")
	}

	return obj
}

// bgrwCheck
func (rt *Router) boardPut(c *gin.Context) {
	var f boardForm
	ginx.BindJSON(c, &f)

	me := c.MustGet("user").(*models.User)
	bo := rt.Board(ginx.UrlParamInt64(c, "bid"))

	if !me.IsAdmin() {
		// check permission
		rt.bgrwCheck(c, bo.GroupId)
	}

	can, err := bo.CanRenameIdent(rt.Ctx, f.Ident)
	ginx.Dangerous(err)

	if !can {
		ginx.Bomb(http.StatusOK, "Ident duplicate")
	}

	bo.Name = f.Name
	bo.Ident = f.Ident
	bo.Tags = f.Tags
	bo.UpdateBy = me.Username
	bo.UpdateAt = time.Now().Unix()

	err = bo.Update(rt.Ctx, "name", "ident", "tags", "update_by", "update_at")
	ginx.NewRender(c).Data(bo, err)
}

// bgrwCheck
func (rt *Router) boardPutConfigs(c *gin.Context) {
	var f boardForm
	ginx.BindJSON(c, &f)

	me := c.MustGet("user").(*models.User)

	bid := ginx.UrlParamStr(c, "bid")
	bo, err := models.BoardGet(rt.Ctx, "id = ? or ident = ?", bid, bid)
	ginx.Dangerous(err)

	if bo == nil {
		ginx.Bomb(http.StatusNotFound, "No such dashboard")
	}

	// check permission
	if !me.IsAdmin() {
		rt.bgrwCheck(c, bo.GroupId)
	}

	bo.UpdateBy = me.Username
	bo.UpdateAt = time.Now().Unix()
	ginx.Dangerous(bo.Update(rt.Ctx, "update_by", "update_at"))

	bo.Configs = f.Configs
	ginx.Dangerous(models.BoardPayloadSave(rt.Ctx, bo.Id, f.Configs))

	ginx.NewRender(c).Data(bo, nil)
}

// bgrwCheck
func (rt *Router) boardPutPublic(c *gin.Context) {
	var f boardForm
	ginx.BindJSON(c, &f)

	me := c.MustGet("user").(*models.User)
	bo := rt.Board(ginx.UrlParamInt64(c, "bid"))

	// check permission
	if !me.IsAdmin() {
		rt.bgrwCheck(c, bo.GroupId)
	}

	bo.Public = f.Public
	bo.UpdateBy = me.Username
	bo.UpdateAt = time.Now().Unix()

	err := bo.Update(rt.Ctx, "public", "update_by", "update_at")
	ginx.NewRender(c).Data(bo, err)
}

func (rt *Router) boardGets(c *gin.Context) {
	bgid := ginx.UrlParamInt64(c, "id")
	query := ginx.QueryStr(c, "query", "")

	boards, err := models.BoardGetsByGroupId(rt.Ctx, bgid, query)
	ginx.NewRender(c).Data(boards, err)
}

func (rt *Router) boardClone(c *gin.Context) {
	me := c.MustGet("user").(*models.User)
	bo := rt.Board(ginx.UrlParamInt64(c, "bid"))

	newBoard := &models.Board{
		Name:     bo.Name + " Copy",
		Tags:     bo.Tags,
		GroupId:  bo.GroupId,
		CreateBy: me.Username,
		UpdateBy: me.Username,
	}

	if bo.Ident != "" {
		newBoard.Ident = uuid.NewString()
	}

	ginx.Dangerous(newBoard.Add(rt.Ctx))

	// clone payload
	payload, err := models.BoardPayloadGet(rt.Ctx, bo.Id)
	ginx.Dangerous(err)

	if payload != "" {
		ginx.Dangerous(models.BoardPayloadSave(rt.Ctx, newBoard.Id, payload))
	}

	ginx.NewRender(c).Message(nil)
}

311
center/router/router_builtin.go
Normal file
@@ -0,0 +1,311 @@
package router

import (
	"encoding/json"
	"fmt"
	"net/http"
	"path"
	"strings"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/file"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/runner"
)

// create builtin_cate
func (rt *Router) builtinCateFavoriteAdd(c *gin.Context) {
	var f models.BuiltinCate
	ginx.BindJSON(c, &f)

	if f.Name == "" {
		ginx.Bomb(http.StatusBadRequest, "name is empty")
	}

	me := c.MustGet("user").(*models.User)
	f.UserId = me.Id

	ginx.NewRender(c).Message(f.Create(rt.Ctx))
}

// delete builtin_cate
func (rt *Router) builtinCateFavoriteDel(c *gin.Context) {
	name := ginx.UrlParamStr(c, "name")
	me := c.MustGet("user").(*models.User)

	ginx.NewRender(c).Message(models.BuiltinCateDelete(rt.Ctx, name, me.Id))
}

type Payload struct {
	Cate    string      `json:"cate"`
	Fname   string      `json:"fname"`
	Name    string      `json:"name"`
	Configs interface{} `json:"configs"`
	Tags    string      `json:"tags"`
}

type BoardCate struct {
	Name     string    `json:"name"`
	IconUrl  string    `json:"icon_url"`
	Boards   []Payload `json:"boards"`
	Favorite bool      `json:"favorite"`
}

func (rt *Router) builtinBoardDetailGets(c *gin.Context) {
	var payload Payload
	ginx.BindJSON(c, &payload)

	fp := rt.Center.BuiltinIntegrationsDir
	if fp == "" {
		fp = path.Join(runner.Cwd, "integrations")
	}

	fn := fp + "/" + payload.Cate + "/dashboards/" + payload.Fname
	content, err := file.ReadBytes(fn)
	ginx.Dangerous(err)

	err = json.Unmarshal(content, &payload)
	ginx.NewRender(c).Data(payload, err)
}

func (rt *Router) builtinBoardCateGets(c *gin.Context) {
|
||||
fp := rt.Center.BuiltinIntegrationsDir
|
||||
if fp == "" {
|
||||
fp = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
me := c.MustGet("user").(*models.User)
|
||||
buildinFavoritesMap, err := models.BuiltinCateGetByUserId(rt.Ctx, me.Id)
|
||||
if err != nil {
|
||||
logger.Warningf("get builtin favorites fail: %v", err)
|
||||
}
|
||||
|
||||
var boardCates []BoardCate
|
||||
dirList, err := file.DirsUnder(fp)
|
||||
ginx.Dangerous(err)
|
||||
for _, dir := range dirList {
|
||||
var boardCate BoardCate
|
||||
boardCate.Name = dir
|
||||
files, err := file.FilesUnder(fp + "/" + dir + "/dashboards")
|
||||
ginx.Dangerous(err)
|
||||
|
||||
var boards []Payload
|
||||
for _, f := range files {
|
||||
fn := fp + "/" + dir + "/dashboards/" + f
|
||||
content, err := file.ReadBytes(fn)
|
||||
if err != nil {
|
||||
logger.Warningf("add board fail: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
var payload Payload
|
||||
err = json.Unmarshal(content, &payload)
|
||||
if err != nil {
|
||||
logger.Warningf("add board:%s fail: %v", fn, err)
|
||||
continue
|
||||
}
|
||||
payload.Cate = dir
|
||||
payload.Fname = f
|
||||
payload.Configs = ""
|
||||
boards = append(boards, payload)
|
||||
}
|
||||
boardCate.Boards = boards
|
||||
|
||||
if _, ok := buildinFavoritesMap[dir]; ok {
|
||||
boardCate.Favorite = true
|
||||
}
|
||||
|
||||
iconFiles, _ := file.FilesUnder(fp + "/" + dir + "/icon")
|
||||
if len(iconFiles) > 0 {
|
||||
boardCate.IconUrl = fmt.Sprintf("/api/n9e/integrations/icon/%s/%s", dir, iconFiles[0])
|
||||
}
|
||||
|
||||
boardCates = append(boardCates, boardCate)
|
||||
}
|
||||
ginx.NewRender(c).Data(boardCates, nil)
|
||||
}
|
||||
|
||||
func (rt *Router) builtinBoardGets(c *gin.Context) {
|
||||
fp := rt.Center.BuiltinIntegrationsDir
|
||||
if fp == "" {
|
||||
fp = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
var fileList []string
|
||||
dirList, err := file.DirsUnder(fp)
|
||||
ginx.Dangerous(err)
|
||||
for _, dir := range dirList {
|
||||
files, err := file.FilesUnder(fp + "/" + dir + "/dashboards")
|
||||
ginx.Dangerous(err)
|
||||
fileList = append(fileList, files...)
|
||||
}
|
||||
|
||||
names := make([]string, 0, len(fileList))
|
||||
for _, f := range fileList {
|
||||
if !strings.HasSuffix(f, ".json") {
|
||||
continue
|
||||
}
|
||||
|
||||
name := strings.TrimSuffix(f, ".json")
|
||||
names = append(names, name)
|
||||
}
|
||||
|
||||
ginx.NewRender(c).Data(names, nil)
|
||||
}
|
||||
|
||||
type AlertCate struct {
|
||||
Name string `json:"name"`
|
||||
IconUrl string `json:"icon_url"`
|
||||
AlertRules []models.AlertRule `json:"alert_rules"`
|
||||
Favorite bool `json:"favorite"`
|
||||
}
|
||||
|
||||
func (rt *Router) builtinAlertCateGets(c *gin.Context) {
|
||||
fp := rt.Center.BuiltinIntegrationsDir
|
||||
if fp == "" {
|
||||
fp = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
me := c.MustGet("user").(*models.User)
|
||||
buildinFavoritesMap, err := models.BuiltinCateGetByUserId(rt.Ctx, me.Id)
|
||||
if err != nil {
|
||||
logger.Warningf("get builtin favorites fail: %v", err)
|
||||
}
|
||||
|
||||
var alertCates []AlertCate
|
||||
dirList, err := file.DirsUnder(fp)
|
||||
ginx.Dangerous(err)
|
||||
for _, dir := range dirList {
|
||||
var alertCate AlertCate
|
||||
alertCate.Name = dir
|
||||
files, err := file.FilesUnder(fp + "/" + dir + "/alerts")
|
||||
ginx.Dangerous(err)
|
||||
|
||||
var alertRules []models.AlertRule
|
||||
for _, f := range files {
|
||||
fn := fp + "/" + dir + "/alerts/" + f
|
||||
content, err := file.ReadBytes(fn)
|
||||
if err != nil {
|
||||
logger.Warningf("add board fail: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
var ars []models.AlertRule
|
||||
err = json.Unmarshal(content, &ars)
|
||||
if err != nil {
|
||||
logger.Warningf("add board:%s fail: %v", fn, err)
|
||||
continue
|
||||
}
|
||||
alertRules = append(alertRules, ars...)
|
||||
}
|
||||
alertCate.AlertRules = alertRules
|
||||
iconFiles, _ := file.FilesUnder(fp + "/" + dir + "/icon")
|
||||
if len(iconFiles) > 0 {
|
||||
alertCate.IconUrl = fmt.Sprintf("/api/n9e/integrations/icon/%s/%s", dir, iconFiles[0])
|
||||
}
|
||||
|
||||
if _, ok := buildinFavoritesMap[dir]; ok {
|
||||
alertCate.Favorite = true
|
||||
}
|
||||
|
||||
alertCates = append(alertCates, alertCate)
|
||||
}
|
||||
ginx.NewRender(c).Data(alertCates, nil)
|
||||
}
|
||||
|
||||
type builtinAlertRulesList struct {
|
||||
Name string `json:"name"`
|
||||
IconUrl string `json:"icon_url"`
|
||||
AlertRules map[string][]models.AlertRule `json:"alert_rules"`
|
||||
Favorite bool `json:"favorite"`
|
||||
}
|
||||
|
||||
func (rt *Router) builtinAlertRules(c *gin.Context) {
|
||||
fp := rt.Center.BuiltinIntegrationsDir
|
||||
if fp == "" {
|
||||
fp = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
me := c.MustGet("user").(*models.User)
|
||||
buildinFavoritesMap, err := models.BuiltinCateGetByUserId(rt.Ctx, me.Id)
|
||||
if err != nil {
|
||||
logger.Warningf("get builtin favorites fail: %v", err)
|
||||
}
|
||||
|
||||
var alertCates []builtinAlertRulesList
|
||||
dirList, err := file.DirsUnder(fp)
|
||||
ginx.Dangerous(err)
|
||||
for _, dir := range dirList {
|
||||
var alertCate builtinAlertRulesList
|
||||
alertCate.Name = dir
|
||||
files, err := file.FilesUnder(fp + "/" + dir + "/alerts")
|
||||
ginx.Dangerous(err)
|
||||
|
||||
alertRules := make(map[string][]models.AlertRule)
|
||||
for _, f := range files {
|
||||
fn := fp + "/" + dir + "/alerts/" + f
|
||||
content, err := file.ReadBytes(fn)
|
||||
if err != nil {
|
||||
logger.Warningf("add board fail: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
var ars []models.AlertRule
|
||||
err = json.Unmarshal(content, &ars)
|
||||
if err != nil {
|
||||
logger.Warningf("add board:%s fail: %v", fn, err)
|
||||
continue
|
||||
}
|
||||
alertRules[strings.TrimSuffix(f, ".json")] = ars
|
||||
}
|
||||
|
||||
alertCate.AlertRules = alertRules
|
||||
iconFiles, _ := file.FilesUnder(fp + "/" + dir + "/icon")
|
||||
if len(iconFiles) > 0 {
|
||||
alertCate.IconUrl = fmt.Sprintf("/api/n9e/integrations/icon/%s/%s", dir, iconFiles[0])
|
||||
}
|
||||
|
||||
if _, ok := buildinFavoritesMap[dir]; ok {
|
||||
alertCate.Favorite = true
|
||||
}
|
||||
|
||||
alertCates = append(alertCates, alertCate)
|
||||
}
|
||||
ginx.NewRender(c).Data(alertCates, nil)
|
||||
}
|
||||
|
||||
// read the json file content
|
||||
func (rt *Router) builtinBoardGet(c *gin.Context) {
|
||||
name := ginx.UrlParamStr(c, "name")
|
||||
dirpath := rt.Center.BuiltinIntegrationsDir
|
||||
if dirpath == "" {
|
||||
dirpath = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
dirList, err := file.DirsUnder(dirpath)
|
||||
ginx.Dangerous(err)
|
||||
for _, dir := range dirList {
|
||||
jsonFile := dirpath + "/" + dir + "/dashboards/" + name + ".json"
|
||||
if file.IsExist(jsonFile) {
|
||||
body, err := file.ReadString(jsonFile)
|
||||
ginx.NewRender(c).Data(body, err)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
ginx.Bomb(http.StatusBadRequest, "%s not found", name)
|
||||
}
|
||||
|
||||
func (rt *Router) builtinIcon(c *gin.Context) {
|
||||
fp := rt.Center.BuiltinIntegrationsDir
|
||||
if fp == "" {
|
||||
fp = path.Join(runner.Cwd, "integrations")
|
||||
}
|
||||
|
||||
cate := ginx.UrlParamStr(c, "cate")
|
||||
iconPath := fp + "/" + cate + "/icon/" + ginx.UrlParamStr(c, "name")
|
||||
c.File(path.Join(iconPath))
|
||||
}
|
||||
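The builtin handlers above repeatedly walk `<integrations>/<cate>/dashboards`, keep only `.json` files, and strip the suffix to produce board names. A minimal standalone sketch of that name-extraction step (the file names below are hypothetical, not from the repo):

```go
package main

import (
	"fmt"
	"strings"
)

// boardNames keeps only .json entries and strips the suffix,
// mirroring the filtering loop in builtinBoardGets.
func boardNames(files []string) []string {
	names := make([]string, 0, len(files))
	for _, f := range files {
		if !strings.HasSuffix(f, ".json") {
			continue
		}
		names = append(names, strings.TrimSuffix(f, ".json"))
	}
	return names
}

func main() {
	fmt.Println(boardNames([]string{"linux_host.json", "README.md", "mysql.json"}))
	// → [linux_host mysql]
}
```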
@@ -3,12 +3,12 @@ package router
 import (
 	"net/http"
 
+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 	"github.com/toolkits/pkg/logger"
 	"github.com/toolkits/pkg/str"
-
-	"github.com/didi/nightingale/v5/src/models"
 )
 
 type busiGroupForm struct {
@@ -18,7 +18,7 @@ type busiGroupForm struct {
 	Members []models.BusiGroupMember `json:"members"`
 }
 
-func busiGroupAdd(c *gin.Context) {
+func (rt *Router) busiGroupAdd(c *gin.Context) {
 	var f busiGroupForm
 	ginx.BindJSON(c, &f)
 
@@ -39,10 +39,10 @@ func busiGroupAdd(c *gin.Context) {
 	}
 
 	username := c.MustGet("username").(string)
-	ginx.Dangerous(models.BusiGroupAdd(f.Name, f.LabelEnable, f.LabelValue, f.Members, username))
+	ginx.Dangerous(models.BusiGroupAdd(rt.Ctx, f.Name, f.LabelEnable, f.LabelValue, f.Members, username))
 
 	// if creation succeeded, querying by name should find it
-	newbg, err := models.BusiGroupGet("name=?", f.Name)
+	newbg, err := models.BusiGroupGet(rt.Ctx, "name=?", f.Name)
 	ginx.Dangerous(err)
 
 	if newbg == nil {
@@ -53,40 +53,52 @@ func busiGroupAdd(c *gin.Context) {
 	ginx.NewRender(c).Data(newbg.Id, nil)
 }
 
-func busiGroupPut(c *gin.Context) {
+func (rt *Router) busiGroupPut(c *gin.Context) {
 	var f busiGroupForm
 	ginx.BindJSON(c, &f)
 
 	username := c.MustGet("username").(string)
 	targetbg := c.MustGet("busi_group").(*models.BusiGroup)
-	ginx.NewRender(c).Message(targetbg.Update(f.Name, f.LabelEnable, f.LabelValue, username))
+	ginx.NewRender(c).Message(targetbg.Update(rt.Ctx, f.Name, f.LabelEnable, f.LabelValue, username))
 }
 
-func busiGroupMemberAdd(c *gin.Context) {
+func (rt *Router) busiGroupMemberAdd(c *gin.Context) {
 	var members []models.BusiGroupMember
 	ginx.BindJSON(c, &members)
 
 	username := c.MustGet("username").(string)
 	targetbg := c.MustGet("busi_group").(*models.BusiGroup)
 
-	ginx.NewRender(c).Message(targetbg.AddMembers(members, username))
+	for i := 0; i < len(members); i++ {
+		if members[i].BusiGroupId != targetbg.Id {
+			ginx.Bomb(http.StatusBadRequest, "business group id invalid")
+		}
+	}
+
+	ginx.NewRender(c).Message(targetbg.AddMembers(rt.Ctx, members, username))
 }
 
-func busiGroupMemberDel(c *gin.Context) {
+func (rt *Router) busiGroupMemberDel(c *gin.Context) {
 	var members []models.BusiGroupMember
 	ginx.BindJSON(c, &members)
 
 	username := c.MustGet("username").(string)
 	targetbg := c.MustGet("busi_group").(*models.BusiGroup)
 
-	ginx.NewRender(c).Message(targetbg.DelMembers(members, username))
+	for i := 0; i < len(members); i++ {
+		if members[i].BusiGroupId != targetbg.Id {
+			ginx.Bomb(http.StatusBadRequest, "business group id invalid")
+		}
+	}
+
+	ginx.NewRender(c).Message(targetbg.DelMembers(rt.Ctx, members, username))
 }
 
-func busiGroupDel(c *gin.Context) {
+func (rt *Router) busiGroupDel(c *gin.Context) {
 	username := c.MustGet("username").(string)
 	targetbg := c.MustGet("busi_group").(*models.BusiGroup)
 
-	err := targetbg.Del()
+	err := targetbg.Del(rt.Ctx)
 	if err != nil {
 		logger.Infof("busi_group_delete fail: operator=%s, group_name=%s error=%v", username, targetbg.Name, err)
 	} else {
@@ -97,26 +109,29 @@ func busiGroupDel(c *gin.Context) {
 }
 
 // I am a super admin, or I am a member of the business group
-func busiGroupGets(c *gin.Context) {
+func (rt *Router) busiGroupGets(c *gin.Context) {
 	limit := ginx.QueryInt(c, "limit", defaultLimit)
 	query := ginx.QueryStr(c, "query", "")
 	all := ginx.QueryBool(c, "all", false)
 
 	me := c.MustGet("user").(*models.User)
-	lst, err := me.BusiGroups(limit, query, all)
+	lst, err := me.BusiGroups(rt.Ctx, limit, query, all)
 	if len(lst) == 0 {
 		lst = []models.BusiGroup{}
 	}
 
 	ginx.NewRender(c).Data(lst, err)
 }
 
 // only called from the active-alerts page, to get the active alert count of each business group
-func busiGroupAlertingsGets(c *gin.Context) {
+func (rt *Router) busiGroupAlertingsGets(c *gin.Context) {
 	ids := ginx.QueryStr(c, "ids", "")
-	ret, err := models.AlertNumbers(str.IdsInt64(ids))
+	ret, err := models.AlertNumbers(rt.Ctx, str.IdsInt64(ids))
 	ginx.NewRender(c).Data(ret, err)
 }
 
-func busiGroupGet(c *gin.Context) {
-	bg := BusiGroup(ginx.UrlParamInt64(c, "id"))
-	ginx.Dangerous(bg.FillUserGroups())
+func (rt *Router) busiGroupGet(c *gin.Context) {
+	bg := BusiGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))
+	ginx.Dangerous(bg.FillUserGroups(rt.Ctx))
 	ginx.NewRender(c).Data(bg, nil)
 }
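The new busiGroupMemberAdd/busiGroupMemberDel bodies reject any member whose BusiGroupId does not match the target group before touching the database. A self-contained sketch of that validation (the `member` struct here is a simplified stand-in for models.BusiGroupMember):

```go
package main

import "fmt"

// member is a trimmed stand-in for models.BusiGroupMember.
type member struct {
	BusiGroupId int64
	UserGroupId int64
}

// validateMembers returns an error as soon as one member points
// at a different business group than the one being edited.
func validateMembers(members []member, groupId int64) error {
	for i := 0; i < len(members); i++ {
		if members[i].BusiGroupId != groupId {
			return fmt.Errorf("business group id invalid")
		}
	}
	return nil
}

func main() {
	fmt.Println(validateMembers([]member{{BusiGroupId: 2}, {BusiGroupId: 3}}, 2))
	fmt.Println(validateMembers([]member{{BusiGroupId: 2}}, 2))
}
```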
@@ -3,26 +3,26 @@ package router
 import (
 	"time"
 
+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 	"github.com/toolkits/pkg/str"
-
-	"github.com/didi/nightingale/v5/src/models"
 )
 
-func chartShareGets(c *gin.Context) {
+func (rt *Router) chartShareGets(c *gin.Context) {
 	ids := ginx.QueryStr(c, "ids", "")
-	lst, err := models.ChartShareGetsByIds(str.IdsInt64(ids, ","))
+	lst, err := models.ChartShareGetsByIds(rt.Ctx, str.IdsInt64(ids, ","))
 	ginx.NewRender(c).Data(lst, err)
 }
 
 type chartShareForm struct {
-	Configs string `json:"configs"`
+	DatasourceId int64  `json:"datasource_id"`
+	Configs      string `json:"configs"`
 }
 
-func chartShareAdd(c *gin.Context) {
+func (rt *Router) chartShareAdd(c *gin.Context) {
 	username := c.MustGet("username").(string)
 	cluster := MustGetCluster(c)
 
 	var forms []chartShareForm
 	ginx.BindJSON(c, &forms)
@@ -32,12 +32,12 @@ func chartShareAdd(c *gin.Context) {
 
 	for _, f := range forms {
 		chart := models.ChartShare{
 			Cluster: cluster,
-			Configs:  f.Configs,
-			CreateBy: username,
-			CreateAt: now,
+			DatasourceId: f.DatasourceId,
+			Configs:      f.Configs,
+			CreateBy:     username,
+			CreateAt:     now,
 		}
-		ginx.Dangerous(chart.Add())
+		ginx.Dangerous(chart.Add(rt.Ctx))
 		ids = append(ids, chart.Id)
 	}
center/router/router_config.go (new file, 64 lines)
@@ -0,0 +1,64 @@
package router

import (
	"encoding/json"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) notifyChannelsGets(c *gin.Context) {
	var labelAndKeys []models.LabelAndKey
	cval, err := models.ConfigsGet(rt.Ctx, models.NOTIFYCHANNEL)
	ginx.Dangerous(err)

	if cval == "" {
		ginx.NewRender(c).Data(labelAndKeys, nil)
		return
	}

	var notifyChannels []models.NotifyChannel
	err = json.Unmarshal([]byte(cval), &notifyChannels)
	ginx.Dangerous(err)

	for _, v := range notifyChannels {
		if v.Hide {
			continue
		}
		var labelAndKey models.LabelAndKey
		labelAndKey.Label = v.Name
		labelAndKey.Key = v.Ident
		labelAndKeys = append(labelAndKeys, labelAndKey)
	}

	ginx.NewRender(c).Data(labelAndKeys, nil)
}

func (rt *Router) contactKeysGets(c *gin.Context) {
	var labelAndKeys []models.LabelAndKey
	cval, err := models.ConfigsGet(rt.Ctx, models.NOTIFYCONTACT)
	ginx.Dangerous(err)

	if cval == "" {
		ginx.NewRender(c).Data(labelAndKeys, nil)
		return
	}

	var notifyContacts []models.NotifyContact
	err = json.Unmarshal([]byte(cval), &notifyContacts)
	ginx.Dangerous(err)

	for _, v := range notifyContacts {
		if v.Hide {
			continue
		}
		var labelAndKey models.LabelAndKey
		labelAndKey.Label = v.Name
		labelAndKey.Key = v.Ident
		labelAndKeys = append(labelAndKeys, labelAndKey)
	}

	ginx.NewRender(c).Data(labelAndKeys, nil)
}
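Both handlers above follow the same pattern: unmarshal a JSON array stored as a config value, skip entries marked `Hide`, and return name/ident pairs. A standalone sketch of that filtering step (the `channel` struct and sample JSON are illustrative stand-ins for models.NotifyChannel and the stored config):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// channel is a trimmed stand-in for models.NotifyChannel.
type channel struct {
	Name  string `json:"name"`
	Ident string `json:"ident"`
	Hide  bool   `json:"hide"`
}

// visible unmarshals the stored JSON and keeps only the
// non-hidden entries as [label, key] pairs.
func visible(cval string) ([][2]string, error) {
	var chans []channel
	if err := json.Unmarshal([]byte(cval), &chans); err != nil {
		return nil, err
	}
	var out [][2]string
	for _, ch := range chans {
		if ch.Hide {
			continue
		}
		out = append(out, [2]string{ch.Name, ch.Ident})
	}
	return out, nil
}

func main() {
	cval := `[{"name":"DingTalk","ident":"dingtalk"},{"name":"Legacy","ident":"legacy","hide":true}]`
	out, err := visible(cval)
	fmt.Println(out, err)
	// → [[DingTalk dingtalk]] <nil>
}
```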
center/router/router_configs.go (new file, 49 lines)
@@ -0,0 +1,49 @@
package router

import (
	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) configsGet(c *gin.Context) {
	prefix := ginx.QueryStr(c, "prefix", "")
	limit := ginx.QueryInt(c, "limit", 10)
	configs, err := models.ConfigsGets(rt.Ctx, prefix, limit, ginx.Offset(c, limit))
	ginx.NewRender(c).Data(configs, err)
}

func (rt *Router) configGet(c *gin.Context) {
	id := ginx.UrlParamInt64(c, "id")
	configs, err := models.ConfigGet(rt.Ctx, id)
	ginx.NewRender(c).Data(configs, err)
}

func (rt *Router) configsDel(c *gin.Context) {
	var f idsForm
	ginx.BindJSON(c, &f)
	ginx.NewRender(c).Message(models.ConfigsDel(rt.Ctx, f.Ids))
}

func (rt *Router) configsPut(c *gin.Context) {
	var arr []models.Configs
	ginx.BindJSON(c, &arr)

	for i := 0; i < len(arr); i++ {
		ginx.Dangerous(arr[i].Update(rt.Ctx))
	}

	ginx.NewRender(c).Message(nil)
}

func (rt *Router) configsPost(c *gin.Context) {
	var arr []models.Configs
	ginx.BindJSON(c, &arr)

	for i := 0; i < len(arr); i++ {
		ginx.Dangerous(arr[i].Add(rt.Ctx))
	}

	ginx.NewRender(c).Message(nil)
}
center/router/router_crypto.go (new file, 63 lines)
@@ -0,0 +1,63 @@
package router

import (
	"github.com/ccfos/nightingale/v6/pkg/secu"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

type confPropCrypto struct {
	Data string `json:"data" binding:"required"`
	Key  string `json:"key" binding:"required"`
}

func (rt *Router) confPropEncrypt(c *gin.Context) {
	var f confPropCrypto
	ginx.BindJSON(c, &f)

	switch len(f.Key) {
	case 16, 24, 32:
		// valid AES key lengths: AES-128/192/256
	default:
		c.String(400, "The key length should be 16, 24 or 32")
		return
	}

	s, err := secu.DealWithEncrypt(f.Data, f.Key)
	if err != nil {
		c.String(500, err.Error())
		return
	}

	c.JSON(200, gin.H{
		"src":     f.Data,
		"key":     f.Key,
		"encrypt": s,
	})
}

func (rt *Router) confPropDecrypt(c *gin.Context) {
	var f confPropCrypto
	ginx.BindJSON(c, &f)

	switch len(f.Key) {
	case 16, 24, 32:
		// valid AES key lengths: AES-128/192/256
	default:
		c.String(400, "The key length should be 16, 24 or 32")
		return
	}

	s, err := secu.DealWithDecrypt(f.Data, f.Key)
	if err != nil {
		c.String(500, err.Error())
		return
	}

	c.JSON(200, gin.H{
		"src":     f.Data,
		"key":     f.Key,
		"decrypt": s,
	})
}
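The key-length switch in both crypto handlers accepts exactly 16, 24 or 32 bytes, the three AES key sizes (AES-128/192/256). A minimal sketch of that check as a standalone predicate (`validKeyLen` is a hypothetical helper, not from the repo):

```go
package main

import "fmt"

// validKeyLen reports whether key can serve as an AES key:
// 16, 24 or 32 bytes select AES-128, AES-192 or AES-256.
func validKeyLen(key string) bool {
	switch len(key) {
	case 16, 24, 32:
		return true
	}
	return false
}

func main() {
	fmt.Println(validKeyLen("0123456789abcdef")) // 16 bytes → true
	fmt.Println(validKeyLen("short"))            // 5 bytes → false
}
```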
center/router/router_dashboard.go (new file, 19 lines)
@@ -0,0 +1,19 @@
package router

type ChartPure struct {
	Configs string `json:"configs"`
	Weight  int    `json:"weight"`
}

type ChartGroupPure struct {
	Name   string      `json:"name"`
	Weight int         `json:"weight"`
	Charts []ChartPure `json:"charts"`
}

type DashboardPure struct {
	Name        string           `json:"name"`
	Tags        string           `json:"tags"`
	Configs     string           `json:"configs"`
	ChartGroups []ChartGroupPure `json:"chart_groups"`
}
center/router/router_datasource.go (new file, 178 lines)
@@ -0,0 +1,178 @@
package router

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/url"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/logger"
)

func (rt *Router) pluginList(c *gin.Context) {
	Render(c, rt.Center.Plugins, nil)
}

type listReq struct {
	Name     string `json:"name"`
	Type     string `json:"plugin_type"`
	Category string `json:"category"`
}

func (rt *Router) datasourceList(c *gin.Context) {
	var req listReq
	ginx.BindJSON(c, &req)

	typ := req.Type
	category := req.Category
	name := req.Name

	list, err := models.GetDatasourcesGetsBy(rt.Ctx, typ, category, name, "")
	Render(c, list, err)
}

type datasourceBrief struct {
	Id         int64  `json:"id"`
	Name       string `json:"name"`
	PluginType string `json:"plugin_type"`
}

func (rt *Router) datasourceBriefs(c *gin.Context) {
	var dss []datasourceBrief
	list, err := models.GetDatasourcesGetsBy(rt.Ctx, "", "", "", "")
	ginx.Dangerous(err)

	for i := range list {
		dss = append(dss, datasourceBrief{
			Id:         list[i].Id,
			Name:       list[i].Name,
			PluginType: list[i].PluginType,
		})
	}

	ginx.NewRender(c).Data(dss, err)
}

func (rt *Router) datasourceUpsert(c *gin.Context) {
	var req models.Datasource
	ginx.BindJSON(c, &req)
	username := Username(c)
	req.UpdatedBy = username

	var err error
	var count int64

	err = DatasourceCheck(req)
	if err != nil {
		Dangerous(c, err)
		return
	}

	if req.Id == 0 {
		req.CreatedBy = username
		req.Status = "enabled"
		count, err = models.GetDatasourcesCountBy(rt.Ctx, "", "", req.Name)
		if err != nil {
			Render(c, nil, err)
			return
		}

		if count > 0 {
			Render(c, nil, "name already exists")
			return
		}
		err = req.Add(rt.Ctx)
	} else {
		err = req.Update(rt.Ctx, "name", "description", "cluster_name", "settings", "http", "auth", "updated_by", "updated_at")
	}

	Render(c, nil, err)
}

func DatasourceCheck(ds models.Datasource) error {
	if ds.HTTPJson.Url == "" {
		return fmt.Errorf("url is empty")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				InsecureSkipVerify: ds.HTTPJson.TLS.SkipTlsVerify,
			},
		},
	}

	fullURL := ds.HTTPJson.Url
	req, err := http.NewRequest("GET", fullURL, nil)
	if err != nil {
		logger.Errorf("Error creating request: %v", err)
		return fmt.Errorf("request url:%s failed", fullURL)
	}

	if ds.PluginType == models.PROMETHEUS {
		subPath := "/api/v1/query"
		query := url.Values{}
		query.Add("query", "1+1")
		fullURL = fmt.Sprintf("%s%s?%s", ds.HTTPJson.Url, subPath, query.Encode())

		req, err = http.NewRequest("POST", fullURL, nil)
		if err != nil {
			logger.Errorf("Error creating request: %v", err)
			return fmt.Errorf("request url:%s failed", fullURL)
		}
	}

	if ds.AuthJson.BasicAuthUser != "" {
		req.SetBasicAuth(ds.AuthJson.BasicAuthUser, ds.AuthJson.BasicAuthPassword)
	}

	for k, v := range ds.HTTPJson.Headers {
		req.Header.Set(k, v)
	}

	resp, err := client.Do(req)
	if err != nil {
		logger.Errorf("Error making request: %v\n", err)
		return fmt.Errorf("request url:%s failed", fullURL)
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		logger.Errorf("Error making request: %v\n", resp.StatusCode)
		return fmt.Errorf("request url:%s failed code:%d", fullURL, resp.StatusCode)
	}

	return nil
}

func (rt *Router) datasourceGet(c *gin.Context) {
	var req models.Datasource
	ginx.BindJSON(c, &req)
	err := req.Get(rt.Ctx)
	Render(c, req, err)
}

func (rt *Router) datasourceUpdataStatus(c *gin.Context) {
	var req models.Datasource
	ginx.BindJSON(c, &req)
	username := Username(c)
	req.UpdatedBy = username
	err := req.Update(rt.Ctx, "status", "updated_by", "updated_at")
	Render(c, req, err)
}

func (rt *Router) datasourceDel(c *gin.Context) {
	var ids []int64
	ginx.BindJSON(c, &ids)
	err := models.DatasourceDel(rt.Ctx, ids)
	Render(c, nil, err)
}

func Username(c *gin.Context) string {
	return c.MustGet("username").(string)
}
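For Prometheus datasources, DatasourceCheck probes the target by building `<base>/api/v1/query?query=1%2B1` with url.Values, so the `+` is percent-encoded. A minimal sketch of just that URL-building step (`checkURL` is a hypothetical helper extracted for illustration):

```go
package main

import (
	"fmt"
	"net/url"
)

// checkURL builds the probe URL DatasourceCheck sends to a
// Prometheus datasource; url.Values.Encode escapes "+" as %2B.
func checkURL(base string) string {
	q := url.Values{}
	q.Add("query", "1+1")
	return fmt.Sprintf("%s%s?%s", base, "/api/v1/query", q.Encode())
}

func main() {
	fmt.Println(checkURL("http://127.0.0.1:9090"))
	// → http://127.0.0.1:9090/api/v1/query?query=1%2B1
}
```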
center/router/router_funcs.go (new file, 121 lines)
@@ -0,0 +1,121 @@
package router

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/ibex"
	"github.com/gin-gonic/gin"

	"github.com/toolkits/pkg/ginx"
)

const defaultLimit = 300

func queryDatasourceIds(c *gin.Context) []int64 {
	datasourceIds := ginx.QueryStr(c, "datasource_ids", "")
	datasourceIds = strings.ReplaceAll(datasourceIds, ",", " ")
	idsStr := strings.Fields(datasourceIds)
	ids := make([]int64, len(idsStr))
	for i, idStr := range idsStr {
		id, _ := strconv.ParseInt(idStr, 10, 64)
		ids[i] = id
	}
	return ids
}

type idsForm struct {
	Ids []int64 `json:"ids"`
}

func (f idsForm) Verify() {
	if len(f.Ids) == 0 {
		ginx.Bomb(http.StatusBadRequest, "ids empty")
	}
}

func User(ctx *ctx.Context, id int64) *models.User {
	obj, err := models.UserGetById(ctx, id)
	ginx.Dangerous(err)

	if obj == nil {
		ginx.Bomb(http.StatusNotFound, "No such user")
	}

	return obj
}

func UserGroup(ctx *ctx.Context, id int64) *models.UserGroup {
	obj, err := models.UserGroupGetById(ctx, id)
	ginx.Dangerous(err)

	if obj == nil {
		ginx.Bomb(http.StatusNotFound, "No such UserGroup")
	}

	return obj
}

func BusiGroup(ctx *ctx.Context, id int64) *models.BusiGroup {
	obj, err := models.BusiGroupGetById(ctx, id)
	ginx.Dangerous(err)

	if obj == nil {
		ginx.Bomb(http.StatusNotFound, "No such BusiGroup")
	}

	return obj
}

func Dashboard(ctx *ctx.Context, id int64) *models.Dashboard {
	obj, err := models.DashboardGet(ctx, "id=?", id)
	ginx.Dangerous(err)

	if obj == nil {
		ginx.Bomb(http.StatusNotFound, "No such dashboard")
	}

	return obj
}

type DoneIdsReply struct {
	Err string `json:"err"`
	Dat struct {
		List []int64 `json:"list"`
	} `json:"dat"`
}

type TaskCreateReply struct {
	Err string `json:"err"`
	Dat int64  `json:"dat"` // task.id
}

// return task.id, error
func TaskCreate(v interface{}, ibexc aconf.Ibex) (int64, error) {
	var res TaskCreateReply
	err := ibex.New(
		ibexc.Address,
		ibexc.BasicAuthUser,
		ibexc.BasicAuthPass,
		ibexc.Timeout,
	).
		Path("/ibex/v1/tasks").
		In(v).
		Out(&res).
		POST()

	if err != nil {
		return 0, err
	}

	if res.Err != "" {
		return 0, fmt.Errorf("response.err: %v", res.Err)
	}

	return res.Dat, nil
}
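queryDatasourceIds accepts ids separated by commas and/or whitespace, and silently maps unparsable tokens to 0 because the ParseInt error is ignored. A standalone sketch of that parsing behavior (`parseIds` is a hypothetical extraction, not a repo function):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseIds mirrors queryDatasourceIds: commas become spaces,
// tokens are split on whitespace, and bad tokens become 0
// since the ParseInt error is discarded.
func parseIds(s string) []int64 {
	fields := strings.Fields(strings.ReplaceAll(s, ",", " "))
	ids := make([]int64, len(fields))
	for i, f := range fields {
		id, _ := strconv.ParseInt(f, 10, 64)
		ids[i] = id
	}
	return ids
}

func main() {
	fmt.Println(parseIds("1,2 3"))
	// → [1 2 3]
}
```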
center/router/router_heartbeat.go (new file, 52 lines)
@@ -0,0 +1,52 @@
package router

import (
	"compress/gzip"
	"encoding/json"
	"io/ioutil"
	"time"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) heartbeat(c *gin.Context) {
	var bs []byte
	var err error
	var r *gzip.Reader
	var req models.HostMeta
	if c.GetHeader("Content-Encoding") == "gzip" {
		r, err = gzip.NewReader(c.Request.Body)
		if err != nil {
			c.String(400, err.Error())
			return
		}
		defer r.Close()
		bs, err = ioutil.ReadAll(r)
		ginx.Dangerous(err)
	} else {
		defer c.Request.Body.Close()
		bs, err = ioutil.ReadAll(c.Request.Body)
		ginx.Dangerous(err)
	}

	err = json.Unmarshal(bs, &req)
	ginx.Dangerous(err)

	req.Offset = (time.Now().UnixMilli() - req.UnixTime)
	req.RemoteAddr = c.ClientIP()
	rt.MetaSet.Set(req.Hostname, req)

	gid := ginx.QueryInt64(c, "gid", 0)

	if gid != 0 {
		target, has := rt.TargetCache.Get(req.Hostname)
		if has && target.GroupId != gid {
			err = models.TargetUpdateBgid(rt.Ctx, []string{req.Hostname}, gid, false)
		}
	}

	ginx.NewRender(c).Message(err)
}
center/router/router_login.go (new file, 539 lines)
@@ -0,0 +1,539 @@
package router

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/cas"
	"github.com/ccfos/nightingale/v6/pkg/ldapx"
	"github.com/ccfos/nightingale/v6/pkg/oauth2x"
	"github.com/ccfos/nightingale/v6/pkg/oidcx"
	"github.com/pelletier/go-toml/v2"

	"github.com/dgrijalva/jwt-go"
	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/logger"
)

type loginForm struct {
	Username string `json:"username" binding:"required"`
	Password string `json:"password" binding:"required"`
}

func (rt *Router) loginPost(c *gin.Context) {
	var f loginForm
	ginx.BindJSON(c, &f)

	user, err := models.PassLogin(rt.Ctx, f.Username, f.Password)
	if err != nil {
		// pass validate fail, try ldap
		if rt.Sso.LDAP.Enable {
			roles := strings.Join(rt.Sso.LDAP.DefaultRoles, " ")
			user, err = models.LdapLogin(rt.Ctx, f.Username, f.Password, roles, rt.Sso.LDAP)
			if err != nil {
				logger.Debugf("ldap login failed: %v username: %s", err, f.Username)
				ginx.NewRender(c).Message(err)
				return
			}
			user.RolesLst = strings.Fields(user.Roles)
		} else {
			ginx.NewRender(c).Message(err)
			return
		}
	}

	if user == nil {
		// Theoretically impossible
		ginx.NewRender(c).Message("Username or password invalid")
		return
	}

	userIdentity := fmt.Sprintf("%d-%s", user.Id, user.Username)

	ts, err := rt.createTokens(rt.HTTP.JWTAuth.SigningKey, userIdentity)
	ginx.Dangerous(err)
	ginx.Dangerous(rt.createAuth(c.Request.Context(), userIdentity, ts))

	ginx.NewRender(c).Data(gin.H{
		"user":          user,
		"access_token":  ts.AccessToken,
		"refresh_token": ts.RefreshToken,
	}, nil)
}

func (rt *Router) logoutPost(c *gin.Context) {
	metadata, err := rt.extractTokenMetadata(c.Request)
	if err != nil {
		ginx.NewRender(c, http.StatusBadRequest).Message("failed to parse jwt token")
		return
	}

	delErr := rt.deleteTokens(c.Request.Context(), metadata)
	if delErr != nil {
		ginx.NewRender(c).Message(http.StatusText(http.StatusInternalServerError))
		return
	}

	ginx.NewRender(c).Message("")
}

type refreshForm struct {
	RefreshToken string `json:"refresh_token" binding:"required"`
}

func (rt *Router) refreshPost(c *gin.Context) {
	var f refreshForm
	ginx.BindJSON(c, &f)

	// verify the token
	token, err := jwt.Parse(f.RefreshToken, func(token *jwt.Token) (interface{}, error) {
		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected jwt signing method: %v", token.Header["alg"])
		}
		return []byte(rt.HTTP.JWTAuth.SigningKey), nil
	})

	// if there is an error, the token must have expired
	if err != nil {
		// redirect to login page
		ginx.NewRender(c, http.StatusUnauthorized).Message("refresh token expired")
		return
	}

	// Since token is valid, get the uuid:
	claims, ok := token.Claims.(jwt.MapClaims) // the token claims should conform to MapClaims
	if ok && token.Valid {
		refreshUuid, ok := claims["refresh_uuid"].(string) // convert the interface to string
		if !ok {
			// Theoretically impossible
			ginx.NewRender(c, http.StatusUnauthorized).Message("failed to parse refresh_uuid from jwt")
			return
		}

		userIdentity, ok := claims["user_identity"].(string)
		if !ok {
			// Theoretically impossible
			ginx.NewRender(c, http.StatusUnauthorized).Message("failed to parse user_identity from jwt")
			return
		}

		userid, err := strconv.ParseInt(strings.Split(userIdentity, "-")[0], 10, 64)
		if err != nil {
			ginx.NewRender(c, http.StatusUnauthorized).Message("failed to parse user_identity from jwt")
			return
		}

		u, err := models.UserGetById(rt.Ctx, userid)
		if err != nil {
			ginx.NewRender(c, http.StatusInternalServerError).Message("failed to query user by id")
			return
		}

		if u == nil {
			// user already deleted
			ginx.NewRender(c, http.StatusUnauthorized).Message("user already deleted")
			return
		}

		// Delete the previous Refresh Token
		err = rt.deleteAuth(c.Request.Context(), refreshUuid)
		if err != nil {
			ginx.NewRender(c, http.StatusUnauthorized).Message(http.StatusText(http.StatusInternalServerError))
			return
		}

		// Delete previous Access Token
		rt.deleteAuth(c.Request.Context(), strings.Split(refreshUuid, "++")[0])

		// Create new pairs of refresh and access tokens
		ts, err := rt.createTokens(rt.HTTP.JWTAuth.SigningKey, userIdentity)
		ginx.Dangerous(err)
		ginx.Dangerous(rt.createAuth(c.Request.Context(), userIdentity, ts))

		ginx.NewRender(c).Data(gin.H{
			"access_token":  ts.AccessToken,
			"refresh_token": ts.RefreshToken,
		}, nil)
	} else {
		// redirect to login page
		ginx.NewRender(c, http.StatusUnauthorized).Message("refresh token expired")
	}
}
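
refreshPost above encodes the user as `"<id>-<username>"` and recovers the id with `strings.Split(userIdentity, "-")[0]`. A small sketch of that convention; `parseUserIdentity` is a hypothetical helper, not part of the diff (note it uses `SplitN` so usernames containing `-` survive intact):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUserIdentity splits the "<id>-<username>" identity string used by
// the login handlers. Only the first segment is treated as the numeric id.
func parseUserIdentity(identity string) (int64, string, error) {
	parts := strings.SplitN(identity, "-", 2)
	if len(parts) != 2 {
		return 0, "", fmt.Errorf("malformed identity: %q", identity)
	}
	id, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, "", err
	}
	return id, parts[1], nil
}

func main() {
	id, name, err := parseUserIdentity("42-alice-smith")
	if err != nil {
		panic(err)
	}
	fmt.Println(id, name) // 42 alice-smith
}
```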

func (rt *Router) loginRedirect(c *gin.Context) {
	redirect := ginx.QueryStr(c, "redirect", "/")

	v, exists := c.Get("userid")
	if exists {
		userid := v.(int64)
		user, err := models.UserGetById(rt.Ctx, userid)
		ginx.Dangerous(err)
		if user == nil {
			ginx.Bomb(200, "user not found")
		}

		if user.Username != "" { // already login
			ginx.NewRender(c).Data(redirect, nil)
			return
		}
	}

	if !rt.Sso.OIDC.Enable {
		ginx.NewRender(c).Data("", nil)
		return
	}

	redirect, err := rt.Sso.OIDC.Authorize(rt.Redis, redirect)
	ginx.Dangerous(err)

	ginx.NewRender(c).Data(redirect, err)
}

type CallbackOutput struct {
	Redirect     string       `json:"redirect"`
	User         *models.User `json:"user"`
	AccessToken  string       `json:"access_token"`
	RefreshToken string       `json:"refresh_token"`
}

func (rt *Router) loginCallback(c *gin.Context) {
	code := ginx.QueryStr(c, "code", "")
	state := ginx.QueryStr(c, "state", "")

	ret, err := rt.Sso.OIDC.Callback(rt.Redis, c.Request.Context(), code, state)
	if err != nil {
		logger.Debugf("sso.callback() get ret %+v error %v", ret, err)
		ginx.NewRender(c).Data(CallbackOutput{}, err)
		return
	}

	user, err := models.UserGet(rt.Ctx, "username=?", ret.Username)
	ginx.Dangerous(err)

	if user != nil {
		if rt.Sso.OIDC.CoverAttributes {
			if ret.Nickname != "" {
				user.Nickname = ret.Nickname
			}

			if ret.Email != "" {
				user.Email = ret.Email
			}

			if ret.Phone != "" {
				user.Phone = ret.Phone
			}

			user.UpdateAt = time.Now().Unix()
			user.Update(rt.Ctx, "email", "nickname", "phone", "update_at")
		}
	} else {
		now := time.Now().Unix()
		user = &models.User{
			Username: ret.Username,
			Password: "******",
			Nickname: ret.Nickname,
			Phone:    ret.Phone,
			Email:    ret.Email,
			Portrait: "",
			Roles:    strings.Join(rt.Sso.OIDC.DefaultRoles, " "),
			RolesLst: rt.Sso.OIDC.DefaultRoles,
			Contacts: []byte("{}"),
			CreateAt: now,
			UpdateAt: now,
			CreateBy: "oidc",
			UpdateBy: "oidc",
		}

		// create user from oidc
		ginx.Dangerous(user.Add(rt.Ctx))
	}

	// set user login state
	userIdentity := fmt.Sprintf("%d-%s", user.Id, user.Username)
	ts, err := rt.createTokens(rt.HTTP.JWTAuth.SigningKey, userIdentity)
	ginx.Dangerous(err)
	ginx.Dangerous(rt.createAuth(c.Request.Context(), userIdentity, ts))

	redirect := "/"
	if ret.Redirect != "/login" {
		redirect = ret.Redirect
	}

	ginx.NewRender(c).Data(CallbackOutput{
		Redirect:     redirect,
		User:         user,
		AccessToken:  ts.AccessToken,
		RefreshToken: ts.RefreshToken,
	}, nil)
}
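
All three SSO callbacks (OIDC, CAS, OAuth2) repeat the same "cover attributes" pattern: overwrite a local field only when the identity provider actually returned a value. A compact sketch of that merge logic; `ssoProfile`, `user`, and `coverAttributes` are hypothetical stand-ins for illustration, not code from the diff:

```go
package main

import "fmt"

// ssoProfile and user are hypothetical reduced stand-ins for the fields
// the callbacks copy when CoverAttributes is enabled.
type ssoProfile struct{ Nickname, Email, Phone string }
type user struct{ Nickname, Email, Phone string }

// coverAttributes overwrites only the fields the IdP returned non-empty,
// mirroring the `if ret.X != ""` chain in the handlers above.
func coverAttributes(u *user, ret ssoProfile) {
	if ret.Nickname != "" {
		u.Nickname = ret.Nickname
	}
	if ret.Email != "" {
		u.Email = ret.Email
	}
	if ret.Phone != "" {
		u.Phone = ret.Phone
	}
}

func main() {
	u := user{Nickname: "old", Email: "old@example.com", Phone: "123"}
	coverAttributes(&u, ssoProfile{Email: "new@example.com"})
	fmt.Println(u.Nickname, u.Email, u.Phone) // old new@example.com 123
}
```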

type RedirectOutput struct {
	Redirect string `json:"redirect"`
	State    string `json:"state"`
}

func (rt *Router) loginRedirectCas(c *gin.Context) {
	redirect := ginx.QueryStr(c, "redirect", "/")

	v, exists := c.Get("userid")
	if exists {
		userid := v.(int64)
		user, err := models.UserGetById(rt.Ctx, userid)
		ginx.Dangerous(err)
		if user == nil {
			ginx.Bomb(200, "user not found")
		}

		if user.Username != "" { // already login
			ginx.NewRender(c).Data(redirect, nil)
			return
		}
	}

	if !rt.Sso.CAS.Enable {
		logger.Error("cas is not enable")
		ginx.NewRender(c).Data("", nil)
		return
	}

	redirect, state, err := rt.Sso.CAS.Authorize(rt.Redis, redirect)

	ginx.Dangerous(err)
	ginx.NewRender(c).Data(RedirectOutput{
		Redirect: redirect,
		State:    state,
	}, err)
}

func (rt *Router) loginCallbackCas(c *gin.Context) {
	ticket := ginx.QueryStr(c, "ticket", "")
	state := ginx.QueryStr(c, "state", "")
	ret, err := rt.Sso.CAS.ValidateServiceTicket(c.Request.Context(), ticket, state, rt.Redis)
	if err != nil {
		logger.Errorf("ValidateServiceTicket: %s", err)
		ginx.NewRender(c).Data("", err)
		return
	}
	user, err := models.UserGet(rt.Ctx, "username=?", ret.Username)
	if err != nil {
		logger.Errorf("UserGet: %s", err)
	}
	ginx.Dangerous(err)
	if user != nil {
		if rt.Sso.CAS.CoverAttributes {
			if ret.Nickname != "" {
				user.Nickname = ret.Nickname
			}

			if ret.Email != "" {
				user.Email = ret.Email
			}

			if ret.Phone != "" {
				user.Phone = ret.Phone
			}

			user.UpdateAt = time.Now().Unix()
			ginx.Dangerous(user.Update(rt.Ctx, "email", "nickname", "phone", "update_at"))
		}
	} else {
		now := time.Now().Unix()
		user = &models.User{
			Username: ret.Username,
			Password: "******",
			Nickname: ret.Nickname,
			Portrait: "",
			Roles:    strings.Join(rt.Sso.CAS.DefaultRoles, " "),
			RolesLst: rt.Sso.CAS.DefaultRoles,
			Contacts: []byte("{}"),
			Phone:    ret.Phone,
			Email:    ret.Email,
			CreateAt: now,
			UpdateAt: now,
			CreateBy: "CAS",
			UpdateBy: "CAS",
		}
		// create user from cas
		ginx.Dangerous(user.Add(rt.Ctx))
	}

	// set user login state
	userIdentity := fmt.Sprintf("%d-%s", user.Id, user.Username)
	ts, err := rt.createTokens(rt.HTTP.JWTAuth.SigningKey, userIdentity)
	if err != nil {
		logger.Errorf("createTokens: %s", err)
	}
	ginx.Dangerous(err)
	ginx.Dangerous(rt.createAuth(c.Request.Context(), userIdentity, ts))

	redirect := "/"
	if ret.Redirect != "/login" {
		redirect = ret.Redirect
	}
	ginx.NewRender(c).Data(CallbackOutput{
		Redirect:     redirect,
		User:         user,
		AccessToken:  ts.AccessToken,
		RefreshToken: ts.RefreshToken,
	}, nil)
}

func (rt *Router) loginRedirectOAuth(c *gin.Context) {
	redirect := ginx.QueryStr(c, "redirect", "/")

	v, exists := c.Get("userid")
	if exists {
		userid := v.(int64)
		user, err := models.UserGetById(rt.Ctx, userid)
		ginx.Dangerous(err)
		if user == nil {
			ginx.Bomb(200, "user not found")
		}

		if user.Username != "" { // already login
			ginx.NewRender(c).Data(redirect, nil)
			return
		}
	}

	if !rt.Sso.OAuth2.Enable {
		ginx.NewRender(c).Data("", nil)
		return
	}

	redirect, err := rt.Sso.OAuth2.Authorize(rt.Redis, redirect)
	ginx.Dangerous(err)

	ginx.NewRender(c).Data(redirect, err)
}

func (rt *Router) loginCallbackOAuth(c *gin.Context) {
	code := ginx.QueryStr(c, "code", "")
	state := ginx.QueryStr(c, "state", "")

	ret, err := rt.Sso.OAuth2.Callback(rt.Redis, c.Request.Context(), code, state)
	if err != nil {
		logger.Debugf("sso.callback() get ret %+v error %v", ret, err)
		ginx.NewRender(c).Data(CallbackOutput{}, err)
		return
	}

	user, err := models.UserGet(rt.Ctx, "username=?", ret.Username)
	ginx.Dangerous(err)

	if user != nil {
		if rt.Sso.OAuth2.CoverAttributes {
			if ret.Nickname != "" {
				user.Nickname = ret.Nickname
			}

			if ret.Email != "" {
				user.Email = ret.Email
			}

			if ret.Phone != "" {
				user.Phone = ret.Phone
			}

			user.UpdateAt = time.Now().Unix()
			user.Update(rt.Ctx, "email", "nickname", "phone", "update_at")
		}
	} else {
		now := time.Now().Unix()
		user = &models.User{
			Username: ret.Username,
			Password: "******",
			Nickname: ret.Nickname,
			Phone:    ret.Phone,
			Email:    ret.Email,
			Portrait: "",
			Roles:    strings.Join(rt.Sso.OAuth2.DefaultRoles, " "),
			RolesLst: rt.Sso.OAuth2.DefaultRoles,
			Contacts: []byte("{}"),
			CreateAt: now,
			UpdateAt: now,
			CreateBy: "oauth2",
			UpdateBy: "oauth2",
		}

		// create user from oidc
		ginx.Dangerous(user.Add(rt.Ctx))
	}

	// set user login state
	userIdentity := fmt.Sprintf("%d-%s", user.Id, user.Username)
	ts, err := rt.createTokens(rt.HTTP.JWTAuth.SigningKey, userIdentity)
	ginx.Dangerous(err)
	ginx.Dangerous(rt.createAuth(c.Request.Context(), userIdentity, ts))

	redirect := "/"
	if ret.Redirect != "/login" {
		redirect = ret.Redirect
	}

	ginx.NewRender(c).Data(CallbackOutput{
		Redirect:     redirect,
		User:         user,
		AccessToken:  ts.AccessToken,
		RefreshToken: ts.RefreshToken,
	}, nil)
}

type SsoConfigOutput struct {
	OidcDisplayName  string `json:"oidcDisplayName"`
	CasDisplayName   string `json:"casDisplayName"`
	OauthDisplayName string `json:"oauthDisplayName"`
}

func (rt *Router) ssoConfigNameGet(c *gin.Context) {
	ginx.NewRender(c).Data(SsoConfigOutput{
		OidcDisplayName:  rt.Sso.OIDC.GetDisplayName(),
		CasDisplayName:   rt.Sso.CAS.GetDisplayName(),
		OauthDisplayName: rt.Sso.OAuth2.GetDisplayName(),
	}, nil)
}

func (rt *Router) ssoConfigGets(c *gin.Context) {
	ginx.NewRender(c).Data(models.SsoConfigGets(rt.Ctx))
}

func (rt *Router) ssoConfigUpdate(c *gin.Context) {
	var f models.SsoConfig
	ginx.BindJSON(c, &f)

	err := f.Update(rt.Ctx)
	ginx.Dangerous(err)

	switch f.Name {
	case "LDAP":
		var config ldapx.Config
		err := toml.Unmarshal([]byte(f.Content), &config)
		ginx.Dangerous(err)
		rt.Sso.LDAP.Reload(config)
	case "OIDC":
		var config oidcx.Config
		err := toml.Unmarshal([]byte(f.Content), &config)
		ginx.Dangerous(err)

		err = rt.Sso.OIDC.Reload(config)
		ginx.Dangerous(err)
	case "CAS":
		var config cas.Config
		err := toml.Unmarshal([]byte(f.Content), &config)
		ginx.Dangerous(err)
		rt.Sso.CAS.Reload(config)
	case "OAuth2":
		var config oauth2x.Config
		err := toml.Unmarshal([]byte(f.Content), &config)
		ginx.Dangerous(err)
		rt.Sso.OAuth2.Reload(config)
	}

	ginx.NewRender(c).Message(nil)
}

@@ -1,50 +1,24 @@
 package router
 
 import (
-	"path"
+	"github.com/ccfos/nightingale/v6/center/cconf"
 
 	"github.com/gin-gonic/gin"
-	"github.com/toolkits/pkg/file"
 	"github.com/toolkits/pkg/ginx"
-	"github.com/toolkits/pkg/runner"
-
-	"github.com/didi/nightingale/v5/src/webapi/config"
 )
 
-func metricsDescGetFile(c *gin.Context) {
-	fp := config.C.MetricsYamlFile
-	if fp == "" {
-		fp = path.Join(runner.Cwd, "etc", "metrics.yaml")
-	}
-
-	if !file.IsExist(fp) {
-		c.String(404, "%s not found", fp)
-		return
-	}
-
-	ret := make(map[string]string)
-	err := file.ReadYaml(fp, &ret)
-	if err != nil {
-		c.String(500, err.Error())
-		return
-	}
-
-	c.JSON(200, ret)
+func (rt *Router) metricsDescGetFile(c *gin.Context) {
+	c.JSON(200, rt.Center.MetricDesc)
 }
 
 // the front end sends a metric array; the back end looks up the corresponding descriptions and returns a map
-func metricsDescGetMap(c *gin.Context) {
+func (rt *Router) metricsDescGetMap(c *gin.Context) {
 	var arr []string
 	ginx.BindJSON(c, &arr)
 
 	ret := make(map[string]string)
-	for i := 0; i < len(arr); i++ {
-		desc, has := config.Metrics.Get(arr[i])
-		if !has {
-			ret[arr[i]] = ""
-		} else {
-			ret[arr[i]] = desc.(string)
-		}
+	for _, key := range arr {
+		ret[key] = cconf.GetMetricDesc(c.GetHeader("X-Language"), key)
 	}
 
 	ginx.NewRender(c).Data(ret, nil)

@@ -3,19 +3,20 @@ package router
 import (
 	"net/http"
 
-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/ccfos/nightingale/v6/models"
 
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 )
 
 // no param
-func metricViewGets(c *gin.Context) {
-	lst, err := models.MetricViewGets(c.MustGet("userid"))
+func (rt *Router) metricViewGets(c *gin.Context) {
+	lst, err := models.MetricViewGets(rt.Ctx, c.MustGet("userid"))
 	ginx.NewRender(c).Data(lst, err)
 }
 
 // body: name, configs, cate
-func metricViewAdd(c *gin.Context) {
+func (rt *Router) metricViewAdd(c *gin.Context) {
 	var f models.MetricView
 	ginx.BindJSON(c, &f)
 
@@ -28,31 +29,31 @@ func metricViewAdd(c *gin.Context) {
 	f.Id = 0
 	f.CreateBy = me.Id
 
-	ginx.Dangerous(f.Add())
+	ginx.Dangerous(f.Add(rt.Ctx))
 
 	ginx.NewRender(c).Data(f, nil)
 }
 
 // body: ids
-func metricViewDel(c *gin.Context) {
+func (rt *Router) metricViewDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()
 
 	me := c.MustGet("user").(*models.User)
 	if me.IsAdmin() {
-		ginx.NewRender(c).Message(models.MetricViewDel(f.Ids))
+		ginx.NewRender(c).Message(models.MetricViewDel(rt.Ctx, f.Ids))
 	} else {
-		ginx.NewRender(c).Message(models.MetricViewDel(f.Ids, me.Id))
+		ginx.NewRender(c).Message(models.MetricViewDel(rt.Ctx, f.Ids, me.Id))
 	}
 }
 
 // body: id, name, configs, cate
-func metricViewPut(c *gin.Context) {
+func (rt *Router) metricViewPut(c *gin.Context) {
 	var f models.MetricView
 	ginx.BindJSON(c, &f)
 
-	view, err := models.MetricViewGet("id = ?", f.Id)
+	view, err := models.MetricViewGet(rt.Ctx, "id = ?", f.Id)
 	ginx.Dangerous(err)
 
 	if view == nil {
@@ -71,5 +72,5 @@ func metricViewPut(c *gin.Context) {
 		}
 	}
 
-	ginx.NewRender(c).Message(view.Update(f.Name, f.Configs, f.Cate, me.Id))
+	ginx.NewRender(c).Message(view.Update(rt.Ctx, f.Name, f.Configs, f.Cate, me.Id))
 }

105  center/router/router_mute.go  Normal file
@@ -0,0 +1,105 @@
package router

import (
	"net/http"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

// Return all, front-end search and paging
func (rt *Router) alertMuteGetsByBG(c *gin.Context) {
	bgid := ginx.UrlParamInt64(c, "id")
	lst, err := models.AlertMuteGetsByBG(rt.Ctx, bgid)

	ginx.NewRender(c).Data(lst, err)
}

func (rt *Router) alertMuteGets(c *gin.Context) {
	prods := strings.Fields(ginx.QueryStr(c, "prods", ""))
	bgid := ginx.QueryInt64(c, "bgid", 0)
	query := ginx.QueryStr(c, "query", "")
	lst, err := models.AlertMuteGets(rt.Ctx, prods, bgid, query)

	ginx.NewRender(c).Data(lst, err)
}

func (rt *Router) alertMuteAdd(c *gin.Context) {
	var f models.AlertMute
	ginx.BindJSON(c, &f)

	username := c.MustGet("username").(string)
	f.CreateBy = username
	f.GroupId = ginx.UrlParamInt64(c, "id")

	ginx.NewRender(c).Message(f.Add(rt.Ctx))
}

func (rt *Router) alertMuteAddByService(c *gin.Context) {
	var f models.AlertMute
	ginx.BindJSON(c, &f)

	ginx.NewRender(c).Message(f.Add(rt.Ctx))
}

func (rt *Router) alertMuteDel(c *gin.Context) {
	var f idsForm
	ginx.BindJSON(c, &f)
	f.Verify()

	ginx.NewRender(c).Message(models.AlertMuteDel(rt.Ctx, f.Ids))
}

func (rt *Router) alertMutePutByFE(c *gin.Context) {
	var f models.AlertMute
	ginx.BindJSON(c, &f)

	amid := ginx.UrlParamInt64(c, "amid")
	am, err := models.AlertMuteGetById(rt.Ctx, amid)
	ginx.Dangerous(err)

	if am == nil {
		ginx.NewRender(c, http.StatusNotFound).Message("No such AlertMute")
		return
	}

	rt.bgrwCheck(c, am.GroupId)

	f.UpdateBy = c.MustGet("username").(string)
	ginx.NewRender(c).Message(am.Update(rt.Ctx, f))
}

type alertMuteFieldForm struct {
	Ids    []int64                `json:"ids"`
	Fields map[string]interface{} `json:"fields"`
}

func (rt *Router) alertMutePutFields(c *gin.Context) {
	var f alertMuteFieldForm
	ginx.BindJSON(c, &f)

	if len(f.Fields) == 0 {
		ginx.Bomb(http.StatusBadRequest, "fields empty")
	}

	f.Fields["update_by"] = c.MustGet("username").(string)
	f.Fields["update_at"] = time.Now().Unix()

	for i := 0; i < len(f.Ids); i++ {
		am, err := models.AlertMuteGetById(rt.Ctx, f.Ids[i])
		ginx.Dangerous(err)

		if am == nil {
			continue
		}

		am.FE2DB()
		ginx.Dangerous(am.UpdateFieldsMap(rt.Ctx, f.Fields))
	}

	ginx.NewRender(c).Message(nil)
}

@@ -9,14 +9,12 @@ import (
 	"strings"
 	"time"
 
+	"github.com/ccfos/nightingale/v6/models"
+
 	"github.com/gin-gonic/gin"
 	"github.com/golang-jwt/jwt"
 	"github.com/google/uuid"
 	"github.com/toolkits/pkg/ginx"
-
-	"github.com/didi/nightingale/v5/src/models"
-	"github.com/didi/nightingale/v5/src/storage"
-	"github.com/didi/nightingale/v5/src/webapi/config"
 )
 
 type AccessDetails struct {
@@ -24,14 +22,14 @@ type AccessDetails struct {
 	UserIdentity string
 }
 
-func handleProxyUser(c *gin.Context) *models.User {
-	headerUserNameKey := config.C.ProxyAuth.HeaderUserNameKey
+func (rt *Router) handleProxyUser(c *gin.Context) *models.User {
+	headerUserNameKey := rt.HTTP.ProxyAuth.HeaderUserNameKey
 	username := c.GetHeader(headerUserNameKey)
 	if username == "" {
 		ginx.Bomb(http.StatusUnauthorized, "unauthorized")
 	}
 
-	user, err := models.UserGetByUsername(username)
+	user, err := models.UserGetByUsername(rt.Ctx, username)
 	if err != nil {
 		ginx.Bomb(http.StatusInternalServerError, err.Error())
 	}
@@ -41,13 +39,13 @@ func handleProxyUser(c *gin.Context) *models.User {
 		user = &models.User{
 			Username: username,
 			Nickname: username,
-			Roles:    strings.Join(config.C.ProxyAuth.DefaultRoles, " "),
+			Roles:    strings.Join(rt.HTTP.ProxyAuth.DefaultRoles, " "),
 			CreateAt: now,
 			UpdateAt: now,
 			CreateBy: "system",
 			UpdateBy: "system",
 		}
-		err = user.Add()
+		err = user.Add(rt.Ctx)
 		if err != nil {
 			ginx.Bomb(http.StatusInternalServerError, err.Error())
 		}
@@ -55,23 +53,23 @@ func handleProxyUser(c *gin.Context) *models.User {
 	return user
 }
 
-func proxyAuth() gin.HandlerFunc {
+func (rt *Router) proxyAuth() gin.HandlerFunc {
 	return func(c *gin.Context) {
-		user := handleProxyUser(c)
+		user := rt.handleProxyUser(c)
 		c.Set("userid", user.Id)
-		c.Set("username", user)
+		c.Set("username", user.Username)
 		c.Next()
 	}
 }
 
-func jwtAuth() gin.HandlerFunc {
+func (rt *Router) jwtAuth() gin.HandlerFunc {
 	return func(c *gin.Context) {
-		metadata, err := extractTokenMetadata(c.Request)
+		metadata, err := rt.extractTokenMetadata(c.Request)
 		if err != nil {
 			ginx.Bomb(http.StatusUnauthorized, "unauthorized")
 		}
 
-		userIdentity, err := fetchAuth(c.Request.Context(), metadata.AccessUuid)
+		userIdentity, err := rt.fetchAuth(c.Request.Context(), metadata.AccessUuid)
 		if err != nil {
 			ginx.Bomb(http.StatusUnauthorized, "unauthorized")
 		}
@@ -94,40 +92,39 @@ func jwtAuth() gin.HandlerFunc {
 	}
 }
 
-func auth() gin.HandlerFunc {
-	if config.C.ProxyAuth.Enable {
-		return proxyAuth()
+func (rt *Router) auth() gin.HandlerFunc {
+	if rt.HTTP.ProxyAuth.Enable {
+		return rt.proxyAuth()
 	} else {
-		return jwtAuth()
+		return rt.jwtAuth()
 	}
 }
 
 // if proxy auth is enabled, mock jwt login/logout/refresh request
-func jwtMock() gin.HandlerFunc {
+func (rt *Router) jwtMock() gin.HandlerFunc {
 	return func(c *gin.Context) {
-		if !config.C.ProxyAuth.Enable {
+		if !rt.HTTP.ProxyAuth.Enable {
 			c.Next()
 			return
 		}
 		if strings.Contains(c.FullPath(), "logout") {
 			ginx.Bomb(http.StatusBadRequest, "logout is not supported when proxy auth is enabled")
 		}
-		user := handleProxyUser(c)
+		user := rt.handleProxyUser(c)
 		ginx.NewRender(c).Data(gin.H{
 			"user":          user,
 			"access_token":  "",
 			"refresh_token": "",
 		}, nil)
 		c.Abort()
 		return
 	}
 }
 
-func user() gin.HandlerFunc {
+func (rt *Router) user() gin.HandlerFunc {
 	return func(c *gin.Context) {
 		userid := c.MustGet("userid").(int64)
 
-		user, err := models.UserGetById(userid)
+		user, err := models.UserGetById(rt.Ctx, userid)
 		if err != nil {
 			ginx.Bomb(http.StatusUnauthorized, "unauthorized")
 		}
@@ -142,12 +139,12 @@ func user() gin.HandlerFunc {
 	}
 }
 
-func userGroupWrite() gin.HandlerFunc {
+func (rt *Router) userGroupWrite() gin.HandlerFunc {
 	return func(c *gin.Context) {
 		me := c.MustGet("user").(*models.User)
-		ug := UserGroup(ginx.UrlParamInt64(c, "id"))
+		ug := UserGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))
 
-		can, err := me.CanModifyUserGroup(ug)
+		can, err := me.CanModifyUserGroup(rt.Ctx, ug)
 		ginx.Dangerous(err)
 
 		if !can {
@@ -159,12 +156,12 @@ func userGroupWrite() gin.HandlerFunc {
 	}
 }
 
-func bgro() gin.HandlerFunc {
+func (rt *Router) bgro() gin.HandlerFunc {
 	return func(c *gin.Context) {
 		me := c.MustGet("user").(*models.User)
-		bg := BusiGroup(ginx.UrlParamInt64(c, "id"))
+		bg := BusiGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))
 
-		can, err := me.CanDoBusiGroup(bg)
+		can, err := me.CanDoBusiGroup(rt.Ctx, bg)
 		ginx.Dangerous(err)
 
 		if !can {
@@ -177,12 +174,12 @@ func bgro() gin.HandlerFunc {
 	}
 }
 
 // bgrw is being phased out step by step; it is not safe
-func bgrw() gin.HandlerFunc {
+func (rt *Router) bgrw() gin.HandlerFunc {
 	return func(c *gin.Context) {
 		me := c.MustGet("user").(*models.User)
-		bg := BusiGroup(ginx.UrlParamInt64(c, "id"))
+		bg := BusiGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))
 
-		can, err := me.CanDoBusiGroup(bg, "rw")
+		can, err := me.CanDoBusiGroup(rt.Ctx, bg, "rw")
 		ginx.Dangerous(err)
 
 		if !can {
@@ -195,11 +192,11 @@ func bgrw() gin.HandlerFunc {
 	}
 }
 
 // bgrwCheck will gradually replace bgrw; it is safer
-func bgrwCheck(c *gin.Context, bgid int64) {
+func (rt *Router) bgrwCheck(c *gin.Context, bgid int64) {
 	me := c.MustGet("user").(*models.User)
-	bg := BusiGroup(bgid)
+	bg := BusiGroup(rt.Ctx, bgid)
 
-	can, err := me.CanDoBusiGroup(bg, "rw")
+	can, err := me.CanDoBusiGroup(rt.Ctx, bg, "rw")
 	ginx.Dangerous(err)
 
 	if !can {
@@ -209,7 +206,7 @@ func bgrwCheck(c *gin.Context, bgid int64) {
 	c.Set("busi_group", bg)
 }
 
-func bgrwChecks(c *gin.Context, bgids []int64) {
+func (rt *Router) bgrwChecks(c *gin.Context, bgids []int64) {
 	set := make(map[int64]struct{})
 
 	for i := 0; i < len(bgids); i++ {
@@ -217,16 +214,16 @@ func bgrwChecks(c *gin.Context, bgids []int64) {
 			continue
 		}
 
-		bgrwCheck(c, bgids[i])
+		rt.bgrwCheck(c, bgids[i])
 		set[bgids[i]] = struct{}{}
 	}
 }
 
-func bgroCheck(c *gin.Context, bgid int64) {
+func (rt *Router) bgroCheck(c *gin.Context, bgid int64) {
 	me := c.MustGet("user").(*models.User)
-	bg := BusiGroup(bgid)
+	bg := BusiGroup(rt.Ctx, bgid)
 
-	can, err := me.CanDoBusiGroup(bg, "ro")
+	can, err := me.CanDoBusiGroup(rt.Ctx, bg)
 	ginx.Dangerous(err)
 
 	if !can {
@@ -236,11 +233,11 @@ func bgroCheck(c *gin.Context, bgid int64) {
 	c.Set("busi_group", bg)
 }
 
-func perm(operation string) gin.HandlerFunc {
+func (rt *Router) perm(operation string) gin.HandlerFunc {
 	return func(c *gin.Context) {
 		me := c.MustGet("user").(*models.User)
 
-		can, err := me.CheckPerm(operation)
+		can, err := me.CheckPerm(rt.Ctx, operation)
 		ginx.Dangerous(err)
 
 		if !can {
@@ -251,11 +248,11 @@ func perm(operation string) gin.HandlerFunc {
 	}
 }
 
-func admin() gin.HandlerFunc {
+func (rt *Router) admin() gin.HandlerFunc {
 	return func(c *gin.Context) {
 		userid := c.MustGet("userid").(int64)
 
-		user, err := models.UserGetById(userid)
+		user, err := models.UserGetById(rt.Ctx, userid)
 		if err != nil {
 			ginx.Bomb(http.StatusUnauthorized, "unauthorized")
 		}
@@ -282,8 +279,8 @@ func admin() gin.HandlerFunc {
 	}
 }
 
-func extractTokenMetadata(r *http.Request) (*AccessDetails, error) {
-	token, err := verifyToken(config.C.JWTAuth.SigningKey, extractToken(r))
+func (rt *Router) extractTokenMetadata(r *http.Request) (*AccessDetails, error) {
+	token, err := rt.verifyToken(rt.HTTP.JWTAuth.SigningKey, rt.extractToken(r))
 	if err != nil {
 		return nil, err
 	}
@@ -304,7 +301,7 @@ func extractTokenMetadata(r *http.Request) (*AccessDetails, error) {
 	return nil, err
 }
 
-func extractToken(r *http.Request) string {
+func (rt *Router) extractToken(r *http.Request) string {
 	tok := r.Header.Get("Authorization")
 
 	if len(tok) > 6 && strings.ToUpper(tok[0:7]) == "BEARER " {
@@ -314,17 +311,17 @@ func extractToken(r *http.Request) string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func createAuth(ctx context.Context, userIdentity string, td *TokenDetails) error {
|
||||
func (rt *Router) createAuth(ctx context.Context, userIdentity string, td *TokenDetails) error {
|
||||
at := time.Unix(td.AtExpires, 0)
|
||||
rt := time.Unix(td.RtExpires, 0)
|
||||
rte := time.Unix(td.RtExpires, 0)
|
||||
now := time.Now()
|
||||
|
||||
errAccess := storage.Redis.Set(ctx, wrapJwtKey(td.AccessUuid), userIdentity, at.Sub(now)).Err()
|
||||
errAccess := rt.Redis.Set(ctx, rt.wrapJwtKey(td.AccessUuid), userIdentity, at.Sub(now)).Err()
|
||||
if errAccess != nil {
|
||||
return errAccess
|
||||
}
|
||||
|
||||
errRefresh := storage.Redis.Set(ctx, wrapJwtKey(td.RefreshUuid), userIdentity, rt.Sub(now)).Err()
|
||||
errRefresh := rt.Redis.Set(ctx, rt.wrapJwtKey(td.RefreshUuid), userIdentity, rte.Sub(now)).Err()
|
||||
if errRefresh != nil {
|
||||
return errRefresh
|
||||
}
|
||||
@@ -332,26 +329,26 @@ func createAuth(ctx context.Context, userIdentity string, td *TokenDetails) erro
|
||||
return nil
|
||||
}
|
||||
|
||||
func fetchAuth(ctx context.Context, givenUuid string) (string, error) {
|
||||
return storage.Redis.Get(ctx, wrapJwtKey(givenUuid)).Result()
|
||||
func (rt *Router) fetchAuth(ctx context.Context, givenUuid string) (string, error) {
|
||||
return rt.Redis.Get(ctx, rt.wrapJwtKey(givenUuid)).Result()
|
||||
}
|
||||
|
||||
func deleteAuth(ctx context.Context, givenUuid string) error {
|
||||
return storage.Redis.Del(ctx, wrapJwtKey(givenUuid)).Err()
|
||||
func (rt *Router) deleteAuth(ctx context.Context, givenUuid string) error {
|
||||
return rt.Redis.Del(ctx, rt.wrapJwtKey(givenUuid)).Err()
|
||||
}
|
||||
|
||||
func deleteTokens(ctx context.Context, authD *AccessDetails) error {
|
||||
func (rt *Router) deleteTokens(ctx context.Context, authD *AccessDetails) error {
|
||||
// get the refresh uuid
|
||||
refreshUuid := authD.AccessUuid + "++" + authD.UserIdentity
|
||||
|
||||
// delete access token
|
||||
err := storage.Redis.Del(ctx, wrapJwtKey(authD.AccessUuid)).Err()
|
||||
err := rt.Redis.Del(ctx, rt.wrapJwtKey(authD.AccessUuid)).Err()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// delete refresh token
|
||||
err = storage.Redis.Del(ctx, wrapJwtKey(refreshUuid)).Err()
|
||||
err = rt.Redis.Del(ctx, rt.wrapJwtKey(refreshUuid)).Err()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -359,8 +356,8 @@ func deleteTokens(ctx context.Context, authD *AccessDetails) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func wrapJwtKey(key string) string {
|
||||
return config.C.JWTAuth.RedisKeyPrefix + key
|
||||
func (rt *Router) wrapJwtKey(key string) string {
|
||||
return rt.HTTP.JWTAuth.RedisKeyPrefix + key
|
||||
}
|
||||
|
||||
type TokenDetails struct {
|
||||
@@ -372,12 +369,12 @@ type TokenDetails struct {
|
||||
RtExpires int64
|
||||
}
|
||||
|
||||
func createTokens(signingKey, userIdentity string) (*TokenDetails, error) {
|
||||
func (rt *Router) createTokens(signingKey, userIdentity string) (*TokenDetails, error) {
|
||||
td := &TokenDetails{}
|
||||
td.AtExpires = time.Now().Add(time.Minute * time.Duration(config.C.JWTAuth.AccessExpired)).Unix()
|
||||
td.AtExpires = time.Now().Add(time.Minute * time.Duration(rt.HTTP.JWTAuth.AccessExpired)).Unix()
|
||||
td.AccessUuid = uuid.NewString()
|
||||
|
||||
td.RtExpires = time.Now().Add(time.Minute * time.Duration(config.C.JWTAuth.RefreshExpired)).Unix()
|
||||
td.RtExpires = time.Now().Add(time.Minute * time.Duration(rt.HTTP.JWTAuth.RefreshExpired)).Unix()
|
||||
td.RefreshUuid = td.AccessUuid + "++" + userIdentity
|
||||
|
||||
var err error
|
||||
@@ -398,8 +395,8 @@ func createTokens(signingKey, userIdentity string) (*TokenDetails, error) {
|
||||
rtClaims["refresh_uuid"] = td.RefreshUuid
|
||||
rtClaims["user_identity"] = userIdentity
|
||||
rtClaims["exp"] = td.RtExpires
|
||||
rt := jwt.NewWithClaims(jwt.SigningMethodHS256, rtClaims)
|
||||
td.RefreshToken, err = rt.SignedString([]byte(signingKey))
|
||||
jrt := jwt.NewWithClaims(jwt.SigningMethodHS256, rtClaims)
|
||||
td.RefreshToken, err = jrt.SignedString([]byte(signingKey))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -407,7 +404,7 @@ func createTokens(signingKey, userIdentity string) (*TokenDetails, error) {
|
||||
return td, nil
|
||||
}
|
||||
|
||||
func verifyToken(signingKey, tokenString string) (*jwt.Token, error) {
|
||||
func (rt *Router) verifyToken(signingKey, tokenString string) (*jwt.Token, error) {
|
||||
if tokenString == "" {
|
||||
return nil, fmt.Errorf("bearer token not found")
|
||||
}
|
||||
center/router/router_notify_config.go (new file, 190 lines)
@@ -0,0 +1,190 @@
package router

import (
	"encoding/json"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/alert/sender"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/pelletier/go-toml/v2"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) webhookGets(c *gin.Context) {
	var webhooks []models.Webhook
	cval, err := models.ConfigsGet(rt.Ctx, models.WEBHOOKKEY)
	ginx.Dangerous(err)
	if cval == "" {
		ginx.NewRender(c).Data(webhooks, nil)
		return
	}

	err = json.Unmarshal([]byte(cval), &webhooks)
	ginx.NewRender(c).Data(webhooks, err)
}

func (rt *Router) webhookPuts(c *gin.Context) {
	var webhooks []models.Webhook
	ginx.BindJSON(c, &webhooks)
	for i := 0; i < len(webhooks); i++ {
		for k, v := range webhooks[i].HeaderMap {
			webhooks[i].Headers = append(webhooks[i].Headers, k)
			webhooks[i].Headers = append(webhooks[i].Headers, v)
		}
	}

	data, err := json.Marshal(webhooks)
	ginx.Dangerous(err)

	ginx.NewRender(c).Message(models.ConfigsSet(rt.Ctx, models.WEBHOOKKEY, string(data)))
}

func (rt *Router) notifyScriptGet(c *gin.Context) {
	var notifyScript models.NotifyScript
	cval, err := models.ConfigsGet(rt.Ctx, models.NOTIFYSCRIPT)
	ginx.Dangerous(err)

	if cval == "" {
		ginx.NewRender(c).Data(notifyScript, nil)
		return
	}

	err = json.Unmarshal([]byte(cval), &notifyScript)
	ginx.NewRender(c).Data(notifyScript, err)
}

func (rt *Router) notifyScriptPut(c *gin.Context) {
	var notifyScript models.NotifyScript
	ginx.BindJSON(c, &notifyScript)

	data, err := json.Marshal(notifyScript)
	ginx.Dangerous(err)

	ginx.NewRender(c).Message(models.ConfigsSet(rt.Ctx, models.NOTIFYSCRIPT, string(data)))
}

func (rt *Router) notifyChannelGets(c *gin.Context) {
	var notifyChannels []models.NotifyChannel
	cval, err := models.ConfigsGet(rt.Ctx, models.NOTIFYCHANNEL)
	ginx.Dangerous(err)
	if cval == "" {
		ginx.NewRender(c).Data(notifyChannels, nil)
		return
	}

	err = json.Unmarshal([]byte(cval), &notifyChannels)
	ginx.NewRender(c).Data(notifyChannels, err)
}

func (rt *Router) notifyChannelPuts(c *gin.Context) {
	var notifyChannels []models.NotifyChannel
	ginx.BindJSON(c, &notifyChannels)

	channels := []string{models.Dingtalk, models.Wecom, models.Feishu, models.Mm, models.Telegram, models.Email}

	m := make(map[string]struct{})
	for _, v := range notifyChannels {
		m[v.Ident] = struct{}{}
	}

	for _, v := range channels {
		if _, ok := m[v]; !ok {
			ginx.Bomb(200, "channel %s ident can not modify", v)
		}
	}

	data, err := json.Marshal(notifyChannels)
	ginx.Dangerous(err)

	ginx.NewRender(c).Message(models.ConfigsSet(rt.Ctx, models.NOTIFYCHANNEL, string(data)))
}

func (rt *Router) notifyContactGets(c *gin.Context) {
	var notifyContacts []models.NotifyContact
	cval, err := models.ConfigsGet(rt.Ctx, models.NOTIFYCONTACT)
	ginx.Dangerous(err)
	if cval == "" {
		ginx.NewRender(c).Data(notifyContacts, nil)
		return
	}

	err = json.Unmarshal([]byte(cval), &notifyContacts)
	ginx.NewRender(c).Data(notifyContacts, err)
}

func (rt *Router) notifyContactPuts(c *gin.Context) {
	var notifyContacts []models.NotifyContact
	ginx.BindJSON(c, &notifyContacts)

	keys := []string{models.DingtalkKey, models.WecomKey, models.FeishuKey, models.MmKey, models.TelegramKey}

	m := make(map[string]struct{})
	for _, v := range notifyContacts {
		m[v.Ident] = struct{}{}
	}

	for _, v := range keys {
		if _, ok := m[v]; !ok {
			ginx.Bomb(200, "contact %s ident can not modify", v)
		}
	}

	data, err := json.Marshal(notifyContacts)
	ginx.Dangerous(err)

	ginx.NewRender(c).Message(models.ConfigsSet(rt.Ctx, models.NOTIFYCONTACT, string(data)))
}

func (rt *Router) notifyConfigGet(c *gin.Context) {
	key := ginx.QueryStr(c, "ckey")
	cval, err := models.ConfigsGet(rt.Ctx, key)
	if cval == "" {
		switch key {
		case models.IBEX:
			cval = memsto.DefaultIbex
		case models.SMTP:
			cval = memsto.DefaultSMTP
		}
	}
	ginx.NewRender(c).Data(cval, err)
}

func (rt *Router) notifyConfigPut(c *gin.Context) {
	var f models.Configs
	ginx.BindJSON(c, &f)
	switch f.Ckey {
	case models.SMTP:
		var smtp aconf.SMTPConfig
		err := toml.Unmarshal([]byte(f.Cval), &smtp)
		ginx.Dangerous(err)
	case models.IBEX:
		var ibex aconf.Ibex
		err := toml.Unmarshal([]byte(f.Cval), &ibex)
		ginx.Dangerous(err)
	default:
		ginx.Bomb(200, "key %s can not modify", f.Ckey)
	}

	err := models.ConfigsSet(rt.Ctx, f.Ckey, f.Cval)
	if err != nil {
		ginx.Bomb(200, err.Error())
	}

	if f.Ckey == models.SMTP {
		// reset the email sender
		var smtp aconf.SMTPConfig
		err := toml.Unmarshal([]byte(f.Cval), &smtp)
		ginx.Dangerous(err)

		if smtp.Host == "" || smtp.Port == 0 {
			ginx.Bomb(200, "smtp host or port can not be empty")
		}

		go sender.RestartEmailSender(smtp)
	}

	ginx.NewRender(c).Message(nil)
}
center/router/router_notify_tpl.go (new file, 80 lines)
@@ -0,0 +1,80 @@
package router

import (
	"bytes"
	"encoding/json"
	"fmt"
	"html/template"

	"github.com/ccfos/nightingale/v6/center/cconf"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/tplx"
	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) notifyTplGets(c *gin.Context) {
	lst, err := models.NotifyTplGets(rt.Ctx)

	ginx.NewRender(c).Data(lst, err)
}

func (rt *Router) notifyTplUpdateContent(c *gin.Context) {
	var f models.NotifyTpl
	ginx.BindJSON(c, &f)

	if err := templateValidate(f); err != nil {
		ginx.NewRender(c).Message(err.Error())
		return
	}

	ginx.NewRender(c).Message(f.UpdateContent(rt.Ctx))
}

func (rt *Router) notifyTplUpdate(c *gin.Context) {
	var f models.NotifyTpl
	ginx.BindJSON(c, &f)

	if err := templateValidate(f); err != nil {
		ginx.NewRender(c).Message(err.Error())
		return
	}

	ginx.NewRender(c).Message(f.Update(rt.Ctx))
}

func templateValidate(f models.NotifyTpl) error {
	if f.Content == "" {
		return nil
	}
	if _, err := template.New(f.Channel).Funcs(tplx.TemplateFuncMap).Parse(f.Content); err != nil {
		return fmt.Errorf("notify template verify illegal:%s", err.Error())
	}

	return nil
}

func (rt *Router) notifyTplPreview(c *gin.Context) {
	var event models.AlertCurEvent
	err := json.Unmarshal([]byte(cconf.EVENT_EXAMPLE), &event)
	if err != nil {
		ginx.NewRender(c).Message(err.Error())
		return
	}

	var f models.NotifyTpl
	ginx.BindJSON(c, &f)

	tpl, err := template.New(f.Channel).Funcs(tplx.TemplateFuncMap).Parse(f.Content)
	ginx.Dangerous(err)

	var body bytes.Buffer
	var ret string
	if err := tpl.Execute(&body, event); err != nil {
		ret = err.Error()
	} else {
		ret = body.String()
	}

	ginx.NewRender(c).Data(ret, nil)
}
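The preview handler's flow is: parse the user's template, render it against a canned sample event, and hand back the error text instead of the rendered body when either step fails, so the frontend always has something to display. A self-contained sketch of that flow (field names on the sample event are illustrative, not the real `AlertCurEvent` schema):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// previewTpl mirrors notifyTplPreview: parse, execute against a sample event,
// and return the error message as the preview body on failure.
func previewTpl(name, content string, event interface{}) string {
	tpl, err := template.New(name).Parse(content)
	if err != nil {
		return err.Error()
	}
	var body bytes.Buffer
	if err := tpl.Execute(&body, event); err != nil {
		return err.Error()
	}
	return body.String()
}

func main() {
	event := map[string]string{"RuleName": "cpu_idle_low"}
	fmt.Println(previewTpl("email", "alert: {{.RuleName}}", event)) // alert: cpu_idle_low

	// A broken template does not fail the request; its parse error is the preview.
	fmt.Println(previewTpl("email", "alert: {{.RuleName", event) != "")
}
```

The real handler also installs `tplx.TemplateFuncMap` via `Funcs` before parsing, so templates can call the project's helper functions.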
center/router/router_proxy.go (new file, 159 lines)
@@ -0,0 +1,159 @@
package router

import (
	"context"
	"crypto/tls"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"time"

	pkgprom "github.com/ccfos/nightingale/v6/pkg/prom"
	"github.com/gin-gonic/gin"
	"github.com/prometheus/common/model"
	"github.com/toolkits/pkg/ginx"
)

type queryFormItem struct {
	Start int64  `json:"start" binding:"required"`
	End   int64  `json:"end" binding:"required"`
	Step  int64  `json:"step" binding:"required"`
	Query string `json:"query" binding:"required"`
}

type batchQueryForm struct {
	DatasourceId int64           `json:"datasource_id" binding:"required"`
	Queries      []queryFormItem `json:"queries" binding:"required"`
}

func (rt *Router) promBatchQueryRange(c *gin.Context) {
	var f batchQueryForm
	ginx.Dangerous(c.BindJSON(&f))

	cli := rt.PromClients.GetCli(f.DatasourceId)

	var lst []model.Value

	for _, item := range f.Queries {
		r := pkgprom.Range{
			Start: time.Unix(item.Start, 0),
			End:   time.Unix(item.End, 0),
			Step:  time.Duration(item.Step) * time.Second,
		}

		resp, _, err := cli.QueryRange(context.Background(), item.Query, r)
		ginx.Dangerous(err)

		lst = append(lst, resp)
	}

	ginx.NewRender(c).Data(lst, nil)
}

type batchInstantForm struct {
	DatasourceId int64             `json:"datasource_id" binding:"required"`
	Queries      []InstantFormItem `json:"queries" binding:"required"`
}

type InstantFormItem struct {
	Time  int64  `json:"time" binding:"required"`
	Query string `json:"query" binding:"required"`
}

func (rt *Router) promBatchQueryInstant(c *gin.Context) {
	var f batchInstantForm
	ginx.Dangerous(c.BindJSON(&f))

	cli := rt.PromClients.GetCli(f.DatasourceId)

	var lst []model.Value

	for _, item := range f.Queries {
		resp, _, err := cli.Query(context.Background(), item.Query, time.Unix(item.Time, 0))
		ginx.Dangerous(err)

		lst = append(lst, resp)
	}

	ginx.NewRender(c).Data(lst, nil)
}

func (rt *Router) dsProxy(c *gin.Context) {
	dsId := ginx.UrlParamInt64(c, "id")
	ds := rt.DatasourceCache.GetById(dsId)

	if ds == nil {
		c.String(http.StatusBadRequest, "no such datasource")
		return
	}

	target, err := url.Parse(ds.HTTPJson.Url)
	if err != nil {
		c.String(http.StatusInternalServerError, "invalid url: %s", ds.HTTPJson.Url)
		return
	}

	director := func(req *http.Request) {
		req.URL.Scheme = target.Scheme
		req.URL.Host = target.Host
		req.Host = target.Host

		req.Header.Set("Host", target.Host)

		// fe request e.g. /api/n9e/proxy/:id/*
		arr := strings.Split(req.URL.Path, "/")
		if len(arr) < 6 {
			c.String(http.StatusBadRequest, "invalid url path")
			return
		}

		req.URL.Path = strings.TrimRight(target.Path, "/") + "/" + strings.Join(arr[5:], "/")
		if target.RawQuery == "" || req.URL.RawQuery == "" {
			req.URL.RawQuery = target.RawQuery + req.URL.RawQuery
		} else {
			req.URL.RawQuery = target.RawQuery + "&" + req.URL.RawQuery
		}

		if _, ok := req.Header["User-Agent"]; !ok {
			req.Header.Set("User-Agent", "")
		}

		if ds.AuthJson.BasicAuthUser != "" {
			req.SetBasicAuth(ds.AuthJson.BasicAuthUser, ds.AuthJson.BasicAuthPassword)
		}

		headerCount := len(ds.HTTPJson.Headers)
		if headerCount > 0 {
			for key, value := range ds.HTTPJson.Headers {
				req.Header.Set(key, value)
				if key == "Host" {
					req.Host = value
				}
			}
		}
	}

	errFunc := func(w http.ResponseWriter, r *http.Request, err error) {
		http.Error(w, err.Error(), http.StatusBadGateway)
	}

	transport := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: ds.HTTPJson.TLS.SkipTlsVerify},
		Proxy:           http.ProxyFromEnvironment,
		DialContext: (&net.Dialer{
			Timeout: time.Duration(ds.HTTPJson.DialTimeout) * time.Millisecond,
		}).DialContext,
		ResponseHeaderTimeout: time.Duration(ds.HTTPJson.Timeout) * time.Millisecond,
		MaxIdleConnsPerHost:   ds.HTTPJson.MaxIdleConnsPerHost,
	}

	proxy := &httputil.ReverseProxy{
		Director:     director,
		Transport:    transport,
		ErrorHandler: errFunc,
	}

	proxy.ServeHTTP(c.Writer, c.Request)
}
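The core of `dsProxy` is the director's URL surgery: the frontend calls `/api/n9e/proxy/:id/<rest>`, and everything after the fifth slash is grafted onto the datasource's base path, with the target's fixed query string merged with the client's. That logic is pure string manipulation and can be pulled out as testable functions (names here are illustrative, not exported by the router):

```go
package main

import (
	"fmt"
	"strings"
)

// rewritePath reproduces the director's path rewrite: strip the
// /api/n9e/proxy/:id prefix (five segments) and append the remainder
// to the datasource's base path.
func rewritePath(reqPath, targetPath string) (string, bool) {
	arr := strings.Split(reqPath, "/")
	if len(arr) < 6 {
		return "", false // not of the form /api/n9e/proxy/:id/*
	}
	return strings.TrimRight(targetPath, "/") + "/" + strings.Join(arr[5:], "/"), true
}

// mergeQuery joins the datasource's fixed query string with the request's,
// adding "&" only when both sides are non-empty, as the director does.
func mergeQuery(targetQuery, reqQuery string) string {
	if targetQuery == "" || reqQuery == "" {
		return targetQuery + reqQuery
	}
	return targetQuery + "&" + reqQuery
}

func main() {
	p, ok := rewritePath("/api/n9e/proxy/3/api/v1/query_range", "/prom")
	fmt.Println(p, ok) // /prom/api/v1/query_range true
	fmt.Println(mergeQuery("token=x", "query=up")) // token=x&query=up
}
```

Note the transport disables certificate verification only when the datasource explicitly sets `SkipTlsVerify`, and dial/response timeouts come from the datasource config in milliseconds.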
@@ -1,24 +1,28 @@
 package router
 
 import (
 	"encoding/json"
 	"net/http"
 	"strconv"
 	"strings"
 	"time"
 
-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/ccfos/nightingale/v6/models"
 
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
 )
 
-func recordingRuleGets(c *gin.Context) {
+func (rt *Router) recordingRuleGets(c *gin.Context) {
 	busiGroupId := ginx.UrlParamInt64(c, "id")
-	ars, err := models.RecordingRuleGets(busiGroupId)
+	ars, err := models.RecordingRuleGets(rt.Ctx, busiGroupId)
 	ginx.NewRender(c).Data(ars, err)
 }
 
-func recordingRuleGet(c *gin.Context) {
+func (rt *Router) recordingRuleGet(c *gin.Context) {
 	rrid := ginx.UrlParamInt64(c, "rrid")
 
-	ar, err := models.RecordingRuleGetById(rrid)
+	ar, err := models.RecordingRuleGetById(rt.Ctx, rrid)
 	ginx.Dangerous(err)
 
 	if ar == nil {
@@ -29,7 +33,7 @@ func recordingRuleGet(c *gin.Context) {
 	ginx.NewRender(c).Data(ar, err)
 }
 
-func recordingRuleAddByFE(c *gin.Context) {
+func (rt *Router) recordingRuleAddByFE(c *gin.Context) {
 	username := c.MustGet("username").(string)
 
 	var lst []models.RecordingRule
@@ -49,7 +53,7 @@ func recordingRuleAddByFE(c *gin.Context) {
 		lst[i].UpdateBy = username
 		lst[i].FE2DB()
 
-		if err := lst[i].Add(); err != nil {
+		if err := lst[i].Add(rt.Ctx); err != nil {
 			reterr[lst[i].Name] = err.Error()
 		} else {
 			reterr[lst[i].Name] = ""
@@ -58,12 +62,12 @@ func recordingRuleAddByFE(c *gin.Context) {
 	ginx.NewRender(c).Data(reterr, nil)
 }
 
-func recordingRulePutByFE(c *gin.Context) {
+func (rt *Router) recordingRulePutByFE(c *gin.Context) {
 	var f models.RecordingRule
 	ginx.BindJSON(c, &f)
 
 	rrid := ginx.UrlParamInt64(c, "rrid")
-	ar, err := models.RecordingRuleGetById(rrid)
+	ar, err := models.RecordingRuleGetById(rt.Ctx, rrid)
 	ginx.Dangerous(err)
 
 	if ar == nil {
@@ -71,19 +75,19 @@ func recordingRulePutByFE(c *gin.Context) {
 		return
 	}
 
-	bgrwCheck(c, ar.GroupId)
+	rt.bgrwCheck(c, ar.GroupId)
 
 	f.UpdateBy = c.MustGet("username").(string)
-	ginx.NewRender(c).Message(ar.Update(f))
+	ginx.NewRender(c).Message(ar.Update(rt.Ctx, f))
 }
 
-func recordingRuleDel(c *gin.Context) {
+func (rt *Router) recordingRuleDel(c *gin.Context) {
 	var f idsForm
 	ginx.BindJSON(c, &f)
 	f.Verify()
 
-	ginx.NewRender(c).Message(models.RecordingRuleDels(f.Ids, ginx.UrlParamInt64(c, "id")))
+	ginx.NewRender(c).Message(models.RecordingRuleDels(rt.Ctx, f.Ids, ginx.UrlParamInt64(c, "id")))
 }
 
@@ -92,7 +96,7 @@ type recordRuleFieldForm struct {
 	Fields map[string]interface{} `json:"fields"`
 }
 
-func recordingRulePutFields(c *gin.Context) {
+func (rt *Router) recordingRulePutFields(c *gin.Context) {
 	var f recordRuleFieldForm
 	ginx.BindJSON(c, &f)
 
@@ -103,15 +107,34 @@ func recordingRulePutFields(c *gin.Context) {
 	f.Fields["update_by"] = c.MustGet("username").(string)
 	f.Fields["update_at"] = time.Now().Unix()
 
+	if _, ok := f.Fields["datasource_ids"]; ok {
+		// datasource_ids = "1 2 3"
+		idsStr := strings.Fields(f.Fields["datasource_ids"].(string))
+		ids := make([]int64, 0)
+		for _, idStr := range idsStr {
+			id, err := strconv.ParseInt(idStr, 10, 64)
+			if err != nil {
+				ginx.Bomb(http.StatusBadRequest, "datasource_ids error")
+			}
+			ids = append(ids, id)
+		}
+
+		bs, err := json.Marshal(ids)
+		if err != nil {
+			ginx.Bomb(http.StatusBadRequest, "datasource_ids error")
+		}
+		f.Fields["datasource_ids"] = string(bs)
+	}
+
 	for i := 0; i < len(f.Ids); i++ {
-		ar, err := models.RecordingRuleGetById(f.Ids[i])
+		ar, err := models.RecordingRuleGetById(rt.Ctx, f.Ids[i])
 		ginx.Dangerous(err)
 
 		if ar == nil {
 			continue
 		}
 
-		ginx.Dangerous(ar.UpdateFieldsMap(f.Fields))
+		ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, f.Fields))
 	}
 
 	ginx.NewRender(c).Message(nil)
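The new block in `recordingRulePutFields` normalizes the `datasource_ids` field: the frontend sends a space-separated string ("1 2 3"), while the DB column stores a JSON array ("[1,2,3]"). That conversion can be isolated as a pure function (a sketch; `normalizeDatasourceIds` is an illustrative name, the handler inlines this logic):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// normalizeDatasourceIds turns "1 2 3" into the JSON array string "[1,2,3]",
// rejecting any token that is not a valid int64.
func normalizeDatasourceIds(raw string) (string, error) {
	ids := make([]int64, 0)
	for _, s := range strings.Fields(raw) {
		id, err := strconv.ParseInt(s, 10, 64)
		if err != nil {
			return "", fmt.Errorf("datasource_ids error: %q", s)
		}
		ids = append(ids, id)
	}
	bs, err := json.Marshal(ids)
	if err != nil {
		return "", err
	}
	return string(bs), nil
}

func main() {
	out, err := normalizeDatasourceIds("1 2 3")
	fmt.Println(out, err) // [1,2,3] <nil>
}
```

Using `make([]int64, 0)` rather than a nil slice matters here: a nil slice marshals to `null`, while the empty slice marshals to `[]`, which is what the column expects when no datasource is selected.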
center/router/router_role.go (new file, 85 lines)
@@ -0,0 +1,85 @@
package router

import (
	"net/http"
	"strings"

	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) rolesGets(c *gin.Context) {
	lst, err := models.RoleGetsAll(rt.Ctx)
	ginx.NewRender(c).Data(lst, err)
}

func (rt *Router) permsGets(c *gin.Context) {
	user := c.MustGet("user").(*models.User)
	lst, err := models.OperationsOfRole(rt.Ctx, strings.Fields(user.Roles))
	ginx.NewRender(c).Data(lst, err)
}

// create role
func (rt *Router) roleAdd(c *gin.Context) {
	var f models.Role
	ginx.BindJSON(c, &f)

	err := f.Add(rt.Ctx)
	ginx.NewRender(c).Message(err)
}

// update role
func (rt *Router) rolePut(c *gin.Context) {
	var f models.Role
	ginx.BindJSON(c, &f)
	oldRule, err := models.RoleGet(rt.Ctx, "id=?", f.Id)
	ginx.Dangerous(err)

	if oldRule == nil {
		ginx.Bomb(http.StatusOK, "role not found")
	}

	if oldRule.Name == "Admin" {
		ginx.Bomb(http.StatusOK, "admin role can not be modified")
	}

	if oldRule.Name != f.Name {
		// name changed, check duplication
		num, err := models.RoleCount(rt.Ctx, "name=? and id<>?", f.Name, oldRule.Id)
		ginx.Dangerous(err)

		if num > 0 {
			ginx.Bomb(http.StatusOK, "role name already exists")
		}
	}

	oldRule.Name = f.Name
	oldRule.Note = f.Note

	ginx.NewRender(c).Message(oldRule.Update(rt.Ctx, "name", "note"))
}

func (rt *Router) roleDel(c *gin.Context) {
	id := ginx.UrlParamInt64(c, "id")
	target, err := models.RoleGet(rt.Ctx, "id=?", id)
	ginx.Dangerous(err)

	// the nil check must run before target.Name is dereferenced
	if target == nil {
		ginx.NewRender(c).Message(nil)
		return
	}

	if target.Name == "Admin" {
		ginx.Bomb(http.StatusOK, "admin role can not be modified")
	}

	ginx.NewRender(c).Message(target.Del(rt.Ctx))
}

// role list
func (rt *Router) roleGets(c *gin.Context) {
	lst, err := models.RoleGetsAll(rt.Ctx)
	ginx.NewRender(c).Data(lst, err)
}
center/router/router_role_operation.go (new file, 43 lines)
@@ -0,0 +1,43 @@
package router

import (
	"net/http"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) operationOfRole(c *gin.Context) {
	id := ginx.UrlParamInt64(c, "id")
	role, err := models.RoleGet(rt.Ctx, "id=?", id)
	ginx.Dangerous(err)
	if role == nil {
		ginx.Bomb(http.StatusOK, "role not found")
	}

	ops, err := models.OperationsOfRole(rt.Ctx, []string{role.Name})
	ginx.NewRender(c).Data(ops, err)
}

func (rt *Router) roleBindOperation(c *gin.Context) {
	id := ginx.UrlParamInt64(c, "id")
	role, err := models.RoleGet(rt.Ctx, "id=?", id)
	ginx.Dangerous(err)
	if role == nil {
		ginx.Bomb(http.StatusOK, "role not found")
	}

	if role.Name == "Admin" {
		ginx.Bomb(http.StatusOK, "admin role can not be modified")
	}

	var ops []string
	ginx.BindJSON(c, &ops)

	ginx.NewRender(c).Message(models.RoleOperationBind(rt.Ctx, role.Name, ops))
}

func (rt *Router) operations(c *gin.Context) {
	ginx.NewRender(c).Data(rt.Operations.Ops, nil)
}
@@ -1,14 +1,14 @@
 package router
 
 import (
+	"github.com/ccfos/nightingale/v6/models"
+	"github.com/ccfos/nightingale/v6/pkg/ormx"
+
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
-
-	"github.com/didi/nightingale/v5/src/models"
-	"github.com/didi/nightingale/v5/src/pkg/ormx"
 )
 
-func selfProfileGet(c *gin.Context) {
+func (rt *Router) selfProfileGet(c *gin.Context) {
 	user := c.MustGet("user").(*models.User)
 	if user.IsAdmin() {
 		user.Admin = true
@@ -24,7 +24,7 @@ type selfProfileForm struct {
 	Contacts ormx.JSONObj `json:"contacts"`
 }
 
-func selfProfilePut(c *gin.Context) {
+func (rt *Router) selfProfilePut(c *gin.Context) {
 	var f selfProfileForm
 	ginx.BindJSON(c, &f)
 
@@ -36,7 +36,7 @@ func selfProfilePut(c *gin.Context) {
 	user.Contacts = f.Contacts
 	user.UpdateBy = user.Username
 
-	ginx.NewRender(c).Message(user.UpdateAllFields())
+	ginx.NewRender(c).Message(user.UpdateAllFields(rt.Ctx))
 }
 
 type selfPasswordForm struct {
@@ -44,9 +44,9 @@ type selfPasswordForm struct {
 	NewPass string `json:"newpass" binding:"required"`
 }
 
-func selfPasswordPut(c *gin.Context) {
+func (rt *Router) selfPasswordPut(c *gin.Context) {
 	var f selfPasswordForm
 	ginx.BindJSON(c, &f)
 	user := c.MustGet("user").(*models.User)
-	ginx.NewRender(c).Message(user.ChangePassword(f.OldPass, f.NewPass))
+	ginx.NewRender(c).Message(user.ChangePassword(rt.Ctx, f.OldPass, f.NewPass))
 }
center/router/router_server.go (new file, 18 lines)
@@ -0,0 +1,18 @@
package router

import (
	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
)

func (rt *Router) serversGet(c *gin.Context) {
	list, err := models.AlertingEngineGets(rt.Ctx, "")
	ginx.NewRender(c).Data(list, err)
}

func (rt *Router) serverClustersGet(c *gin.Context) {
	list, err := models.AlertingEngineGetsClusters(rt.Ctx, "")
	ginx.NewRender(c).Data(list, err)
}
@@ -1,34 +1,98 @@
|
||||
package router
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/ccfos/nightingale/v6/models"
|
||||
"github.com/ccfos/nightingale/v6/storage"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/prometheus/common/model"
|
||||
"github.com/toolkits/pkg/ginx"
|
||||
|
||||
"github.com/didi/nightingale/v5/src/models"
|
||||
"github.com/toolkits/pkg/logger"
|
||||
)
|
||||
|
||||
func targetGets(c *gin.Context) {
|
||||
type TargetQuery struct {
|
||||
Filters []models.HostQuery `json:"queries"`
|
||||
P int `json:"p"`
|
||||
Limit int `json:"limit"`
|
||||
}
|
||||
|
||||
func (rt *Router) targetGetsByHostFilter(c *gin.Context) {
|
||||
var f TargetQuery
|
||||
ginx.BindJSON(c, &f)
|
||||
|
||||
query := models.GetHostsQuery(f.Filters)
|
||||
|
||||
hosts, err := models.TargetGetsByFilter(rt.Ctx, query, f.Limit, (f.P-1)*f.Limit)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
total, err := models.TargetCountByFilter(rt.Ctx, query)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
ginx.NewRender(c).Data(gin.H{
|
||||
"list": hosts,
|
||||
"total": total,
|
||||
}, nil)
|
||||
}
|
||||
|
||||
func (rt *Router) targetGets(c *gin.Context) {
|
||||
bgid := ginx.QueryInt64(c, "bgid", -1)
|
||||
query := ginx.QueryStr(c, "query", "")
|
||||
limit := ginx.QueryInt(c, "limit", 30)
|
||||
clusters := queryClusters(c)
|
||||
dsIds := queryDatasourceIds(c)
|
||||
|
||||
total, err := models.TargetTotal(bgid, clusters, query)
|
||||
total, err := models.TargetTotal(rt.Ctx, bgid, dsIds, query)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
list, err := models.TargetGets(bgid, clusters, query, limit, ginx.Offset(c, limit))
|
||||
list, err := models.TargetGets(rt.Ctx, bgid, dsIds, query, limit, ginx.Offset(c, limit))
|
||||
ginx.Dangerous(err)
|
||||
|
||||
if err == nil {
|
||||
now := time.Now()
|
||||
cache := make(map[int64]*models.BusiGroup)
|
||||
|
||||
var keys []string
|
||||
for i := 0; i < len(list); i++ {
|
||||
ginx.Dangerous(list[i].FillGroup(cache))
|
||||
ginx.Dangerous(list[i].FillGroup(rt.Ctx, cache))
|
||||
keys = append(keys, models.WrapIdent(list[i].Ident))
|
||||
}
|
||||
|
||||
if len(keys) > 0 {
|
||||
metaMap := make(map[string]*models.HostMeta)
|
||||
vals := storage.MGet(context.Background(), rt.Redis, keys)
|
||||
for _, value := range vals {
|
||||
var meta models.HostMeta
|
||||
if value == nil {
|
||||
continue
|
||||
}
|
||||
err := json.Unmarshal(value, &meta)
|
||||
if err != nil {
|
||||
logger.Warningf("unmarshal %v host meta failed: %v", value, err)
|
||||
continue
|
||||
}
|
||||
metaMap[meta.Hostname] = &meta
|
||||
}
|
||||
|
||||
for i := 0; i < len(list); i++ {
|
||||
if now.Unix()-list[i].UpdateAt < 120 {
|
||||
list[i].TargetUp = 1
|
||||
}
|
||||
|
||||
if meta, ok := metaMap[list[i].Ident]; ok {
|
||||
list[i].FillMeta(meta)
|
||||
} else {
|
||||
// 未上报过元数据的主机,cpuNum默认为-1, 用于前端展示 unknown
|
||||
list[i].CpuNum = -1
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
ginx.NewRender(c).Data(gin.H{
|
||||
@@ -37,10 +101,10 @@ func targetGets(c *gin.Context) {
|
||||
}, nil)
|
||||
}

-func targetGetTags(c *gin.Context) {
-	idents := ginx.QueryStr(c, "idents")
+func (rt *Router) targetGetTags(c *gin.Context) {
+	idents := ginx.QueryStr(c, "idents", "")
	idents = strings.ReplaceAll(idents, ",", " ")
-	lst, err := models.TargetGetTags(strings.Fields(idents))
+	lst, err := models.TargetGetTags(rt.Ctx, strings.Fields(idents))
	ginx.NewRender(c).Data(lst, err)
}

@@ -49,11 +113,7 @@ type targetTagsForm struct {
	Tags []string `json:"tags" binding:"required"`
}

-func (t targetTagsForm) Verify() {
-
-}
-
-func targetBindTagsByFE(c *gin.Context) {
+func (rt *Router) targetBindTagsByFE(c *gin.Context) {
	var f targetTagsForm
	ginx.BindJSON(c, &f)

@@ -61,12 +121,12 @@ func targetBindTagsByFE(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	checkTargetPerm(c, f.Idents)
+	rt.checkTargetPerm(c, f.Idents)

-	ginx.NewRender(c).Message(targetBindTags(f))
+	ginx.NewRender(c).Message(rt.targetBindTags(f))
}

-func targetBindTagsByService(c *gin.Context) {
+func (rt *Router) targetBindTagsByService(c *gin.Context) {
	var f targetTagsForm
	ginx.BindJSON(c, &f)

@@ -74,10 +134,10 @@ func targetBindTagsByService(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	ginx.NewRender(c).Message(targetBindTags(f))
+	ginx.NewRender(c).Message(rt.targetBindTags(f))
}

-func targetBindTags(f targetTagsForm) error {
+func (rt *Router) targetBindTags(f targetTagsForm) error {
	for i := 0; i < len(f.Tags); i++ {
		arr := strings.Split(f.Tags[i], "=")
		if len(arr) != 2 {
@@ -102,7 +162,7 @@ func targetBindTags(f targetTagsForm) error {
	}

	for i := 0; i < len(f.Idents); i++ {
-		target, err := models.TargetGetByIdent(f.Idents[i])
+		target, err := models.TargetGetByIdent(rt.Ctx, f.Idents[i])
		if err != nil {
			return err
		}
@@ -120,7 +180,7 @@ func targetBindTags(f targetTagsForm) error {
			}
		}

-		err = target.AddTags(f.Tags)
+		err = target.AddTags(rt.Ctx, f.Tags)
		if err != nil {
			return err
		}
@@ -128,7 +188,7 @@ func targetBindTags(f targetTagsForm) error {
	return nil
}

-func targetUnbindTagsByFE(c *gin.Context) {
+func (rt *Router) targetUnbindTagsByFE(c *gin.Context) {
	var f targetTagsForm
	ginx.BindJSON(c, &f)

@@ -136,12 +196,12 @@ func targetUnbindTagsByFE(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	checkTargetPerm(c, f.Idents)
+	rt.checkTargetPerm(c, f.Idents)

-	ginx.NewRender(c).Message(targetUnbindTags(f))
+	ginx.NewRender(c).Message(rt.targetUnbindTags(f))
}

-func targetUnbindTagsByService(c *gin.Context) {
+func (rt *Router) targetUnbindTagsByService(c *gin.Context) {
	var f targetTagsForm
	ginx.BindJSON(c, &f)

@@ -149,12 +209,12 @@ func targetUnbindTagsByService(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	ginx.NewRender(c).Message(targetUnbindTags(f))
+	ginx.NewRender(c).Message(rt.targetUnbindTags(f))
}

-func targetUnbindTags(f targetTagsForm) error {
+func (rt *Router) targetUnbindTags(f targetTagsForm) error {
	for i := 0; i < len(f.Idents); i++ {
-		target, err := models.TargetGetByIdent(f.Idents[i])
+		target, err := models.TargetGetByIdent(rt.Ctx, f.Idents[i])
		if err != nil {
			return err
		}
@@ -163,7 +223,7 @@ func targetUnbindTags(f targetTagsForm) error {
			continue
		}

-		err = target.DelTags(f.Tags)
+		err = target.DelTags(rt.Ctx, f.Tags)
		if err != nil {
			return err
		}
@@ -176,7 +236,7 @@ type targetNoteForm struct {
	Note string `json:"note"`
}

-func targetUpdateNote(c *gin.Context) {
+func (rt *Router) targetUpdateNote(c *gin.Context) {
	var f targetNoteForm
	ginx.BindJSON(c, &f)

@@ -184,12 +244,12 @@ func targetUpdateNote(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	checkTargetPerm(c, f.Idents)
+	rt.checkTargetPerm(c, f.Idents)

-	ginx.NewRender(c).Message(models.TargetUpdateNote(f.Idents, f.Note))
+	ginx.NewRender(c).Message(models.TargetUpdateNote(rt.Ctx, f.Idents, f.Note))
}

-func targetUpdateNoteByService(c *gin.Context) {
+func (rt *Router) targetUpdateNoteByService(c *gin.Context) {
	var f targetNoteForm
	ginx.BindJSON(c, &f)

@@ -197,7 +257,7 @@ func targetUpdateNoteByService(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	ginx.NewRender(c).Message(models.TargetUpdateNote(f.Idents, f.Note))
+	ginx.NewRender(c).Message(models.TargetUpdateNote(rt.Ctx, f.Idents, f.Note))
}

type targetBgidForm struct {
@@ -205,7 +265,7 @@ type targetBgidForm struct {
	Bgid int64 `json:"bgid"`
}

-func targetUpdateBgid(c *gin.Context) {
+func (rt *Router) targetUpdateBgid(c *gin.Context) {
	var f targetBgidForm
	ginx.BindJSON(c, &f)

@@ -215,7 +275,7 @@ func targetUpdateBgid(c *gin.Context) {

	user := c.MustGet("user").(*models.User)
	if user.IsAdmin() {
-		ginx.NewRender(c).Message(models.TargetUpdateBgid(f.Idents, f.Bgid, false))
+		ginx.NewRender(c).Message(models.TargetUpdateBgid(rt.Ctx, f.Idents, f.Bgid, false))
		return
	}

@@ -223,7 +283,7 @@ func targetUpdateBgid(c *gin.Context) {
	// Split the machines to operate on into two groups: those with bgid 0 need an admin to assign them,
	// while those with bgid > 0 are being moved around inside a business group.
	// E.g. machines originally assigned to didiyun, whose admin wants to move some of them under didiyun-ceph.
	// For such moves, the logged-in user needs operate permission on these machines and on the target BG.
-	orphans, err := models.IdentsFilter(f.Idents, "group_id = ?", 0)
+	orphans, err := models.IdentsFilter(rt.Ctx, f.Idents, "group_id = ?", 0)
	ginx.Dangerous(err)

	// if any machine is not yet assigned to a group, the logged-in user must be admin
@@ -231,15 +291,15 @@ func targetUpdateBgid(c *gin.Context) {
		ginx.Bomb(http.StatusForbidden, "No permission. Only admin can assign BG")
	}

-	reBelongs, err := models.IdentsFilter(f.Idents, "group_id > ?", 0)
+	reBelongs, err := models.IdentsFilter(rt.Ctx, f.Idents, "group_id > ?", 0)
	ginx.Dangerous(err)

	if len(reBelongs) > 0 {
		// for machines being reassigned, the operator needs permission on the machines themselves and on the target bgid
-		checkTargetPerm(c, f.Idents)
+		rt.checkTargetPerm(c, f.Idents)

-		bg := BusiGroup(f.Bgid)
-		can, err := user.CanDoBusiGroup(bg, "rw")
+		bg := BusiGroup(rt.Ctx, f.Bgid)
+		can, err := user.CanDoBusiGroup(rt.Ctx, bg, "rw")
		ginx.Dangerous(err)

		if !can {
@@ -248,19 +308,19 @@ func targetUpdateBgid(c *gin.Context) {
		}
	} else if f.Bgid == 0 {
		// returning machines to the unassigned pool
-		checkTargetPerm(c, f.Idents)
+		rt.checkTargetPerm(c, f.Idents)
	} else {
		ginx.Bomb(http.StatusBadRequest, "invalid bgid")
	}

-	ginx.NewRender(c).Message(models.TargetUpdateBgid(f.Idents, f.Bgid, false))
+	ginx.NewRender(c).Message(models.TargetUpdateBgid(rt.Ctx, f.Idents, f.Bgid, false))
}

type identsForm struct {
	Idents []string `json:"idents" binding:"required"`
}

-func targetDel(c *gin.Context) {
+func (rt *Router) targetDel(c *gin.Context) {
	var f identsForm
	ginx.BindJSON(c, &f)

@@ -268,14 +328,14 @@ func targetDel(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "idents empty")
	}

-	checkTargetPerm(c, f.Idents)
+	rt.checkTargetPerm(c, f.Idents)

-	ginx.NewRender(c).Message(models.TargetDel(f.Idents))
+	ginx.NewRender(c).Message(models.TargetDel(rt.Ctx, f.Idents))
}

-func checkTargetPerm(c *gin.Context, idents []string) {
+func (rt *Router) checkTargetPerm(c *gin.Context, idents []string) {
	user := c.MustGet("user").(*models.User)
-	nopri, err := user.NopriIdents(idents)
+	nopri, err := user.NopriIdents(rt.Ctx, idents)
	ginx.Dangerous(err)

	if len(nopri) > 0 {
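The pattern running through all of these hunks: package-level handler functions become methods on a `Router` that owns shared dependencies, and the database context is threaded explicitly into every `models` call instead of being read from a global. A simplified sketch of that shape (the types and function bodies here are illustrative, not the real ones):

```go
package main

import "fmt"

// Ctx stands in for the shared context the real code passes around
// (DB handle, caches, and so on).
type Ctx struct{ DSN string }

// Router carries shared dependencies; handlers become methods on it
// rather than free functions reading package-level state.
type Router struct{ Ctx *Ctx }

// targetTotal mimics a models.* function after the refactor: it takes
// the context explicitly instead of using a global DB handle.
func targetTotal(ctx *Ctx, query string) string {
	return fmt.Sprintf("count targets db=%s q=%s", ctx.DSN, query)
}

// targetGets mirrors the diff: the handler threads rt.Ctx through.
func (rt *Router) targetGets(query string) string {
	return targetTotal(rt.Ctx, query)
}

func main() {
	rt := &Router{Ctx: &Ctx{DSN: "n9e_v6"}}
	fmt.Println(rt.targetGets("web")) // prints "count targets db=n9e_v6 q=web"
}
```

This makes each handler's dependencies visible in its receiver, which is why nearly every hunk is a mechanical signature change plus an extra `rt.Ctx` argument.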
@@ -8,15 +8,14 @@ import (
	"strings"
	"time"

+	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/str"

-	"github.com/didi/nightingale/v5/src/models"
-	"github.com/didi/nightingale/v5/src/webapi/config"
)

-func taskGets(c *gin.Context) {
+func (rt *Router) taskGets(c *gin.Context) {
	bgid := ginx.UrlParamInt64(c, "id")
	mine := ginx.QueryBool(c, "mine", false)
	days := ginx.QueryInt64(c, "days", 7)
@@ -31,10 +30,10 @@ func taskGets(c *gin.Context) {

	beginTime := time.Now().Unix() - days*24*3600

-	total, err := models.TaskRecordTotal(bgid, beginTime, creator, query)
+	total, err := models.TaskRecordTotal(rt.Ctx, bgid, beginTime, creator, query)
	ginx.Dangerous(err)

-	list, err := models.TaskRecordGets(bgid, beginTime, creator, query, limit, ginx.Offset(c, limit))
+	list, err := models.TaskRecordGets(rt.Ctx, bgid, beginTime, creator, query, limit, ginx.Offset(c, limit))
	ginx.Dangerous(err)

	ginx.NewRender(c).Data(gin.H{
@@ -121,7 +120,7 @@ func (f *taskForm) HandleFH(fh string) {
	f.Title = f.Title + " FH: " + fh
}

-func taskAdd(c *gin.Context) {
+func (rt *Router) taskAdd(c *gin.Context) {
	var f taskForm
	ginx.BindJSON(c, &f)

@@ -135,10 +134,10 @@ func taskAdd(c *gin.Context) {
	f.HandleFH(f.Hosts[0])

	// check permission
-	checkTargetPerm(c, f.Hosts)
+	rt.checkTargetPerm(c, f.Hosts)

	// call ibex
-	taskId, err := TaskCreate(f)
+	taskId, err := TaskCreate(f, rt.NotifyConfigCache.GetIbex())
	ginx.Dangerous(err)

	if taskId <= 0 {
@@ -149,9 +148,9 @@ func taskAdd(c *gin.Context) {
	record := models.TaskRecord{
		Id:           taskId,
		GroupId:      bgid,
-		IbexAddress:  config.C.Ibex.Address,
-		IbexAuthUser: config.C.Ibex.BasicAuthUser,
-		IbexAuthPass: config.C.Ibex.BasicAuthPass,
+		IbexAddress:  rt.NotifyConfigCache.GetIbex().Address,
+		IbexAuthUser: rt.NotifyConfigCache.GetIbex().BasicAuthUser,
+		IbexAuthPass: rt.NotifyConfigCache.GetIbex().BasicAuthPass,
		Title:        f.Title,
		Account:      f.Account,
		Batch:        f.Batch,
@@ -164,14 +163,14 @@ func taskAdd(c *gin.Context) {
		CreateBy: f.Creator,
	}

-	err = record.Add()
+	err = record.Add(rt.Ctx)
	ginx.NewRender(c).Data(taskId, err)
}

-func taskProxy(c *gin.Context) {
-	target, err := url.Parse(config.C.Ibex.Address)
+func (rt *Router) taskProxy(c *gin.Context) {
+	target, err := url.Parse(rt.NotifyConfigCache.GetIbex().Address)
	if err != nil {
-		ginx.NewRender(c).Message("invalid ibex address: %s", config.C.Ibex.Address)
+		ginx.NewRender(c).Message("invalid ibex address: %s", rt.NotifyConfigCache.GetIbex().Address)
		return
	}

@@ -193,8 +192,8 @@ func taskProxy(c *gin.Context) {
		req.URL.RawQuery = target.RawQuery + "&" + req.URL.RawQuery
	}

-	if config.C.Ibex.BasicAuthUser != "" {
-		req.SetBasicAuth(config.C.Ibex.BasicAuthUser, config.C.Ibex.BasicAuthPass)
+	if rt.NotifyConfigCache.GetIbex().BasicAuthUser != "" {
+		req.SetBasicAuth(rt.NotifyConfigCache.GetIbex().BasicAuthUser, rt.NotifyConfigCache.GetIbex().BasicAuthPass)
	}
}

@@ -6,22 +6,22 @@ import (
	"strings"
	"time"

+	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"
	"github.com/toolkits/pkg/str"

-	"github.com/didi/nightingale/v5/src/models"
)

-func taskTplGets(c *gin.Context) {
+func (rt *Router) taskTplGets(c *gin.Context) {
	query := ginx.QueryStr(c, "query", "")
	limit := ginx.QueryInt(c, "limit", 20)
	groupId := ginx.UrlParamInt64(c, "id")

-	total, err := models.TaskTplTotal(groupId, query)
+	total, err := models.TaskTplTotal(rt.Ctx, groupId, query)
	ginx.Dangerous(err)

-	list, err := models.TaskTplGets(groupId, query, limit, ginx.Offset(c, limit))
+	list, err := models.TaskTplGets(rt.Ctx, groupId, query, limit, ginx.Offset(c, limit))
	ginx.Dangerous(err)

	ginx.NewRender(c).Data(gin.H{
@@ -30,17 +30,17 @@ func taskTplGets(c *gin.Context) {
	}, nil)
}

-func taskTplGet(c *gin.Context) {
+func (rt *Router) taskTplGet(c *gin.Context) {
	tid := ginx.UrlParamInt64(c, "tid")

-	tpl, err := models.TaskTplGet("id = ?", tid)
+	tpl, err := models.TaskTplGet(rt.Ctx, "id = ?", tid)
	ginx.Dangerous(err)

	if tpl == nil {
		ginx.Bomb(404, "no such task template")
	}

-	hosts, err := tpl.Hosts()
+	hosts, err := tpl.Hosts(rt.Ctx)

	ginx.NewRender(c).Data(gin.H{
		"tpl": tpl,
@@ -61,7 +61,7 @@ type taskTplForm struct {
	Hosts []string `json:"hosts"`
}

-func taskTplAdd(c *gin.Context) {
+func (rt *Router) taskTplAdd(c *gin.Context) {
	var f taskTplForm
	ginx.BindJSON(c, &f)

@@ -87,13 +87,13 @@ func taskTplAdd(c *gin.Context) {
		UpdateAt: now,
	}

-	ginx.NewRender(c).Message(tpl.Save(f.Hosts))
+	ginx.NewRender(c).Message(tpl.Save(rt.Ctx, f.Hosts))
}

-func taskTplPut(c *gin.Context) {
+func (rt *Router) taskTplPut(c *gin.Context) {
	tid := ginx.UrlParamInt64(c, "tid")

-	tpl, err := models.TaskTplGet("id = ?", tid)
+	tpl, err := models.TaskTplGet(rt.Ctx, "id = ?", tid)
	ginx.Dangerous(err)

	if tpl == nil {
@@ -120,13 +120,13 @@ func taskTplPut(c *gin.Context) {
	tpl.UpdateBy = user.Username
	tpl.UpdateAt = time.Now().Unix()

-	ginx.NewRender(c).Message(tpl.Update(f.Hosts))
+	ginx.NewRender(c).Message(tpl.Update(rt.Ctx, f.Hosts))
}

-func taskTplDel(c *gin.Context) {
+func (rt *Router) taskTplDel(c *gin.Context) {
	tid := ginx.UrlParamInt64(c, "tid")

-	tpl, err := models.TaskTplGet("id = ?", tid)
+	tpl, err := models.TaskTplGet(rt.Ctx, "id = ?", tid)
	ginx.Dangerous(err)

	if tpl == nil {
@@ -134,7 +134,7 @@ func taskTplDel(c *gin.Context) {
		return
	}

-	ginx.NewRender(c).Message(tpl.Del())
+	ginx.NewRender(c).Message(tpl.Del(rt.Ctx))
}

type tplTagsForm struct {
@@ -171,7 +171,7 @@ func (f *tplTagsForm) Verify() {
	}
}

-func taskTplBindTags(c *gin.Context) {
+func (rt *Router) taskTplBindTags(c *gin.Context) {
	var f tplTagsForm
	ginx.BindJSON(c, &f)
	f.Verify()
@@ -179,20 +179,20 @@ func taskTplBindTags(c *gin.Context) {
	username := c.MustGet("username").(string)

	for i := 0; i < len(f.Ids); i++ {
-		tpl, err := models.TaskTplGet("id = ?", f.Ids[i])
+		tpl, err := models.TaskTplGet(rt.Ctx, "id = ?", f.Ids[i])
		ginx.Dangerous(err)

		if tpl == nil {
			continue
		}

-		ginx.Dangerous(tpl.AddTags(f.Tags, username))
+		ginx.Dangerous(tpl.AddTags(rt.Ctx, f.Tags, username))
	}

	ginx.NewRender(c).Message(nil)
}

-func taskTplUnbindTags(c *gin.Context) {
+func (rt *Router) taskTplUnbindTags(c *gin.Context) {
	var f tplTagsForm
	ginx.BindJSON(c, &f)
	f.Verify()
@@ -200,14 +200,14 @@ func taskTplUnbindTags(c *gin.Context) {
	username := c.MustGet("username").(string)

	for i := 0; i < len(f.Ids); i++ {
-		tpl, err := models.TaskTplGet("id = ?", f.Ids[i])
+		tpl, err := models.TaskTplGet(rt.Ctx, "id = ?", f.Ids[i])
		ginx.Dangerous(err)

		if tpl == nil {
			continue
		}

-		ginx.Dangerous(tpl.DelTags(f.Tags, username))
+		ginx.Dangerous(tpl.DelTags(rt.Ctx, f.Tags, username))
	}

	ginx.NewRender(c).Message(nil)
@@ -4,21 +4,37 @@ import (
	"net/http"
	"strings"

+	"github.com/ccfos/nightingale/v6/models"
+	"github.com/ccfos/nightingale/v6/pkg/ormx"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"

-	"github.com/didi/nightingale/v5/src/models"
-	"github.com/didi/nightingale/v5/src/pkg/ormx"
)

-func userGets(c *gin.Context) {
+func (rt *Router) userFindAll(c *gin.Context) {
	limit := ginx.QueryInt(c, "limit", 20)
	query := ginx.QueryStr(c, "query", "")

-	total, err := models.UserTotal(query)
+	total, err := models.UserTotal(rt.Ctx, query)
	ginx.Dangerous(err)

-	list, err := models.UserGets(query, limit, ginx.Offset(c, limit))
+	list, err := models.UserGets(rt.Ctx, query, limit, ginx.Offset(c, limit))
	ginx.Dangerous(err)

	ginx.NewRender(c).Data(gin.H{
		"list":  list,
		"total": total,
	}, nil)
}

func (rt *Router) userGets(c *gin.Context) {
	limit := ginx.QueryInt(c, "limit", 20)
	query := ginx.QueryStr(c, "query", "")

	total, err := models.UserTotal(rt.Ctx, query)
	ginx.Dangerous(err)

	list, err := models.UserGets(rt.Ctx, query, limit, ginx.Offset(c, limit))
	ginx.Dangerous(err)

	user := c.MustGet("user").(*models.User)
@@ -41,11 +57,11 @@ type userAddForm struct {
	Contacts ormx.JSONObj `json:"contacts"`
}

-func userAddPost(c *gin.Context) {
+func (rt *Router) userAddPost(c *gin.Context) {
	var f userAddForm
	ginx.BindJSON(c, &f)

-	password, err := models.CryptoPass(f.Password)
+	password, err := models.CryptoPass(rt.Ctx, f.Password)
	ginx.Dangerous(err)

	if len(f.Roles) == 0 {
@@ -67,11 +83,11 @@ func userAddPost(c *gin.Context) {
		UpdateBy: user.Username,
	}

-	ginx.NewRender(c).Message(u.Add())
+	ginx.NewRender(c).Message(u.Add(rt.Ctx))
}

-func userProfileGet(c *gin.Context) {
-	user := User(ginx.UrlParamInt64(c, "id"))
+func (rt *Router) userProfileGet(c *gin.Context) {
+	user := User(rt.Ctx, ginx.UrlParamInt64(c, "id"))
	ginx.NewRender(c).Data(user, nil)
}

@@ -83,7 +99,7 @@ type userProfileForm struct {
	Contacts ormx.JSONObj `json:"contacts"`
}

-func userProfilePut(c *gin.Context) {
+func (rt *Router) userProfilePut(c *gin.Context) {
	var f userProfileForm
	ginx.BindJSON(c, &f)

@@ -91,7 +107,7 @@ func userProfilePut(c *gin.Context) {
		ginx.Bomb(http.StatusBadRequest, "roles empty")
	}

-	target := User(ginx.UrlParamInt64(c, "id"))
+	target := User(rt.Ctx, ginx.UrlParamInt64(c, "id"))
	target.Nickname = f.Nickname
	target.Phone = f.Phone
	target.Email = f.Email
@@ -99,28 +115,28 @@ func userProfilePut(c *gin.Context) {
	target.Contacts = f.Contacts
	target.UpdateBy = c.MustGet("username").(string)

-	ginx.NewRender(c).Message(target.UpdateAllFields())
+	ginx.NewRender(c).Message(target.UpdateAllFields(rt.Ctx))
}

type userPasswordForm struct {
	Password string `json:"password" binding:"required"`
}

-func userPasswordPut(c *gin.Context) {
+func (rt *Router) userPasswordPut(c *gin.Context) {
	var f userPasswordForm
	ginx.BindJSON(c, &f)

-	target := User(ginx.UrlParamInt64(c, "id"))
+	target := User(rt.Ctx, ginx.UrlParamInt64(c, "id"))

-	cryptoPass, err := models.CryptoPass(f.Password)
+	cryptoPass, err := models.CryptoPass(rt.Ctx, f.Password)
	ginx.Dangerous(err)

-	ginx.NewRender(c).Message(target.UpdatePassword(cryptoPass, c.MustGet("username").(string)))
+	ginx.NewRender(c).Message(target.UpdatePassword(rt.Ctx, cryptoPass, c.MustGet("username").(string)))
}

-func userDel(c *gin.Context) {
+func (rt *Router) userDel(c *gin.Context) {
	id := ginx.UrlParamInt64(c, "id")
-	target, err := models.UserGetById(id)
+	target, err := models.UserGetById(rt.Ctx, id)
	ginx.Dangerous(err)

	if target == nil {
@@ -128,5 +144,5 @@ func userDel(c *gin.Context) {
		return
	}

-	ginx.NewRender(c).Message(target.Del())
+	ginx.NewRender(c).Message(target.Del(rt.Ctx))
}
@@ -4,26 +4,27 @@ import (
	"net/http"
	"time"

+	"github.com/ccfos/nightingale/v6/models"

	"github.com/gin-gonic/gin"
	"github.com/toolkits/pkg/ginx"

-	"github.com/didi/nightingale/v5/src/models"
+	"github.com/toolkits/pkg/logger"
)

-func checkBusiGroupPerm(c *gin.Context) {
+func (rt *Router) checkBusiGroupPerm(c *gin.Context) {
	me := c.MustGet("user").(*models.User)
-	bg := BusiGroup(ginx.UrlParamInt64(c, "id"))
+	bg := BusiGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))

-	can, err := me.CanDoBusiGroup(bg, ginx.UrlParamStr(c, "perm"))
+	can, err := me.CanDoBusiGroup(rt.Ctx, bg, ginx.UrlParamStr(c, "perm"))
	ginx.NewRender(c).Data(can, err)
}

-func userGroupGets(c *gin.Context) {
+func (rt *Router) userGroupGets(c *gin.Context) {
	limit := ginx.QueryInt(c, "limit", 1500)
	query := ginx.QueryStr(c, "query", "")

	me := c.MustGet("user").(*models.User)
-	lst, err := me.UserGroups(limit, query)
+	lst, err := me.UserGroups(rt.Ctx, limit, query)

	ginx.NewRender(c).Data(lst, err)
}
@@ -33,7 +34,7 @@ type userGroupForm struct {
	Note string `json:"note"`
}

-func userGroupAdd(c *gin.Context) {
+func (rt *Router) userGroupAdd(c *gin.Context) {
	var f userGroupForm
	ginx.BindJSON(c, &f)

@@ -46,16 +47,16 @@ func userGroupAdd(c *gin.Context) {
		UpdateBy: me.Username,
	}

-	err := ug.Add()
+	err := ug.Add(rt.Ctx)
	if err == nil {
		// Even failure is not a big deal
-		models.UserGroupMemberAdd(ug.Id, me.Id)
+		models.UserGroupMemberAdd(rt.Ctx, ug.Id, me.Id)
	}

	ginx.NewRender(c).Data(ug.Id, err)
}

-func userGroupPut(c *gin.Context) {
+func (rt *Router) userGroupPut(c *gin.Context) {
	var f userGroupForm
	ginx.BindJSON(c, &f)

@@ -64,7 +65,7 @@ func userGroupPut(c *gin.Context) {

	if ug.Name != f.Name {
		// name changed, check duplication
-		num, err := models.UserGroupCount("name=? and id<>?", f.Name, ug.Id)
+		num, err := models.UserGroupCount(rt.Ctx, "name=? and id<>?", f.Name, ug.Id)
		ginx.Dangerous(err)

		if num > 0 {
@@ -77,17 +78,18 @@ func userGroupPut(c *gin.Context) {
	ug.UpdateBy = me.Username
	ug.UpdateAt = time.Now().Unix()

-	ginx.NewRender(c).Message(ug.Update("Name", "Note", "UpdateAt", "UpdateBy"))
+	ginx.NewRender(c).Message(ug.Update(rt.Ctx, "Name", "Note", "UpdateAt", "UpdateBy"))
}

// Return all members, front-end search and paging
-func userGroupGet(c *gin.Context) {
-	ug := UserGroup(ginx.UrlParamInt64(c, "id"))
+func (rt *Router) userGroupGet(c *gin.Context) {
+	ug := UserGroup(rt.Ctx, ginx.UrlParamInt64(c, "id"))

-	ids, err := models.MemberIds(ug.Id)
+	ids, err := models.MemberIds(rt.Ctx, ug.Id)
	ginx.Dangerous(err)

-	users, err := models.UserGetsByIds(ids)
+	logger.Info("userGroupGet", ids)
+	users, err := models.UserGetsByIds(rt.Ctx, ids)

	ginx.NewRender(c).Data(gin.H{
		"users": users,
@@ -95,12 +97,12 @@ func userGroupGet(c *gin.Context) {
	}, err)
}

-func userGroupDel(c *gin.Context) {
+func (rt *Router) userGroupDel(c *gin.Context) {
	ug := c.MustGet("user_group").(*models.UserGroup)
-	ginx.NewRender(c).Message(ug.Del())
+	ginx.NewRender(c).Message(ug.Del(rt.Ctx))
}

-func userGroupMemberAdd(c *gin.Context) {
+func (rt *Router) userGroupMemberAdd(c *gin.Context) {
	var f idsForm
	ginx.BindJSON(c, &f)
	f.Verify()
@@ -108,17 +110,17 @@ func userGroupMemberAdd(c *gin.Context) {
	me := c.MustGet("user").(*models.User)
	ug := c.MustGet("user_group").(*models.UserGroup)

-	err := ug.AddMembers(f.Ids)
+	err := ug.AddMembers(rt.Ctx, f.Ids)
	if err == nil {
		ug.UpdateAt = time.Now().Unix()
		ug.UpdateBy = me.Username
-		ug.Update("UpdateAt", "UpdateBy")
+		ug.Update(rt.Ctx, "UpdateAt", "UpdateBy")
	}

	ginx.NewRender(c).Message(err)
}

-func userGroupMemberDel(c *gin.Context) {
+func (rt *Router) userGroupMemberDel(c *gin.Context) {
	var f idsForm
	ginx.BindJSON(c, &f)
	f.Verify()
@@ -126,11 +128,11 @@ func userGroupMemberDel(c *gin.Context) {
	me := c.MustGet("user").(*models.User)
	ug := c.MustGet("user_group").(*models.UserGroup)

-	err := ug.DelMembers(f.Ids)
+	err := ug.DelMembers(rt.Ctx, f.Ids)
	if err == nil {
		ug.UpdateAt = time.Now().Unix()
		ug.UpdateBy = me.Username
-		ug.Update("UpdateAt", "UpdateBy")
+		ug.Update(rt.Ctx, "UpdateAt", "UpdateBy")
	}

	ginx.NewRender(c).Message(err)
center/sso/init.go (new file, 169 lines)
@@ -0,0 +1,169 @@
package sso

import (
	"log"

	"github.com/BurntSushi/toml"
	"github.com/ccfos/nightingale/v6/center/cconf"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/cas"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/ldapx"
	"github.com/ccfos/nightingale/v6/pkg/oauth2x"
	"github.com/ccfos/nightingale/v6/pkg/oidcx"

	"github.com/toolkits/pkg/logger"
)

type SsoClient struct {
	OIDC   *oidcx.SsoClient
	LDAP   *ldapx.SsoClient
	CAS    *cas.SsoClient
	OAuth2 *oauth2x.SsoClient
}

const LDAP = `
Enable = false
Host = 'ldap.example.org'
Port = 389
BaseDn = 'dc=example,dc=org'
BindUser = 'cn=manager,dc=example,dc=org'
BindPass = '*******'
AuthFilter = '(&(uid=%s))'
CoverAttributes = true
TLS = false
StartTLS = true
DefaultRoles = ['Standard']

[Attributes]
Nickname = 'cn'
Phone = 'mobile'
Email = 'mail'
`

const OAuth2 = `
Enable = false
DisplayName = 'OAuth2登录'
RedirectURL = 'http://127.0.0.1:18000/callback/oauth'
SsoAddr = 'https://sso.example.com/oauth2/authorize'
TokenAddr = 'https://sso.example.com/oauth2/token'
UserInfoAddr = 'https://api.example.com/api/v1/user/info'
TranTokenMethod = 'header'
ClientId = ''
ClientSecret = ''
CoverAttributes = true
DefaultRoles = ['Standard']
UserinfoIsArray = false
UserinfoPrefix = 'data'
Scopes = ['profile', 'email', 'phone']

[Attributes]
Username = 'username'
Nickname = 'nickname'
Phone = 'phone_number'
Email = 'email'
`

const CAS = `
Enable = false
SsoAddr = 'https://cas.example.com/cas/'
RedirectURL = 'http://127.0.0.1:18000/callback/cas'
DisplayName = 'CAS登录'
CoverAttributes = false
DefaultRoles = ['Standard']

[Attributes]
Nickname = 'nickname'
Phone = 'phone_number'
Email = 'email'
`
const OIDC = `
Enable = false
DisplayName = 'OIDC登录'
RedirectURL = 'http://n9e.com/callback'
SsoAddr = 'http://sso.example.org'
ClientId = ''
ClientSecret = ''
CoverAttributes = true
DefaultRoles = ['Standard']

[Attributes]
Nickname = 'nickname'
Phone = 'phone_number'
Email = 'email'
`

func Init(center cconf.Center, ctx *ctx.Context) *SsoClient {
	ssoClient := new(SsoClient)
	m := make(map[string]string)
	m["LDAP"] = LDAP
	m["CAS"] = CAS
	m["OIDC"] = OIDC
	m["OAuth2"] = OAuth2

	for name, config := range m {
		count, err := models.SsoConfigCountByName(ctx, name)
		if err != nil {
			logger.Error(err)
			continue
		}

		if count > 0 {
			continue
		}

		ssoConfig := models.SsoConfig{
			Name:    name,
			Content: config,
		}

		err = ssoConfig.Create(ctx)
		if err != nil {
			log.Fatalln(err)
		}
	}

	configs, err := models.SsoConfigGets(ctx)
	if err != nil {
		log.Fatalln(err)
	}

	for _, cfg := range configs {
		switch cfg.Name {
		case "LDAP":
			var config ldapx.Config
			err := toml.Unmarshal([]byte(cfg.Content), &config)
			if err != nil {
				log.Fatalln("init ldap failed", err)
			}
			ssoClient.LDAP = ldapx.New(config)
		case "OIDC":
			var config oidcx.Config
			err := toml.Unmarshal([]byte(cfg.Content), &config)
			if err != nil {
				log.Fatalln("init oidc failed:", err)
			}
			oidcClient, err := oidcx.New(config)
			if err != nil {
				logger.Error("init oidc failed:", err)
			} else {
				ssoClient.OIDC = oidcClient
			}
		case "CAS":
			var config cas.Config
			err := toml.Unmarshal([]byte(cfg.Content), &config)
			if err != nil {
				log.Fatalln("init cas failed:", err)
			}
			ssoClient.CAS = cas.New(config)
		case "OAuth2":
			var config oauth2x.Config
			err := toml.Unmarshal([]byte(cfg.Content), &config)
			if err != nil {
				log.Fatalln("init oauth2 failed:", err)
			}
			ssoClient.OAuth2 = oauth2x.New(config)
		}
	}
	return ssoClient
}
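`Init` above seeds a built-in default config for each SSO provider only when no row with that name exists yet, then builds clients from whatever is stored. The seeding decision is just a set difference; a stdlib-only sketch (the helper name is illustrative):

```go
package main

import "fmt"

// seedDefaults returns the provider names that still need a default
// config row created: those present in defaults but absent from the
// set of names already stored.
func seedDefaults(defaults map[string]string, existing map[string]bool) []string {
	var toCreate []string
	for name := range defaults {
		if !existing[name] {
			toCreate = append(toCreate, name)
		}
	}
	return toCreate
}

func main() {
	defaults := map[string]string{"LDAP": "...", "CAS": "...", "OIDC": "...", "OAuth2": "..."}
	existing := map[string]bool{"LDAP": true} // already present in the DB
	fmt.Println(len(seedDefaults(defaults, existing))) // prints 3: the other providers get seeded
}
```

Because seeding is skipped whenever a row exists, operator edits to a stored provider config survive restarts; the embedded TOML only fills first-boot gaps.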
cli/cli.go (new file, 9 lines)
@@ -0,0 +1,9 @@
package cli

import (
	"github.com/ccfos/nightingale/v6/cli/upgrade"
)

func Upgrade(configFile string) error {
	return upgrade.Upgrade(configFile)
}
cli/upgrade/config.go (new file, 63 lines)
@@ -0,0 +1,63 @@
package upgrade

import (
	"bytes"
	"path"

	"github.com/ccfos/nightingale/v6/pkg/cfg"
	"github.com/ccfos/nightingale/v6/pkg/ormx"
	"github.com/ccfos/nightingale/v6/pkg/tlsx"
	"github.com/koding/multiconfig"
)

type Config struct {
	DB       ormx.DBConfig
	Clusters []ClusterOptions
}

type ClusterOptions struct {
	Name string
	Prom string

	BasicAuthUser string
	BasicAuthPass string

	Headers []string

	Timeout     int64
	DialTimeout int64

	UseTLS bool
	tlsx.ClientConfig

	MaxIdleConnsPerHost int
}

func Parse(fpath string, configPtr interface{}) error {
	var (
		tBuf []byte
	)
	loaders := []multiconfig.Loader{
		&multiconfig.TagLoader{},
		&multiconfig.EnvironmentLoader{},
	}
	s := cfg.NewFileScanner()

	s.Read(path.Join(fpath))
	tBuf = append(tBuf, s.Data()...)
	tBuf = append(tBuf, []byte("\n")...)

	if s.Err() != nil {
		return s.Err()
	}

	if len(tBuf) != 0 {
		loaders = append(loaders, &multiconfig.TOMLLoader{Reader: bytes.NewReader(tBuf)})
	}

	m := multiconfig.DefaultLoader{
		Loader:    multiconfig.MultiLoader(loaders...),
		Validator: multiconfig.MultiValidator(&multiconfig.RequiredValidator{}),
	}
	return m.Load(configPtr)
}
21
cli/upgrade/readme.md
Normal file
@@ -0,0 +1,21 @@
# v5 to v6 Upgrade Guide

0. Before you start, remember to back up your database!

1. First bring the table schema of the Nightingale database you are using up to date with v5.15.0. The [release](https://github.com/ccfos/nightingale/releases) page documents the schema changes for each version; follow the notes for your current version and apply the schema-update statements one by one.

2. Unpack the n9e package and import upgrade.sql into the n9e_v5 database:
```
mysql -h 127.0.0.1 -u root -p1234 < cli/upgrade/upgrade.sql
```

3. Run n9e-cli to finish the schema upgrade; webapi.conf is the configuration file currently used by the v5 n9e-webapi process:
```
./n9e-cli --upgrade --config webapi.conf
```

4. Point the database in the n9e configuration file at n9e_v5, then start the n9e process:
```
nohup ./n9e &> n9e.log &
```

5. n9e listens on port 17000; change both the old web port and the metrics-ingestion port to 17000.
117
cli/upgrade/upgrade.go
Normal file
@@ -0,0 +1,117 @@
package upgrade

import (
	"context"

	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/storage"

	"github.com/toolkits/pkg/logger"
)

func Upgrade(configFile string) error {
	var config Config
	if err := Parse(configFile, &config); err != nil {
		return err
	}

	db, err := storage.New(config.DB)
	if err != nil {
		return err
	}

	ctx := ctx.NewContext(context.Background(), db)
	for _, cluster := range config.Clusters {
		count, err := models.GetDatasourcesCountBy(ctx, "", "", cluster.Name)
		if err != nil {
			logger.Errorf("get datasource %s count error: %v", cluster.Name, err)
			continue
		}
		if count > 0 {
			continue
		}

		header := make(map[string]string)
		headerCount := len(cluster.Headers)
		if headerCount > 0 && headerCount%2 == 0 {
			for i := 0; i < len(cluster.Headers); i += 2 {
				header[cluster.Headers[i]] = cluster.Headers[i+1]
			}
		}

		authJson := models.Auth{
			BasicAuthUser:     cluster.BasicAuthUser,
			BasicAuthPassword: cluster.BasicAuthPass,
		}

		httpJson := models.HTTP{
			Timeout:     cluster.Timeout,
			DialTimeout: cluster.DialTimeout,
			TLS: models.TLS{
				SkipTlsVerify: cluster.UseTLS,
			},
			MaxIdleConnsPerHost: cluster.MaxIdleConnsPerHost,
			Url:                 cluster.Prom,
			Headers:             header,
		}

		datasource := models.Datasource{
			PluginId:       1,
			PluginType:     "prometheus",
			PluginTypeName: "Prometheus Like",
			Name:           cluster.Name,
			HTTPJson:       httpJson,
			AuthJson:       authJson,
			ClusterName:    "default",
			Status:         "enabled",
		}

		err = datasource.Add(ctx)
		if err != nil {
			logger.Errorf("add datasource %s error: %v", cluster.Name, err)
		}
	}

	datasources, err := models.GetDatasources(ctx)
	if err != nil {
		return err
	}

	m := make(map[string]models.Datasource)
	for i := 0; i < len(datasources); i++ {
		m[datasources[i].Name] = datasources[i]
	}

	// alert rule
	err = models.AlertRuleUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}

	// alert mute
	err = models.AlertMuteUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}

	// alert subscribe
	err = models.AlertSubscribeUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}

	// recording rule
	err = models.RecordingRuleUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}

	// alert cur event
	err = models.AlertCurEventUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}

	// alert his event
	err = models.AlertHisEventUpgradeToV6(ctx, m)
	if err != nil {
		return err
	}
	return nil
}
92
cli/upgrade/upgrade.sql
Normal file
@@ -0,0 +1,92 @@
use n9e_v5;

insert into `role_operation`(role_name, operation) values('Guest', '/log/explorer');
insert into `role_operation`(role_name, operation) values('Guest', '/trace/explorer');

insert into `role_operation`(role_name, operation) values('Standard', '/log/explorer');
insert into `role_operation`(role_name, operation) values('Standard', '/trace/explorer');
insert into `role_operation`(role_name, operation) values('Standard', '/alert-rules-built-in');
insert into `role_operation`(role_name, operation) values('Standard', '/dashboards-built-in');
insert into `role_operation`(role_name, operation) values('Standard', '/trace/dependencies');

alter table `board` add built_in tinyint(1) not null default 0 comment '0:false 1:true';
alter table `board` add hide tinyint(1) not null default 0 comment '0:false 1:true';

alter table `chart_share` add datasource_id bigint unsigned not null default 0;
alter table `chart_share` drop dashboard_id;

alter table `alert_rule` add datasource_ids varchar(255) not null default '';
alter table `alert_rule` add rule_config text not null comment 'rule_config';
alter table `alert_rule` add annotations text not null comment 'annotations';

alter table `alert_mute` add datasource_ids varchar(255) not null default '';
alter table `alert_mute` add periodic_mutes varchar(4096) not null default '[]';
alter table `alert_mute` add mute_time_type tinyint(1) not null default 0;

alter table `alert_subscribe` add datasource_ids varchar(255) not null default '';
alter table `alert_subscribe` add prod varchar(255) not null default '';
alter table `alert_subscribe` add webhooks text;
alter table `alert_subscribe` add redefine_webhooks tinyint(1) default 0;
alter table `alert_subscribe` add for_duration bigint not null default 0;

alter table `recording_rule` add datasource_ids varchar(255) default '';

alter table `target` modify cluster varchar(128) not null default '';

alter table `alert_cur_event` add datasource_id bigint unsigned not null default 0;
alter table `alert_cur_event` add annotations text not null comment 'annotations';
alter table `alert_cur_event` add rule_config text not null comment 'rule_config';

alter table `alert_his_event` add datasource_id bigint unsigned not null default 0;
alter table `alert_his_event` add annotations text not null comment 'annotations';
alter table `alert_his_event` add rule_config text not null comment 'rule_config';

alter table `alerting_engines` add datasource_id bigint unsigned not null default 0;
alter table `alerting_engines` change cluster engine_cluster varchar(128) not null default '' comment 'n9e engine cluster';

alter table `task_record` add event_id bigint not null comment 'event id' default 0;

CREATE TABLE `datasource`
(
    `id` int unsigned NOT NULL AUTO_INCREMENT,
    `name` varchar(255) not null default '',
    `description` varchar(255) not null default '',
    `category` varchar(255) not null default '',
    `plugin_id` int unsigned not null default 0,
    `plugin_type` varchar(255) not null default '',
    `plugin_type_name` varchar(255) not null default '',
    `cluster_name` varchar(255) not null default '',
    `settings` text not null,
    `status` varchar(255) not null default '',
    `http` varchar(4096) not null default '',
    `auth` varchar(8192) not null default '',
    `created_at` bigint not null default 0,
    `created_by` varchar(64) not null default '',
    `updated_at` bigint not null default 0,
    `updated_by` varchar(64) not null default '',
    PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE `builtin_cate` (
    `id` bigint unsigned not null auto_increment,
    `name` varchar(191) not null,
    `user_id` bigint not null default 0,
    PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE `notify_tpl` (
    `id` bigint unsigned not null auto_increment,
    `channel` varchar(32) not null,
    `name` varchar(255) not null,
    `content` text not null,
    PRIMARY KEY (`id`),
    UNIQUE KEY (`channel`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE `sso_config` (
    `id` bigint unsigned not null auto_increment,
    `name` varchar(191) not null,
    `content` text not null,
    PRIMARY KEY (`id`),
    UNIQUE KEY (`name`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
69
cmd/alert/main.go
Normal file
@@ -0,0 +1,69 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/ccfos/nightingale/v6/alert"
	"github.com/ccfos/nightingale/v6/pkg/osx"
	"github.com/ccfos/nightingale/v6/pkg/version"

	"github.com/toolkits/pkg/runner"
)

var (
	showVersion = flag.Bool("version", false, "Show version.")
	configDir   = flag.String("configs", osx.GetEnv("N9E_CONFIGS", "etc"), "Specify configuration directory.(env:N9E_CONFIGS)")
	cryptoKey   = flag.String("crypto-key", "", "Specify the secret key for configuration file field encryption.")
)

func main() {
	flag.Parse()

	if *showVersion {
		fmt.Println(version.Version)
		os.Exit(0)
	}

	printEnv()

	cleanFunc, err := alert.Initialize(*configDir, *cryptoKey)
	if err != nil {
		log.Fatalln("failed to initialize:", err)
	}

	code := 1
	sc := make(chan os.Signal, 1)
	signal.Notify(sc, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)

EXIT:
	for {
		sig := <-sc
		fmt.Println("received signal:", sig.String())
		switch sig {
		case syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT:
			code = 0
			break EXIT
		case syscall.SIGHUP:
			// reload configuration?
		default:
			break EXIT
		}
	}

	cleanFunc()
	fmt.Println("process exited")
	os.Exit(code)
}

func printEnv() {
	runner.Init()
	fmt.Println("runner.cwd:", runner.Cwd)
	fmt.Println("runner.hostname:", runner.Hostname)
	fmt.Println("runner.fd_limits:", runner.FdLimits())
	fmt.Println("runner.vm_limits:", runner.VMLimits())
}
69
cmd/center/main.go
Normal file
@@ -0,0 +1,69 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/ccfos/nightingale/v6/center"
	"github.com/ccfos/nightingale/v6/pkg/osx"
	"github.com/ccfos/nightingale/v6/pkg/version"

	"github.com/toolkits/pkg/runner"
)

var (
	showVersion = flag.Bool("version", false, "Show version.")
	configDir   = flag.String("configs", osx.GetEnv("N9E_CONFIGS", "etc"), "Specify configuration directory.(env:N9E_CONFIGS)")
	cryptoKey   = flag.String("crypto-key", "", "Specify the secret key for configuration file field encryption.")
)

func main() {
	flag.Parse()

	if *showVersion {
		fmt.Println(version.Version)
		os.Exit(0)
	}

	printEnv()

	cleanFunc, err := center.Initialize(*configDir, *cryptoKey)
	if err != nil {
		log.Fatalln("failed to initialize:", err)
	}

	code := 1
	sc := make(chan os.Signal, 1)
	signal.Notify(sc, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)

EXIT:
	for {
		sig := <-sc
		fmt.Println("received signal:", sig.String())
		switch sig {
		case syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT:
			code = 0
			break EXIT
		case syscall.SIGHUP:
			// reload configuration?
		default:
			break EXIT
		}
	}

	cleanFunc()
	fmt.Println("process exited")
	os.Exit(code)
}

func printEnv() {
	runner.Init()
	fmt.Println("runner.cwd:", runner.Cwd)
	fmt.Println("runner.hostname:", runner.Hostname)
	fmt.Println("runner.fd_limits:", runner.FdLimits())
	fmt.Println("runner.vm_limits:", runner.VMLimits())
}
40
cmd/cli/main.go
Normal file
@@ -0,0 +1,40 @@
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/ccfos/nightingale/v6/cli"
	"github.com/ccfos/nightingale/v6/pkg/version"
)

var (
	upgrade     = flag.Bool("upgrade", false, "Upgrade the database.")
	showVersion = flag.Bool("version", false, "Show version.")
	configFile  = flag.String("config", "", "Specify webapi.conf of v5.x version")
)

func main() {
	flag.Parse()

	if *showVersion {
		fmt.Println(version.Version)
		os.Exit(0)
	}

	if *upgrade {
		if *configFile == "" {
			fmt.Println("Please specify the configuration file.")
			os.Exit(1)
		}

		err := cli.Upgrade(*configFile)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("Upgrade succeeded.")
		os.Exit(0)
	}
}
69
cmd/pushgw/main.go
Normal file
@@ -0,0 +1,69 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/ccfos/nightingale/v6/pkg/osx"
	"github.com/ccfos/nightingale/v6/pkg/version"
	"github.com/ccfos/nightingale/v6/pushgw"

	"github.com/toolkits/pkg/runner"
)

var (
	showVersion = flag.Bool("version", false, "Show version.")
	configDir   = flag.String("configs", osx.GetEnv("N9E_CONFIGS", "etc"), "Specify configuration directory.(env:N9E_CONFIGS)")
	cryptoKey   = flag.String("crypto-key", "", "Specify the secret key for configuration file field encryption.")
)

func main() {
	flag.Parse()

	if *showVersion {
		fmt.Println(version.Version)
		os.Exit(0)
	}

	printEnv()

	cleanFunc, err := pushgw.Initialize(*configDir, *cryptoKey)
	if err != nil {
		log.Fatalln("failed to initialize:", err)
	}

	code := 1
	sc := make(chan os.Signal, 1)
	signal.Notify(sc, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)

EXIT:
	for {
		sig := <-sc
		fmt.Println("received signal:", sig.String())
		switch sig {
		case syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT:
			code = 0
			break EXIT
		case syscall.SIGHUP:
			// reload configuration?
		default:
			break EXIT
		}
	}

	cleanFunc()
	fmt.Println("process exited")
	os.Exit(code)
}

func printEnv() {
	runner.Init()
	fmt.Println("runner.cwd:", runner.Cwd)
	fmt.Println("runner.hostname:", runner.Hostname)
	fmt.Println("runner.fd_limits:", runner.FdLimits())
	fmt.Println("runner.vm_limits:", runner.VMLimits())
}
76
conf/conf.go
Normal file
@@ -0,0 +1,76 @@
package conf

import (
	"fmt"
	"os"
	"strings"

	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/center/cconf"
	"github.com/ccfos/nightingale/v6/pkg/cfg"
	"github.com/ccfos/nightingale/v6/pkg/httpx"
	"github.com/ccfos/nightingale/v6/pkg/logx"
	"github.com/ccfos/nightingale/v6/pkg/ormx"
	"github.com/ccfos/nightingale/v6/pushgw/pconf"
	"github.com/ccfos/nightingale/v6/storage"
)

type ConfigType struct {
	Global GlobalConfig
	Log    logx.Config
	HTTP   httpx.Config
	DB     ormx.DBConfig
	Redis  storage.RedisConfig

	Pushgw pconf.Pushgw
	Alert  aconf.Alert
	Center cconf.Center
}

type GlobalConfig struct {
	RunMode string
}

func InitConfig(configDir, cryptoKey string) (*ConfigType, error) {
	var config = new(ConfigType)

	if err := cfg.LoadConfigByDir(configDir, config); err != nil {
		return nil, fmt.Errorf("failed to load configs of directory: %s error: %s", configDir, err)
	}

	config.Pushgw.PreCheck()
	config.Alert.PreCheck()
	config.Center.PreCheck()

	err := decryptConfig(config, cryptoKey)
	if err != nil {
		return nil, err
	}

	if config.Alert.Heartbeat.IP == "" {
		// auto detect
		// config.Alert.Heartbeat.IP = fmt.Sprint(GetOutboundIP())
		// Auto-detecting the IP is error-prone in some environments,
		// so hostname+pid is used as the unique identifier instead.

		hostname, err := os.Hostname()
		if err != nil {
			fmt.Println("failed to get hostname:", err)
			os.Exit(1)
		}

		if strings.Contains(hostname, "localhost") {
			fmt.Println("Warning! hostname contains substring localhost, setting a more unique hostname is recommended")
		}

		config.Alert.Heartbeat.IP = hostname

		// if config.Alert.Heartbeat.IP == "" {
		// 	fmt.Println("heartbeat ip auto got is blank")
		// 	os.Exit(1)
		// }
	}

	config.Alert.Heartbeat.Endpoint = fmt.Sprintf("%s:%d", config.Alert.Heartbeat.IP, config.HTTP.Port)

	return config, nil
}
62
conf/crypto.go
Normal file
@@ -0,0 +1,62 @@
package conf

import (
	"fmt"

	"github.com/ccfos/nightingale/v6/pkg/secu"
)

func decryptConfig(config *ConfigType, cryptoKey string) error {
	decryptDsn, err := secu.DealWithDecrypt(config.DB.DSN, cryptoKey)
	if err != nil {
		return fmt.Errorf("failed to decrypt the db dsn: %s", err)
	}

	config.DB.DSN = decryptDsn

	for k := range config.HTTP.Alert.BasicAuth {
		decryptPwd, err := secu.DealWithDecrypt(config.HTTP.Alert.BasicAuth[k], cryptoKey)
		if err != nil {
			return fmt.Errorf("failed to decrypt http basic auth password: %s", err)
		}

		config.HTTP.Alert.BasicAuth[k] = decryptPwd
	}

	for k := range config.HTTP.Pushgw.BasicAuth {
		decryptPwd, err := secu.DealWithDecrypt(config.HTTP.Pushgw.BasicAuth[k], cryptoKey)
		if err != nil {
			return fmt.Errorf("failed to decrypt http basic auth password: %s", err)
		}

		config.HTTP.Pushgw.BasicAuth[k] = decryptPwd
	}

	for k := range config.HTTP.Heartbeat.BasicAuth {
		decryptPwd, err := secu.DealWithDecrypt(config.HTTP.Heartbeat.BasicAuth[k], cryptoKey)
		if err != nil {
			return fmt.Errorf("failed to decrypt http basic auth password: %s", err)
		}

		config.HTTP.Heartbeat.BasicAuth[k] = decryptPwd
	}

	for k := range config.HTTP.Service.BasicAuth {
		decryptPwd, err := secu.DealWithDecrypt(config.HTTP.Service.BasicAuth[k], cryptoKey)
		if err != nil {
			return fmt.Errorf("failed to decrypt http basic auth password: %s", err)
		}
		config.HTTP.Service.BasicAuth[k] = decryptPwd
	}

	for i, v := range config.Pushgw.Writers {
		decryptWriterPwd, err := secu.DealWithDecrypt(v.BasicAuthPass, cryptoKey)
		if err != nil {
			return fmt.Errorf("failed to decrypt writer basic auth password: %s", err)
		}

		config.Pushgw.Writers[i].BasicAuthPass = decryptWriterPwd
	}

	return nil
}
@@ -0,0 +1,7 @@
## Active Contributors

- [xiaoziv](https://github.com/xiaoziv)
- [tanxiao1990](https://github.com/tanxiao1990)
- [bbaobelief](https://github.com/bbaobelief)
- [freedomkk-qfeng](https://github.com/freedomkk-qfeng)
- [lsy1990](https://github.com/lsy1990)
@@ -0,0 +1,5 @@
## Committers

- [YeningQin](https://github.com/710leo)
- [FeiKong](https://github.com/kongfei605)
- [XiaqingDai](https://github.com/jsers)
@@ -1,29 +1,36 @@
# Nightingale Open-Source Project and Community Governance Structure (Draft)
[Nightingale](https://github.com/ccfos/nightingale "Nightingale") is an open-source cloud-native monitoring system designed and developed at Didi. Since it was open-sourced in March 2020, its strong product design, flexible architecture, and clear positioning have quickly made it the most active enterprise-grade cloud-native monitoring solution in China. As of this writing (August 2022), more than **70** releases have been published on [Github](https://github.com/ccfos/nightingale "Github"), earning **5K+** stars and **80+** code contributors. Rapid iteration keeps growing the user base across a wide range of industries.

## Community Structure
Going a step further, on May 11, 2022 Nightingale was formally donated to the CCF Open Source Development Committee ([CCF ODC](https://www.ccf.org.cn/kyfzwyh/ "CCF ODC")), becoming the first open-source project accepted by CCF ODC after its founding.

### User
For an open-source project to stay vital, it needs an open governance structure and a steady stream of participating developers. Having joined the CCF open-source family, Nightingale can, with the Federation's support, further address the needs of cloud-native, observability, and localization trends, establish an open and neutral governance structure, and build a more professional and vibrant developer community.

> Any individual, company, or organization is welcome to use Nightingale, actively report bugs, submit feature requests, and help one another; we recommend [github issue](https://github.com/ccfos/nightingale/issues) for tracking bugs and managing requirements.
**Today we formally publish the governance structure of the Nightingale open-source community and announce the related appointments and community honors. We look forward to walking the open-source road together.**

Community users who register their usage in **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)** and share their experience with Nightingale are automatically added to the **[END USERS](./end-users.md)** list and receive the community's **VIP Support**.
# Nightingale Open-Source Community Structure

### Contributor
### User|用户

> Every user is welcome to participate in and contribute to the Nightingale open-source community, including but not limited to the following ways:
> Any individual, company, or organization is welcome to use Nightingale, actively report bugs, submit feature requests, and help one another; we recommend [Github Issue](https://github.com/ccfos/nightingale/issues "Github Issue") for tracking bugs and managing requirements.

1. Take an active part in discussions in [github issue](https://github.com/ccfos/nightingale/issues) and in community activities;
Community users who register their usage in **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897 "Who is Using Nightingale")** and share their experience with Nightingale are automatically added to the **[END USERS](https://github.com/ccfos/nightingale/blob/main/doc/end-users.md "END USERS")** file and receive the community's **VIP Support**.

### Contributor|贡献者

> Every user is welcome to participate in and contribute to the Nightingale open-source community, including but not limited to the following ways:

1. Take an active part in discussions in [Github Issue](https://github.com/ccfos/nightingale/issues "Github Issue") and in community activities;
1. Submit code patches;
1. Translate, revise, extend, and improve the [documentation](https://n9e.github.io);
1. Translate, revise, extend, and improve the [documentation](https://n9e.github.io "documentation");
1. Share your experience with Nightingale and actively advocate for it;
1. Offer suggestions / criticism;

Contributors who get **5** PRs merged into [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) within a year, or whose other contributions are unanimously recognized by the **Project Management Committee**, are automatically added to the **[ACTIVE CONTRIBUTORS](./active-contributors.md)** list and receive an electronic certificate issued by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)**, along with certain rights and benefits in the Nightingale open-source community.
Contributors who get **5** PRs merged into [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale "CCFOS/NIGHTINGALE") within a year, or whose other contributions are unanimously recognized by the **Project Management Committee**, are automatically added to the **[ACTIVE CONTRIBUTORS](https://github.com/ccfos/nightingale/blob/main/doc/active-contributors.md "ACTIVE CONTRIBUTORS")** list and receive a certificate issued by the Nightingale open-source community, along with certain rights and benefits in the community.

Every Contributor who has had a PR merged into [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale "CCFOS/NIGHTINGALE"), or who has made an important contribution, is permanently recorded in the [CONTRIBUTORS](https://github.com/ccfos/nightingale/blob/main/doc/contributors.md "CONTRIBUTORS") list.

### Committer
### Committer|提交者

> A Committer is a contributor with write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) code repository; Committers hold email addresses with the ccf.org.cn suffix (pending launch). In principle a Committer can decide on their own whether a code patch may be merged into the Nightingale repository, but the Project Management Committee has the final say.
> A Committer is a contributor with write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale "CCFOS/NIGHTINGALE") code repository. In principle a Committer can decide on their own whether a code patch may be merged into the Nightingale repository, but the Project Management Committee has the final say.

Committers take on one or more of the following responsibilities:
- Respond actively to Issues;
@@ -31,44 +38,43 @@ Committers take on one or more of the following responsibilities:
- Attend the regular developer meetings and actively discuss project planning and technical proposals;
- Represent the Nightingale open-source community at relevant technical conferences and give talks;

Committers are recorded and published in the **[COMMITTERS](./committers.md)** list, receive an electronic certificate issued by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)**, and enjoy the various rights and benefits of the Nightingale open-source community.
Committers are recorded and published in the **[COMMITTERS](https://github.com/ccfos/nightingale/blob/main/doc/committers.md "COMMITTERS")** list, receive a certificate issued by the Nightingale open-source community, and enjoy the various rights and benefits of the community.


### PMC Member
### PMC|项目管委会

> PMC members are elected from among the Contributors or Committers; they have write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) code repository, hold email addresses with the ccf.org.cn suffix (pending launch), have voting rights on Nightingale community affairs, and may nominate Committer candidates. The PMC, as an entity, bears full responsibility for the development of the project. PMC members are recorded and published in the **[PMC](./pmc.md)** list.
> The PMC (Project Management Committee), as an entity, manages and leads the Nightingale project and bears full responsibility for its development. PMC matters are recorded and published in the [PMC](https://github.com/ccfos/nightingale/blob/main/doc/pmc.md "PMC") file.

### PMC Chair
- PMC Members are elected from among the Contributors or Committers; they have write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale "CCFOS/NIGHTINGALE") code repository, voting rights on Nightingale community affairs, and the right to nominate Committer candidates.
- The PMC Chair is elected by vote from among the PMC members. The Chair is the communication bridge between **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/ "CCF ODC")** and the PMC, carrying out specific project-management duties.

> The PMC Chair is appointed by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)** from among the PMC members. The PMC, as a unified entity, manages and leads the Nightingale project. The Chair is the communication bridge between CCF ODC and the PMC, carrying out specific project-management duties.

## Communication
## Communication|沟通机制
1. We recommend the mailing list for feedback and suggestions (to be announced);
2. We recommend [github issue](https://github.com/ccfos/nightingale/issues) for tracking bugs and managing requirements;
3. We recommend [github milestone](https://github.com/ccfos/nightingale/milestones) for managing project progress and planning;
2. We recommend [Github Issue](https://github.com/ccfos/nightingale/issues "Github Issue") for tracking bugs and managing requirements;
3. We recommend [Github Milestone](https://github.com/ccfos/nightingale/milestones "Github Milestone") for managing project progress and planning;
4. We recommend Tencent Meeting for the regular project meetings (meeting ID to be announced);

## Documentation
1. We recommend [github pages](https://n9e.github.io) for building up documentation;
2. We recommend the [gitlink wiki](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq) for building up the FAQ;
## Documentation|文档
1. We recommend [Github Pages](https://n9e.github.io "Github Pages") for building up documentation;
2. We recommend the [Gitlink Wiki](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq "Gitlink Wiki") for building up the FAQ;


## Operation
## Operation|运营机制

1. We regularly hold meetings among users, contributors, and PMC members to discuss development goals, plans, and progress, as well as the merit and priority of requirements;
2. We regularly organize meetups (online & offline) to foster a good environment for user exchange and record the results on the documentation site;
3. We regularly hold the Nightingale developer conference to share best user stories, align on annual development goals and plans, and discuss new technical directions;
3. We regularly hold the Nightingale developer conference to share the [best user story](https://n9e.github.io/docs/prologue/share/ "best user story"), align on annual development goals and plans, and discuss new technical directions;

## Philosophy
## Philosophy|社区指导原则

**Respect, recognize, and record the work of every contributor.**
> Respect, recognize, and record the work of every contributor.

## Principles for Asking Questions

Following the principle of **respecting, recognizing, and recording every contributor's work**, we encourage **asking questions efficiently**; this respects developers' time and adds to the community's accumulated knowledge:

1. Before asking, consult the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq);
2. Before asking, search [github issue](https://github.com/ccfos/nightingale/issues);
3. We prefer questions asked via github issue: [report a bug here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml) | [request a feature here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md);
1. Before asking, consult the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq "FAQ");
2. Before asking, search [Github Issues](https://github.com/ccfos/nightingale/issues "Github Issue");
3. We prefer questions asked via [Github Issue](https://github.com/ccfos/nightingale/issues "Github Issue"): [report a bug here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml "report a bug here") | [request a feature here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md "request a feature here");

Finally, we recommend joining the WeChat group for open-ended questions (first add [UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) as a friend, with the note "夜莺加群+name+company"; the developer team and professional, helpful members will answer questions there);
Finally, we recommend joining the WeChat group for open-ended questions (first add [UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg "UlricGO") as a friend, with the note "夜莺加群+name+company"; the developer team and professional, helpful members will answer questions there).
Some files were not shown because too many files have changed in this diff.