Compare commits

...

132 Commits

Author SHA1 Message Date
ning
6ac70925ed optimize log 2025-01-22 20:12:20 +08:00
kongfei605
1915701ce0 chore: default interval for cloudwatch collecting (#2418) 2025-01-07 10:35:53 +08:00
kongfei605
7fd9cd5a3d chore: update tpl and readme for cloudwatch (#2417) 2025-01-06 19:13:45 +08:00
ning
0e2f386419 refactor: change tdengine 2025-01-06 17:29:58 +08:00
ning
b96b08fb9e refactor: change tdengine 2025-01-06 17:22:22 +08:00
ning
eebd1021de refactor: change tdengine 2025-01-06 17:04:46 +08:00
ning
ef61a4cfa7 refactor: change tdengine 2025-01-06 16:50:29 +08:00
ning
2563d2891d feat: add cron pattern validation for alert rule 2025-01-06 11:45:43 +08:00
ning
6ae8ef0d9f feat: add random delay before starting alert rule worker 2025-01-06 11:21:26 +08:00
Yening Qin
38adbefe9c feat: dashboard support add annotations (#2416) 2025-01-05 17:46:17 +08:00
Yening Qin
3f5e0c056d feat: support elasticsearch alert (#2400)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2025-01-05 17:44:17 +08:00
flashbo
b0131a3799 feat: support setting builtin component to disabled (#2406) 2025-01-02 11:11:54 +08:00
Yening Qin
cbb03a7c63 refactor: optimize enum type for alert rule with var
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-12-28 22:18:39 +08:00
ning
080d412124 docs: add sql 2024-12-26 14:29:50 +08:00
ning
752e02f32d docs: add sql 2024-12-26 14:26:15 +08:00
CRISPpp
e05d59d72a refactor: update target update group part (#2388) 2024-12-25 15:35:19 +08:00
Yening Qin
854e30551a fix: docker compose of postgres init error (#2370) (#2392)
Co-authored-by: CRISPpp <78430796+CRISPpp@users.noreply.github.com>
2024-12-24 15:55:03 +08:00
Yening Qin
0b6dc5beba refactor get user from context (#2391)
Co-authored-by: flashbo <36443248+lwb0214@users.noreply.github.com>
2024-12-24 15:52:55 +08:00
ning
8685a95fa5 Merge branch 'main' of github.com:ccfos/nightingale 2024-12-24 15:40:22 +08:00
ning
7ca7fd8d66 refactor: event set AnnotationsJSON 2024-12-24 15:40:08 +08:00
Yening Qin
1b5dc81b6c fix: the dedup logic when adding tags to target (#2386)
Co-authored-by: flashbo <36443248+lwb0214@users.noreply.github.com>
2024-12-24 11:48:58 +08:00
Ulric Qin
04495f0892 set ignore_host to true 2024-12-20 18:10:48 +08:00
Yening Qin
8158ce1b90 refactor: global webhook add env proxy (#2375)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-12-20 14:21:47 +08:00
Yening Qin
a43952e168 refactor: es_index_pattern add cross_cluster_enabled (#2372) 2024-12-19 14:12:27 +08:00
Yening Qin
5702fc81d0 refactor: group delete check (#2368)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-12-18 17:00:18 +08:00
Xu Bin
7cc65a2ca7 refactor: add id for configsGetAll (#2361) 2024-12-16 20:38:53 +08:00
ning
7bb6c6541a chore: update gomod 2024-12-15 19:42:00 +08:00
ning
8b4cfe65e3 Merge branch 'main' of github.com:ccfos/nightingale 2024-12-13 10:56:27 +08:00
ning
7227de8c22 docs: update migrate.sql 2024-12-13 10:56:15 +08:00
CRISPpp
069e267af8 docs: update sqlite.sql (#2356) 2024-12-13 10:18:59 +08:00
ning
7c5c9a95c3 refactor: change sqlite driver 2024-12-12 21:32:51 +08:00
ning
e3da7f344b docs: update goreleaser.yaml 2024-12-12 21:12:57 +08:00
Yening Qin
dd741a177f docs: rename es integration 2024-12-12 19:27:36 +08:00
ning
4fdd25f020 docs: set HTTP.APIForService.Enable to false 2024-12-12 19:24:07 +08:00
Yening Qin
62350bfbc6 fix: alert rule with var (#2357) 2024-12-12 16:59:09 +08:00
CRISPpp
5ee1baaf07 feat: add config dir and config file check (#2350)
Co-authored-by: Yening Qin <710leo@gmail.com>
2024-12-12 13:24:11 +08:00
Xu Bin
fa12889f06 fix: alert rule check with var when not exact match (#2354) 2024-12-12 11:04:48 +08:00
Yening Qin
39306a5bf0 refactor: optimize webhook send (#2352) 2024-12-11 17:51:20 +08:00
ning
0aea38e564 refactor: write queue limit 2024-12-10 21:03:55 +08:00
CRISPpp
45e9253b2a feat: add global metric write rate control (#2347) 2024-12-10 20:43:02 +08:00
CRISPpp
9385ca9931 feat: add pre check for deleting busi_group (#2346) 2024-12-09 20:32:46 +08:00
ning
fdd3d14871 docs: change default db type to sqlite 2024-12-06 21:04:10 +08:00
Yening Qin
e890034c19 feat: auto init db (#2345)
Co-authored-by: CRISPpp <78430796+CRISPpp@users.noreply.github.com>
2024-12-06 20:32:17 +08:00
Yening Qin
3aaab9e6ad fix: event prom eval interval (#2343) 2024-12-06 20:24:49 +08:00
CRISPpp
7f7d707cfc fix: role_operation abnormal count (#2338) 2024-12-06 16:31:47 +08:00
Xu Bin
98402e9f8a fix: quotation mark for alert rule var (#2339) 2024-12-06 16:07:47 +08:00
Xu Bin
017094fd78 fix: var support for aggregate function (#2334) 2024-12-06 11:57:51 +08:00
Yening Qin
8b6b896362 feat: redis support miniredis type (#2337)
Co-authored-by: CRISPpp <78430796+CRISPpp@users.noreply.github.com>
2024-12-06 10:46:05 +08:00
ning
acaa00cfb6 refactor: migrate add more log 2024-12-05 17:55:27 +08:00
flashbo
87f3d8595d fix: targets filter logic (#2333) 2024-12-05 14:31:57 +08:00
flashbo
42791a374d feat: targets support sorting by time (#2331) 2024-12-05 14:20:30 +08:00
kongfei605
3855c25805 chore: update dashboards for mongodb (#2332) 2024-12-04 16:19:21 +08:00
Xu Bin
10ec0ccbd1 fix: alert rule cron eval (#2330) 2024-12-03 16:58:30 +08:00
flashbo
94cf304222 refactor: improve target bind bg logic (#2327) 2024-12-03 15:00:25 +08:00
ning
994de4635a docs: update n9e.sql 2024-12-02 20:18:32 +08:00
ning
9a0013a406 docs: update n9e.sql 2024-12-02 09:16:50 +08:00
CRISPpp
6dcd5dd01e docs: complete initialization n9e.sql (#2325) 2024-11-29 18:52:00 +08:00
ning
70126e3aec refactor: sync.map clear 2024-11-29 18:23:45 +08:00
ning
767482d358 refactor: optimize prom query 2024-11-28 15:58:31 +08:00
YangHgRi
9a46106cc0 refactor: specify parameter type in function to improve type safety and clarity (#2317) 2024-11-26 20:55:49 +08:00
Yening Qin
da9ea67cee feat: alert rule annotation support prom query template func (#2314)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-22 16:58:00 +08:00
6666walnut
c13ecd780b fix: get busi-groups api err (#2313)
Co-authored-by: wangjing17 <wangjing17@foundersc.com>
2024-11-22 16:55:32 +08:00
ning
cab37c796a refactor: target_busi_group set utf8mb4_general_ci 2024-11-22 13:28:36 +08:00
ning
078578772b refactor: change builtin component logo type 2024-11-21 20:22:37 +08:00
Yening Qin
31883ec844 refactor: alert rule support cron (#2309)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-21 13:14:19 +08:00
ning
6100cd084a refactor: event recovery notify 2024-11-21 11:00:30 +08:00
Yening Qin
b82e260d65 feat: alert rule support var (#2307)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-20 20:34:33 +08:00
ning
3983386af3 fix: get notify-record api err 2024-11-20 20:02:38 +08:00
Ulric Qin
83f2054062 lock version of prom 2024-11-20 17:49:21 +08:00
Ulric Qin
83e0b3cb98 Merge branch 'main' of github.com:ccfos/nightingale 2024-11-20 17:23:29 +08:00
Ulric Qin
f6bfa17e2e update init sql 2024-11-20 17:23:17 +08:00
flashbo
3d8019b738 refactor: event notify record (#2296) 2024-11-19 20:25:17 +08:00
Ulric Qin
ee1be71be6 Merge branch 'main' of github.com:ccfos/nightingale 2024-11-19 19:47:14 +08:00
Ulric Qin
7f2fb459bb update mysql and redis dashboard 2024-11-19 19:47:00 +08:00
ning
fde6a9c75e refactor: change is_recovered type 2024-11-19 19:23:21 +08:00
Ulric Qin
a2b506e263 add input.redis in docker-compose 2024-11-19 17:28:56 +08:00
Ulric Qin
30024a4951 add redis dashboard 2024-11-19 17:26:33 +08:00
Ulric Qin
2c3996812a remove global labels: source=categraf 2024-11-19 17:24:10 +08:00
Ulric Qin
51d35900f2 add input.mysql in docker-compose/etc-categraf 2024-11-19 17:05:46 +08:00
Ulric Qin
852fd2ea6e add field is_recovered when call ibex 2024-11-19 16:49:28 +08:00
Ulric Qin
e1a57217ab update mysql dashboard 2024-11-19 11:28:22 +08:00
Ulric Qin
1e7dad1a67 Merge branch 'main' of github.com:ccfos/nightingale 2024-11-19 11:26:40 +08:00
Ulric Qin
534e40ad63 add mysql dashboard 2024-11-19 11:26:27 +08:00
CRISPpp
15daa3826c feat: add console log with n9e address and root username/passwd when init root (#2302) 2024-11-19 10:31:41 +08:00
ning
d5efb5b6d4 update go.mod 2024-11-19 10:29:39 +08:00
ning
7ebd776881 docs: update doris dashboard tpl 2024-11-18 20:03:29 +08:00
Ulric Qin
0e5cda1cee support proxy when call center 2024-11-18 16:04:15 +08:00
Ulric Qin
64dad19377 Merge branch 'main' of github.com:ccfos/nightingale 2024-11-18 16:01:12 +08:00
Ulric Qin
48f199f8f5 sender support ProxyFromEnvironment 2024-11-18 16:00:56 +08:00
Yening Qin
f7e4df7415 refactor: self monitor metric (#2285)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-18 11:46:34 +08:00
ning
37fe01ab54 docs: update workflows 2024-11-15 17:45:17 +08:00
ning
cbfe661bce docs: update go version in mod 2024-11-15 17:35:54 +08:00
Yening Qin
890c12f0d4 feat: alert rule query add unit (#2299)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-15 17:09:41 +08:00
Yening Qin
643c6c997c fix: proxy parse url (#2297) 2024-11-15 16:37:01 +08:00
robin
b201836b40 fix: create user info for notify_tpl (#2292) 2024-11-15 11:57:23 +08:00
robin
b5eced1540 docs: add doris template (#2260) 2024-11-14 22:50:45 +08:00
Yening Qin
a13004eab7 feat: allow override global webhook (#2257)
Co-authored-by: flashbo <36443248+lwb0214@users.noreply.github.com>
2024-11-14 22:35:23 +08:00
Yening Qin
a0c56548e5 refactor: migrate label (#2293) 2024-11-14 22:25:20 +08:00
ning
e3d97386a8 refactor: dash tpl uuid 2024-11-14 22:14:18 +08:00
ning
051b0ca045 Merge branch 'main' of github.com:ccfos/nightingale 2024-11-14 19:18:20 +08:00
ning
2941ced011 fix: import builtin board 2024-11-14 19:15:02 +08:00
Ulric Qin
97d6908edd fix mongodb dashboard 2024-11-14 18:28:52 +08:00
710leo
c7117b9461 fix: proxy api parse url 2024-11-13 23:00:49 +08:00
Yening Qin
78417b1d5b refactor: optimize rule datasource set (#2288)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-11-13 20:28:50 +08:00
Yening Qin
79f3404810 refactor event notify (#2287) 2024-11-13 19:50:11 +08:00
ning
81e51c60eb refactor: subscribe add check 2024-11-12 20:26:56 +08:00
shardingHe
af9cd55ca5 docs: add metrics config for oracle (#2276)
Co-authored-by: shardingHe <wangzihe@flashcat.cloud>
2024-11-12 13:59:35 +08:00
710leo
d4afdb2b6e refactor: change log 2024-11-06 22:34:30 +08:00
flashbo
2befc8b0f1 refactor: migrate bg label (#2269) 2024-11-06 21:48:29 +08:00
Yening Qin
14fd2eb26d refactor: update tdengine query (#2270) 2024-11-06 20:27:21 +08:00
ning
0a938518d7 refactor: target_busi_group table name 2024-11-06 13:00:35 +08:00
ning
0eed5afa7e refactor: update target_busi_group character 2024-11-05 14:46:41 +08:00
Yening Qin
f82eaf0a1f refactor: optimize tdengine (#2262) 2024-11-04 17:33:18 +08:00
ning
f03278d68d refactor: append tags 2024-11-04 16:43:39 +08:00
shardingHe
7d1e143f60 docs: sync configurations for bind & ldap (#2253)
Co-authored-by: shardingHe <wangzihe@flashcat.cloud>
2024-11-02 16:49:49 +08:00
ning
078a0c7b1c refactor: prom query log 2024-11-01 15:28:23 +08:00
flashbo
d9cac65a18 refactor: improve prom_rule import (#2251) 2024-10-30 14:28:00 +08:00
ning
dd025ca87c refactor: migrate db and host_miss tag append 2024-10-30 14:20:16 +08:00
ning
04734b8940 Merge branch 'main' of github.com:ccfos/nightingale 2024-10-29 12:09:50 +08:00
ning
bf7bcf4196 docs: update notify tpl 2024-10-29 12:09:26 +08:00
ulricqin
16195abb89 Update docker-compose.yaml 2024-10-29 12:08:40 +08:00
ning
3f4891d65d refactor: event queue push 2024-10-28 20:51:21 +08:00
Yening Qin
102549c6a1 refactor: webhook send event (#2248)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-10-28 20:33:29 +08:00
Yening Qin
5213b1d7f1 refactor: es update config (#2247)
Co-authored-by: flashbo <36443248+lwb0214@users.noreply.github.com>
2024-10-28 20:32:45 +08:00
Yening Qin
24de97fb1e refactor: update default engine name (#2245) 2024-10-28 15:50:52 +08:00
ning
9c2cf679e0 refactor: center set default engine_name 2024-10-28 13:37:55 +08:00
Yening Qin
2aa4941010 refactor: optimize recover notify (#2242)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-10-25 16:53:44 +08:00
flashbo
a812f14442 refactor: record notify for callback (#2231) 2024-10-25 16:50:12 +08:00
flashbo
4fb7e8e2b5 refactor: fill group names in target (#2241) 2024-10-25 16:30:09 +08:00
ulricqin
113ad67104 Update README.md 2024-10-25 12:10:28 +08:00
flashbo
49d843540a refactor: add ExtraInfoMap in alert event (#2240) 2024-10-25 11:03:56 +08:00
Yening Qin
21f0e3310f fix: event relabel when target_label is blank (#2228)
Co-authored-by: Xu Bin <140785332+Reditiny@users.noreply.github.com>
2024-10-24 14:09:41 +08:00
175 changed files with 21945 additions and 2381 deletions

View File

@@ -5,7 +5,7 @@ on:
tags:
- 'v*'
env:
GO_VERSION: 1.18
GO_VERSION: 1.23
jobs:
goreleaser:

.gitignore vendored
View File

@@ -9,6 +9,7 @@
*.o
*.a
*.so
*.db
*.sw[po]
*.tar.gz
*.[568vq]
@@ -64,3 +65,5 @@ queries.active
/n9e-*
n9e.sql
!/datasource

View File

@@ -38,6 +38,7 @@
- 👉 [Documentation Center](https://flashcat.cloud/docs/) | [Download Center](https://flashcat.cloud/download/nightingale/)
- ❤️ [Report a Bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=&projects=&template=question.yml)
- For a faster browsing experience, the documentation and download sites above are hosted on [FlashcatCloud](https://flashcat.cloud)
- 💡 The frontend and backend codebases are separate; frontend repo: [https://github.com/n9e/fe](https://github.com/n9e/fe)
## Features

View File

@@ -4,6 +4,8 @@ import (
"context"
"fmt"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/alert/aconf"
"github.com/ccfos/nightingale/v6/alert/astats"
"github.com/ccfos/nightingale/v6/alert/dispatch"
@@ -21,12 +23,11 @@ import (
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/ccfos/nightingale/v6/pkg/httpx"
"github.com/ccfos/nightingale/v6/pkg/logx"
"github.com/ccfos/nightingale/v6/pkg/macros"
"github.com/ccfos/nightingale/v6/prom"
"github.com/ccfos/nightingale/v6/pushgw/pconf"
"github.com/ccfos/nightingale/v6/pushgw/writer"
"github.com/ccfos/nightingale/v6/storage"
"github.com/ccfos/nightingale/v6/tdengine"
"github.com/flashcatcloud/ibex/src/cmd/ibex"
)
@@ -65,11 +66,13 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
configCvalCache := memsto.NewCvalCache(ctx, syncStats)
promClients := prom.NewPromClient(ctx)
tdengineClients := tdengine.NewTdengineClient(ctx, config.Alert.Heartbeat)
dispatch.InitRegisterQueryFunc(promClients)
externalProcessors := process.NewExternalProcessors()
Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, taskTplsCache, dsCache, ctx, promClients, tdengineClients, userCache, userGroupCache)
macros.RegisterMacro(macros.MacroInVain)
dscache.Init(ctx, false)
Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, taskTplsCache, dsCache, ctx, promClients, userCache, userGroupCache)
r := httpx.GinEngine(config.Global.RunMode, config.HTTP,
configCvalCache.PrintBodyPaths, configCvalCache.PrintAccessLog)
@@ -92,7 +95,7 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
func Start(alertc aconf.Alert, pushgwc pconf.Pushgw, syncStats *memsto.Stats, alertStats *astats.Stats, externalProcessors *process.ExternalProcessorsType, targetCache *memsto.TargetCacheType, busiGroupCache *memsto.BusiGroupCacheType,
alertMuteCache *memsto.AlertMuteCacheType, alertRuleCache *memsto.AlertRuleCacheType, notifyConfigCache *memsto.NotifyConfigCacheType, taskTplsCache *memsto.TaskTplCache, datasourceCache *memsto.DatasourceCacheType, ctx *ctx.Context,
promClients *prom.PromClientMap, tdendgineClients *tdengine.TdengineClientMap, userCache *memsto.UserCacheType, userGroupCache *memsto.UserGroupCacheType) {
promClients *prom.PromClientMap, userCache *memsto.UserCacheType, userGroupCache *memsto.UserGroupCacheType) {
alertSubscribeCache := memsto.NewAlertSubscribeCache(ctx, syncStats)
recordingRuleCache := memsto.NewRecordingRuleCache(ctx, syncStats)
targetsOfAlertRulesCache := memsto.NewTargetOfAlertRuleCache(ctx, alertc.Heartbeat.EngineName, syncStats)
@@ -102,10 +105,10 @@ func Start(alertc aconf.Alert, pushgwc pconf.Pushgw, syncStats *memsto.Stats, al
naming := naming.NewNaming(ctx, alertc.Heartbeat, alertStats)
writers := writer.NewWriters(pushgwc)
record.NewScheduler(alertc, recordingRuleCache, promClients, writers, alertStats)
record.NewScheduler(alertc, recordingRuleCache, promClients, writers, alertStats, datasourceCache)
eval.NewScheduler(alertc, externalProcessors, alertRuleCache, targetCache, targetsOfAlertRulesCache,
busiGroupCache, alertMuteCache, datasourceCache, promClients, tdendgineClients, naming, ctx, alertStats)
busiGroupCache, alertMuteCache, datasourceCache, promClients, naming, ctx, alertStats)
dp := dispatch.NewDispatch(alertRuleCache, userCache, userGroupCache, alertSubscribeCache, targetCache, notifyConfigCache, taskTplsCache, alertc.Alerting, ctx, alertStats)
consumer := dispatch.NewConsumer(alertc.Alerting, ctx, dp, promClients)

View File

@@ -38,7 +38,7 @@ func NewSyncStats() *Stats {
Subsystem: subsystem,
Name: "rule_eval_error_total",
Help: "Number of rule eval error.",
}, []string{"datasource", "stage"})
}, []string{"datasource", "stage", "busi_group", "rule_id"})
CounterQueryDataErrorTotal := prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: namespace,

View File

@@ -1,6 +1,7 @@
package dispatch
import (
"context"
"encoding/json"
"fmt"
"strings"
@@ -13,8 +14,10 @@ import (
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/ccfos/nightingale/v6/pkg/poster"
promsdk "github.com/ccfos/nightingale/v6/pkg/prom"
"github.com/ccfos/nightingale/v6/pkg/tplx"
"github.com/ccfos/nightingale/v6/prom"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/concurrent/semaphore"
"github.com/toolkits/pkg/logger"
)
@@ -27,6 +30,18 @@ type Consumer struct {
promClients *prom.PromClientMap
}
func InitRegisterQueryFunc(promClients *prom.PromClientMap) {
tplx.RegisterQueryFunc(func(datasourceID int64, promql string) model.Value {
if promClients.IsNil(datasourceID) {
return nil
}
readerClient := promClients.GetCli(datasourceID)
value, _, _ := readerClient.Query(context.Background(), promql, time.Now())
return value
})
}
// create a Consumer instance
func NewConsumer(alerting aconf.Alerting, ctx *ctx.Context, dispatch *Dispatch, promClients *prom.PromClientMap) *Consumer {
return &Consumer{
@@ -113,7 +128,7 @@ func (e *Consumer) persist(event *models.AlertCurEvent) {
event.Id, err = poster.PostByUrlsWithResp[int64](e.ctx, "/v1/n9e/event-persist", event)
if err != nil {
logger.Errorf("event:%+v persist err:%v", event, err)
e.dispatch.Astats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", event.DatasourceId), "persist_event").Inc()
e.dispatch.Astats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", event.DatasourceId), "persist_event", event.GroupName, fmt.Sprintf("%v", event.RuleId)).Inc()
}
return
}
@@ -121,7 +136,7 @@ func (e *Consumer) persist(event *models.AlertCurEvent) {
err := models.EventPersist(e.ctx, event)
if err != nil {
logger.Errorf("event%+v persist err:%v", event, err)
e.dispatch.Astats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", event.DatasourceId), "persist_event").Inc()
e.dispatch.Astats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", event.DatasourceId), "persist_event", event.GroupName, fmt.Sprintf("%v", event.RuleId)).Inc()
}
}
@@ -169,7 +184,7 @@ func (e *Consumer) queryRecoveryVal(event *models.AlertCurEvent) {
logger.Errorf("rule_eval:%s promql:%s, warnings:%v", getKey(event), promql, warnings)
}
anomalyPoints := common.ConvertAnomalyPoints(value)
anomalyPoints := models.ConvertAnomalyPoints(value)
if len(anomalyPoints) == 0 {
logger.Warningf("rule_eval:%s promql:%s, result is empty", getKey(event), promql)
event.AnnotationsJSON["recovery_promql_error"] = fmt.Sprintf("promql:%s error:%s", promql, "result is empty")

View File

@@ -139,6 +139,12 @@ func (e *Dispatch) HandleEventNotify(event *models.AlertCurEvent, isSubscribe bo
if rule == nil {
return
}
if e.blockEventNotify(rule, event) {
logger.Infof("block event notify: rule_id:%d event:%+v", rule.Id, event)
return
}
fillUsers(event, e.userCache, e.userGroupCache)
var (
@@ -175,6 +181,25 @@ func (e *Dispatch) HandleEventNotify(event *models.AlertCurEvent, isSubscribe bo
}
}
func (e *Dispatch) blockEventNotify(rule *models.AlertRule, event *models.AlertCurEvent) bool {
ruleType := rule.GetRuleType()
// for host rules, first check whether the host has been deleted
if ruleType == models.HOST {
host, ok := e.targetCache.Get(event.TagsMap["ident"])
if !ok || host == nil {
return true
}
}
// for recovery notifications, check whether the rule config has changed
// if event.IsRecovered && event.RuleHash != rule.Hash() {
// return true
// }
return false
}
func (e *Dispatch) handleSubs(event *models.AlertCurEvent) {
// handle alert subscribes
subscribes := make([]*models.AlertSubscribe, 0)
@@ -266,10 +291,12 @@ func (e *Dispatch) Send(rule *models.AlertRule, event *models.AlertCurEvent, not
e.SendCallbacks(rule, notifyTarget, event)
// handle global webhooks
if e.alerting.WebhookBatchSend {
sender.BatchSendWebhooks(e.ctx, notifyTarget.ToWebhookList(), event, e.Astats)
} else {
sender.SingleSendWebhooks(e.ctx, notifyTarget.ToWebhookList(), event, e.Astats)
if !event.OverrideGlobalWebhook() {
if e.alerting.WebhookBatchSend {
sender.BatchSendWebhooks(e.ctx, notifyTarget.ToWebhookMap(), event, e.Astats)
} else {
sender.SingleSendWebhooks(e.ctx, notifyTarget.ToWebhookMap(), event, e.Astats)
}
}
// handle plugin call
@@ -283,10 +310,10 @@ func (e *Dispatch) Send(rule *models.AlertRule, event *models.AlertCurEvent, not
}
func (e *Dispatch) SendCallbacks(rule *models.AlertRule, notifyTarget *NotifyTarget, event *models.AlertCurEvent) {
uids := notifyTarget.ToUidList()
urls := notifyTarget.ToCallbackList()
whMap := notifyTarget.ToWebhookMap()
ogw := event.OverrideGlobalWebhook()
for _, urlStr := range urls {
if len(urlStr) == 0 {
continue
@@ -294,7 +321,7 @@ func (e *Dispatch) SendCallbacks(rule *models.AlertRule, notifyTarget *NotifyTar
cbCtx := sender.BuildCallBackContext(e.ctx, urlStr, rule, []*models.AlertCurEvent{event}, uids, e.userCache, e.alerting.WebhookBatchSend, e.Astats)
if wh, ok := whMap[cbCtx.CallBackURL]; ok && wh.Enable {
if wh, ok := whMap[cbCtx.CallBackURL]; !ogw && ok && wh.Enable {
logger.Debugf("SendCallbacks: webhook[%s] is in global conf.", cbCtx.CallBackURL)
continue
}
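Editor's note: a minimal sketch of the blockEventNotify check introduced above, assuming a map-backed stand-in for the target cache (the real code consults memsto.TargetCacheType and models.AlertRule.GetRuleType).

package main

import "fmt"

type event struct{ TagsMap map[string]string }

// knownHosts stands in for e.targetCache.
var knownHosts = map[string]bool{"host-a": true}

func blockEventNotify(isHostRule bool, ev event) bool {
	// For host rules, suppress notifications when the host no longer exists.
	if isHostRule && !knownHosts[ev.TagsMap["ident"]] {
		return true
	}
	return false
}

func main() {
	fmt.Println(blockEventNotify(true, event{TagsMap: map[string]string{"ident": "host-b"}})) // true: blocked
	fmt.Println(blockEventNotify(true, event{TagsMap: map[string]string{"ident": "host-a"}})) // false: notify
}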

View File

@@ -76,52 +76,8 @@ func (s *NotifyTarget) ToCallbackList() []string {
return callbacks
}
func (s *NotifyTarget) ToWebhookList() []*models.Webhook {
webhooks := make([]*models.Webhook, 0, len(s.webhooks))
for _, wh := range s.webhooks {
if wh.Batch == 0 {
wh.Batch = 1000
}
if wh.Timeout == 0 {
wh.Timeout = 10
}
if wh.RetryCount == 0 {
wh.RetryCount = 10
}
if wh.RetryInterval == 0 {
wh.RetryInterval = 10
}
webhooks = append(webhooks, wh)
}
return webhooks
}
func (s *NotifyTarget) ToWebhookMap() map[string]*models.Webhook {
webhookMap := make(map[string]*models.Webhook, len(s.webhooks))
for _, wh := range s.webhooks {
if wh.Batch == 0 {
wh.Batch = 1000
}
if wh.Timeout == 0 {
wh.Timeout = 10
}
if wh.RetryCount == 0 {
wh.RetryCount = 10
}
if wh.RetryInterval == 0 {
wh.RetryInterval = 10
}
webhookMap[wh.Url] = wh
}
return webhookMap
return s.webhooks
}
func (s *NotifyTarget) ToUidList() []int64 {
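Editor's note: keying webhooks by URL (ToWebhookMap) lets SendCallbacks skip a rule callback in O(1) when the same URL is already covered by an enabled global webhook. A simplified illustration with a trimmed-down Webhook type:

package main

import "fmt"

type Webhook struct {
	Url    string
	Enable bool
}

func main() {
	whMap := map[string]*Webhook{
		"https://hook.example.com/a": {Url: "https://hook.example.com/a", Enable: true},
	}
	callbacks := []string{"https://hook.example.com/a", "https://hook.example.com/b"}
	for _, cb := range callbacks {
		if wh, ok := whMap[cb]; ok && wh.Enable {
			fmt.Println("skip, already sent as global webhook:", cb)
			continue
		}
		fmt.Println("send callback:", cb)
	}
}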

View File

@@ -13,8 +13,6 @@ import (
"github.com/ccfos/nightingale/v6/memsto"
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/ccfos/nightingale/v6/prom"
"github.com/ccfos/nightingale/v6/tdengine"
"github.com/toolkits/pkg/logger"
)
@@ -33,8 +31,7 @@ type Scheduler struct {
alertMuteCache *memsto.AlertMuteCacheType
datasourceCache *memsto.DatasourceCacheType
promClients *prom.PromClientMap
tdengineClients *tdengine.TdengineClientMap
promClients *prom.PromClientMap
naming *naming.Naming
@@ -45,7 +42,7 @@ type Scheduler struct {
func NewScheduler(aconf aconf.Alert, externalProcessors *process.ExternalProcessorsType, arc *memsto.AlertRuleCacheType,
targetCache *memsto.TargetCacheType, toarc *memsto.TargetsOfAlertRuleCacheType,
busiGroupCache *memsto.BusiGroupCacheType, alertMuteCache *memsto.AlertMuteCacheType, datasourceCache *memsto.DatasourceCacheType,
promClients *prom.PromClientMap, tdengineClients *tdengine.TdengineClientMap, naming *naming.Naming, ctx *ctx.Context, stats *astats.Stats) *Scheduler {
promClients *prom.PromClientMap, naming *naming.Naming, ctx *ctx.Context, stats *astats.Stats) *Scheduler {
scheduler := &Scheduler{
aconf: aconf,
alertRules: make(map[string]*AlertRuleWorker),
@@ -59,9 +56,8 @@ func NewScheduler(aconf aconf.Alert, externalProcessors *process.ExternalProcess
alertMuteCache: alertMuteCache,
datasourceCache: datasourceCache,
promClients: promClients,
tdengineClients: tdengineClients,
naming: naming,
promClients: promClients,
naming: naming,
ctx: ctx,
stats: stats,
@@ -95,9 +91,8 @@ func (s *Scheduler) syncAlertRules() {
}
ruleType := rule.GetRuleType()
if rule.IsPrometheusRule() || rule.IsLokiRule() || rule.IsTdengineRule() {
datasourceIds := s.promClients.Hit(rule.DatasourceIdsJson)
datasourceIds = append(datasourceIds, s.tdengineClients.Hit(rule.DatasourceIdsJson)...)
if rule.IsPrometheusRule() || rule.IsLokiRule() || rule.IsTdengineRule() || rule.IsClickHouseRule() || rule.IsElasticSearch() {
datasourceIds := s.datasourceCache.GetIDsByDsCateAndQueries(rule.Cate, rule.DatasourceQueries)
for _, dsId := range datasourceIds {
if !naming.DatasourceHashRing.IsHit(strconv.FormatInt(dsId, 10), fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
continue
@@ -119,7 +114,7 @@ func (s *Scheduler) syncAlertRules() {
}
processor := process.NewProcessor(s.aconf.Heartbeat.EngineName, rule, dsId, s.alertRuleCache, s.targetCache, s.targetsOfAlertRuleCache, s.busiGroupCache, s.alertMuteCache, s.datasourceCache, s.ctx, s.stats)
alertRule := NewAlertRuleWorker(rule, dsId, processor, s.promClients, s.tdengineClients, s.ctx)
alertRule := NewAlertRuleWorker(rule, dsId, processor, s.promClients, s.ctx)
alertRuleWorkers[alertRule.Hash()] = alertRule
}
} else if rule.IsHostRule() {
@@ -128,12 +123,13 @@ func (s *Scheduler) syncAlertRules() {
continue
}
processor := process.NewProcessor(s.aconf.Heartbeat.EngineName, rule, 0, s.alertRuleCache, s.targetCache, s.targetsOfAlertRuleCache, s.busiGroupCache, s.alertMuteCache, s.datasourceCache, s.ctx, s.stats)
alertRule := NewAlertRuleWorker(rule, 0, processor, s.promClients, s.tdengineClients, s.ctx)
alertRule := NewAlertRuleWorker(rule, 0, processor, s.promClients, s.ctx)
alertRuleWorkers[alertRule.Hash()] = alertRule
} else {
// if the rule does not alert through the prometheus engine, create it as an externalRule
// if rule is not processed by prometheus engine, create it as externalRule
for _, dsId := range rule.DatasourceIdsJson {
dsIds := s.datasourceCache.GetIDsByDsCateAndQueries(rule.Cate, rule.DatasourceQueries)
for _, dsId := range dsIds {
ds := s.datasourceCache.GetById(dsId)
if ds == nil {
logger.Debugf("datasource %d not found", dsId)

File diff suppressed because it is too large

View File

@@ -269,3 +269,190 @@ func allValueDeepEqual(got, want map[uint64][]uint64) bool {
}
return true
}
// allValueDeepEqualOmitOrder reports whether two string slices are equal, ignoring order
func allValueDeepEqualOmitOrder(got, want []string) bool {
if len(got) != len(want) {
return false
}
slices.Sort(got)
slices.Sort(want)
for i := range got {
if got[i] != want[i] {
return false
}
}
return true
}
func Test_removeVal(t *testing.T) {
type args struct {
promql string
}
tests := []struct {
name string
args args
want string
}{
// TODO: Add test cases.
{
name: "removeVal1",
args: args{
promql: "mem{test1=\"$test1\",test2=\"$test2\",test3=\"$test3\"} > $val",
},
want: "mem{} > $val",
},
{
name: "removeVal2",
args: args{
promql: "mem{test1=\"test1\",test2=\"$test2\",test3=\"$test3\"} > $val",
},
want: "mem{test1=\"test1\"} > $val",
},
{
name: "removeVal3",
args: args{
promql: "mem{test1=\"$test1\",test2=\"test2\",test3=\"$test3\"} > $val",
},
want: "mem{test2=\"test2\"} > $val",
},
{
name: "removeVal4",
args: args{
promql: "mem{test1=\"$test1\",test2=\"$test2\",test3=\"test3\"} > $val",
},
want: "mem{test3=\"test3\"} > $val",
},
{
name: "removeVal5",
args: args{
promql: "mem{test1=\"$test1\",test2=\"test2\",test3=\"test3\"} > $val",
},
want: "mem{test2=\"test2\",test3=\"test3\"} > $val",
},
{
name: "removeVal6",
args: args{
promql: "mem{test1=\"test1\",test2=\"$test2\",test3=\"test3\"} > $val",
},
want: "mem{test1=\"test1\",test3=\"test3\"} > $val",
},
{
name: "removeVal7",
args: args{
promql: "mem{test1=\"test1\",test2=\"test2\",test3='$test3'} > $val",
},
want: "mem{test1=\"test1\",test2=\"test2\"} > $val",
},
{
name: "removeVal8",
args: args{
promql: "mem{test1=\"test1\",test2=\"test2\",test3=\"test3\"} > $val",
},
want: "mem{test1=\"test1\",test2=\"test2\",test3=\"test3\"} > $val",
},
{
name: "removeVal9",
args: args{
promql: "mem{test1=\"$test1\",test2=\"test2\"} > $val1 and mem{test3=\"test3\",test4=\"test4\"} > $val2",
},
want: "mem{test2=\"test2\"} > $val1 and mem{test3=\"test3\",test4=\"test4\"} > $val2",
},
{
name: "removeVal10",
args: args{
promql: "mem{test1=\"test1\",test2='$test2'} > $val1 and mem{test3=\"test3\",test4=\"test4\"} > $val2",
},
want: "mem{test1=\"test1\"} > $val1 and mem{test3=\"test3\",test4=\"test4\"} > $val2",
},
{
name: "removeVal11",
args: args{
promql: "mem{test1='test1',test2=\"test2\"} > $val1 and mem{test3=\"$test3\",test4=\"test4\"} > $val2",
},
want: "mem{test1='test1',test2=\"test2\"} > $val1 and mem{test4=\"test4\"} > $val2",
},
{
name: "removeVal12",
args: args{
promql: "mem{test1=\"test1\",test2=\"test2\"} > $val1 and mem{test3=\"test3\",test4=\"$test4\"} > $val2",
},
want: "mem{test1=\"test1\",test2=\"test2\"} > $val1 and mem{test3=\"test3\"} > $val2",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := removeVal(tt.args.promql); got != tt.want {
t.Errorf("removeVal() = %v, want %v", got, tt.want)
}
})
}
}
func TestExtractVarMapping(t *testing.T) {
tests := []struct {
name string
promql string
want map[string]string
}{
{
name: "单个花括号单个变量",
promql: `mem_used_percent{host="$my_host"} > $val`,
want: map[string]string{"my_host": "host"},
},
{
name: "单个花括号多个变量",
promql: `mem_used_percent{host="$my_host",region="$region",env="prod"} > $val`,
want: map[string]string{"my_host": "host", "region": "region"},
},
{
name: "多个花括号多个变量",
promql: `sum(rate(mem_used_percent{host="$my_host"})) by (instance) + avg(node_load1{region="$region"}) > $val`,
want: map[string]string{"my_host": "host", "region": "region"},
},
{
name: "相同变量出现多次",
promql: `sum(rate(mem_used_percent{host="$my_host"})) + avg(node_load1{host="$my_host"}) > $val`,
want: map[string]string{"my_host": "host"},
},
{
name: "没有变量",
promql: `mem_used_percent{host="localhost",region="cn"} > 80`,
want: map[string]string{},
},
{
name: "没有花括号",
promql: `80 > $val`,
want: map[string]string{},
},
{
name: "格式不规范的标签",
promql: `mem_used_percent{host=$my_host,region = $region} > $val`,
want: map[string]string{"my_host": "host", "region": "region"},
},
{
name: "空花括号",
promql: `mem_used_percent{} > $val`,
want: map[string]string{},
},
{
name: "不完整的花括号",
promql: `mem_used_percent{host="$my_host"`,
want: map[string]string{},
},
{
name: "复杂表达式",
promql: `sum(rate(http_requests_total{handler="$handler",code="$code"}[5m])) by (handler) / sum(rate(http_requests_total{handler="$handler"}[5m])) by (handler) * 100 > $threshold`,
want: map[string]string{"handler": "handler", "code": "code"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := ExtractVarMapping(tt.promql)
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("ExtractVarMapping() = %v, want %v", got, tt.want)
}
})
}
}
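Editor's note: the cases above pin down ExtractVarMapping's contract: map each $var placeholder inside {...} matchers to the label it filters. A hedged regex reconstruction of that behavior follows; it is a sketch matching the test table, not the project's implementation.

package main

import (
	"fmt"
	"regexp"
)

var (
	braceRe   = regexp.MustCompile(`\{[^}]*\}`)
	matcherRe = regexp.MustCompile(`(\w+)\s*=\s*["']?\$(\w+)`)
)

func extractVarMapping(promql string) map[string]string {
	out := map[string]string{}
	// Only matchers inside complete {...} blocks count; an unclosed
	// brace yields no matches, as the tests expect.
	for _, block := range braceRe.FindAllString(promql, -1) {
		for _, g := range matcherRe.FindAllStringSubmatch(block, -1) {
			out[g[2]] = g[1] // $var placeholder -> label name
		}
	}
	return out
}

func main() {
	fmt.Println(extractVarMapping(`mem_used_percent{host="$my_host",region="$region",env="prod"} > $val`))
	// map[my_host:host region:region]
}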

View File

@@ -2,6 +2,7 @@ package process
import (
"bytes"
"encoding/json"
"fmt"
"html/template"
"sort"
@@ -21,6 +22,7 @@ import (
"github.com/ccfos/nightingale/v6/pushgw/writer"
"github.com/prometheus/prometheus/prompb"
"github.com/robfig/cron/v3"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/str"
)
@@ -78,6 +80,9 @@ type Processor struct {
HandleFireEventHook HandleEventFunc
HandleRecoverEventHook HandleEventFunc
EventMuteHook EventMuteHookFunc
ScheduleEntry cron.Entry
PromEvalInterval int
}
func (p *Processor) Key() string {
@@ -89,9 +94,9 @@ func (p *Processor) DatasourceId() int64 {
}
func (p *Processor) Hash() string {
return str.MD5(fmt.Sprintf("%d_%d_%s_%d",
return str.MD5(fmt.Sprintf("%d_%s_%s_%d",
p.rule.Id,
p.rule.PromEvalInterval,
p.rule.CronPattern,
p.rule.RuleConfig,
p.datasourceId,
))
@@ -126,7 +131,7 @@ func NewProcessor(engineName string, rule *models.AlertRule, datasourceId int64,
return p
}
func (p *Processor) Handle(anomalyPoints []common.AnomalyPoint, from string, inhibit bool) {
func (p *Processor) Handle(anomalyPoints []models.AnomalyPoint, from string, inhibit bool) {
// Some rule settings (e.g. alert receivers, callbacks) may have changed.
// Such changes do not restart the worker but do affect event handling,
// so fetch the latest rule from memsto.AlertRuleCache and overwrite it here.
@@ -134,10 +139,13 @@ func (p *Processor) Handle(anomalyPoints []common.AnomalyPoint, from string, inh
cachedRule := p.alertRuleCache.Get(p.rule.Id)
if cachedRule == nil {
logger.Errorf("rule not found %+v", anomalyPoints)
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "handle_event").Inc()
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "handle_event", p.BusiGroupCache.GetNameByBusiGroupId(p.rule.GroupId), fmt.Sprintf("%v", p.rule.Id)).Inc()
return
}
// capture ruleHash before the rule is overwritten
ruleHash := p.rule.Hash()
p.rule = cachedRule
now := time.Now().Unix()
alertingKeys := map[string]struct{}{}
@@ -145,7 +153,7 @@ func (p *Processor) Handle(anomalyPoints []common.AnomalyPoint, from string, inh
// group events by tag to handle alert inhibition
eventsMap := make(map[string][]*models.AlertCurEvent)
for _, anomalyPoint := range anomalyPoints {
event := p.BuildEvent(anomalyPoint, from, now)
event := p.BuildEvent(anomalyPoint, from, now, ruleHash)
// a muted event is still essentially firing; always add it to alertingKeys so firing events are not auto-recovered
hash := event.Hash
alertingKeys[hash] = struct{}{}
@@ -175,7 +183,7 @@ func (p *Processor) Handle(anomalyPoints []common.AnomalyPoint, from string, inh
}
}
func (p *Processor) BuildEvent(anomalyPoint common.AnomalyPoint, from string, now int64) *models.AlertCurEvent {
func (p *Processor) BuildEvent(anomalyPoint models.AnomalyPoint, from string, now int64, ruleHash string) *models.AlertCurEvent {
p.fillTags(anomalyPoint)
p.mayHandleIdent()
hash := Hash(p.rule.Id, p.datasourceId, anomalyPoint)
@@ -201,22 +209,29 @@ func (p *Processor) BuildEvent(anomalyPoint common.AnomalyPoint, from string, no
event.TargetNote = p.targetNote
event.TriggerValue = anomalyPoint.ReadableValue()
event.TriggerValues = anomalyPoint.Values
event.TriggerValuesJson = models.EventTriggerValues{ValuesWithUnit: anomalyPoint.ValuesUnit}
event.TagsJSON = p.tagsArr
event.Tags = strings.Join(p.tagsArr, ",,")
event.IsRecovered = false
event.Callbacks = p.rule.Callbacks
event.CallbacksJSON = p.rule.CallbacksJSON
event.Annotations = p.rule.Annotations
event.AnnotationsJSON = make(map[string]string)
event.RuleConfig = p.rule.RuleConfig
event.RuleConfigJson = p.rule.RuleConfigJson
event.Severity = anomalyPoint.Severity
event.ExtraConfig = p.rule.ExtraConfigJSON
event.PromQl = anomalyPoint.Query
event.RecoverConfig = anomalyPoint.RecoverConfig
event.RuleHash = ruleHash
if err := json.Unmarshal([]byte(p.rule.Annotations), &event.AnnotationsJSON); err != nil {
event.AnnotationsJSON = make(map[string]string) // fall back to an empty map when parsing fails
logger.Warningf("unmarshal annotations json failed: %v, rule: %d", err, p.rule.Id)
}
if p.target != "" {
if pt, exist := p.TargetCache.Get(p.target); exist {
pt.GroupNames = p.BusiGroupCache.GetNamesByBusiGroupIds(pt.GroupIds)
event.Target = pt
} else {
logger.Infof("Target[ident: %s] doesn't exist in cache.", p.target)
@@ -414,6 +429,7 @@ func (p *Processor) handleEvent(events []*models.AlertCurEvent) {
p.pendingsUseByRecover.Set(event.Hash, event)
}
event.PromEvalInterval = p.PromEvalInterval
if p.rule.PromForDuration == 0 {
fireEvents = append(fireEvents, event)
if severity > event.Severity {
@@ -508,7 +524,7 @@ func (p *Processor) pushEventToQueue(e *models.AlertCurEvent) {
dispatch.LogEvent(e, "push_queue")
if !queue.EventQueue.PushFront(e) {
logger.Warningf("event_push_queue: queue is full, event:%+v", e)
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "push_event_queue").Inc()
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "push_event_queue", p.BusiGroupCache.GetNameByBusiGroupId(p.rule.GroupId), fmt.Sprintf("%v", p.rule.Id)).Inc()
}
}
@@ -519,7 +535,7 @@ func (p *Processor) RecoverAlertCurEventFromDb() {
curEvents, err := models.AlertCurEventGetByRuleIdAndDsId(p.ctx, p.rule.Id, p.datasourceId)
if err != nil {
logger.Errorf("recover event from db for rule:%s failed, err:%s", p.Key(), err)
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "get_recover_event").Inc()
p.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", p.DatasourceId()), "get_recover_event", p.BusiGroupCache.GetNameByBusiGroupId(p.rule.GroupId), fmt.Sprintf("%v", p.rule.Id)).Inc()
p.fires = NewAlertCurEventMap(nil)
return
}
@@ -538,6 +554,7 @@ func (p *Processor) RecoverAlertCurEventFromDb() {
event.DB2Mem()
target, exists := p.TargetCache.Get(event.TargetIdent)
if exists {
target.GroupNames = p.BusiGroupCache.GetNamesByBusiGroupIds(target.GroupIds)
event.Target = target
}
@@ -552,7 +569,7 @@ func (p *Processor) RecoverAlertCurEventFromDb() {
p.pendingsUseByRecover = NewAlertCurEventMap(pendingsUseByRecoverMap)
}
func (p *Processor) fillTags(anomalyPoint common.AnomalyPoint) {
func (p *Processor) fillTags(anomalyPoint models.AnomalyPoint) {
// handle series tags
tagsMap := make(map[string]string)
for label, value := range anomalyPoint.Labels {
@@ -642,10 +659,10 @@ func labelMapToArr(m map[string]string) []string {
return labelStrings
}
func Hash(ruleId, datasourceId int64, vector common.AnomalyPoint) string {
func Hash(ruleId, datasourceId int64, vector models.AnomalyPoint) string {
return str.MD5(fmt.Sprintf("%d_%s_%d_%d_%s", ruleId, vector.Labels.String(), datasourceId, vector.Severity, vector.Query))
}
func TagHash(vector common.AnomalyPoint) string {
func TagHash(vector models.AnomalyPoint) string {
return str.MD5(vector.Labels.String())
}
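Editor's note on the Hash change above: the worker key now folds in CronPattern instead of PromEvalInterval, so editing a rule's schedule yields a new hash and the scheduler tears down and recreates its worker. A sketch of that keying, where md5Hex stands in for the toolkits str.MD5 helper:

package main

import (
	"crypto/md5"
	"fmt"
)

func md5Hex(s string) string { return fmt.Sprintf("%x", md5.Sum([]byte(s))) }

// workerHash mirrors the "%d_%s_%s_%d" key built in Processor.Hash.
func workerHash(ruleID int64, cronPattern, ruleConfig string, dsID int64) string {
	return md5Hex(fmt.Sprintf("%d_%s_%s_%d", ruleID, cronPattern, ruleConfig, dsID))
}

func main() {
	fmt.Println(workerHash(1, "@every 15s", "{...}", 2))
	fmt.Println(workerHash(1, "0 * * * *", "{...}", 2)) // differs: worker restarts
}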

View File

@@ -26,9 +26,11 @@ type Scheduler struct {
writers *writer.WritersType
stats *astats.Stats
datasourceCache *memsto.DatasourceCacheType
}
func NewScheduler(aconf aconf.Alert, rrc *memsto.RecordingRuleCacheType, promClients *prom.PromClientMap, writers *writer.WritersType, stats *astats.Stats) *Scheduler {
func NewScheduler(aconf aconf.Alert, rrc *memsto.RecordingRuleCacheType, promClients *prom.PromClientMap, writers *writer.WritersType, stats *astats.Stats, datasourceCache *memsto.DatasourceCacheType) *Scheduler {
scheduler := &Scheduler{
aconf: aconf,
recordRules: make(map[string]*RecordRuleContext),
@@ -39,6 +41,8 @@ func NewScheduler(aconf aconf.Alert, rrc *memsto.RecordingRuleCacheType, promCli
writers: writers,
stats: stats,
datasourceCache: datasourceCache,
}
go scheduler.LoopSyncRules(context.Background())
@@ -67,7 +71,7 @@ func (s *Scheduler) syncRecordRules() {
continue
}
datasourceIds := s.promClients.Hit(rule.DatasourceIdsJson)
datasourceIds := s.datasourceCache.GetIDsByDsCateAndQueries("prometheus", rule.DatasourceQueries)
for _, dsId := range datasourceIds {
if !naming.DatasourceHashRing.IsHit(strconv.FormatInt(dsId, 10), fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
continue

View File

@@ -6,7 +6,6 @@ import (
"strings"
"time"
"github.com/ccfos/nightingale/v6/alert/common"
"github.com/ccfos/nightingale/v6/alert/dispatch"
"github.com/ccfos/nightingale/v6/alert/mute"
"github.com/ccfos/nightingale/v6/alert/naming"
@@ -92,7 +91,7 @@ func (rt *Router) eventPersist(c *gin.Context) {
type eventForm struct {
Alert bool `json:"alert"`
AnomalyPoints []common.AnomalyPoint `json:"vectors"`
AnomalyPoints []models.AnomalyPoint `json:"vectors"`
RuleId int64 `json:"rule_id"`
DatasourceId int64 `json:"datasource_id"`
Inhibit bool `json:"inhibit"`

View File

@@ -129,43 +129,39 @@ func (c *DefaultCallBacker) CallBack(ctx CallBackContext) {
return
}
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
resp, code, err := poster.PostJSON(ctx.CallBackURL, 5*time.Second, event, 3)
if err != nil {
logger.Errorf("event_callback_fail(rule_id=%d url=%s), event:%+v, resp: %s, err: %v, code: %d",
event.RuleId, ctx.CallBackURL, event, string(resp), err, code)
ctx.Stats.AlertNotifyErrorTotal.WithLabelValues("rule_callback").Inc()
} else {
logger.Infof("event_callback_succ(rule_id=%d url=%s), event:%+v, resp: %s, code: %d",
event.RuleId, ctx.CallBackURL, event, string(resp), code)
}
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, event, "callback", ctx.Stats, ctx.Events)
}
func doSendAndRecord(ctx *ctx.Context, url, token string, body interface{}, channel string,
stats *astats.Stats, event *models.AlertCurEvent) {
stats *astats.Stats, events []*models.AlertCurEvent) {
res, err := doSend(url, body, channel, stats)
NotifyRecord(ctx, event, channel, token, res, err)
NotifyRecord(ctx, events, channel, token, res, err)
}
func NotifyRecord(ctx *ctx.Context, evt *models.AlertCurEvent, channel, target, res string, err error) {
noti := models.NewNotificationRecord(evt, channel, target)
if err != nil {
noti.SetStatus(models.NotiStatusFailure)
noti.SetDetails(err.Error())
} else if res != "" {
noti.SetDetails(string(res))
func NotifyRecord(ctx *ctx.Context, evts []*models.AlertCurEvent, channel, target, res string, err error) {
// one notification may cover multiple events; record each of them
notis := make([]*models.NotificaitonRecord, 0, len(evts))
for _, evt := range evts {
noti := models.NewNotificationRecord(evt, channel, target)
if err != nil {
noti.SetStatus(models.NotiStatusFailure)
noti.SetDetails(err.Error())
} else if res != "" {
noti.SetDetails(string(res))
}
notis = append(notis, noti)
}
if !ctx.IsCenter {
_, err := poster.PostByUrlsWithResp[int64](ctx, "/v1/n9e/notify-record", noti)
_, err := poster.PostByUrlsWithResp[[]int64](ctx, "/v1/n9e/notify-record", notis)
if err != nil {
logger.Errorf("add noti:%v failed, err: %v", noti, err)
logger.Errorf("add notis:%v failed, err: %v", notis, err)
}
return
}
if err := noti.Add(ctx); err != nil {
logger.Errorf("add noti:%v failed, err: %v", noti, err)
if err := models.DB(ctx).CreateInBatches(notis, 100).Error; err != nil {
logger.Errorf("add notis:%v failed, err: %v", notis, err)
}
}
@@ -195,8 +191,8 @@ func PushCallbackEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.
if queue == nil {
queue = &WebhookQueue{
list: NewSafeListLimited(QueueMaxSize),
closeCh: make(chan struct{}),
eventQueue: NewSafeEventQueue(QueueMaxSize),
closeCh: make(chan struct{}),
}
CallbackEventQueueLock.Lock()
@@ -206,8 +202,8 @@ func PushCallbackEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.
StartConsumer(ctx, queue, webhook.Batch, webhook, stats)
}
succ := queue.list.PushFront(event)
succ := queue.eventQueue.Push(event)
if !succ {
logger.Warningf("Write channel(%s) full, current channel size: %d event:%v", webhook.Url, queue.list.Len(), event)
logger.Warningf("Write channel(%s) full, current channel size: %d event:%v", webhook.Url, queue.eventQueue.Len(), event)
}
}
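Editor's note: NotifyRecord now takes a slice of events because one batched send can cover many of them; each event gets its own record, persisted in batches via CreateInBatches. A simplified sketch of that fan-out with trimmed-down types:

package main

import "fmt"

type record struct {
	eventID int64
	status  string
}

// notifyRecords produces one record per event for a single send result.
func notifyRecords(eventIDs []int64, err error) []record {
	recs := make([]record, 0, len(eventIDs))
	for _, id := range eventIDs {
		st := "success"
		if err != nil {
			st = "failure"
		}
		recs = append(recs, record{eventID: id, status: st})
	}
	return recs
}

func main() {
	fmt.Println(notifyRecords([]int64{1, 2, 3}, nil)) // three records for one send
}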

View File

@@ -67,7 +67,7 @@ func (ds *DingtalkSender) Send(ctx MessageContext) {
}
}
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Dingtalk, ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Dingtalk, ctx.Stats, ctx.Events)
}
}
@@ -97,10 +97,7 @@ func (ds *DingtalkSender) CallBack(ctx CallBackContext) {
body.Markdown.Text = message
}
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body,
"callback", ctx.Stats, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback", ctx.Stats, ctx.Events)
}
// extract urls and ats from Users

View File

@@ -25,8 +25,8 @@ type EmailSender struct {
}
type EmailContext struct {
event *models.AlertCurEvent
mail *gomail.Message
events []*models.AlertCurEvent
mail *gomail.Message
}
func (es *EmailSender) Send(ctx MessageContext) {
@@ -42,7 +42,7 @@ func (es *EmailSender) Send(ctx MessageContext) {
subject = ctx.Events[0].RuleName
}
content := BuildTplMessage(models.Email, es.contentTpl, ctx.Events)
es.WriteEmail(subject, content, tos, ctx.Events[0])
es.WriteEmail(subject, content, tos, ctx.Events)
ctx.Stats.AlertNotifyTotal.WithLabelValues(models.Email).Add(float64(len(tos)))
}
@@ -79,8 +79,7 @@ func SendEmail(subject, content string, tos []string, stmp aconf.SMTPConfig) err
return nil
}
func (es *EmailSender) WriteEmail(subject, content string, tos []string,
event *models.AlertCurEvent) {
func (es *EmailSender) WriteEmail(subject, content string, tos []string, events []*models.AlertCurEvent) {
m := gomail.NewMessage()
m.SetHeader("From", es.smtp.From)
@@ -88,7 +87,7 @@ func (es *EmailSender) WriteEmail(subject, content string, tos []string,
m.SetHeader("Subject", subject)
m.SetBody("text/html", content)
mailch <- &EmailContext{event, m}
mailch <- &EmailContext{events, m}
}
func dialSmtp(d *gomail.Dialer) gomail.SendCloser {
@@ -202,7 +201,11 @@ func startEmailSender(ctx *ctx.Context, smtp aconf.SMTPConfig) {
}
for _, to := range m.mail.GetHeader("To") {
NotifyRecord(ctx, m.event, models.Email, to, "", err)
msg := ""
if err == nil {
msg = "ok"
}
NotifyRecord(ctx, m.events, models.Email, to, msg, err)
}
size++

View File

@@ -54,9 +54,7 @@ func (fs *FeishuSender) CallBack(ctx CallBackContext) {
},
}
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
ctx.Stats, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback", ctx.Stats, ctx.Events)
}
func (fs *FeishuSender) Send(ctx MessageContext) {
@@ -78,7 +76,7 @@ func (fs *FeishuSender) Send(ctx MessageContext) {
IsAtAll: false,
}
}
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Feishu, ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Feishu, ctx.Stats, ctx.Events)
}
}

View File

@@ -135,8 +135,7 @@ func (fs *FeishuCardSender) CallBack(ctx CallBackContext) {
}
parsedURL.RawQuery = ""
doSendAndRecord(ctx.Ctx, parsedURL.String(), parsedURL.String(), body, "callback",
ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, parsedURL.String(), parsedURL.String(), body, "callback", ctx.Stats, ctx.Events)
}
func (fs *FeishuCardSender) Send(ctx MessageContext) {
@@ -160,8 +159,7 @@ func (fs *FeishuCardSender) Send(ctx MessageContext) {
body.Card.Elements[0].Text.Content = message
body.Card.Elements[2].Elements[0].Content = SendTitle
for i, url := range urls {
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.FeishuCard,
ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.FeishuCard, ctx.Stats, ctx.Events)
}
}

View File

@@ -118,6 +118,7 @@ func CallIbex(ctx *ctx.Context, id int64, host string,
// attach alert severity and trigger-value tags
tagsMap["alert_severity"] = strconv.Itoa(event.Severity)
tagsMap["alert_trigger_value"] = event.TriggerValue
tagsMap["is_recovered"] = strconv.FormatBool(event.IsRecovered)
tags, err := json.Marshal(tagsMap)
if err != nil {

View File

@@ -27,30 +27,29 @@ func (lk *LarkSender) CallBack(ctx CallBackContext) {
},
}
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
ctx.Stats, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback", ctx.Stats, ctx.Events)
}
func (lk *LarkSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
return
}
urls := lk.extract(ctx.Users)
urls, tokens := lk.extract(ctx.Users)
message := BuildTplMessage(models.Lark, lk.tpl, ctx.Events)
for _, url := range urls {
for i, url := range urls {
body := feishu{
Msgtype: "text",
Content: feishuContent{
Text: message,
},
}
doSend(url, body, models.Lark, ctx.Stats)
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Lark, ctx.Stats, ctx.Events)
}
}
func (lk *LarkSender) extract(users []*models.User) []string {
func (lk *LarkSender) extract(users []*models.User) ([]string, []string) {
urls := make([]string, 0, len(users))
tokens := make([]string, 0, len(users))
for _, user := range users {
if token, has := user.ExtractToken(models.Lark); has {
@@ -59,7 +58,8 @@ func (lk *LarkSender) extract(users []*models.User) []string {
url = "https://open.larksuite.com/open-apis/bot/v2/hook/" + token
}
urls = append(urls, url)
tokens = append(tokens, token)
}
}
return urls
return urls, tokens
}

View File

@@ -56,15 +56,14 @@ func (fs *LarkCardSender) CallBack(ctx CallBackContext) {
}
parsedURL.RawQuery = ""
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback", ctx.Stats, ctx.Events)
}
func (fs *LarkCardSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
return
}
urls, _ := fs.extract(ctx.Users)
urls, tokens := fs.extract(ctx.Users)
message := BuildTplMessage(models.LarkCard, fs.tpl, ctx.Events)
color := "red"
lowerUnicode := strings.ToLower(message)
@@ -80,14 +79,14 @@ func (fs *LarkCardSender) Send(ctx MessageContext) {
body.Card.Header.Template = color
body.Card.Elements[0].Text.Content = message
body.Card.Elements[2].Elements[0].Content = SendTitle
for _, url := range urls {
doSend(url, body, models.LarkCard, ctx.Stats)
for i, url := range urls {
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.LarkCard, ctx.Stats, ctx.Events)
}
}
func (fs *LarkCardSender) extract(users []*models.User) ([]string, []string) {
urls := make([]string, 0, len(users))
ats := make([]string, 0)
tokens := make([]string, 0)
for i := range users {
if token, has := users[i].ExtractToken(models.Lark); has {
url := token
@@ -95,7 +94,8 @@ func (fs *LarkCardSender) extract(users []*models.User) ([]string, []string) {
url = "https://open.larksuite.com/open-apis/bot/v2/hook/" + strings.TrimSpace(token)
}
urls = append(urls, url)
tokens = append(tokens, token)
}
}
return urls, ats
return urls, tokens
}

View File

@@ -43,7 +43,7 @@ func (ms *MmSender) Send(ctx MessageContext) {
Text: message,
Tokens: urls,
Stats: ctx.Stats,
}, ctx.Events[0])
}, ctx.Events, models.Mm)
}
func (ms *MmSender) CallBack(ctx CallBackContext) {
@@ -56,9 +56,7 @@ func (ms *MmSender) CallBack(ctx CallBackContext) {
Text: message,
Tokens: []string{ctx.CallBackURL},
Stats: ctx.Stats,
}, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}, ctx.Events, "callback")
}
func (ms *MmSender) extract(users []*models.User) []string {
@@ -71,11 +69,12 @@ func (ms *MmSender) extract(users []*models.User) []string {
return tokens
}
func SendMM(ctx *ctx.Context, message MatterMostMessage, event *models.AlertCurEvent) {
func SendMM(ctx *ctx.Context, message MatterMostMessage, events []*models.AlertCurEvent, channel string) {
for i := 0; i < len(message.Tokens); i++ {
u, err := url.Parse(message.Tokens[i])
if err != nil {
logger.Errorf("mm_sender: failed to parse error=%v", err)
NotifyRecord(ctx, events, channel, message.Tokens[i], "", err)
continue
}
@@ -104,7 +103,7 @@ func SendMM(ctx *ctx.Context, message MatterMostMessage, event *models.AlertCurE
Username: username,
Text: txt + message.Text,
}
doSendAndRecord(ctx, ur, message.Tokens[i], body, models.Mm, message.Stats, event)
doSendAndRecord(ctx, ur, message.Tokens[i], body, channel, message.Stats, events)
}
}
}

View File

@@ -85,7 +85,7 @@ func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models
}
err, isTimeout := sys.WrapTimeout(cmd, time.Duration(config.Timeout)*time.Second)
NotifyRecord(ctx, event, channel, cmd.String(), "", buildErr(err, isTimeout))
NotifyRecord(ctx, []*models.AlertCurEvent{event}, channel, cmd.String(), "", buildErr(err, isTimeout))
if isTimeout {
if err == nil {

View File

@@ -1,6 +1,7 @@
package sender
import (
"errors"
"html/template"
"strings"
@@ -40,9 +41,7 @@ func (ts *TelegramSender) CallBack(ctx CallBackContext) {
Text: message,
Tokens: []string{ctx.CallBackURL},
Stats: ctx.Stats,
}, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}, ctx.Events, "callback")
}
func (ts *TelegramSender) Send(ctx MessageContext) {
@@ -56,7 +55,7 @@ func (ts *TelegramSender) Send(ctx MessageContext) {
Text: message,
Tokens: tokens,
Stats: ctx.Stats,
}, ctx.Events[0])
}, ctx.Events, models.Telegram)
}
func (ts *TelegramSender) extract(users []*models.User) []string {
@@ -69,10 +68,11 @@ func (ts *TelegramSender) extract(users []*models.User) []string {
return tokens
}
func SendTelegram(ctx *ctx.Context, message TelegramMessage, event *models.AlertCurEvent) {
func SendTelegram(ctx *ctx.Context, message TelegramMessage, events []*models.AlertCurEvent, channel string) {
for i := 0; i < len(message.Tokens); i++ {
if !strings.Contains(message.Tokens[i], "/") && !strings.HasPrefix(message.Tokens[i], "https://") {
logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
NotifyRecord(ctx, events, channel, message.Tokens[i], "", errors.New("invalid token"))
continue
}
var url string
@@ -93,6 +93,6 @@ func SendTelegram(ctx *ctx.Context, message TelegramMessage, event *models.Alert
Text: message.Text,
}
doSendAndRecord(ctx, url, message.Tokens[i], body, models.Telegram, message.Stats, event)
doSendAndRecord(ctx, url, message.Tokens[i], body, channel, message.Stats, events)
}
}

View File

@@ -59,17 +59,21 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats
if webhook != nil {
insecureSkipVerify = webhook.SkipVerify
}
client := http.Client{
Timeout: time.Duration(conf.Timeout) * time.Second,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: insecureSkipVerify},
},
if conf.Client == nil {
logger.Warningf("event_%s, event:%s, url: [%s], error: [%s]", channel, string(bs), conf.Url, "client is nil")
conf.Client = &http.Client{
Timeout: time.Duration(conf.Timeout) * time.Second,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: insecureSkipVerify},
},
}
}
stats.AlertNotifyTotal.WithLabelValues(channel).Inc()
var resp *http.Response
var body []byte
resp, err = client.Do(req)
resp, err = conf.Client.Do(req)
if err != nil {
stats.AlertNotifyErrorTotal.WithLabelValues(channel).Inc()
@@ -91,12 +95,12 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats
return false, string(body), nil
}
func SingleSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
func SingleSendWebhooks(ctx *ctx.Context, webhooks map[string]*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
for _, conf := range webhooks {
retryCount := 0
for retryCount < 3 {
needRetry, res, err := sendWebhook(conf, event, stats)
NotifyRecord(ctx, event, "webhook", conf.Url, res, err)
NotifyRecord(ctx, []*models.AlertCurEvent{event}, "webhook", conf.Url, res, err)
if !needRetry {
break
}
@@ -106,7 +110,7 @@ func SingleSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *mod
}
}
func BatchSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
func BatchSendWebhooks(ctx *ctx.Context, webhooks map[string]*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
for _, conf := range webhooks {
logger.Infof("push event:%+v to queue:%v", event, conf)
PushEvent(ctx, conf, event, stats)
@@ -121,8 +125,8 @@ var EventQueueLock sync.RWMutex
const QueueMaxSize = 100000
type WebhookQueue struct {
list *SafeListLimited
closeCh chan struct{}
eventQueue *SafeEventQueue
closeCh chan struct{}
}
func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
@@ -132,8 +136,8 @@ func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCur
if queue == nil {
queue = &WebhookQueue{
list: NewSafeListLimited(QueueMaxSize),
closeCh: make(chan struct{}),
eventQueue: NewSafeEventQueue(QueueMaxSize),
closeCh: make(chan struct{}),
}
EventQueueLock.Lock()
@@ -143,10 +147,10 @@ func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCur
StartConsumer(ctx, queue, webhook.Batch, webhook, stats)
}
succ := queue.list.PushFront(event)
succ := queue.eventQueue.Push(event)
if !succ {
stats.AlertNotifyErrorTotal.WithLabelValues("push_event_queue").Inc()
logger.Warningf("Write channel(%s) full, current channel size: %d event:%v", webhook.Url, queue.list.Len(), event)
logger.Warningf("Write channel(%s) full, current channel size: %d event:%v", webhook.Url, queue.eventQueue.Len(), event)
}
}
@@ -157,7 +161,7 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *
logger.Infof("event queue:%v closed", queue)
return
default:
events := queue.list.PopBack(popSize)
events := queue.eventQueue.PopN(popSize)
if len(events) == 0 {
time.Sleep(time.Millisecond * 400)
continue
@@ -166,7 +170,7 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *
retryCount := 0
for retryCount < webhook.RetryCount {
needRetry, res, err := sendWebhook(webhook, events, stats)
go RecordEvents(ctx, webhook, events, stats, res, err)
go NotifyRecord(ctx, events, "webhook", webhook.Url, res, err)
if !needRetry {
break
}
@@ -176,10 +180,3 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *
}
}
}
func RecordEvents(ctx *ctx.Context, webhook *models.Webhook, events []*models.AlertCurEvent, stats *astats.Stats, res string, err error) {
for _, event := range events {
time.Sleep(time.Millisecond * 10)
NotifyRecord(ctx, event, "webhook", webhook.Url, res, err)
}
}

View File

@@ -0,0 +1,109 @@
package sender
import (
"container/list"
"sync"
"github.com/ccfos/nightingale/v6/models"
)
type SafeEventQueue struct {
lock sync.RWMutex
maxSize int
queueHigh *list.List
queueMiddle *list.List
queueLow *list.List
}
const (
High = 1
Middle = 2
Low = 3
)
func NewSafeEventQueue(maxSize int) *SafeEventQueue {
return &SafeEventQueue{
maxSize: maxSize,
lock: sync.RWMutex{},
queueHigh: list.New(),
queueMiddle: list.New(),
queueLow: list.New(),
}
}
func (spq *SafeEventQueue) Len() int {
spq.lock.RLock()
defer spq.lock.RUnlock()
return spq.queueHigh.Len() + spq.queueMiddle.Len() + spq.queueLow.Len()
}
// len reads the combined length without locking; do not call it outside this file
func (spq *SafeEventQueue) len() int {
return spq.queueHigh.Len() + spq.queueMiddle.Len() + spq.queueLow.Len()
}
func (spq *SafeEventQueue) Push(event *models.AlertCurEvent) bool {
spq.lock.Lock()
defer spq.lock.Unlock()
if spq.len() >= spq.maxSize {
return false
}
switch event.Severity {
case High:
spq.queueHigh.PushBack(event)
case Middle:
spq.queueMiddle.PushBack(event)
case Low:
spq.queueLow.PushBack(event)
default:
return false
}
return true
}
// pop removes the highest-priority event without locking; do not call it outside this file
func (spq *SafeEventQueue) pop() *models.AlertCurEvent {
if spq.len() == 0 {
return nil
}
var elem interface{}
if spq.queueHigh.Len() > 0 {
elem = spq.queueHigh.Remove(spq.queueHigh.Front())
} else if spq.queueMiddle.Len() > 0 {
elem = spq.queueMiddle.Remove(spq.queueMiddle.Front())
} else {
elem = spq.queueLow.Remove(spq.queueLow.Front())
}
event, ok := elem.(*models.AlertCurEvent)
if !ok {
return nil
}
return event
}
func (spq *SafeEventQueue) Pop() *models.AlertCurEvent {
spq.lock.Lock()
defer spq.lock.Unlock()
return spq.pop()
}
func (spq *SafeEventQueue) PopN(n int) []*models.AlertCurEvent {
spq.lock.Lock()
defer spq.lock.Unlock()
events := make([]*models.AlertCurEvent, 0, n)
count := 0
for count < n && spq.len() > 0 {
event := spq.pop()
if event != nil {
events = append(events, event)
}
count++
}
return events
}
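Push buckets each event by severity, and pop always drains High before Middle before Low. A minimal usage sketch (illustrative only, assuming it sits in package sender next to the queue, with fmt and the models import in scope):

func ExampleSafeEventQueue() {
	q := NewSafeEventQueue(10)
	// push in reverse priority order: Low (3), then High (1), then Middle (2)
	q.Push(&models.AlertCurEvent{Severity: Low})
	q.Push(&models.AlertCurEvent{Severity: High})
	q.Push(&models.AlertCurEvent{Severity: Middle})
	// PopN drains High before Middle before Low
	for _, ev := range q.PopN(3) {
		fmt.Println(ev.Severity)
	}
	// Output:
	// 1
	// 2
	// 3
}

Because PopN calls the unexported pop under a single lock acquisition, a batch cannot interleave with concurrent pushes.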

View File

@@ -0,0 +1,157 @@
package sender
import (
"sync"
"testing"
"time"
"github.com/ccfos/nightingale/v6/models"
"github.com/stretchr/testify/assert"
)
func TestSafePriorityQueue_ConcurrentPushPop(t *testing.T) {
spq := NewSafeEventQueue(100000)
var wg sync.WaitGroup
numGoroutines := 100
numEvents := 1000
// concurrent pushes
wg.Add(numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func(goroutineID int) {
defer wg.Done()
for j := 0; j < numEvents; j++ {
event := &models.AlertCurEvent{
Severity: goroutineID%3 + 1,
TriggerTime: time.Now().UnixNano(),
}
spq.Push(event)
}
}(i)
}
wg.Wait()
// verify the queue length
expectedLen := numGoroutines * numEvents
assert.Equal(t, expectedLen, spq.Len(), "Queue length mismatch after concurrent pushes")
// concurrent pops
wg.Add(numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
for {
event := spq.Pop()
if event == nil {
return
}
}
}()
}
wg.Wait()
// the queue should end up empty
assert.Equal(t, 0, spq.Len(), "Queue should be empty after concurrent pops")
}
func TestSafePriorityQueue_ConcurrentPopMax(t *testing.T) {
spq := NewSafeEventQueue(100000)
// seed initial data
for i := 0; i < 1000; i++ {
spq.Push(&models.AlertCurEvent{
Severity: i%3 + 1,
TriggerTime: time.Now().UnixNano(),
})
}
var wg sync.WaitGroup
numGoroutines := 10
popMax := 100
// concurrent PopN calls
wg.Add(numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
events := spq.PopN(popMax)
assert.LessOrEqual(t, len(events), popMax, "PopN exceeded maximum")
}()
}
wg.Wait()
// verify the remaining queue length
expectedRemaining := 1000 - (numGoroutines * popMax)
if expectedRemaining < 0 {
expectedRemaining = 0
}
assert.Equal(t, expectedRemaining, spq.Len(), "Queue length mismatch after concurrent PopN")
}
func TestSafePriorityQueue_ConcurrentPushPopWithDifferentSeverities(t *testing.T) {
spq := NewSafeEventQueue(100000)
var wg sync.WaitGroup
numGoroutines := 50
numEvents := 500
// concurrently push events with different priorities
wg.Add(numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func(goroutineID int) {
defer wg.Done()
for j := 0; j < numEvents; j++ {
event := &models.AlertCurEvent{
Severity: goroutineID%3 + 1, // simulate different severities
TriggerTime: time.Now().UnixNano(),
}
spq.Push(event)
}
}(i)
}
wg.Wait()
// verify the queue length
expectedLen := numGoroutines * numEvents
assert.Equal(t, expectedLen, spq.Len(), "Queue length mismatch after concurrent pushes")
// verify that events pop out in priority order
var lastEvent *models.AlertCurEvent
for spq.Len() > 0 {
event := spq.Pop()
if lastEvent != nil {
assert.LessOrEqual(t, lastEvent.Severity, event.Severity, "Events are not in correct priority order")
}
lastEvent = event
}
}
func TestSafePriorityQueue_ExceedMaxSize(t *testing.T) {
spq := NewSafeEventQueue(5)
// push more events than the queue can hold
for i := 0; i < 10; i++ {
spq.Push(&models.AlertCurEvent{
Severity: i % 3,
TriggerTime: int64(i),
})
}
// the queue length must not exceed maxSize
assert.LessOrEqual(t, spq.Len(), spq.maxSize)
// inspect the events that remain in the queue
expectedEvents := 5
if spq.Len() < 5 {
expectedEvents = spq.Len()
}
// drain the queue and sanity-check the severities that were kept
for i := 0; i < expectedEvents; i++ {
event := spq.Pop()
if event != nil {
assert.LessOrEqual(t, event.Severity, 2)
}
}
}

View File

@@ -37,9 +37,7 @@ func (ws *WecomSender) CallBack(ctx CallBackContext) {
},
}
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
ctx.Stats, ctx.Events[0])
ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback", ctx.Stats, ctx.Events)
}
func (ws *WecomSender) Send(ctx MessageContext) {
@@ -55,7 +53,7 @@ func (ws *WecomSender) Send(ctx MessageContext) {
Content: message,
},
}
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Wecom, ctx.Stats, ctx.Events[0])
doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Wecom, ctx.Stats, ctx.Events)
}
}

View File

@@ -4,8 +4,11 @@ import (
"context"
"fmt"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/alert"
"github.com/ccfos/nightingale/v6/alert/astats"
"github.com/ccfos/nightingale/v6/alert/dispatch"
"github.com/ccfos/nightingale/v6/alert/process"
alertrt "github.com/ccfos/nightingale/v6/alert/router"
"github.com/ccfos/nightingale/v6/center/cconf"
@@ -26,14 +29,13 @@ import (
"github.com/ccfos/nightingale/v6/pkg/httpx"
"github.com/ccfos/nightingale/v6/pkg/i18nx"
"github.com/ccfos/nightingale/v6/pkg/logx"
"github.com/ccfos/nightingale/v6/pkg/macros"
"github.com/ccfos/nightingale/v6/pkg/version"
"github.com/ccfos/nightingale/v6/prom"
"github.com/ccfos/nightingale/v6/pushgw/idents"
pushgwrt "github.com/ccfos/nightingale/v6/pushgw/router"
"github.com/ccfos/nightingale/v6/pushgw/writer"
"github.com/ccfos/nightingale/v6/storage"
"github.com/ccfos/nightingale/v6/tdengine"
"github.com/flashcatcloud/ibex/src/cmd/ibex"
)
@@ -48,6 +50,10 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
cconf.MergeOperationConf()
if config.Alert.Heartbeat.EngineName == "" {
config.Alert.Heartbeat.EngineName = "default"
}
logxClean, err := logx.Init(config.Log)
if err != nil {
return nil, err
@@ -63,7 +69,7 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
}
ctx := ctx.NewContext(context.Background(), db, true)
migrate.Migrate(db)
models.InitRoot(ctx)
isRootInit := models.InitRoot(ctx)
config.HTTP.JWTAuth.SigningKey = models.InitJWTSigningKey(ctx)
@@ -99,10 +105,14 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
sso := sso.Init(config.Center, ctx, configCache)
promClients := prom.NewPromClient(ctx)
tdengineClients := tdengine.NewTdengineClient(ctx, config.Alert.Heartbeat)
dispatch.InitRegisterQueryFunc(promClients)
externalProcessors := process.NewExternalProcessors()
alert.Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, taskTplCache, dsCache, ctx, promClients, tdengineClients, userCache, userGroupCache)
macros.RegisterMacro(macros.MacroInVain)
dscache.Init(ctx, false)
alert.Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache, alertRuleCache, notifyConfigCache, taskTplCache, dsCache, ctx, promClients, userCache, userGroupCache)
writers := writer.NewWriters(config.Pushgw)
@@ -112,11 +122,15 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
alertrtRouter := alertrt.New(config.HTTP, config.Alert, alertMuteCache, targetCache, busiGroupCache, alertStats, ctx, externalProcessors)
centerRouter := centerrt.New(config.HTTP, config.Center, config.Alert, config.Ibex,
cconf.Operations, dsCache, notifyConfigCache, promClients, tdengineClients,
cconf.Operations, dsCache, notifyConfigCache, promClients,
redis, sso, ctx, metas, idents, targetCache, userCache, userGroupCache)
pushgwRouter := pushgwrt.New(config.HTTP, config.Pushgw, config.Alert, targetCache, busiGroupCache, idents, metas, writers, ctx)
go models.MigrateBg(ctx, pushgwRouter.Pushgw.BusiGroupLabelKey)
go func() {
if config.Center.MigrateBusiGroupLabel || models.CanMigrateBg(ctx) {
models.MigrateBg(ctx, pushgwRouter.Pushgw.BusiGroupLabelKey)
}
}()
r := httpx.GinEngine(config.Global.RunMode, config.HTTP, configCvalCache.PrintBodyPaths, configCvalCache.PrintAccessLog)
@@ -132,6 +146,11 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
httpClean := httpx.Init(config.HTTP, r)
fmt.Printf("please view n9e at http://%v:%v\n", config.Alert.Heartbeat.IP, config.HTTP.Port)
if isRootInit {
fmt.Println("username/password: root/root.2020")
}
return func() {
logxClean()
httpClean()

View File

@@ -113,6 +113,12 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
logger.Warning("delete builtin metrics fail ", err)
}
// delete records of type dashboard where uuid%1000 != 0 and uuid > 1000000000000000000
err = models.DB(ctx).Exec("delete from builtin_payloads where uuid%1000 != 0 and uuid > 1000000000000000000 and type = 'dashboard' and updated_by = 'system'").Error
if err != nil {
logger.Warning("delete builtin payloads fail ", err)
}
// alerts
files, err = file.FilesUnder(componentDir + "/alerts")
if err == nil && len(files) > 0 {
@@ -218,7 +224,8 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
}
if dashboard.UUID == 0 {
dashboard.UUID = time.Now().UnixNano()
time.Sleep(time.Microsecond)
dashboard.UUID = time.Now().UnixMicro()
// write the generated uuid back into the file
bs, err = json.MarshalIndent(dashboard, "", " ")
if err != nil {

View File

@@ -92,7 +92,6 @@ func (s *Set) updateMeta(items map[string]models.HostMeta) {
func (s *Set) updateTargets(m map[string]models.HostMeta) error {
if s.redis == nil {
logger.Warningf("redis is nil")
return nil
}

View File

@@ -24,7 +24,6 @@ import (
"github.com/ccfos/nightingale/v6/prom"
"github.com/ccfos/nightingale/v6/pushgw/idents"
"github.com/ccfos/nightingale/v6/storage"
"github.com/ccfos/nightingale/v6/tdengine"
"github.com/gin-gonic/gin"
"github.com/rakyll/statik/fs"
@@ -42,7 +41,6 @@ type Router struct {
DatasourceCache *memsto.DatasourceCacheType
NotifyConfigCache *memsto.NotifyConfigCacheType
PromClients *prom.PromClientMap
TdendgineClients *tdengine.TdengineClientMap
Redis storage.Redis
MetaSet *metas.Set
IdentSet *idents.Set
@@ -57,7 +55,7 @@ type Router struct {
func New(httpConfig httpx.Config, center cconf.Center, alert aconf.Alert, ibex conf.Ibex,
operations cconf.Operation, ds *memsto.DatasourceCacheType, ncc *memsto.NotifyConfigCacheType,
pc *prom.PromClientMap, tdendgineClients *tdengine.TdengineClientMap, redis storage.Redis,
pc *prom.PromClientMap, redis storage.Redis,
sso *sso.SsoClient, ctx *ctx.Context, metaSet *metas.Set, idents *idents.Set,
tc *memsto.TargetCacheType, uc *memsto.UserCacheType, ugc *memsto.UserGroupCacheType) *Router {
return &Router{
@@ -69,7 +67,6 @@ func New(httpConfig httpx.Config, center cconf.Center, alert aconf.Alert, ibex c
DatasourceCache: ds,
NotifyConfigCache: ncc,
PromClients: pc,
TdendgineClients: tdendgineClients,
Redis: redis,
MetaSet: metaSet,
IdentSet: idents,
@@ -184,26 +181,53 @@ func (rt *Router) Config(r *gin.Engine) {
pages.POST("/query-range-batch", rt.promBatchQueryRange)
pages.POST("/query-instant-batch", rt.promBatchQueryInstant)
pages.GET("/datasource/brief", rt.datasourceBriefs)
pages.POST("/datasource/query", rt.datasourceQuery)
pages.POST("/ds-query", rt.QueryData)
pages.POST("/logs-query", rt.QueryLog)
pages.POST("/logs-query", rt.QueryLogV2)
pages.POST("/tdengine-databases", rt.tdengineDatabases)
pages.POST("/tdengine-tables", rt.tdengineTables)
pages.POST("/tdengine-columns", rt.tdengineColumns)
pages.POST("/log-query-batch", rt.QueryLogBatch)
// database metadata endpoints
pages.POST("/db-databases", rt.ShowDatabases)
pages.POST("/db-tables", rt.ShowTables)
pages.POST("/db-desc-table", rt.DescribeTable)
// elasticsearch-specific endpoints
pages.POST("/indices", rt.auth(), rt.user(), rt.QueryIndices)
pages.POST("/es-variable", rt.auth(), rt.user(), rt.QueryESVariable)
pages.POST("/fields", rt.auth(), rt.user(), rt.QueryFields)
pages.POST("/log-query", rt.auth(), rt.user(), rt.QueryLog)
} else {
pages.Any("/proxy/:id/*url", rt.auth(), rt.dsProxy)
pages.POST("/query-range-batch", rt.auth(), rt.promBatchQueryRange)
pages.POST("/query-instant-batch", rt.auth(), rt.promBatchQueryInstant)
pages.GET("/datasource/brief", rt.auth(), rt.user(), rt.datasourceBriefs)
pages.POST("/datasource/query", rt.auth(), rt.user(), rt.datasourceQuery)
pages.POST("/ds-query", rt.auth(), rt.QueryData)
pages.POST("/logs-query", rt.auth(), rt.QueryLog)
pages.POST("/logs-query", rt.auth(), rt.QueryLogV2)
pages.POST("/tdengine-databases", rt.auth(), rt.tdengineDatabases)
pages.POST("/tdengine-tables", rt.auth(), rt.tdengineTables)
pages.POST("/tdengine-columns", rt.auth(), rt.tdengineColumns)
pages.POST("/log-query-batch", rt.auth(), rt.user(), rt.QueryLogBatch)
// database metadata endpoints
pages.POST("/db-databases", rt.auth(), rt.user(), rt.ShowDatabases)
pages.POST("/db-tables", rt.auth(), rt.user(), rt.ShowTables)
pages.POST("/db-desc-table", rt.auth(), rt.user(), rt.DescribeTable)
// elasticsearch-specific endpoints
pages.POST("/indices", rt.auth(), rt.user(), rt.QueryIndices)
pages.POST("/es-variable", rt.QueryESVariable)
pages.POST("/fields", rt.QueryFields)
pages.POST("/log-query", rt.QueryLog)
}
pages.GET("/sql-template", rt.QuerySqlTemplate)
@@ -278,6 +302,7 @@ func (rt *Router) Config(r *gin.Engine) {
pages.DELETE("/busi-group/:id/members", rt.auth(), rt.user(), rt.perm("/busi-groups/put"), rt.bgrw(), rt.busiGroupMemberDel)
pages.DELETE("/busi-group/:id", rt.auth(), rt.user(), rt.perm("/busi-groups/del"), rt.bgrw(), rt.busiGroupDel)
pages.GET("/busi-group/:id/perm/:perm", rt.auth(), rt.user(), rt.checkBusiGroupPerm)
pages.GET("/busi-groups/tags", rt.auth(), rt.user(), rt.busiGroupsGetTags)
pages.GET("/targets", rt.auth(), rt.user(), rt.targetGets)
pages.GET("/target/extra-meta", rt.auth(), rt.user(), rt.targetExtendInfoByIdent)
@@ -319,6 +344,11 @@ func (rt *Router) Config(r *gin.Engine) {
pages.GET("/share-charts", rt.chartShareGets)
pages.POST("/share-charts", rt.auth(), rt.chartShareAdd)
pages.POST("/dashboard-annotations", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.dashAnnotationAdd)
pages.GET("/dashboard-annotations", rt.dashAnnotationGets)
pages.PUT("/dashboard-annotation/:id", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.dashAnnotationPut)
pages.DELETE("/dashboard-annotation/:id", rt.auth(), rt.user(), rt.perm("/dashboards/del"), rt.dashAnnotationDel)
// pages.GET("/alert-rules/builtin/alerts-cates", rt.auth(), rt.user(), rt.builtinAlertCateGets)
// pages.GET("/alert-rules/builtin/list", rt.auth(), rt.user(), rt.builtinAlertRules)
pages.GET("/alert-rules/callbacks", rt.auth(), rt.user(), rt.alertRuleCallbacks)
@@ -476,6 +506,7 @@ func (rt *Router) Config(r *gin.Engine) {
pages.PUT("/builtin-payloads", rt.auth(), rt.user(), rt.perm("/built-in-components/put"), rt.builtinPayloadsPut)
pages.DELETE("/builtin-payloads", rt.auth(), rt.user(), rt.perm("/built-in-components/del"), rt.builtinPayloadsDel)
pages.GET("/builtin-payload", rt.auth(), rt.user(), rt.builtinPayloadsGetByUUIDOrID)
}
r.GET("/api/n9e/versions", func(c *gin.Context) {

View File

@@ -77,6 +77,11 @@ func (rt *Router) alertRuleGetsByGids(c *gin.Context) {
for i := 0; i < len(ars); i++ {
ars[i].FillNotifyGroups(rt.Ctx, cache)
ars[i].FillSeverities()
if len(ars[i].DatasourceQueries) != 0 {
ars[i].DatasourceIdsJson = rt.DatasourceCache.GetIDsByDsCateAndQueries(ars[i].Cate, ars[i].DatasourceQueries)
}
rids = append(rids, ars[i].Id)
names = append(names, ars[i].UpdateBy)
}
@@ -123,6 +128,10 @@ func (rt *Router) alertRulesGetByService(c *gin.Context) {
cache := make(map[int64]*models.UserGroup)
for i := 0; i < len(ars); i++ {
ars[i].FillNotifyGroups(rt.Ctx, cache)
if len(ars[i].DatasourceQueries) != 0 {
ars[i].DatasourceIdsJson = rt.DatasourceCache.GetIDsByDsCateAndQueries(ars[i].Cate, ars[i].DatasourceQueries)
}
}
}
ginx.NewRender(c).Data(ars, err)
@@ -157,6 +166,14 @@ func (rt *Router) alertRuleAddByImport(c *gin.Context) {
ginx.Bomb(http.StatusBadRequest, "input json is empty")
}
for i := range lst {
if len(lst[i].DatasourceQueries) == 0 {
lst[i].DatasourceQueries = []models.DatasourceQuery{
models.DataSourceQueryAll,
}
}
}
bgid := ginx.UrlParamInt64(c, "id")
reterr := rt.alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language"))
@@ -164,9 +181,9 @@ func (rt *Router) alertRuleAddByImport(c *gin.Context) {
}
type promRuleForm struct {
Payload string `json:"payload" binding:"required"`
DatasourceIds []int64 `json:"datasource_ids" binding:"required"`
Disabled int `json:"disabled" binding:"gte=0,lte=1"`
Payload string `json:"payload" binding:"required"`
DatasourceQueries []models.DatasourceQuery `json:"datasource_queries" binding:"required"`
Disabled int `json:"disabled" binding:"gte=0,lte=1"`
}
func (rt *Router) alertRuleAddByImportPromRule(c *gin.Context) {
@@ -185,7 +202,7 @@ func (rt *Router) alertRuleAddByImportPromRule(c *gin.Context) {
ginx.Bomb(http.StatusBadRequest, "input yaml is empty")
}
lst := models.DealPromGroup(pr.Groups, f.DatasourceIds, f.Disabled)
lst := models.DealPromGroup(pr.Groups, f.DatasourceQueries, f.Disabled)
username := c.MustGet("username").(string)
bgid := ginx.UrlParamInt64(c, "id")
ginx.NewRender(c).Data(rt.alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language")), nil)
@@ -398,6 +415,16 @@ func (rt *Router) alertRulePutFields(c *gin.Context) {
}
}
if f.Action == "datasource_change" {
// update the datasource queries
if datasourceQueries, has := f.Fields["datasource_queries"]; has {
bytes, err := json.Marshal(datasourceQueries)
ginx.Dangerous(err)
ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"datasource_queries": bytes}))
continue
}
}
for k, v := range f.Fields {
ginx.Dangerous(ar.UpdateColumn(rt.Ctx, k, v))
}
@@ -417,6 +444,10 @@ func (rt *Router) alertRuleGet(c *gin.Context) {
return
}
if len(ar.DatasourceQueries) != 0 {
ar.DatasourceIdsJson = rt.DatasourceCache.GetIDsByDsCateAndQueries(ar.Cate, ar.DatasourceQueries)
}
err = ar.FillNotifyGroups(rt.Ctx, make(map[int64]*models.UserGroup))
ginx.Dangerous(err)
@@ -623,7 +654,7 @@ func (rt *Router) cloneToMachine(c *gin.Context) {
newRule.CreateAt = now
newRule.RuleConfig = alertRules[i].RuleConfig
exist, err := models.AlertRuleExists(rt.Ctx, 0, newRule.GroupId, newRule.DatasourceIdsJson, newRule.Name)
exist, err := models.AlertRuleExists(rt.Ctx, 0, newRule.GroupId, newRule.Name)
if err != nil {
errMsg[f.IdentList[j]] = err.Error()
continue

View File

@@ -36,8 +36,9 @@ func (rt *Router) builtinComponentsAdd(c *gin.Context) {
func (rt *Router) builtinComponentsGets(c *gin.Context) {
query := ginx.QueryStr(c, "query", "")
disabled := ginx.QueryInt(c, "disabled", -1)
bc, err := models.BuiltinComponentGets(rt.Ctx, query)
bc, err := models.BuiltinComponentGets(rt.Ctx, query, disabled)
ginx.Dangerous(err)
ginx.NewRender(c).Data(bc, nil)

View File

@@ -43,7 +43,7 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
for _, rule := range alertRules {
if rule.UUID == 0 {
rule.UUID = time.Now().UnixNano()
rule.UUID = time.Now().UnixMicro()
}
contentBytes, err := json.Marshal(rule)
@@ -78,7 +78,13 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
}
if alertRule.UUID == 0 {
alertRule.UUID = time.Now().UnixNano()
alertRule.UUID = time.Now().UnixMicro()
}
contentBytes, err := json.Marshal(alertRule)
if err != nil {
reterr[alertRule.Name] = err.Error()
continue
}
bp := models.BuiltinPayload{
@@ -88,7 +94,7 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
Name: alertRule.Name,
Tags: alertRule.AppendTags,
UUID: alertRule.UUID,
Content: lst[i].Content,
Content: string(contentBytes),
CreatedBy: username,
UpdatedBy: username,
}
@@ -106,7 +112,7 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
for _, dashboard := range dashboards {
if dashboard.UUID == 0 {
dashboard.UUID = time.Now().UnixNano()
dashboard.UUID = time.Now().UnixMicro()
}
contentBytes, err := json.Marshal(dashboard)
@@ -141,7 +147,13 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
}
if dashboard.UUID == 0 {
dashboard.UUID = time.Now().UnixNano()
dashboard.UUID = time.Now().UnixMicro()
}
contentBytes, err := json.Marshal(dashboard)
if err != nil {
reterr[dashboard.Name] = err.Error()
continue
}
bp := models.BuiltinPayload{
@@ -151,7 +163,7 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
Name: dashboard.Name,
Tags: dashboard.Tags,
UUID: dashboard.UUID,
Content: lst[i].Content,
Content: string(contentBytes),
CreatedBy: username,
UpdatedBy: username,
}

View File

@@ -140,3 +140,12 @@ func (rt *Router) busiGroupGet(c *gin.Context) {
ginx.Dangerous(bg.FillUserGroups(rt.Ctx))
ginx.NewRender(c).Data(bg, nil)
}
func (rt *Router) busiGroupsGetTags(c *gin.Context) {
bgids := str.IdsInt64(ginx.QueryStr(c, "gids", ""), ",")
targetIdents, err := models.TargetIndentsGetByBgids(rt.Ctx, bgids)
ginx.Dangerous(err)
tags, err := models.TargetGetTags(rt.Ctx, targetIdents, true, "busigroup")
ginx.Dangerous(err)
ginx.NewRender(c).Data(tags, nil)
}

View File

@@ -0,0 +1,99 @@
package router
import (
"fmt"
"net/http"
"time"
"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
)
func checkAnnotationPermission(c *gin.Context, ctx *ctx.Context, dashboardId int64) {
dashboard, err := models.BoardGetByID(ctx, dashboardId)
if err != nil {
ginx.Bomb(http.StatusInternalServerError, "failed to get dashboard: %v", err)
}
if dashboard == nil {
ginx.Bomb(http.StatusNotFound, "dashboard not found")
}
bg := BusiGroup(ctx, dashboard.GroupId)
me := c.MustGet("user").(*models.User)
can, err := me.CanDoBusiGroup(ctx, bg, "rw")
ginx.Dangerous(err)
if !can {
ginx.Bomb(http.StatusForbidden, "forbidden")
}
}
func (rt *Router) dashAnnotationAdd(c *gin.Context) {
var f models.DashAnnotation
ginx.BindJSON(c, &f)
username := c.MustGet("username").(string)
now := time.Now().Unix()
checkAnnotationPermission(c, rt.Ctx, f.DashboardId)
f.CreateBy = username
f.CreateAt = now
f.UpdateBy = username
f.UpdateAt = now
ginx.NewRender(c).Data(f.Id, f.Add(rt.Ctx))
}
func (rt *Router) dashAnnotationGets(c *gin.Context) {
dashboardId := ginx.QueryInt64(c, "dashboard_id")
from := ginx.QueryInt64(c, "from")
to := ginx.QueryInt64(c, "to")
limit := ginx.QueryInt(c, "limit", 100)
lst, err := models.DashAnnotationGets(rt.Ctx, dashboardId, from, to, limit)
ginx.NewRender(c).Data(lst, err)
}
func (rt *Router) dashAnnotationPut(c *gin.Context) {
var f models.DashAnnotation
ginx.BindJSON(c, &f)
id := ginx.UrlParamInt64(c, "id")
annotation, err := getAnnotationById(rt.Ctx, id)
ginx.Dangerous(err)
checkAnnotationPermission(c, rt.Ctx, annotation.DashboardId)
f.Id = id
f.UpdateAt = time.Now().Unix()
f.UpdateBy = c.MustGet("username").(string)
ginx.NewRender(c).Message(f.Update(rt.Ctx))
}
func (rt *Router) dashAnnotationDel(c *gin.Context) {
id := ginx.UrlParamInt64(c, "id")
annotation, err := getAnnotationById(rt.Ctx, id)
ginx.Dangerous(err)
checkAnnotationPermission(c, rt.Ctx, annotation.DashboardId)
ginx.NewRender(c).Message(models.DashAnnotationDel(rt.Ctx, id))
}
// shared helper for fetching an annotation by id
func getAnnotationById(ctx *ctx.Context, id int64) (*models.DashAnnotation, error) {
annotation, err := models.DashAnnotationGet(ctx, "id=?", id)
if err != nil {
return nil, err
}
if annotation == nil {
return nil, fmt.Errorf("annotation not found")
}
return annotation, nil
}
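For reference, a hypothetical client call against the new annotation route. The base path assumes the pages group is mounted under /api/n9e (as the versions route below suggests), the address and port are illustrative, and every payload field other than dashboard_id is omitted because the DashAnnotation model is not shown in this diff:

body := bytes.NewReader([]byte(`{"dashboard_id": 12}`))
req, _ := http.NewRequest(http.MethodPost, "http://127.0.0.1:17000/api/n9e/dashboard-annotations", body)
req.Header.Set("Content-Type", "application/json")
// rt.auth() and rt.perm("/dashboards/put") guard this route, so a valid session is required
resp, err := http.DefaultClient.Do(req)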

View File

@@ -93,10 +93,12 @@ func (rt *Router) datasourceUpsert(c *gin.Context) {
var count int64
if !req.ForceSave {
err = DatasourceCheck(req)
if err != nil {
Dangerous(c, err)
return
if req.PluginType == models.PROMETHEUS || req.PluginType == models.LOKI || req.PluginType == models.TDENGINE {
err = DatasourceCheck(req)
if err != nil {
Dangerous(c, err)
return
}
}
}
@@ -122,12 +124,14 @@ func (rt *Router) datasourceUpsert(c *gin.Context) {
}
func DatasourceCheck(ds models.Datasource) error {
if ds.HTTPJson.Url == "" {
return fmt.Errorf("url is empty")
}
if ds.PluginType == models.PROMETHEUS || ds.PluginType == models.LOKI || ds.PluginType == models.TDENGINE {
if ds.HTTPJson.Url == "" {
return fmt.Errorf("url is empty")
}
if !strings.HasPrefix(ds.HTTPJson.Url, "http") {
return fmt.Errorf("url must start with http or https")
if !strings.HasPrefix(ds.HTTPJson.Url, "http") {
return fmt.Errorf("url must start with http or https")
}
}
client := &http.Client{
@@ -138,11 +142,11 @@ func DatasourceCheck(ds models.Datasource) error {
},
}
fullURL := ds.HTTPJson.Url
req, err := http.NewRequest("GET", fullURL, nil)
var fullURL string
req, err := ds.HTTPJson.NewReq(&fullURL)
if err != nil {
logger.Errorf("Error creating request: %v", err)
return fmt.Errorf("request url:%s failed", fullURL)
return fmt.Errorf("request urls:%v failed", ds.HTTPJson.GetUrls())
}
if ds.PluginType == models.PROMETHEUS {
@@ -249,3 +253,37 @@ func (rt *Router) getDatasourceIds(c *gin.Context) {
ginx.NewRender(c).Data(datasourceIds, err)
}
type datasourceQueryForm struct {
Cate string `json:"datasource_cate"`
DatasourceQueries []models.DatasourceQuery `json:"datasource_queries"`
}
type datasourceQueryResp struct {
ID int64 `json:"id"`
Name string `json:"name"`
}
func (rt *Router) datasourceQuery(c *gin.Context) {
var dsf datasourceQueryForm
ginx.BindJSON(c, &dsf)
datasources, err := models.GetDatasourcesGetsByTypes(rt.Ctx, []string{dsf.Cate})
ginx.Dangerous(err)
nameToID := make(map[string]int64)
IDToName := make(map[int64]string)
for _, ds := range datasources {
nameToID[ds.Name] = ds.Id
IDToName[ds.Id] = ds.Name
}
ids := models.GetDatasourceIDsByDatasourceQueries(dsf.DatasourceQueries, IDToName, nameToID)
var req []datasourceQueryResp
for _, id := range ids {
req = append(req, datasourceQueryResp{
ID: id,
Name: IDToName[id],
})
}
ginx.NewRender(c).Data(req, err)
}

View File

@@ -0,0 +1,101 @@
package router
import (
"context"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/dskit/types"
"github.com/ccfos/nightingale/v6/models"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/logger"
)
func (rt *Router) ShowDatabases(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
var databases []string
var err error
type DatabaseShower interface {
ShowDatabases(context.Context) ([]string, error)
}
switch plug.(type) {
case DatabaseShower:
databases, err = plug.(DatabaseShower).ShowDatabases(c.Request.Context())
ginx.Dangerous(err)
default:
ginx.Bomb(200, "datasource not exists")
}
if len(databases) == 0 {
databases = make([]string, 0)
}
ginx.NewRender(c).Data(databases, nil)
}
func (rt *Router) ShowTables(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
// only the first query argument is used
tables := make([]string, 0)
var err error
type TableShower interface {
ShowTables(ctx context.Context, database string) ([]string, error)
}
switch plug.(type) {
case TableShower:
if len(f.Querys) > 0 {
database, ok := f.Querys[0].(string)
if ok {
tables, err = plug.(TableShower).ShowTables(c.Request.Context(), database)
}
}
default:
ginx.Bomb(200, "datasource not exists")
}
ginx.NewRender(c).Data(tables, err)
}
func (rt *Router) DescribeTable(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
// only the first query argument is used
columns := make([]*types.ColumnProperty, 0)
var err error
type TableDescriber interface {
DescribeTable(context.Context, interface{}) ([]*types.ColumnProperty, error)
}
switch plug.(type) {
case TableDescriber:
client := plug.(TableDescriber)
if len(f.Querys) > 0 {
columns, err = client.DescribeTable(c.Request.Context(), f.Querys[0])
}
default:
ginx.Bomb(200, "datasource not exists")
}
ginx.NewRender(c).Data(columns, err)
}
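All three handlers share the same capability-detection pattern: the cached plugin is probed with a locally declared interface in a type switch, and datasources that lack the capability fall through to the error branch. A self-contained sketch of the pattern (fakePlugin and the literal values are illustrative, not from the diff):

package main

import (
	"context"
	"fmt"
)

// DatabaseShower mirrors the optional capability probed by ShowDatabases above.
type DatabaseShower interface {
	ShowDatabases(context.Context) ([]string, error)
}

// fakePlugin stands in for a datasource plugin held by dscache.
type fakePlugin struct{}

func (fakePlugin) ShowDatabases(context.Context) ([]string, error) {
	return []string{"system", "logs"}, nil
}

func main() {
	var plug interface{} = fakePlugin{}
	switch p := plug.(type) {
	case DatabaseShower: // the plugin supports the capability
		dbs, err := p.ShowDatabases(context.Background())
		fmt.Println(dbs, err)
	default: // the router maps this fall-through to an error response
		fmt.Printf("%T does not support ShowDatabases\n", p)
	}
}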

View File

@@ -0,0 +1,77 @@
package router
import (
"github.com/ccfos/nightingale/v6/datasource/es"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/logger"
)
type IndexReq struct {
Cate string `json:"cate"`
DatasourceId int64 `json:"datasource_id"`
Index string `json:"index"`
}
type FieldValueReq struct {
Cate string `json:"cate"`
DatasourceId int64 `json:"datasource_id"`
Index string `json:"index"`
Query FieldObj `json:"query"`
}
type FieldObj struct {
Find string `json:"find"`
Field string `json:"field"`
Query string `json:"query"`
}
func (rt *Router) QueryIndices(c *gin.Context) {
var f IndexReq
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
indices, err := plug.(*es.Elasticsearch).QueryIndices()
ginx.Dangerous(err)
ginx.NewRender(c).Data(indices, nil)
}
func (rt *Router) QueryFields(c *gin.Context) {
var f IndexReq
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
fields, err := plug.(*es.Elasticsearch).QueryFields([]string{f.Index})
ginx.Dangerous(err)
ginx.NewRender(c).Data(fields, nil)
}
func (rt *Router) QueryESVariable(c *gin.Context) {
var f FieldValueReq
ginx.BindJSON(c, &f)
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
fields, err := plug.(*es.Elasticsearch).QueryFieldValue([]string{f.Index}, f.Query.Field, f.Query.Query)
ginx.Dangerous(err)
ginx.NewRender(c).Data(fields, nil)
}

View File

@@ -100,7 +100,7 @@ func HandleHeartbeat(c *gin.Context, ctx *ctx.Context, engineName string, metaSe
groupIds = append(groupIds, groupId)
}
err := models.TargetOverrideBgids(ctx, []string{target.Ident}, groupIds)
err := models.TargetOverrideBgids(ctx, []string{target.Ident}, groupIds, nil)
if err != nil {
logger.Warningf("update target:%s group ids failed, err: %v", target.Ident, err)
}
@@ -113,7 +113,7 @@ func HandleHeartbeat(c *gin.Context, ctx *ctx.Context, engineName string, metaSe
}
if !target.MatchGroupId(groupId) {
err := models.TargetBindBgids(ctx, []string{target.Ident}, []int64{groupId})
err := models.TargetBindBgids(ctx, []string{target.Ident}, []int64{groupId}, nil)
if err != nil {
logger.Warningf("update target:%s group ids failed, err: %v", target.Ident, err)
}

View File

@@ -130,9 +130,9 @@ func (rt *Router) User() gin.HandlerFunc {
func (rt *Router) user() gin.HandlerFunc {
return func(c *gin.Context) {
userid := c.MustGet("userid").(int64)
username := c.MustGet("username").(string)
user, err := models.UserGetById(rt.Ctx, userid)
user, err := models.UserGetByUsername(rt.Ctx, username)
if err != nil {
ginx.Bomb(http.StatusUnauthorized, "unauthorized")
}

View File

@@ -35,11 +35,18 @@ type Record struct {
// notificationRecordAdd
func (rt *Router) notificationRecordAdd(c *gin.Context) {
var req models.NotificaitonRecord
var req []*models.NotificaitonRecord
ginx.BindJSON(c, &req)
err := req.Add(rt.Ctx)
err := models.DB(rt.Ctx).CreateInBatches(req, 100).Error
var ids []int64
if err == nil {
ids = make([]int64, len(req))
for i, noti := range req {
ids[i] = noti.Id
}
}
ginx.NewRender(c).Data(req.Id, err)
ginx.NewRender(c).Data(ids, err)
}
func (rt *Router) notificationRecordList(c *gin.Context) {

View File

@@ -161,7 +161,11 @@ func (rt *Router) notifyTplPreview(c *gin.Context) {
func (rt *Router) notifyTplAdd(c *gin.Context) {
var f models.NotifyTpl
ginx.BindJSON(c, &f)
f.Channel = strings.TrimSpace(f.Channel)
user := c.MustGet("user").(*models.User)
f.CreateBy = user.Username
f.Channel = strings.TrimSpace(f.Channel)
ginx.Dangerous(templateValidate(f))
count, err := models.Count(models.DB(rt.Ctx).Model(&models.NotifyTpl{}).Where("channel = ? or name = ?", f.Channel, f.Name))
@@ -169,6 +173,8 @@ func (rt *Router) notifyTplAdd(c *gin.Context) {
if count != 0 {
ginx.Bomb(200, "Refuse to create duplicate channel(unique)")
}
f.CreateAt = time.Now().Unix()
ginx.NewRender(c).Message(f.Create(rt.Ctx))
}

View File

@@ -7,7 +7,6 @@ import (
"net"
"net/http"
"net/http/httputil"
"net/url"
"strings"
"sync"
"time"
@@ -112,9 +111,9 @@ func (rt *Router) dsProxy(c *gin.Context) {
return
}
target, err := url.Parse(ds.HTTPJson.Url)
target, err := ds.HTTPJson.ParseUrl()
if err != nil {
c.String(http.StatusInternalServerError, "invalid url: %s", ds.HTTPJson.Url)
c.String(http.StatusInternalServerError, "invalid urls: %s", ds.HTTPJson.GetUrls())
return
}

View File

@@ -0,0 +1,181 @@
package router
import (
"fmt"
"sort"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/models"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/logger"
)
func CheckDsPerm(c *gin.Context, dsId int64, cate string, q interface{}) bool {
// TODO: decide, based on cate, whether a permission check is required
return true
}
type QueryFrom struct {
Queries []Query `json:"queries"`
Exps []Exp `json:"exps"`
}
type Query struct {
Ref string `json:"ref"`
Did int64 `json:"ds_id"`
DsCate string `json:"ds_cate"`
Query interface{} `json:"query"`
}
type Exp struct {
Exp string `json:"exp"`
Ref string `json:"ref"`
}
type LogResp struct {
Total int64 `json:"total"`
List []interface{} `json:"list"`
}
func (rt *Router) QueryLogBatch(c *gin.Context) {
var f QueryFrom
ginx.BindJSON(c, &f)
var resp LogResp
var errMsg string
for _, q := range f.Queries {
if !rt.Center.AnonymousAccess.PromQuerier && !CheckDsPerm(c, q.Did, q.DsCate, q) {
ginx.Bomb(200, "no permission")
}
plug, exists := dscache.DsCache.Get(q.DsCate, q.Did)
if !exists {
logger.Warningf("cluster:%d not exists query:%+v", q.Did, q)
ginx.Bomb(200, "cluster not exists")
}
data, total, err := plug.QueryLog(c.Request.Context(), q.Query)
if err != nil {
errMsg += fmt.Sprintf("query data error: %v query:%v\n ", err, q)
logger.Warningf("query data error: %v query:%v", err, q)
continue
}
m := make(map[string]interface{})
m["ref"] = q.Ref
m["ds_id"] = q.Did
m["ds_cate"] = q.DsCate
m["data"] = data
resp.List = append(resp.List, m)
resp.Total += total
}
if errMsg != "" || len(resp.List) == 0 {
ginx.Bomb(200, errMsg)
}
ginx.NewRender(c).Data(resp, nil)
}
func (rt *Router) QueryData(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
var resp []models.DataResp
var err error
for _, q := range f.Querys {
if !rt.Center.AnonymousAccess.PromQuerier && !CheckDsPerm(c, f.DatasourceId, f.Cate, q) {
ginx.Bomb(403, "no permission")
}
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
var datas []models.DataResp
datas, err = plug.QueryData(c.Request.Context(), q)
if err != nil {
logger.Warningf("query data error: req:%+v err:%v", q, err)
ginx.Bomb(200, "err:%v", err)
}
logger.Debugf("query data: req:%+v resp:%+v", q, datas)
resp = append(resp, datas...)
}
// unified handling for the API:
// sort by .Metric
// so curves with the same legend keep the same color in dashboards
if len(resp) > 1 {
sort.Slice(resp, func(i, j int) bool {
if resp[i].Metric != nil && resp[j].Metric != nil {
return resp[i].Metric.String() < resp[j].Metric.String()
}
return false
})
}
ginx.NewRender(c).Data(resp, err)
}
func (rt *Router) QueryLogV2(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
var resp LogResp
var errMsg string
for _, q := range f.Querys {
if !rt.Center.AnonymousAccess.PromQuerier && !CheckDsPerm(c, f.DatasourceId, f.Cate, q) {
ginx.Bomb(200, "no permission")
}
plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists query:%+v", f.DatasourceId, f)
ginx.Bomb(200, "cluster not exists")
}
data, total, err := plug.QueryLog(c.Request.Context(), q)
if err != nil {
errMsg += fmt.Sprintf("query data error: %v query:%v\n ", err, q)
logger.Warningf("query data error: %v query:%v", err, q)
continue
}
resp.List = append(resp.List, data...)
resp.Total += total
}
if errMsg != "" || len(resp.List) == 0 {
ginx.Bomb(200, errMsg)
}
ginx.NewRender(c).Data(resp, nil)
}
func (rt *Router) QueryLog(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
var resp []interface{}
for _, q := range f.Querys {
if !rt.Center.AnonymousAccess.PromQuerier && !CheckDsPerm(c, f.DatasourceId, f.Cate, q) {
ginx.Bomb(200, "no permission")
}
plug, exists := dscache.DsCache.Get("elasticsearch", f.DatasourceId)
if !exists {
logger.Warningf("cluster:%d not exists", f.DatasourceId)
ginx.Bomb(200, "cluster not exists")
}
data, _, err := plug.QueryLog(c.Request.Context(), q)
if err != nil {
logger.Warningf("query data error: %v", err)
ginx.Bomb(200, "err:%v", err)
}
resp = append(resp, data...)
}
ginx.NewRender(c).Data(resp, nil)
}
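QueryData above fans each entry of Querys out to the cached datasource plugin and merges the results; the final sort by .Metric keeps legend colors stable across refreshes. A sketch of building the request parameter in Go, using only the QueryParam fields visible in this diff (the inner query shape depends on the datasource, so the map is illustrative):

param := models.QueryParam{
	Cate:         "ck",
	DatasourceId: 1,
	Querys: []interface{}{
		map[string]interface{}{"ref": "A", "sql": "SELECT ..."},
	},
}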

View File

@@ -3,8 +3,6 @@ package router
import (
"encoding/json"
"net/http"
"strconv"
"strings"
"time"
"github.com/ccfos/nightingale/v6/models"
@@ -74,6 +72,14 @@ func (rt *Router) recordingRuleAddByFE(c *gin.Context) {
ginx.Bomb(http.StatusBadRequest, "input json is empty")
}
for i := range lst {
if len(lst[i].DatasourceQueries) == 0 {
lst[i].DatasourceQueries = []models.DatasourceQuery{
models.DataSourceQueryAll,
}
}
}
bgid := ginx.UrlParamInt64(c, "id")
reterr := make(map[string]string)
for i := 0; i < count; i++ {
@@ -137,23 +143,10 @@ func (rt *Router) recordingRulePutFields(c *gin.Context) {
f.Fields["update_by"] = c.MustGet("username").(string)
f.Fields["update_at"] = time.Now().Unix()
if _, ok := f.Fields["datasource_ids"]; ok {
// datasource_ids = "1 2 3"
idsStr := strings.Fields(f.Fields["datasource_ids"].(string))
ids := make([]int64, 0)
for _, idStr := range idsStr {
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil {
ginx.Bomb(http.StatusBadRequest, "datasource_ids error")
}
ids = append(ids, id)
}
bs, err := json.Marshal(ids)
if err != nil {
ginx.Bomb(http.StatusBadRequest, "datasource_ids error")
}
f.Fields["datasource_ids"] = string(bs)
if datasourceQueries, ok := f.Fields["datasource_queries"]; ok {
bytes, err := json.Marshal(datasourceQueries)
ginx.Dangerous(err)
f.Fields["datasource_queries"] = string(bytes)
}
for i := 0; i < len(f.Ids); i++ {

View File

@@ -9,6 +9,7 @@ import (
"time"
"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/ccfos/nightingale/v6/storage"
"github.com/gin-gonic/gin"
@@ -169,7 +170,7 @@ func (rt *Router) targetGetTags(c *gin.Context) {
idents := ginx.QueryStr(c, "idents", "")
idents = strings.ReplaceAll(idents, ",", " ")
ignoreHostTag := ginx.QueryBool(c, "ignore_host_tag", false)
lst, err := models.TargetGetTags(rt.Ctx, strings.Fields(idents), ignoreHostTag)
lst, err := models.TargetGetTags(rt.Ctx, strings.Fields(idents), ignoreHostTag, "")
ginx.NewRender(c).Data(lst, err)
}
@@ -272,11 +273,9 @@ func (rt *Router) validateTags(tags []string) error {
}
func (rt *Router) addTagsToTarget(target *models.Target, tags []string) error {
hostTagsMap := target.GetHostTagsMap()
for _, tag := range tags {
tagKey := strings.Split(tag, "=")[0]
if _, ok := hostTagsMap[tagKey]; ok ||
strings.Contains(target.Tags, tagKey+"=") {
if _, exist := target.TagsMap[tagKey]; exist {
return fmt.Errorf("duplicate tagkey(%s)", tagKey)
}
}
@@ -397,9 +396,26 @@ type targetBgidsForm struct {
Idents []string `json:"idents" binding:"required_without=HostIps"`
HostIps []string `json:"host_ips" binding:"required_without=Idents"`
Bgids []int64 `json:"bgids"`
Tags []string `json:"tags"`
Action string `json:"action"` // add del reset
}
func haveNeverGroupedIdent(ctx *ctx.Context, idents []string) (bool, error) {
for _, ident := range idents {
bgids, err := models.TargetGroupIdsGetByIdent(ctx, ident)
if err != nil {
return false, err
}
if len(bgids) <= 0 {
return true, nil
}
}
return false, nil
}
func (rt *Router) targetBindBgids(c *gin.Context) {
var f targetBgidsForm
var err error
@@ -442,21 +458,25 @@ func (rt *Router) targetBindBgids(c *gin.Context) {
ginx.Bomb(http.StatusForbidden, "No permission. You are not admin of BG(%s)", bg.Name)
}
}
isNeverGrouped, checkErr := haveNeverGroupedIdent(rt.Ctx, f.Idents)
ginx.Dangerous(checkErr)
can, err := user.CheckPerm(rt.Ctx, "/targets/bind")
ginx.Dangerous(err)
if !can {
ginx.Bomb(http.StatusForbidden, "No permission. Only admin can assign BG")
if isNeverGrouped {
can, err := user.CheckPerm(rt.Ctx, "/targets/bind")
ginx.Dangerous(err)
if !can {
ginx.Bomb(http.StatusForbidden, "No permission. Only admin can assign BG")
}
}
}
switch f.Action {
case "add":
ginx.NewRender(c).Data(failedResults, models.TargetBindBgids(rt.Ctx, f.Idents, f.Bgids))
ginx.NewRender(c).Data(failedResults, models.TargetBindBgids(rt.Ctx, f.Idents, f.Bgids, f.Tags))
case "del":
ginx.NewRender(c).Data(failedResults, models.TargetUnbindBgids(rt.Ctx, f.Idents, f.Bgids))
case "reset":
ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, f.Bgids))
ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, f.Bgids, f.Tags))
default:
ginx.Bomb(http.StatusBadRequest, "invalid action")
}
@@ -478,7 +498,7 @@ func (rt *Router) targetUpdateBgidByService(c *gin.Context) {
ginx.Bomb(http.StatusBadRequest, err.Error())
}
ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, []int64{f.Bgid}))
ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, []int64{f.Bgid}, nil))
}
type identsForm struct {

View File

@@ -1,14 +1,14 @@
package router
import (
"net/http"
"fmt"
"github.com/ccfos/nightingale/v6/center/cconf"
"github.com/ccfos/nightingale/v6/datasource/tdengine"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/models"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/logger"
"net/http"
)
type databasesQueryForm struct {
@@ -20,13 +20,13 @@ func (rt *Router) tdengineDatabases(c *gin.Context) {
var f databasesQueryForm
ginx.BindJSON(c, &f)
tdClient := rt.TdendgineClients.GetCli(f.DatasourceId)
if tdClient == nil {
datasource, hit := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if _, ok := datasource.(*tdengine.TDengine); !hit || !ok {
ginx.NewRender(c, http.StatusNotFound).Message("No such datasource")
return
}
databases, err := tdClient.GetDatabases()
databases, err := datasource.(*tdengine.TDengine).ShowDatabases(rt.Ctx.Ctx)
ginx.NewRender(c).Data(databases, err)
}
@@ -37,18 +37,29 @@ type tablesQueryForm struct {
IsStable bool `json:"is_stable"`
}
type Column struct {
Name string `json:"name"`
Type string `json:"type"`
Size int `json:"size"`
}
// get tdengine tables
func (rt *Router) tdengineTables(c *gin.Context) {
var f tablesQueryForm
ginx.BindJSON(c, &f)
tdClient := rt.TdendgineClients.GetCli(f.DatasourceId)
if tdClient == nil {
datasource, hit := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if _, ok := datasource.(*tdengine.TDengine); !hit || !ok {
ginx.NewRender(c, http.StatusNotFound).Message("No such datasource")
return
}
tables, err := tdClient.GetTables(f.Database, f.IsStable)
database := fmt.Sprintf("%s.tables", f.Database)
if f.IsStable {
database = fmt.Sprintf("%s.stables", f.Database)
}
tables, err := datasource.(*tdengine.TDengine).ShowTables(rt.Ctx.Ctx, database)
ginx.NewRender(c).Data(tables, err)
}
@@ -59,50 +70,31 @@ type columnsQueryForm struct {
Table string `json:"table"`
}
// get tdengine columns
func (rt *Router) tdengineColumns(c *gin.Context) {
var f columnsQueryForm
ginx.BindJSON(c, &f)
tdClient := rt.TdendgineClients.GetCli(f.DatasourceId)
if tdClient == nil {
datasource, hit := dscache.DsCache.Get(f.Cate, f.DatasourceId)
if _, ok := datasource.(*tdengine.TDengine); !hit || !ok {
ginx.NewRender(c, http.StatusNotFound).Message("No such datasource")
return
}
columns, err := tdClient.GetColumns(f.Database, f.Table)
ginx.NewRender(c).Data(columns, err)
}
func (rt *Router) QueryData(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
var resp []models.DataResp
var err error
tdClient := rt.TdendgineClients.GetCli(f.DatasourceId)
for _, q := range f.Querys {
datas, err := tdClient.Query(q)
ginx.Dangerous(err)
resp = append(resp, datas...)
query := map[string]string{
"database": f.Database,
"table": f.Table,
}
ginx.NewRender(c).Data(resp, err)
}
func (rt *Router) QueryLog(c *gin.Context) {
var f models.QueryParam
ginx.BindJSON(c, &f)
tdClient := rt.TdendgineClients.GetCli(f.DatasourceId)
if len(f.Querys) == 0 {
ginx.Bomb(200, "querys is empty")
return
columns, err := datasource.(*tdengine.TDengine).DescribeTable(rt.Ctx.Ctx, query)
// align with the frontend; the TDengine data-query interfaces can be unified later
tdColumns := make([]Column, len(columns))
for i, column := range columns {
tdColumns[i] = Column{
Name: column.Field,
Type: column.Type,
}
}
data, err := tdClient.QueryLog(f.Querys[0])
logger.Debugf("tdengine query:%s result: %+v", f.Querys[0], data)
ginx.NewRender(c).Data(data, err)
ginx.NewRender(c).Data(tdColumns, err)
}
// query sql template

View File

@@ -33,7 +33,7 @@ type ClusterOptions struct {
MaxIdleConnsPerHost int
}
func Parse(fpath string, configPtr interface{}) error {
func Parse(fpath string, configPtr *Config) error {
var (
tBuf []byte
)

View File

@@ -5,8 +5,11 @@ import (
"errors"
"fmt"
"github.com/ccfos/nightingale/v6/dscache"
"github.com/ccfos/nightingale/v6/alert"
"github.com/ccfos/nightingale/v6/alert/astats"
"github.com/ccfos/nightingale/v6/alert/dispatch"
"github.com/ccfos/nightingale/v6/alert/process"
alertrt "github.com/ccfos/nightingale/v6/alert/router"
"github.com/ccfos/nightingale/v6/center/metas"
@@ -16,13 +19,12 @@ import (
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/ccfos/nightingale/v6/pkg/httpx"
"github.com/ccfos/nightingale/v6/pkg/logx"
"github.com/ccfos/nightingale/v6/pkg/macros"
"github.com/ccfos/nightingale/v6/prom"
"github.com/ccfos/nightingale/v6/pushgw/idents"
pushgwrt "github.com/ccfos/nightingale/v6/pushgw/router"
"github.com/ccfos/nightingale/v6/pushgw/writer"
"github.com/ccfos/nightingale/v6/storage"
"github.com/ccfos/nightingale/v6/tdengine"
"github.com/flashcatcloud/ibex/src/cmd/ibex"
)
@@ -60,6 +62,8 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
r := httpx.GinEngine(config.Global.RunMode, config.HTTP, configCvalCache.PrintBodyPaths, configCvalCache.PrintAccessLog)
pushgwRouter.Config(r)
macros.RegisterMacro(macros.MacroInVain)
dscache.Init(ctx, false)
if !config.Alert.Disable {
configCache := memsto.NewConfigCache(ctx, syncStats, nil, "")
@@ -73,11 +77,13 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {
taskTplsCache := memsto.NewTaskTplCache(ctx)
promClients := prom.NewPromClient(ctx)
tdengineClients := tdengine.NewTdengineClient(ctx, config.Alert.Heartbeat)
dispatch.InitRegisterQueryFunc(promClients)
externalProcessors := process.NewExternalProcessors()
alert.Start(config.Alert, config.Pushgw, syncStats, alertStats, externalProcessors, targetCache, busiGroupCache, alertMuteCache,
alertRuleCache, notifyConfigCache, taskTplsCache, dsCache, ctx, promClients, tdengineClients, userCache, userGroupCache)
alertRuleCache, notifyConfigCache, taskTplsCache, dsCache, ctx, promClients, userCache, userGroupCache)
alertrtRouter := alertrt.New(config.HTTP, config.Alert, alertMuteCache, targetCache, busiGroupCache, alertStats, ctx, externalProcessors)

datasource/ck/clickhouse.go
View File

@@ -0,0 +1,241 @@
package ck
import (
"context"
"fmt"
"strings"
"github.com/ccfos/nightingale/v6/datasource"
ck "github.com/ccfos/nightingale/v6/dskit/clickhouse"
"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/macros"
"github.com/mitchellh/mapstructure"
"github.com/toolkits/pkg/logger"
)
const (
CKType = "ck"
TimeFieldFormatEpochMilli = "epoch_millis"
TimeFieldFormatEpochSecond = "epoch_second"
DefaultLimit = 500
)
var (
ckPrivBanned = []string{
"INSERT",
"CREATE",
"DROP",
"DELETE",
"UPDATE",
"ALL",
}
ckBannedOp = map[string]struct{}{
"CREATE": {},
"INSERT": {},
"ALTER": {},
"REVOKE": {},
"DROP": {},
"RENAME": {},
"ATTACH": {},
"DETACH": {},
"OPTIMIZE": {},
"TRUNCATE": {},
"SET": {},
}
)
func init() {
datasource.RegisterDatasource(CKType, new(Clickhouse))
}
type CKShard struct {
Addr string `json:"ck.addr" mapstructure:"ck.addr"`
User string `json:"ck.user" mapstructure:"ck.user"`
Password string `json:"ck.password" mapstructure:"ck.password"`
Database string `json:"ck.db" mapstructure:"ck.db"`
IsEncrypted bool `json:"ck.is_encrypt" mapstructure:"ck.is_encrypt"`
}
type QueryParam struct {
Limit int `json:"limit" mapstructure:"limit"`
Sql string `json:"sql" mapstructure:"sql"`
Ref string `json:"ref" mapstructure:"ref"`
From int64 `json:"from" mapstructure:"from"`
To int64 `json:"to" mapstructure:"to"`
TimeField string `json:"time_field" mapstructure:"time_field"`
TimeFormat string `json:"time_format" mapstructure:"time_format"`
Keys datasource.Keys `json:"keys" mapstructure:"keys"`
Database string `json:"database" mapstructure:"database"`
Table string `json:"table" mapstructure:"table"`
}
type Clickhouse struct {
ck.Clickhouse `json:",inline" mapstructure:",squash"`
}
func (c *Clickhouse) Init(settings map[string]interface{}) (datasource.Datasource, error) {
newest := new(Clickhouse)
err := mapstructure.Decode(settings, newest)
return newest, err
}
func (c *Clickhouse) InitClient() error {
return c.InitCli()
}
func (c *Clickhouse) Validate(ctx context.Context) error {
if len(c.Nodes) == 0 {
return fmt.Errorf("ck shard is invalid, please check datasource setting")
}
addr := c.Nodes[0]
if len(strings.Trim(c.User, " ")) == 0 {
return fmt.Errorf("ck shard user is invalid, please check datasource setting")
}
if len(strings.Trim(addr, " ")) == 0 {
return fmt.Errorf("ck shard addr is invalid, please check datasource setting")
}
// if len(strings.Trim(shard.Password, " ")) == 0 {
// return fmt.Errorf("ck shard password is empty, please check datasource setting or set password for user")
// }
return nil
}
// Equal compares whether two objects are the same, used for caching
func (c *Clickhouse) Equal(p datasource.Datasource) bool {
plg, ok := p.(*Clickhouse)
if !ok {
logger.Errorf("unexpected plugin type, expected is ck")
return false
}
// only compare first shard
if len(c.Nodes) == 0 {
logger.Errorf("ck shard is empty")
return false
}
addr := c.Nodes[0]
if len(plg.Nodes) == 0 {
logger.Errorf("new ck plugin obj shard is empty")
return false
}
newAddr := plg.Nodes[0]
if c.User != plg.User {
return false
}
if addr != newAddr {
return false
}
if c.Password != plg.Password {
return false
}
return true
}
func (c *Clickhouse) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return nil, nil
}
func (c *Clickhouse) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return nil, nil
}
func (c *Clickhouse) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
return nil, nil
}
func (c *Clickhouse) QueryData(ctx context.Context, query interface{}) ([]models.DataResp, error) {
ckQueryParam := new(ck.QueryParam)
if err := mapstructure.Decode(query, ckQueryParam); err != nil {
return nil, err
}
if strings.Contains(ckQueryParam.Sql, "$__") {
var err error
ckQueryParam.Sql, err = macros.Macro(ckQueryParam.Sql, ckQueryParam.From, ckQueryParam.To)
if err != nil {
return nil, err
}
}
if ckQueryParam.Keys.ValueKey == "" {
return nil, fmt.Errorf("valueKey is required")
}
rows, err := c.QueryTimeseries(ctx, ckQueryParam)
if err != nil {
logger.Warningf("query:%+v get data err:%v", ckQueryParam, err)
return nil, err
}
data := make([]models.DataResp, 0)
for i := range rows {
data = append(data, models.DataResp{
Ref: ckQueryParam.Ref,
Metric: rows[i].Metric,
Values: rows[i].Values,
})
}
return data, nil
}
func (c *Clickhouse) QueryLog(ctx context.Context, query interface{}) ([]interface{}, int64, error) {
ckQueryParam := new(QueryParam)
if err := mapstructure.Decode(query, ckQueryParam); err != nil {
return nil, 0, err
}
if strings.Contains(ckQueryParam.Sql, "$__") {
var err error
ckQueryParam.Sql, err = macros.Macro(ckQueryParam.Sql, ckQueryParam.From, ckQueryParam.To)
if err != nil {
return nil, 0, err
}
}
rows, err := c.Query(ctx, ckQueryParam)
if err != nil {
logger.Warningf("query:%+v get data err:%v", ckQueryParam, err)
return nil, 0, err
}
limit := getLimit(len(rows), ckQueryParam.Limit)
logs := make([]interface{}, 0)
for i := 0; i < limit; i++ {
logs = append(logs, rows[i])
}
return logs, int64(limit), nil
}
func getLimit(rowLen, pLimit int) int {
limit := DefaultLimit
if pLimit > 0 {
limit = pLimit
}
if rowLen > limit {
return limit
}
return rowLen
}
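getLimit clamps how many rows QueryLog returns: an explicit positive limit wins over DefaultLimit (500), and the actual row count is the floor. For instance:

fmt.Println(getLimit(1000, 0))   // 500: DefaultLimit applies
fmt.Println(getLimit(1000, 200)) // 200: explicit limit wins
fmt.Println(getLimit(50, 200))   // 50: fewer rows than the limit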

View File

@@ -0,0 +1,587 @@
package eslike
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/ccfos/nightingale/v6/models"
"github.com/bitly/go-simplejson"
"github.com/mitchellh/mapstructure"
"github.com/olivere/elastic/v7"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/logger"
)
type Query struct {
Ref string `json:"ref" mapstructure:"ref"`
Index string `json:"index" mapstructure:"index"`
Filter string `json:"filter" mapstructure:"filter"`
MetricAggr MetricAggr `json:"value" mapstructure:"value"`
GroupBy []GroupBy `json:"group_by" mapstructure:"group_by"`
DateField string `json:"date_field" mapstructure:"date_field"`
Interval int64 `json:"interval" mapstructure:"interval"`
Start int64 `json:"start" mapstructure:"start"`
End int64 `json:"end" mapstructure:"end"`
P int `json:"page" mapstructure:"page"` // page number
Limit int `json:"limit" mapstructure:"limit"` // page size
Ascending bool `json:"ascending" mapstructure:"ascending"` // sort ascending by DateField
Timeout int `json:"timeout" mapstructure:"timeout"`
MaxShard int `json:"max_shard" mapstructure:"max_shard"`
}
type MetricAggr struct {
Field string `json:"field" mapstructure:"field"`
Func string `json:"func" mapstructure:"func"`
Ref string `json:"ref" mapstructure:"ref"` // variable name, e.g. A, B, C
}
type GroupBy struct {
Cate GroupByCate `json:"cate" mapstructure:"cate"` // group-by type
Field string `json:"field" mapstructure:"field"`
MinDocCount int64 `json:"min_doc_count" mapstructure:"min_doc_count"`
Order string `json:"order" mapstructure:"order"`
OrderBy string `json:"order_by" mapstructure:"order_by"`
Size int `json:"size" mapstructure:"size"`
Params []Param `json:"params" mapstructure:"params"` // used when cate is filters
Interval int64 `json:"interval" mapstructure:"interval"` // bucket interval
}
type SearchFunc func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error)
type QueryFieldsFunc func(indices []string) ([]string, error)
// group-by categories
type GroupByCate string
const (
Filters GroupByCate = "filters"
Histogram GroupByCate = "histgram" // identifier fixed; the wire value keeps the historical spelling
Terms GroupByCate = "terms"
)
// filter parameters
type Param struct {
Alias string `json:"alias,omitempty"` // alias in a=b form; specific to the filters cate
Query string `json:"query,omitempty"` // query condition; specific to the filters cate
}
type MetricPtr struct {
Data map[string][][]float64
}
func IterGetMap(m, ret map[string]interface{}, prefixKey string) {
for k, v := range m {
key := k
if prefixKey != "" {
key = fmt.Sprintf("%s.%s", prefixKey, k)
}
switch vv := v.(type) {
case map[string]interface{}:
IterGetMap(vv, ret, key)
default:
// leaf value: no longer prefixed with a stray "." when prefixKey is empty
ret[key] = []interface{}{v}
}
}
}
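
IterGetMap flattens nested JSON objects into dotted keys and wraps every leaf in a one-element slice, the shape Elasticsearch uses for doc-value fields. A worked example using a copy of the helper:

package main

import "fmt"

// IterGetMap matches the flattening helper defined above.
func IterGetMap(m, ret map[string]interface{}, prefixKey string) {
    for k, v := range m {
        key := k
        if prefixKey != "" {
            key = fmt.Sprintf("%s.%s", prefixKey, k)
        }
        switch vv := v.(type) {
        case map[string]interface{}:
            IterGetMap(vv, ret, key)
        default:
            ret[key] = []interface{}{v}
        }
    }
}

func main() {
    src := map[string]interface{}{
        "level": "error",
        "host":  map[string]interface{}{"name": "web-01"},
    }
    ret := make(map[string]interface{})
    IterGetMap(src, ret, "")
    fmt.Println(ret) // map[host.name:[web-01] level:[error]]
}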
func TransferData(metric, ref string, m map[string][][]float64) []models.DataResp {
var datas []models.DataResp
for k, v := range m {
data := models.DataResp{
Ref: ref,
Metric: make(model.Metric),
Labels: k,
Values: v,
}
data.Metric["__name__"] = model.LabelValue(metric)
labels := strings.Split(k, "--")
for _, label := range labels {
arr := strings.SplitN(label, "=", 2)
if len(arr) == 2 {
data.Metric[model.LabelName(arr[0])] = model.LabelValue(arr[1])
}
}
datas = append(datas, data)
}
for i := 0; i < len(datas); i++ {
datas[i].Metric["__name__"] = model.LabelValue(ref) + "_" + datas[i].Metric["__name__"]
}
return datas
}
func GetQueryString(filter string, q *elastic.RangeQuery) *elastic.BoolQuery {
var queryString *elastic.BoolQuery
if filter != "" {
if strings.Contains(filter, ":") || strings.Contains(filter, "AND") || strings.Contains(filter, "OR") || strings.Contains(filter, "NOT") {
queryString = elastic.NewBoolQuery().Must(elastic.NewQueryStringQuery(filter)).Filter(q)
} else {
queryString = elastic.NewBoolQuery().Filter(elastic.NewMultiMatchQuery(filter).Lenient(true).Type("phrase")).Filter(q)
}
} else {
queryString = elastic.NewBoolQuery().Should(q)
}
return queryString
}
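
GetQueryString treats a filter containing a colon or a boolean operator as Lucene query_string syntax and wraps everything else as a lenient phrase multi-match; an empty filter keeps only the time-range clause. The branch heuristic in isolation:

package main

import (
    "fmt"
    "strings"
)

// looksLikeQueryString mirrors the branch condition in GetQueryString above.
func looksLikeQueryString(filter string) bool {
    return strings.Contains(filter, ":") ||
        strings.Contains(filter, "AND") ||
        strings.Contains(filter, "OR") ||
        strings.Contains(filter, "NOT")
}

func main() {
    for _, f := range []string{"level:error AND host:web-01", "connection refused"} {
        fmt.Printf("%-32q query_string=%v\n", f, looksLikeQueryString(f))
    }
}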
func GetBuckets(labelKey string, keys []string, arr []interface{}, metrics *MetricPtr, labels string, ts int64, f string) {
var err error
bucketsKey := ""
if len(keys) > 0 {
bucketsKey = keys[0]
}
newlabels := ""
for i := 0; i < len(arr); i++ {
tmp := arr[i].(map[string]interface{})
keyValue := tmp["key"]
switch keyValue.(type) {
case json.Number:
ts, err = keyValue.(json.Number).Int64()
if err != nil {
logger.Warningf("labelKey:%s get ts error:%v", labelKey, err)
}
case string:
if labels != "" {
newlabels = fmt.Sprintf("%s--%s=%s", labels, labelKey, keyValue.(string))
} else {
newlabels = fmt.Sprintf("%s=%s", labelKey, keyValue.(string))
}
default:
continue
}
var finalValue float64
if len(keys) == 0 { // leaf level: read doc_count directly
count := tmp["doc_count"]
finalValue, err = count.(json.Number).Float64()
if err != nil {
logger.Warningf("labelKey:%s get value error:%v", labelKey, err)
}
newValues := []float64{float64(ts / 1000), finalValue}
metrics.Data[newlabels] = append(metrics.Data[newlabels], newValues)
continue
}
innerBuckets, exists := tmp[bucketsKey]
if !exists {
continue
}
nextBucketsArr, exists := innerBuckets.(map[string]interface{})["buckets"]
if exists {
GetBuckets(bucketsKey, keys[1:], nextBucketsArr.([]interface{}), metrics, newlabels, ts, f)
} else {
// no nested buckets: count/nodata fall back to doc_count
if f == "count" || f == "nodata" {
count := tmp["doc_count"]
finalValue, err = count.(json.Number).Float64()
if err != nil {
logger.Warningf("get %v value error:%v", count, err)
}
} else {
innerMap := innerBuckets.(map[string]interface{})
if value, exists := innerMap["value"]; exists {
if num, ok := value.(json.Number); ok {
finalValue, err = num.Float64()
if err != nil {
logger.Warningf("labelKey:%s get value error:%v", labelKey, err)
}
}
} else {
// fall back to the "values" map used by percentiles-style aggregations
if valuesMap, ok := innerMap["values"].(map[string]interface{}); ok {
for _, v := range valuesMap {
num, ok := v.(json.Number)
if !ok {
// skip nil entries instead of panicking (fixes the old TODO)
continue
}
finalValue, err = num.Float64()
if err != nil {
logger.Warningf("labelKey:%s get value error:%v", labelKey, err)
}
}
}
}
}
if _, exists := metrics.Data[newlabels]; !exists {
metrics.Data[newlabels] = [][]float64{}
}
newValues := []float64{float64(ts / 1000), finalValue}
metrics.Data[newlabels] = append(metrics.Data[newlabels], newValues)
}
}
}
func MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
param := new(Query)
if err := mapstructure.Decode(query, param); err != nil {
return nil, err
}
for i := 0; i < len(eventTags); i++ {
eventTags[i] = strings.Replace(eventTags[i], "=", ":", 1)
}
if len(eventTags) > 0 {
if param.Filter == "" {
param.Filter = strings.Join(eventTags, " AND ")
} else {
param.Filter = param.Filter + " AND " + strings.Join(eventTags, " AND ")
}
}
param.Start = start
param.End = end
return param, nil
}
func MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
// identical to MakeLogQuery: both merge event tags into the filter and pin the time range
return MakeLogQuery(ctx, query, eventTags, start, end)
}
func QueryData(ctx context.Context, queryParam interface{}, cliTimeout int64, version string, search SearchFunc) ([]models.DataResp, error) {
param := new(Query)
if err := mapstructure.Decode(queryParam, param); err != nil {
return nil, err
}
if param.Timeout == 0 {
param.Timeout = int(cliTimeout) / 1000
}
if param.Interval == 0 {
param.Interval = 60
}
if param.MaxShard < 1 {
param.MaxShard = 5
}
if param.DateField == "" {
param.DateField = "@timestamp"
}
indexArr := strings.Split(param.Index, ",")
q := elastic.NewRangeQuery(param.DateField)
now := time.Now().Unix()
var start, end int64
if param.End != 0 && param.Start != 0 {
end = param.End - param.End%param.Interval
start = param.Start - param.Start%param.Interval
} else {
end = now
start = end - param.Interval
}
delay, ok := ctx.Value("delay").(int64)
if ok && delay != 0 {
end = end - delay
start = start - delay
}
q.Gte(time.Unix(start, 0).UnixMilli())
q.Lte(time.Unix(end, 0).UnixMilli())
q.Format("epoch_millis")
field := param.MetricAggr.Field
groupBys := param.GroupBy
queryString := GetQueryString(param.Filter, q)
var aggr elastic.Aggregation
switch param.MetricAggr.Func {
case "avg":
aggr = elastic.NewAvgAggregation().Field(field)
case "max":
aggr = elastic.NewMaxAggregation().Field(field)
case "min":
aggr = elastic.NewMinAggregation().Field(field)
case "sum":
aggr = elastic.NewSumAggregation().Field(field)
case "count":
aggr = elastic.NewValueCountAggregation().Field(field)
case "p90":
aggr = elastic.NewPercentilesAggregation().Percentiles(90).Field(field)
case "p95":
aggr = elastic.NewPercentilesAggregation().Percentiles(95).Field(field)
case "p99":
aggr = elastic.NewPercentilesAggregation().Percentiles(99).Field(field)
case "median":
aggr = elastic.NewPercentilesAggregation().Percentiles(50).Field(field)
default:
return nil, fmt.Errorf("func %s not support", param.MetricAggr.Func)
}
tsAggr := elastic.NewDateHistogramAggregation().
Field(param.DateField).
MinDocCount(1)
if strings.HasPrefix(version, "7") {
tsAggr.FixedInterval(fmt.Sprintf("%ds", param.Interval))
} else {
// compatibility with versions below 7.0
// OpenSearch also uses this field
tsAggr.Interval(fmt.Sprintf("%ds", param.Interval))
}
// group by
var groupByAggregation elastic.Aggregation
if len(groupBys) > 0 {
groupBy := groupBys[0]
if groupBy.MinDocCount == 0 {
groupBy.MinDocCount = 1
}
if groupBy.Size == 0 {
groupBy.Size = 300
}
switch groupBy.Cate {
case Terms:
if param.MetricAggr.Func != "count" {
groupByAggregation = elastic.NewTermsAggregation().Field(groupBy.Field).SubAggregation(field, aggr).OrderByKeyDesc().Size(groupBy.Size).MinDocCount(int(groupBy.MinDocCount))
} else {
groupByAggregation = elastic.NewTermsAggregation().Field(groupBy.Field).OrderByKeyDesc().Size(groupBy.Size).MinDocCount(int(groupBy.MinDocCount))
}
case Histogram:
if param.MetricAggr.Func != "count" {
groupByAggregation = elastic.NewHistogramAggregation().Field(groupBy.Field).Interval(float64(groupBy.Interval)).SubAggregation(field, aggr)
} else {
groupByAggregation = elastic.NewHistogramAggregation().Field(groupBy.Field).Interval(float64(groupBy.Interval))
}
case Filters:
for _, filterParam := range groupBy.Params {
if param.MetricAggr.Func != "count" {
groupByAggregation = elastic.NewFilterAggregation().Filter(elastic.NewTermQuery(filterParam.Query, "true")).SubAggregation(field, aggr)
} else {
groupByAggregation = elastic.NewFilterAggregation().Filter(elastic.NewTermQuery(filterParam.Query, "true"))
}
}
}
for i := 1; i < len(groupBys); i++ {
groupBy := groupBys[i]
if groupBy.MinDocCount == 0 {
groupBy.MinDocCount = 1
}
if groupBy.Size == 0 {
groupBy.Size = 300
}
switch groupBy.Cate {
case Terms:
groupByAggregation = elastic.NewTermsAggregation().Field(groupBy.Field).SubAggregation(groupBys[i-1].Field, groupByAggregation).OrderByKeyDesc().Size(groupBy.Size).MinDocCount(int(groupBy.MinDocCount))
case Histogram:
groupByAggregation = elastic.NewHistogramAggregation().Field(groupBy.Field).Interval(float64(groupBy.Interval)).SubAggregation(groupBys[i-1].Field, groupByAggregation)
case Filters:
for _, filterParam := range groupBy.Params {
groupByAggregation = elastic.NewFilterAggregation().Filter(elastic.NewTermQuery(filterParam.Query, "true")).SubAggregation(groupBys[i-1].Field, groupByAggregation)
}
}
}
tsAggr.SubAggregation(groupBys[len(groupBys)-1].Field, groupByAggregation)
} else if param.MetricAggr.Func != "count" {
tsAggr.SubAggregation(field, aggr)
}
source, _ := queryString.Source()
b, _ := json.Marshal(source)
logger.Debugf("query_data q:%+v tsAggr:%+v query_string:%s", param, tsAggr, string(b))
searchSource := elastic.NewSearchSource().
Query(queryString).
Aggregation("ts", tsAggr)
searchSourceString, err := searchSource.Source()
if err != nil {
logger.Warningf("query_data searchSource:%s to string error:%v", searchSourceString, err)
}
jsonSearchSource, err := json.Marshal(searchSourceString)
if err != nil {
logger.Warningf("query_data searchSource:%s to json error:%v", searchSourceString, err)
}
result, err := search(ctx, indexArr, searchSource, param.Timeout, param.MaxShard)
if err != nil {
logger.Warningf("query_data searchSource:%s query_data error:%v", searchSourceString, err)
return nil, err
}
logger.Debugf("query_data searchSource:%s resp:%s", string(jsonSearchSource), string(result.Aggregations["ts"]))
js, err := simplejson.NewJson(result.Aggregations["ts"])
if err != nil {
return nil, err
}
bucketsData, err := js.Get("buckets").Array()
if err != nil {
return nil, err
}
var keys []string
for i := len(groupBys) - 1; i >= 0; i-- {
keys = append(keys, groupBys[i].Field)
}
if param.MetricAggr.Func != "count" {
keys = append(keys, field)
}
metrics := &MetricPtr{Data: make(map[string][][]float64)}
GetBuckts("", keys, bucketsData, metrics, "", 0, param.MetricAggr.Func)
return TransferData(fmt.Sprintf("%s_%s", field, param.MetricAggr.Func), param.Ref, metrics.Data), nil
}
func HitFilter(typ string) bool {
switch typ {
case "keyword", "date", "long", "integer", "short", "byte", "double", "float", "half_float", "scaled_float", "unsigned_long":
return false
default:
return true
}
}
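
HitFilter keeps only aggregatable field types when building field lists; anything else (text, nested, object, ...) is filtered out. For example:

package main

import "fmt"

// HitFilter is copied from the eslike package above: it reports whether a
// mapping type should be skipped when listing queryable fields.
func HitFilter(typ string) bool {
    switch typ {
    case "keyword", "date", "long", "integer", "short", "byte", "double",
        "float", "half_float", "scaled_float", "unsigned_long":
        return false
    default:
        return true
    }
}

func main() {
    for _, t := range []string{"keyword", "text", "date", "nested"} {
        fmt.Printf("%-8s skipped=%v\n", t, HitFilter(t))
    }
}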
func QueryLog(ctx context.Context, queryParam interface{}, timeout int64, version string, maxShard int, search SearchFunc) ([]interface{}, int64, error) {
param := new(Query)
if err := mapstructure.Decode(queryParam, param); err != nil {
return nil, 0, err
}
if param.Timeout == 0 {
param.Timeout = int(timeout)
}
indexArr := strings.Split(param.Index, ",")
now := time.Now().Unix()
var start, end int64
if param.Interval <= 0 {
param.Interval = 60 // guard: Interval is used as a modulus below; zero would panic
}
if param.End != 0 && param.Start != 0 {
end = param.End - param.End%param.Interval
start = param.Start - param.Start%param.Interval
} else {
end = now
start = end - param.Interval
}
q := elastic.NewRangeQuery(param.DateField)
q.Gte(time.Unix(start, 0).UnixMilli())
q.Lte(time.Unix(end, 0).UnixMilli())
q.Format("epoch_millis")
queryString := GetQueryString(param.Filter, q)
if param.Limit <= 0 {
param.Limit = 10
}
if param.MaxShard < 1 {
param.MaxShard = maxShard
}
source := elastic.NewSearchSource().
TrackTotalHits(true).
Query(queryString).
From(param.P).
Size(param.Limit).
Sort(param.DateField, param.Ascending)
result, err := search(ctx, indexArr, source, param.Timeout, param.MaxShard)
if err != nil {
logger.Warningf("query data error:%v", err)
return nil, 0, err
}
total := result.TotalHits()
var ret []interface{}
b, _ := json.Marshal(source)
logger.Debugf("query data result query source:%s len:%d total:%d", string(b), len(result.Hits.Hits), total)
resultBytes, _ := json.Marshal(result)
logger.Debugf("query data result query source:%s result:%s", string(b), string(resultBytes))
if strings.HasPrefix(version, "6") {
for i := 0; i < len(result.Hits.Hits); i++ {
var x map[string]interface{}
err := json.Unmarshal(result.Hits.Hits[i].Source, &x)
if err != nil {
logger.Warningf("Unmarshal soruce error:%v", err)
continue
}
if result.Hits.Hits[i].Fields == nil {
result.Hits.Hits[i].Fields = make(map[string]interface{})
}
IterGetMap(x, result.Hits.Hits[i].Fields, "")
ret = append(ret, result.Hits.Hits[i])
}
} else {
for _, hit := range result.Hits.Hits {
ret = append(ret, hit)
}
}
return ret, total, nil
}

datasource/datasource.go Normal file

@@ -0,0 +1,114 @@
package datasource
import (
"context"
"fmt"
"strings"
"github.com/ccfos/nightingale/v6/models"
)
type DatasourceType struct {
Id int64 `json:"id"`
Category string `json:"category"`
PluginType string `json:"type"`
PluginTypeName string `json:"type_name"`
}
type Keys struct {
ValueKey string `json:"valueKey" mapstructure:"valueKey"` // multiple keys separated by spaces
LabelKey string `json:"labelKey" mapstructure:"labelKey"` // multiple keys separated by spaces
TimeKey string `json:"timeKey" mapstructure:"timeKey"`
TimeFormat string `json:"timeFormat" mapstructure:"timeFormat"`
}
var DatasourceTypes map[int64]DatasourceType
func init() {
DatasourceTypes = make(map[int64]DatasourceType)
DatasourceTypes[1] = DatasourceType{
Id: 1,
Category: "timeseries",
PluginType: "prometheus",
PluginTypeName: "Prometheus Like",
}
DatasourceTypes[2] = DatasourceType{
Id: 2,
Category: "logging",
PluginType: "elasticsearch",
PluginTypeName: "Elasticsearch",
}
DatasourceTypes[3] = DatasourceType{
Id: 3,
Category: "logging",
PluginType: "aliyun-sls",
PluginTypeName: "SLS",
}
DatasourceTypes[4] = DatasourceType{
Id: 4,
Category: "timeseries",
PluginType: "ck",
PluginTypeName: "ClickHouse",
}
}
type NewDatasourceFn func(settings map[string]interface{}) (Datasource, error)
var datasourceRegister = map[string]NewDatasourceFn{}
type Datasource interface {
Init(settings map[string]interface{}) (Datasource, error) // initialize configuration
InitClient() error // initialize the client
Validate(ctx context.Context) error // validate parameters
Equal(p Datasource) bool // report whether two datasources are equal
MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error)
MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error)
QueryData(ctx context.Context, query interface{}) ([]models.DataResp, error)
QueryLog(ctx context.Context, query interface{}) ([]interface{}, int64, error)
// called when generating an alert event, to fetch extra data
QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error)
}
func RegisterDatasource(typ string, p Datasource) {
if _, found := datasourceRegister[typ]; found {
return
}
datasourceRegister[typ] = p.Init
}
func GetDatasourceByType(typ string, settings map[string]interface{}) (Datasource, error) {
typ = strings.ReplaceAll(typ, ".logging", "")
fn, found := datasourceRegister[typ]
if !found {
return nil, fmt.Errorf("plugin type %s not found", typ)
}
plug, err := fn(settings)
if err != nil {
return nil, err
}
return plug, nil
}
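
The registry above maps a plugin-type string to a constructor, with plugins self-registering from their init() functions and the ".logging" suffix stripped on lookup. A stripped-down sketch of the same pattern; the Dummy type and its addr setting are hypothetical, not part of the real package:

package main

import (
    "fmt"
    "strings"
)

type Datasource interface {
    Init(settings map[string]interface{}) (Datasource, error)
}

type NewDatasourceFn func(settings map[string]interface{}) (Datasource, error)

var registry = map[string]NewDatasourceFn{}

// Register stores the type's Init method as the constructor.
func Register(typ string, p Datasource) { registry[typ] = p.Init }

// GetByType mirrors GetDatasourceByType: strip ".logging", look up, construct.
func GetByType(typ string, settings map[string]interface{}) (Datasource, error) {
    typ = strings.ReplaceAll(typ, ".logging", "")
    fn, ok := registry[typ]
    if !ok {
        return nil, fmt.Errorf("plugin type %s not found", typ)
    }
    return fn(settings)
}

type Dummy struct{ Addr string }

func (d *Dummy) Init(settings map[string]interface{}) (Datasource, error) {
    addr, _ := settings["addr"].(string)
    return &Dummy{Addr: addr}, nil
}

func main() {
    Register("dummy", new(Dummy))
    ds, err := GetByType("dummy.logging", map[string]interface{}{"addr": "127.0.0.1"})
    fmt.Println(ds, err)
}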
type DatasourceInfo struct {
Id int64 `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
ClusterName string `json:"cluster_name"`
Category string `json:"category"`
PluginId int64 `json:"plugin_id"`
Type string `json:"plugin_type"`
PluginTypeName string `json:"plugin_type_name"`
Settings map[string]interface{} `json:"settings"`
HTTPJson models.HTTP `json:"http"`
AuthJson models.Auth `json:"auth"`
Status string `json:"status"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
IsDefault bool `json:"is_default"`
}

datasource/es/es.go Normal file

@@ -0,0 +1,405 @@
package es
import (
"context"
"encoding/json"
"fmt"
"net"
"net/http"
"net/url"
"reflect"
"sort"
"strings"
"time"
"github.com/ccfos/nightingale/v6/datasource"
"github.com/ccfos/nightingale/v6/datasource/commons/eslike"
"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/tlsx"
"github.com/mitchellh/mapstructure"
"github.com/olivere/elastic/v7"
"github.com/toolkits/pkg/logger"
)
const (
ESType = "elasticsearch"
)
type Elasticsearch struct {
Addr string `json:"es.addr" mapstructure:"es.addr"`
Nodes []string `json:"es.nodes" mapstructure:"es.nodes"`
Timeout int64 `json:"es.timeout" mapstructure:"es.timeout"` // millis
Basic BasicAuth `json:"es.basic" mapstructure:"es.basic"`
TLS TLS `json:"es.tls" mapstructure:"es.tls"`
Version string `json:"es.version" mapstructure:"es.version"`
Headers map[string]string `json:"es.headers" mapstructure:"es.headers"`
MinInterval int `json:"es.min_interval" mapstructure:"es.min_interval"` // seconds
MaxShard int `json:"es.max_shard" mapstructure:"es.max_shard"`
ClusterName string `json:"es.cluster_name" mapstructure:"es.cluster_name"`
EnableWrite bool `json:"es.enable_write" mapstructure:"es.enable_write"` // allow write operations
Client *elastic.Client `json:"es.client" mapstructure:"es.client"`
}
type TLS struct {
SkipTlsVerify bool `json:"es.tls.skip_tls_verify" mapstructure:"es.tls.skip_tls_verify"`
}
type BasicAuth struct {
Enable bool `json:"es.auth.enable" mapstructure:"es.auth.enable"`
Username string `json:"es.user" mapstructure:"es.user"`
Password string `json:"es.password" mapstructure:"es.password"`
}
func init() {
datasource.RegisterDatasource(ESType, new(Elasticsearch))
}
func (e *Elasticsearch) Init(settings map[string]interface{}) (datasource.Datasource, error) {
newest := new(Elasticsearch)
err := mapstructure.Decode(settings, newest)
return newest, err
}
func (e *Elasticsearch) InitClient() error {
transport := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(e.Timeout) * time.Millisecond,
}).DialContext,
ResponseHeaderTimeout: time.Duration(e.Timeout) * time.Millisecond,
}
if len(e.Nodes) > 0 {
e.Addr = e.Nodes[0]
}
if strings.Contains(e.Addr, "https") {
tlsConfig := tlsx.ClientConfig{
InsecureSkipVerify: e.TLS.SkipTlsVerify,
UseTLS: true,
}
cfg, err := tlsConfig.TLSConfig()
if err != nil {
return err
}
transport.TLSClientConfig = cfg
}
var err error
options := []elastic.ClientOptionFunc{
elastic.SetURL(e.Nodes...),
}
if e.Basic.Username != "" {
options = append(options, elastic.SetBasicAuth(e.Basic.Username, e.Basic.Password))
}
headers := http.Header{}
for k, v := range e.Headers {
headers[k] = []string{v}
}
options = append(options, elastic.SetHeaders(headers))
options = append(options, elastic.SetHttpClient(&http.Client{Transport: transport}))
options = append(options, elastic.SetSniff(false))
options = append(options, elastic.SetHealthcheck(false))
e.Client, err = elastic.NewClient(options...)
return err
}
func (e *Elasticsearch) Equal(other datasource.Datasource) bool {
o, ok := other.(*Elasticsearch)
if !ok {
// avoid panicking on a mismatched concrete type
return false
}
sort.Strings(e.Nodes)
sort.Strings(o.Nodes)
if strings.Join(e.Nodes, ",") != strings.Join(o.Nodes, ",") {
return false
}
if e.Basic.Username != o.Basic.Username {
return false
}
if e.Basic.Password != o.Basic.Password {
return false
}
if e.TLS.SkipTlsVerify != o.TLS.SkipTlsVerify {
return false
}
if e.EnableWrite != o.EnableWrite {
return false
}
if !reflect.DeepEqual(e.Headers, o.Headers) {
return false
}
return true
}
func (e *Elasticsearch) Validate(ctx context.Context) (err error) {
if len(e.Nodes) == 0 {
return fmt.Errorf("need a valid addr")
}
for _, addr := range e.Nodes {
_, err = url.Parse(addr)
if err != nil {
return fmt.Errorf("parse addr error: %v", err)
}
}
if e.Basic.Enable && (len(e.Basic.Username) == 0 || len(e.Basic.Password) == 0) {
return fmt.Errorf("need a valid user, password")
}
if e.MaxShard == 0 {
e.MaxShard = 5
}
if e.MinInterval < 10 {
e.MinInterval = 10
}
if e.Timeout == 0 {
e.Timeout = 60000
}
if !strings.HasPrefix(e.Version, "6") && !strings.HasPrefix(e.Version, "7") {
return fmt.Errorf("version must be 6.0+ or 7.0+")
}
return nil
}
func (e *Elasticsearch) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return eslike.MakeLogQuery(ctx, query, eventTags, start, end)
}
func (e *Elasticsearch) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return eslike.MakeTSQuery(ctx, query, eventTags, start, end)
}
func (e *Elasticsearch) QueryData(ctx context.Context, queryParam interface{}) ([]models.DataResp, error) {
search := func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error) {
return e.Client.Search().
Index(indices...).
Source(source).
Timeout(fmt.Sprintf("%ds", timeout)).
MaxConcurrentShardRequests(maxShard).
Do(ctx)
}
return eslike.QueryData(ctx, queryParam, e.Timeout, e.Version, search)
}
func (e *Elasticsearch) QueryIndices() ([]string, error) {
result, err := e.Client.IndexNames()
return result, err
}
func (e *Elasticsearch) QueryFields(indexs []string) ([]string, error) {
var fields []string
result, err := elastic.NewGetFieldMappingService(e.Client).Index(indexs...).Do(context.Background())
if err != nil {
return fields, err
}
fieldMap := make(map[string]struct{})
for _, indexMap := range result {
if m, exists := indexMap.(map[string]interface{})["mappings"]; exists {
for k, v := range m.(map[string]interface{}) {
// es6 compatibility: field mappings are nested under the "doc" type
if k == "doc" && strings.HasPrefix(e.Version, "6") {
for kk, vv := range v.(map[string]interface{}) {
typ := getFieldType(kk, vv.(map[string]interface{}))
if eslike.HitFilter(typ) {
continue
}
if _, exists := fieldMap[kk]; !exists {
fieldMap[kk] = struct{}{}
fields = append(fields, kk)
}
}
} else {
// es7+: field mappings sit at the top level
typ := getFieldType(k, v.(map[string]interface{}))
if eslike.HitFilter(typ) {
continue
}
if _, exists := fieldMap[k]; !exists {
fieldMap[k] = struct{}{}
fields = append(fields, k)
}
}
}
}
}
sort.Strings(fields)
return fields, nil
}
func (e *Elasticsearch) QueryLog(ctx context.Context, queryParam interface{}) ([]interface{}, int64, error) {
search := func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error) {
// presumably an old compatibility shim that fetched the field list for docvalue_fields
// fields, err := e.QueryFields(indices)
// if err != nil {
// logger.Warningf("query data error:%v", err)
// return nil, err
// }
// if source != nil && strings.HasPrefix(e.Version, "7") {
// source = source.(*elastic.SearchSource).DocvalueFields(fields...)
// }
return e.Client.Search().
Index(indices...).
MaxConcurrentShardRequests(maxShard).
Source(source).
Timeout(fmt.Sprintf("%ds", timeout)).
Do(ctx)
}
return eslike.QueryLog(ctx, queryParam, e.Timeout, e.Version, e.MaxShard, search)
}
func (e *Elasticsearch) QueryFieldValue(indexs []string, field string, query string) ([]string, error) {
var values []string
search := e.Client.Search().
Index(indexs...).
Size(0)
if query != "" {
search = search.Query(elastic.NewBoolQuery().Must(elastic.NewQueryStringQuery(query)))
}
search = search.Aggregation("distinct", elastic.NewTermsAggregation().Field(field).Size(10000))
result, err := search.Do(context.Background())
if err != nil {
return values, err
}
agg, found := result.Aggregations.Terms("distinct")
if !found {
return values, nil
}
for _, bucket := range agg.Buckets {
values = append(values, bucket.Key.(string))
}
return values, nil
}
func (e *Elasticsearch) Test(ctx context.Context) (err error) {
err = e.Validate(ctx)
if err != nil {
return err
}
if e.Addr == "" {
return fmt.Errorf("addr is invalid")
}
if e.Version == "7.10+" {
options := []elastic.ClientOptionFunc{
elastic.SetURL(e.Addr),
}
if e.Basic.Enable {
options = append(options, elastic.SetBasicAuth(e.Basic.Username, e.Basic.Password))
}
client, err := elastic.NewClient(options...)
if err != nil {
return fmt.Errorf("config is invalid:%v", err)
}
_, err = client.ElasticsearchVersion(e.Addr)
if err != nil {
return fmt.Errorf("config is invalid:%v", err)
}
} else {
return fmt.Errorf("version must be 7.10+")
}
return nil
}
func getFieldType(key string, m map[string]interface{}) string {
if innerMap, exists := m["mapping"]; exists {
if innerM, exists := innerMap.(map[string]interface{})[key]; exists {
if typ, exists := innerM.(map[string]interface{})["type"]; exists {
return typ.(string)
}
} else {
arr := strings.Split(key, ".")
if innerM, exists := innerMap.(map[string]interface{})[arr[len(arr)-1]]; exists {
if typ, exists := innerM.(map[string]interface{})["type"]; exists {
return typ.(string)
}
}
}
}
return ""
}
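
getFieldType walks one entry of a get-field-mapping response, whose shape is {"mapping": {<leaf name>: {"type": ...}}}, falling back to the last dotted segment of the field name. A worked example:

package main

import (
    "fmt"
    "strings"
)

// getFieldType is copied from the es package above.
func getFieldType(key string, m map[string]interface{}) string {
    if innerMap, exists := m["mapping"]; exists {
        if innerM, exists := innerMap.(map[string]interface{})[key]; exists {
            if typ, exists := innerM.(map[string]interface{})["type"]; exists {
                return typ.(string)
            }
        } else {
            arr := strings.Split(key, ".")
            if innerM, exists := innerMap.(map[string]interface{})[arr[len(arr)-1]]; exists {
                if typ, exists := innerM.(map[string]interface{})["type"]; exists {
                    return typ.(string)
                }
            }
        }
    }
    return ""
}

func main() {
    entry := map[string]interface{}{
        "mapping": map[string]interface{}{
            "status": map[string]interface{}{"type": "keyword"},
        },
    }
    // the full key misses, so the last dotted segment "status" is tried
    fmt.Println(getFieldType("http.status", entry)) // keyword
}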
func (e *Elasticsearch) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
search := func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error) {
return e.Client.Search().
Index(indices...).
Source(source).
Timeout(fmt.Sprintf("%ds", timeout)).
Do(ctx)
}
param := new(eslike.Query)
if err := mapstructure.Decode(query, param); err != nil {
return nil, err
}
// widen the query window: the previous query may have taken long enough that a strict window would lag behind
param.Interval += 30
res, _, err := eslike.QueryLog(ctx, param, e.Timeout, e.Version, e.MaxShard, search)
if err != nil {
return nil, err
}
var result []map[string]string
for _, item := range res {
logger.Debugf("query:%v item:%v", query, item)
if itemMap, ok := item.(*elastic.SearchHit); ok {
mItem := make(map[string]string)
// unmarshal _source and copy each key/value pair as strings
sourceMap := make(map[string]interface{})
err := json.Unmarshal(itemMap.Source, &sourceMap)
if err != nil {
logger.Warningf("unmarshal source%s error:%v", string(itemMap.Source), err)
continue
}
for k, v := range sourceMap {
mItem[k] = fmt.Sprintf("%v", v)
}
// append the flattened map to the result
result = append(result, mItem)
// only take the first hit
break
}
}
return result, nil
}

datasource/prom/prom.go Normal file

@@ -0,0 +1,15 @@
package prom
type Prometheus struct {
PrometheusAddr string `json:"prometheus.addr"`
PrometheusBasic struct {
PrometheusUser string `json:"prometheus.user"`
PrometheusPass string `json:"prometheus.password"`
} `json:"prometheus.basic"`
Headers map[string]string `json:"prometheus.headers"`
PrometheusTimeout int64 `json:"prometheus.timeout"`
ClusterName string `json:"prometheus.cluster_name"`
WriteAddr string `json:"prometheus.write_addr"`
TsdbType string `json:"prometheus.tsdb_type"`
InternalAddr string `json:"prometheus.internal_addr"`
}


@@ -0,0 +1,460 @@
package tdengine
import (
"context"
"encoding/json"
"fmt"
"reflect"
"strconv"
"strings"
"time"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/logger"
"github.com/ccfos/nightingale/v6/datasource"
td "github.com/ccfos/nightingale/v6/dskit/tdengine"
"github.com/ccfos/nightingale/v6/models"
"github.com/mitchellh/mapstructure"
)
const (
TDEngineType = "tdengine"
)
type TDengine struct {
td.Tdengine `json:",inline" mapstructure:",squash"`
}
type TdengineQuery struct {
From string `json:"from"`
Interval int64 `json:"interval"`
Keys Keys `json:"keys"`
Query string `json:"query"` // 查询条件
Ref string `json:"ref"` // 变量
To string `json:"to"`
}
type Keys struct {
LabelKey string `json:"labelKey"` // multiple keys separated by spaces
MetricKey string `json:"metricKey"` // multiple keys separated by spaces
TimeFormat string `json:"timeFormat"`
}
func init() {
datasource.RegisterDatasource(TDEngineType, new(TDengine))
}
func (td *TDengine) Init(settings map[string]interface{}) (datasource.Datasource, error) {
newest := new(TDengine)
err := mapstructure.Decode(settings, newest)
return newest, err
}
func (td *TDengine) InitClient() error {
td.InitCli()
return nil
}
func (td *TDengine) Equal(other datasource.Datasource) bool {
otherTD, ok := other.(*TDengine)
if !ok {
return false
}
if td.Addr != otherTD.Addr {
return false
}
if td.Basic != nil && otherTD.Basic != nil {
if td.Basic.User != otherTD.Basic.User {
return false
}
if td.Basic.Password != otherTD.Basic.Password {
return false
}
}
if td.Token != otherTD.Token {
return false
}
if td.Timeout != otherTD.Timeout {
return false
}
if td.DialTimeout != otherTD.DialTimeout {
return false
}
if td.MaxIdleConnsPerHost != otherTD.MaxIdleConnsPerHost {
return false
}
if len(td.Headers) != len(otherTD.Headers) {
return false
}
for k, v := range td.Headers {
if otherV, ok := otherTD.Headers[k]; !ok || v != otherV {
return false
}
}
return true
}
func (td *TDengine) Validate(ctx context.Context) (err error) {
return nil
}
func (td *TDengine) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return nil, nil
}
func (td *TDengine) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
return nil, nil
}
func (td *TDengine) QueryData(ctx context.Context, queryParam interface{}) ([]models.DataResp, error) {
return td.Query(queryParam, 0)
}
func (td *TDengine) QueryLog(ctx context.Context, queryParam interface{}) ([]interface{}, int64, error) {
b, err := json.Marshal(queryParam)
if err != nil {
return nil, 0, err
}
var q TdengineQuery
err = json.Unmarshal(b, &q)
if err != nil {
return nil, 0, err
}
if q.Interval == 0 {
q.Interval = 60
}
if q.From == "" {
// 2023-09-21T05:37:30.000Z format
to := time.Now().Unix()
q.To = time.Unix(to, 0).UTC().Format(time.RFC3339)
from := to - q.Interval
q.From = time.Unix(from, 0).UTC().Format(time.RFC3339)
}
replacements := map[string]string{
"$from": fmt.Sprintf("'%s'", q.From),
"$to": fmt.Sprintf("'%s'", q.To),
"$interval": fmt.Sprintf("%ds", q.Interval),
}
for key, val := range replacements {
q.Query = strings.ReplaceAll(q.Query, key, val)
}
if !strings.Contains(q.Query, "limit") {
q.Query = q.Query + " limit 200"
}
data, err := td.QueryTable(q.Query)
if err != nil {
return nil, 0, err
}
return ConvertToTable(data), int64(len(data.Data)), nil
}
func (td *TDengine) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
return nil, nil
}
func (td *TDengine) Query(query interface{}, delay ...int) ([]models.DataResp, error) {
b, err := json.Marshal(query)
if err != nil {
return nil, err
}
var q TdengineQuery
err = json.Unmarshal(b, &q)
if err != nil {
return nil, err
}
if q.Interval == 0 {
q.Interval = 60
}
delaySec := 0
if len(delay) > 0 {
delaySec = delay[0]
}
if q.From == "" {
// 2023-09-21T05:37:30.000Z format
to := time.Now().Unix() - int64(delaySec)
q.To = time.Unix(to, 0).UTC().Format(time.RFC3339)
from := to - q.Interval
q.From = time.Unix(from, 0).UTC().Format(time.RFC3339)
}
replacements := map[string]string{
"$from": fmt.Sprintf("'%s'", q.From),
"$to": fmt.Sprintf("'%s'", q.To),
"$interval": fmt.Sprintf("%ds", q.Interval),
}
for key, val := range replacements {
q.Query = strings.ReplaceAll(q.Query, key, val)
}
data, err := td.QueryTable(q.Query)
if err != nil {
return nil, err
}
logger.Debugf("tdengine query:%s result: %+v", q.Query, data)
return ConvertToTStData(data, q.Keys, q.Ref)
}
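
In isolation, the $from/$to/$interval substitution performed by Query and QueryLog above; the SQL text here is only illustrative, not a real schema:

package main

import (
    "fmt"
    "strings"
    "time"
)

func main() {
    interval := int64(60)
    to := time.Now().Unix()
    from := to - interval
    // placeholders become quoted RFC3339 bounds and an "Ns" interval,
    // exactly as in the replacements map above
    query := "SELECT _wstart, AVG(cpu) FROM metrics WHERE ts >= $from AND ts < $to INTERVAL($interval)"
    replacements := map[string]string{
        "$from":     fmt.Sprintf("'%s'", time.Unix(from, 0).UTC().Format(time.RFC3339)),
        "$to":       fmt.Sprintf("'%s'", time.Unix(to, 0).UTC().Format(time.RFC3339)),
        "$interval": fmt.Sprintf("%ds", interval),
    }
    for k, v := range replacements {
        query = strings.ReplaceAll(query, k, v)
    }
    fmt.Println(query)
}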
func ConvertToTStData(src td.APIResponse, key Keys, ref string) ([]models.DataResp, error) {
metricIdxMap := make(map[string]int)
labelIdxMap := make(map[string]int)
metricMap := make(map[string]struct{})
if key.MetricKey != "" {
metricList := strings.Split(key.MetricKey, " ")
for _, metric := range metricList {
metricMap[metric] = struct{}{}
}
}
labelMap := make(map[string]string)
if key.LabelKey != "" {
labelList := strings.Split(key.LabelKey, " ")
for _, label := range labelList {
labelMap[label] = label
}
}
tsIdx := -1
for colIndex, colData := range src.ColumnMeta {
colName := colData[0].(string)
var colType string
// handle v2's numeric type codes and v3's string type names
switch t := colData[1].(type) {
case float64:
// v2 numeric type code mapping
switch int(t) {
case 1:
colType = "BOOL"
case 2:
colType = "TINYINT"
case 3:
colType = "SMALLINT"
case 4:
colType = "INT"
case 5:
colType = "BIGINT"
case 6:
colType = "FLOAT"
case 7:
colType = "DOUBLE"
case 8:
colType = "BINARY"
case 9:
colType = "TIMESTAMP"
case 10:
colType = "NCHAR"
default:
colType = "UNKNOWN"
}
case string:
// v3 uses the type name string directly
colType = t
default:
logger.Warningf("unexpected column type format: %v", colData[1])
continue
}
switch colType {
case "TIMESTAMP":
tsIdx = colIndex
case "BIGINT", "INT", "INT UNSIGNED", "BIGINT UNSIGNED", "FLOAT", "DOUBLE",
"SMALLINT", "SMALLINT UNSIGNED", "TINYINT", "TINYINT UNSIGNED", "BOOL":
if len(metricMap) > 0 {
if _, ok := metricMap[colName]; !ok {
continue
}
metricIdxMap[colName] = colIndex
} else {
metricIdxMap[colName] = colIndex
}
default:
if len(labelMap) > 0 {
if _, ok := labelMap[colName]; !ok {
continue
}
labelIdxMap[colName] = colIndex
} else {
labelIdxMap[colName] = colIndex
}
}
}
if tsIdx == -1 {
return nil, fmt.Errorf("timestamp column not found, please check your query")
}
var result []models.DataResp
m := make(map[string]*models.DataResp)
for _, row := range src.Data {
for metricName, metricIdx := range metricIdxMap {
value, err := interfaceToFloat64(row[metricIdx])
if err != nil {
logger.Warningf("parse %v value failed: %v", row, err)
continue
}
metric := make(model.Metric)
for labelName, labelIdx := range labelIdxMap {
metric[model.LabelName(labelName)] = model.LabelValue(fmt.Sprintf("%v", row[labelIdx])) // avoid panic on non-string label columns
}
metric[model.MetricNameLabel] = model.LabelValue(metricName)
// convert e.g. 2022-06-29T05:52:16.603Z to a unix timestamp
t, err := parseTimeString(fmt.Sprintf("%v", row[tsIdx])) // tolerate non-string timestamp columns
if err != nil {
logger.Warningf("parse %v timestamp failed: %v", row, err)
continue
}
timestamp := t.Unix()
if _, ok := m[metric.String()]; !ok {
m[metric.String()] = &models.DataResp{
Metric: metric,
Values: [][]float64{
{float64(timestamp), value},
},
}
} else {
m[metric.String()].Values = append(m[metric.String()].Values, []float64{float64(timestamp), value})
}
}
}
for _, v := range m {
v.Ref = ref
result = append(result, *v)
}
return result, nil
}
func interfaceToFloat64(input interface{}) (float64, error) {
// Check for the kind of the value first
if input == nil {
return 0, fmt.Errorf("unsupported type: %T", input)
}
kind := reflect.TypeOf(input).Kind()
switch kind {
case reflect.Float64:
return input.(float64), nil
case reflect.Float32:
return float64(input.(float32)), nil
case reflect.Int, reflect.Int32, reflect.Int64, reflect.Int8, reflect.Int16:
return float64(reflect.ValueOf(input).Int()), nil
case reflect.Uint, reflect.Uint32, reflect.Uint64, reflect.Uint8, reflect.Uint16:
return float64(reflect.ValueOf(input).Uint()), nil
case reflect.String:
return strconv.ParseFloat(input.(string), 64)
case reflect.Bool:
if input.(bool) {
return 1.0, nil
}
return 0.0, nil
default:
return 0, fmt.Errorf("unsupported type: %T", input)
}
}
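
interfaceToFloat64 normalizes the mixed column values TDengine returns; usage on a few sample inputs:

package main

import (
    "fmt"
    "reflect"
    "strconv"
)

// interfaceToFloat64 is copied from the tdengine plugin above.
func interfaceToFloat64(input interface{}) (float64, error) {
    if input == nil {
        return 0, fmt.Errorf("unsupported type: %T", input)
    }
    switch reflect.TypeOf(input).Kind() {
    case reflect.Float64:
        return input.(float64), nil
    case reflect.Float32:
        return float64(input.(float32)), nil
    case reflect.Int, reflect.Int32, reflect.Int64, reflect.Int8, reflect.Int16:
        return float64(reflect.ValueOf(input).Int()), nil
    case reflect.Uint, reflect.Uint32, reflect.Uint64, reflect.Uint8, reflect.Uint16:
        return float64(reflect.ValueOf(input).Uint()), nil
    case reflect.String:
        return strconv.ParseFloat(input.(string), 64)
    case reflect.Bool:
        if input.(bool) {
            return 1.0, nil
        }
        return 0.0, nil
    default:
        return 0, fmt.Errorf("unsupported type: %T", input)
    }
}

func main() {
    for _, v := range []interface{}{int32(7), "3.14", true, nil} {
        f, err := interfaceToFloat64(v)
        fmt.Printf("%v -> %v (err=%v)\n", v, f, err)
    }
}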
func parseTimeString(ts string) (time.Time, error) {
// try a series of layouts in order
formats := []string{
// standard library layouts
time.Layout, // "01/02 03:04:05PM '06 -0700"
time.ANSIC, // "Mon Jan _2 15:04:05 2006"
time.UnixDate, // "Mon Jan _2 15:04:05 MST 2006"
time.RubyDate, // "Mon Jan 02 15:04:05 -0700 2006"
time.RFC822, // "02 Jan 06 15:04 MST"
time.RFC822Z, // "02 Jan 06 15:04 -0700"
time.RFC850, // "Monday, 02-Jan-06 15:04:05 MST"
time.RFC1123, // "Mon, 02 Jan 2006 15:04:05 MST"
time.RFC1123Z, // "Mon, 02 Jan 2006 15:04:05 -0700"
time.RFC3339, // "2006-01-02T15:04:05Z07:00"
time.RFC3339Nano, // "2006-01-02T15:04:05.999999999Z07:00"
time.Kitchen, // "3:04PM"
// timestamp-style layouts
time.Stamp, // "Jan _2 15:04:05"
time.StampMilli, // "Jan _2 15:04:05.000"
time.StampMicro, // "Jan _2 15:04:05.000000"
time.StampNano, // "Jan _2 15:04:05.000000000"
time.DateTime, // "2006-01-02 15:04:05"
time.DateOnly, // "2006-01-02"
time.TimeOnly, // "15:04:05"
// common custom layouts
"2006-01-02T15:04:05", // ISO format without timezone
"2006-01-02T15:04:05.000Z",
"2006-01-02T15:04:05Z",
"2006-01-02 15:04:05.999999999", // nanoseconds
"2006-01-02 15:04:05.999999", // microseconds
"2006-01-02 15:04:05.999", // milliseconds
"2006/01/02",
"20060102",
"01/02/2006",
"2006年01月02日",
"2006年01月02日 15:04:05",
}
var lastErr error
for _, format := range formats {
t, err := time.Parse(format, ts)
if err == nil {
return t, nil
}
lastErr = err
}
// fall back to interpreting the string as a Unix timestamp, by digit count
if timestamp, err := strconv.ParseInt(ts, 10, 64); err == nil {
switch len(ts) {
case 10: // seconds
return time.Unix(timestamp, 0), nil
case 13: // milliseconds
return time.Unix(timestamp/1000, (timestamp%1000)*1000000), nil
case 16: // microseconds
return time.Unix(timestamp/1000000, (timestamp%1000000)*1000), nil
case 19: // nanoseconds
return time.Unix(timestamp/1000000000, timestamp%1000000000), nil
}
}
return time.Time{}, fmt.Errorf("failed to parse time with any format: %v", lastErr)
}
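
The try-each-layout approach above, reduced to three layouts; a sketch only, since the real function carries a much longer list plus the digit-count fallback:

package main

import (
    "fmt"
    "time"
)

// parseAny tries layouts in order and returns the first successful parse.
func parseAny(ts string, layouts []string) (time.Time, error) {
    var lastErr error
    for _, layout := range layouts {
        t, err := time.Parse(layout, ts)
        if err == nil {
            return t, nil
        }
        lastErr = err
    }
    return time.Time{}, fmt.Errorf("no layout matched: %v", lastErr)
}

func main() {
    layouts := []string{time.RFC3339, "2006-01-02 15:04:05", "2006-01-02"}
    t, err := parseAny("2022-06-29T05:52:16.603Z", layouts)
    fmt.Println(t.Unix(), err)
}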
func ConvertToTable(src td.APIResponse) []interface{} {
var resp []interface{}
for i := range src.Data {
cur := make(map[string]interface{})
for j := range src.Data[i] {
cur[src.ColumnMeta[j][0].(string)] = src.Data[i][j]
}
resp = append(resp, cur)
}
return resp
}


@@ -1,5 +1,3 @@
version: "3.7"
networks:
nightingale:
driver: bridge


@@ -19,8 +19,8 @@ precision = "ms"
# global collect interval
interval = 15
[global.labels]
source="categraf"
# [global.labels]
# source="categraf"
# region = "shanghai"
# env = "localhost"


@@ -0,0 +1,42 @@
[[instances]]
address = "mysql:3306"
username = "root"
password = "1234"
# # set tls=custom to enable tls
# parameters = "tls=false"
# extra_status_metrics = true
# extra_innodb_metrics = false
# gather_processlist_processes_by_state = false
# gather_processlist_processes_by_user = false
# gather_schema_size = true
# gather_table_size = false
# gather_system_table_size = false
# gather_slave_status = true
# # timeout
# timeout_seconds = 3
# # interval = global.interval * interval_times
# interval_times = 1
# important! use global unique string to specify instance
labels = { instance="docker-compose-mysql" }
## Optional TLS Config
# use_tls = false
# tls_min_version = "1.2"
# tls_ca = "/etc/categraf/ca.pem"
# tls_cert = "/etc/categraf/cert.pem"
# tls_key = "/etc/categraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = true
#[[instances.queries]]
# mesurement = "lock_wait"
# metric_fields = [ "total" ]
# timeout = "3s"
# request = '''
#SELECT count(*) as total FROM information_schema.innodb_trx WHERE trx_state='LOCK WAIT'
#'''


@@ -0,0 +1,37 @@
[[instances]]
address = "redis:6379"
username = ""
password = ""
# pool_size = 2
## whether to collect slowlog
# gather_slowlog = true
## maximum number of slowlog entries to collect
# slowlog_max_len = 100
## only collect slowlog entries from the last N seconds
## note: do not set this below the plugin's collection interval, otherwise some slowlog entries will be missed
# slowlog_time_window=30
# resulting metric, value in microseconds:
# redis_slow_log{ident=dev-01 client_addr=127.0.0.1:56364 client_name= cmd="info ALL" log_id=983} 74
# # Optional. Specify redis commands to retrieve values
# commands = [
# {command = ["get", "sample-key1"], metric = "custom_metric_name1"},
# {command = ["get", "sample-key2"], metric = "custom_metric_name2"}
# ]
# # interval = global.interval * interval_times
# interval_times = 1
# important! use global unique string to specify instance
labels = { instance="docker-compose-redis" }
## Optional TLS Config
# use_tls = false
# tls_min_version = "1.2"
# tls_ca = "/etc/categraf/ca.pem"
# tls_cert = "/etc/categraf/cert.pem"
# tls_key = "/etc/categraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = true


@@ -50,7 +50,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"


@@ -50,7 +50,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"


@@ -25,7 +25,7 @@ services:
network_mode: host
prometheus:
image: prom/prometheus
image: prom/prometheus:v2.55.1
container_name: prometheus
hostname: prometheus
restart: always


@@ -50,7 +50,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"


@@ -790,6 +790,7 @@ CREATE TABLE es_index_pattern (
time_field varchar(128) not null default '@timestamp',
allow_hide_system_indices smallint not null default 0,
fields_format varchar(4096) not null default '',
cross_cluster_enabled int not null default 0,
create_at bigint default '0',
create_by varchar(64) default '',
update_at bigint default '0',
@@ -859,6 +860,7 @@ CREATE TABLE builtin_components (
ident VARCHAR(191) NOT NULL,
logo VARCHAR(191) NOT NULL,
readme TEXT NOT NULL,
disabled INT NOT NULL DEFAULT 0,
created_at BIGINT NOT NULL DEFAULT 0,
created_by VARCHAR(191) NOT NULL DEFAULT '',
updated_at BIGINT NOT NULL DEFAULT 0,
@@ -885,4 +887,20 @@ CREATE TABLE builtin_payloads (
CREATE INDEX idx_component ON builtin_payloads (component);
CREATE INDEX idx_builtin_payloads_name ON builtin_payloads (name);
CREATE INDEX idx_cate ON builtin_payloads (cate);
CREATE INDEX idx_type ON builtin_payloads (type);
CREATE TABLE dash_annotation (
id bigserial PRIMARY KEY,
dashboard_id bigint not null,
panel_id varchar(191) not null,
tags text,
description text,
config text,
time_start bigint not null default 0,
time_end bigint not null default 0,
create_at bigint not null default 0,
create_by varchar(64) not null default '',
update_at bigint not null default 0,
update_by varchar(64) not null default ''
);


@@ -50,7 +50,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"

File diff suppressed because it is too large


@@ -1,3 +1,10 @@
GRANT ALL ON *.* TO 'root'@'127.0.0.1' IDENTIFIED BY '1234';
GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY '1234';
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '1234';
CREATE USER IF NOT EXISTS 'root'@'127.0.0.1' IDENTIFIED BY '1234';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1' WITH GRANT OPTION;
CREATE USER IF NOT EXISTS 'root'@'localhost' IDENTIFIED BY '1234';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY '1234';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;


@@ -116,4 +116,35 @@ CREATE TABLE `target_busi_group` (
`update_at` bigint NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx_target_group` (`target_ident`,`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/* v7.7.2 2024-12-02 */
ALTER TABLE alert_subscribe MODIFY COLUMN rule_ids varchar(1024);
ALTER TABLE alert_subscribe MODIFY COLUMN busi_groups varchar(4096);
/* v8.0.0-beta.1 2024-12-13 */
ALTER TABLE `alert_rule` ADD COLUMN `cron_pattern` VARCHAR(64);
ALTER TABLE `builtin_components` MODIFY COLUMN `logo` mediumtext COMMENT '''logo of component''';
/* v8.0.0-beta.2 2024-12-26 */
ALTER TABLE `es_index_pattern` ADD COLUMN `cross_cluster_enabled` int not null default 0;
/* v8.0.0-beta.3 2025-01-03 */
ALTER TABLE `builtin_components` ADD COLUMN `disabled` INT NOT NULL DEFAULT 0 COMMENT 'is disabled or not';
CREATE TABLE `dash_annotation` (
`id` bigint unsigned not null auto_increment,
`dashboard_id` bigint not null comment 'dashboard id',
`panel_id` varchar(191) not null comment 'panel id',
`tags` text comment 'tags array json string',
`description` text comment 'annotation description',
`config` text comment 'annotation config',
`time_start` bigint not null default 0 comment 'start timestamp',
`time_end` bigint not null default 0 comment 'end timestamp',
`create_at` bigint not null default 0 comment 'create time',
`create_by` varchar(64) not null default '' comment 'creator',
`update_at` bigint not null default 0 comment 'update time',
`update_by` varchar(64) not null default '' comment 'updater',
PRIMARY KEY (`id`),
KEY `idx_dashboard_id` (`dashboard_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;


@@ -17,6 +17,8 @@ CREATE TABLE `users` (
`update_by` varchar(64) not null default ''
);
CREATE UNIQUE INDEX idx_users_username ON `users` (username);
insert into `users`(id, username, nickname, password, roles, create_at, create_by, update_at, update_by) values(1, 'root', '超管', 'root.2020', 'Admin', strftime('%s', 'now'), 'system', strftime('%s', 'now'), 'system');
CREATE TABLE `user_group` (
@@ -182,8 +184,9 @@ CREATE TABLE `board` (
`create_by` varchar(64) not null default '',
`update_at` bigint not null default 0,
`update_by` varchar(64) not null default '',
unique (`group_id`, `name`)
`public_cate` bigint not null default 0
);
CREATE UNIQUE INDEX idx_board_group_id_name ON `board` (group_id, name);
CREATE INDEX `idx_board_ident` ON `board` (`ident` asc);
-- for dashboard new version
@@ -192,6 +195,15 @@ CREATE TABLE `board_payload` (
`payload` mediumtext not null
);
CREATE TABLE `chart` (
`id` integer primary key autoincrement,
`group_id` integer not null,
`configs` text,
`weight` integer not null default 0
);
CREATE INDEX idx_chart_group_id ON `chart` (group_id);
CREATE TABLE `chart_share` (
`id` integer primary key autoincrement,
`cluster` varchar(128) not null,
@@ -238,7 +250,9 @@ CREATE TABLE `alert_rule` (
`create_at` bigint not null default 0,
`create_by` varchar(64) not null default '',
`update_at` bigint not null default 0,
`update_by` varchar(64) not null default ''
`update_by` varchar(64) not null default '',
`cron_pattern` varchar(64),
`datasource_queries` text
);
CREATE INDEX `idx_alert_rule_group_id` ON `alert_rule` (`group_id` asc);
CREATE INDEX `idx_alert_rule_update_at` ON `alert_rule` (`update_at` asc);
@@ -308,11 +322,18 @@ CREATE TABLE `target` (
`tags` varchar(512) not null default '',
`host_ip` varchar(15) default '',
`agent_version` varchar(255) default '',
`host_tags` text,
`engine_name` varchar(255) default '',
`os` varchar(31) default '',
`update_at` bigint not null default 0
);
CREATE INDEX `idx_target_group_id` ON `target` (`group_id` asc);
CREATE UNIQUE INDEX idx_target_ident ON `target` (ident);
CREATE INDEX idx_host_ip ON `target` (host_ip);
CREATE INDEX idx_agent_version ON `target` (agent_version);
CREATE INDEX idx_engine_name ON `target` (engine_name);
CREATE INDEX idx_os ON `target` (os);
CREATE TABLE `metric_view` (
`id` integer primary key autoincrement,
@@ -337,12 +358,14 @@ CREATE TABLE `recording_rule` (
`disabled` tinyint(1) not null default 0,
`prom_ql` varchar(8192) not null,
`prom_eval_interval` int not null,
`cron_pattern` varchar(255) default '',
`append_tags` varchar(255) default '',
`query_configs` text not null,
`create_at` bigint default '0',
`create_by` varchar(64) default '',
`update_at` bigint default '0',
`update_by` varchar(64) default ''
`update_by` varchar(64) default '',
`datasource_queries` text
);
CREATE INDEX `idx_recording_rule_group_id` ON `recording_rule` (`group_id` asc);
CREATE INDEX `idx_recording_rule_update_at` ON `recording_rule` (`update_at` asc);
@@ -430,6 +453,7 @@ CREATE TABLE `alert_his_event` (
`trigger_value` varchar(2048) not null,
`recover_time` bigint not null default 0,
`last_eval_time` bigint not null default 0,
`original_tags` varchar(8192),
`tags` varchar(1024) not null default '',
`annotations` text not null,
`rule_config` text not null
@@ -459,6 +483,8 @@ CREATE INDEX `idx_builtin_components_ident` ON `builtin_components` (`ident` asc
CREATE TABLE `builtin_payloads` (
`id` integer primary key autoincrement,
`component_id` integer not null default 0,
`uuid` integer not null,
`type` varchar(191) not null,
`component` varchar(191) not null,
`cate` varchar(191) not null,
@@ -474,6 +500,20 @@ CREATE INDEX `idx_builtin_payloads_component` ON `builtin_payloads` (`component`
CREATE INDEX `idx_builtin_payloads_name` ON `builtin_payloads` (`name` asc);
CREATE INDEX `idx_builtin_payloads_cate` ON `builtin_payloads` (`cate` asc);
CREATE INDEX `idx_builtin_payloads_type` ON `builtin_payloads` (`type` asc);
CREATE INDEX idx_uuid ON `builtin_payloads` (uuid);
CREATE TABLE `notification_record` (
`id` integer primary key autoincrement,
`event_id` integer not null,
`sub_id` integer,
`channel` varchar(255) not null,
`status` integer,
`target` varchar(1024) not null,
`details` varchar(2048) default '',
`created_at` integer not null
);
CREATE INDEX idx_evt ON notification_record (event_id);
CREATE TABLE `task_tpl` (
`id` integer primary key autoincrement,
@@ -553,6 +593,8 @@ CREATE TABLE `datasource`
`updated_by` varchar(64) not null default ''
);
CREATE UNIQUE INDEX idx_datasource_name ON datasource (name);
CREATE TABLE `builtin_cate` (
`id` integer primary key autoincrement,
`name` varchar(191) not null,
@@ -570,6 +612,8 @@ CREATE TABLE `notify_tpl` (
`update_by` varchar(64) not null default ''
);
CREATE UNIQUE INDEX idx_notify_tpl_channel ON notify_tpl (channel);
CREATE TABLE `sso_config` (
`id` integer primary key autoincrement,
`name` varchar(191) not null unique,
@@ -577,6 +621,8 @@ CREATE TABLE `sso_config` (
`update_at` bigint not null default 0
);
CREATE UNIQUE INDEX idx_sso_config_name ON sso_config (name);
CREATE TABLE `es_index_pattern` (
`id` integer primary key autoincrement,
`datasource_id` bigint not null default 0,
@@ -584,6 +630,7 @@ CREATE TABLE `es_index_pattern` (
`time_field` varchar(128) not null default '@timestamp',
`allow_hide_system_indices` tinyint(1) not null default 0,
`fields_format` varchar(4096) not null default '',
`cross_cluster_enabled` int not null default 0,
`create_at` bigint default '0',
`create_by` varchar(64) default '',
`update_at` bigint default '0',
@@ -591,6 +638,8 @@ CREATE TABLE `es_index_pattern` (
unique (`datasource_id`, `name`)
);
CREATE UNIQUE INDEX idx_es_index_pattern_datasource_id_name ON es_index_pattern (datasource_id, name);
CREATE TABLE `builtin_metrics` (
`id` integer primary key autoincrement,
`collector` varchar(191) NOT NULL,
@@ -603,13 +652,15 @@ CREATE TABLE `builtin_metrics` (
`created_at` bigint NOT NULL DEFAULT 0,
`created_by` varchar(191) NOT NULL DEFAULT '',
`updated_at` bigint NOT NULL DEFAULT 0,
`updated_by` varchar(191) NOT NULL DEFAULT ''
`updated_by` varchar(191) NOT NULL DEFAULT '',
`uuid` integer not null default 0
);
-- CREATE UNIQUE INDEX `idx_builtin_metrics_collector_typ_name` ON `builtin_metrics` (`lang`,`collector`, `typ`, `name` asc);
-- CREATE INDEX `idx_builtin_metrics_collector` ON `builtin_metrics` (`collector` asc);
-- CREATE INDEX `idx_builtin_metrics_typ` ON `builtin_metrics` (`typ` asc);
-- CREATE INDEX `idx_builtin_metrics_name` ON `builtin_metrics` (`name` asc);
-- CREATE INDEX `idx_builtin_metrics_lang` ON `builtin_metrics` (`lang` asc);
CREATE UNIQUE INDEX idx_collector_typ_name ON builtin_metrics (lang, collector, typ, name);
CREATE INDEX idx_collector ON builtin_metrics (collector);
CREATE INDEX idx_typ ON builtin_metrics (typ);
CREATE INDEX idx_builtinmetric_name ON builtin_metrics (name);
CREATE INDEX idx_lang ON builtin_metrics (lang);
CREATE TABLE `metric_filter` (
@@ -624,6 +675,30 @@ CREATE TABLE `metric_filter` (
);
CREATE INDEX `idx_metric_filter_name` ON `metric_filter` (`name` asc);
CREATE TABLE `target_busi_group` (
`id` integer primary key autoincrement,
`target_ident` varchar(191) not null,
`group_id` integer not null,
`update_at` integer not null
);
CREATE UNIQUE INDEX idx_target_busi_group ON target_busi_group (target_ident, group_id);
CREATE TABLE `dash_annotation` (
`id` integer primary key autoincrement,
`dashboard_id` bigint not null,
`panel_id` varchar(191) not null,
`tags` text,
`description` text,
`config` text,
`time_start` bigint not null default 0,
`time_end` bigint not null default 0,
`create_at` bigint not null default 0,
`create_by` varchar(64) not null default '',
`update_at` bigint not null default 0,
`update_by` varchar(64) not null default ''
);
CREATE TABLE `task_meta`
(

dscache/cache.go Normal file

@@ -0,0 +1,74 @@
package dscache
import (
"fmt"
"sync"
"time"
"github.com/ccfos/nightingale/v6/datasource"
)
type Cache struct {
datas map[string]map[int64]datasource.Datasource
datasourceStatus map[string]string
mutex *sync.RWMutex
}
var DsCache = Cache{
datas: make(map[string]map[int64]datasource.Datasource),
datasourceStatus: make(map[string]string),
mutex: new(sync.RWMutex),
}
func (cs *Cache) Put(cate string, dsId int64, name string, ds datasource.Datasource) {
cs.mutex.Lock()
if _, found := cs.datas[cate]; !found {
cs.datas[cate] = make(map[int64]datasource.Datasource)
}
if _, found := cs.datas[cate][dsId]; found {
if cs.datas[cate][dsId].Equal(ds) {
cs.mutex.Unlock()
return
}
}
cs.mutex.Unlock()
// InitClient() can be very slow when the config is wrong or the remote end is unreachable; holding the mutex across it would make Get() time out
err := ds.InitClient()
if err != nil {
cs.SetStatus(name, fmt.Sprintf("%s init plugin:%s %d %+v client fail: %v", time.Now().Format("2006-01-02 15:04:05"), cate, dsId, ds, err))
return
}
cs.SetStatus(name, fmt.Sprintf("%s init plugin:%s %d %+v client success", time.Now().Format("2006-01-02 15:04:05"), cate, dsId, ds))
cs.mutex.Lock()
cs.datas[cate][dsId] = ds
cs.mutex.Unlock()
}
func (cs *Cache) Get(cate string, dsId int64) (datasource.Datasource, bool) {
cs.mutex.RLock()
defer cs.mutex.RUnlock()
if _, found := cs.datas[cate]; !found {
return nil, false
}
if _, found := cs.datas[cate][dsId]; !found {
return nil, false
}
return cs.datas[cate][dsId], true
}
func (cs *Cache) GetAllStatus() map[string]string {
cs.mutex.RLock()
defer cs.mutex.RUnlock()
return cs.datasourceStatus
}
func (cs *Cache) SetStatus(cate string, status string) {
cs.mutex.Lock()
cs.datasourceStatus[cate] = status
cs.mutex.Unlock()
}
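
The point of Put's locking dance is that the mutex is released around the potentially slow InitClient call, so concurrent Get calls are never blocked behind a misbehaving datasource. A reduced sketch of that pattern; the cache type and init closure here are illustrative stand-ins, not the real API:

package main

import (
    "fmt"
    "sync"
    "time"
)

type cache struct {
    mu   sync.RWMutex
    data map[string]string
}

// put releases the lock around the slow initialization, mirroring Cache.Put above.
func (c *cache) put(key string, init func() (string, error)) {
    c.mu.Lock()
    _, exists := c.data[key]
    c.mu.Unlock()
    if exists {
        return
    }
    v, err := init() // slow work happens without holding the lock
    if err != nil {
        return
    }
    c.mu.Lock()
    c.data[key] = v
    c.mu.Unlock()
}

func (c *cache) get(key string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    v, ok := c.data[key]
    return v, ok
}

func main() {
    c := &cache{data: make(map[string]string)}
    c.put("ck-1", func() (string, error) { time.Sleep(10 * time.Millisecond); return "client", nil })
    fmt.Println(c.get("ck-1"))
}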

dscache/sync.go Normal file

@@ -0,0 +1,188 @@
package dscache
import (
"context"
"fmt"
"strings"
"sync/atomic"
"time"
"github.com/ccfos/nightingale/v6/datasource"
_ "github.com/ccfos/nightingale/v6/datasource/ck"
"github.com/ccfos/nightingale/v6/datasource/es"
"github.com/ccfos/nightingale/v6/dskit/tdengine"
"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/ctx"
"github.com/toolkits/pkg/logger"
)
var FromAPIHook func()
func Init(ctx *ctx.Context, fromAPI bool) {
go getDatasourcesFromDBLoop(ctx, fromAPI)
}
type ListInput struct {
Page int `json:"p"`
Limit int `json:"limit"`
Category string `json:"category"`
PluginType string `json:"plugin_type"` // prometheus
Status string `json:"status"`
}
type DSReply struct {
RequestID string `json:"request_id"`
Data struct {
Items []datasource.DatasourceInfo `json:"items"`
} `json:"data"`
}
type DSReplyEncrypt struct {
RequestID string `json:"request_id"`
Data string `json:"data"`
}
var PromDefaultDatasourceId int64
func getDatasourcesFromDBLoop(ctx *ctx.Context, fromAPI bool) {
for {
if !fromAPI {
items, err := models.GetDatasources(ctx)
if err != nil {
logger.Errorf("get datasource from database fail: %v", err)
//stat.CounterExternalErrorTotal.WithLabelValues("db", "get_cluster").Inc()
time.Sleep(time.Second * 2)
continue
}
var dss []datasource.DatasourceInfo
for _, item := range items {
if item.PluginType == "prometheus" && item.IsDefault {
atomic.StoreInt64(&PromDefaultDatasourceId, item.Id)
}
DsCache.SetStatus(item.Name, fmt.Sprintf("%s get datasource: %+v", time.Now().Format("2006-01-02 15:04:05"), item))
ds := datasource.DatasourceInfo{
Id: item.Id,
Name: item.Name,
Description: item.Description,
Category: item.Category,
PluginId: item.PluginId,
Type: item.PluginType,
PluginTypeName: item.PluginTypeName,
Settings: item.SettingsJson,
HTTPJson: item.HTTPJson,
AuthJson: item.AuthJson,
Status: item.Status,
IsDefault: item.IsDefault,
}
if item.PluginType == "elasticsearch" {
esN9eToDatasourceInfo(&ds, item)
} else if item.PluginType == "opensearch" {
osN9eToDatasourceInfo(&ds, item)
} else if item.PluginType == "tdengine" {
tdN9eToDatasourceInfo(&ds, item)
} else {
ds.Settings = make(map[string]interface{})
for k, v := range item.SettingsJson {
ds.Settings[k] = v
}
}
dss = append(dss, ds)
}
PutDatasources(dss)
} else if FromAPIHook != nil {
FromAPIHook()
}
time.Sleep(time.Second * 2)
}
}
func tdN9eToDatasourceInfo(ds *datasource.DatasourceInfo, item models.Datasource) {
ds.Settings = make(map[string]interface{})
ds.Settings["tdengine.cluster_name"] = item.Name
ds.Settings["tdengine.addr"] = item.HTTPJson.Url
ds.Settings["tdengine.timeout"] = item.HTTPJson.Timeout
ds.Settings["tdengine.dial_timeout"] = item.HTTPJson.DialTimeout
ds.Settings["tdengine.max_idle_conns_per_host"] = item.HTTPJson.MaxIdleConnsPerHost
ds.Settings["tdengine.headers"] = item.HTTPJson.Headers
ds.Settings["tdengine.basic"] = tdengine.TDengineBasicAuth{
User: item.AuthJson.BasicAuthUser,
Password: item.AuthJson.BasicAuthPassword,
}
}
func esN9eToDatasourceInfo(ds *datasource.DatasourceInfo, item models.Datasource) {
ds.Settings = make(map[string]interface{})
ds.Settings["es.nodes"] = []string{item.HTTPJson.Url}
if len(item.HTTPJson.Urls) > 0 {
ds.Settings["es.nodes"] = item.HTTPJson.Urls
}
ds.Settings["es.timeout"] = item.HTTPJson.Timeout
ds.Settings["es.basic"] = es.BasicAuth{
Username: item.AuthJson.BasicAuthUser,
Password: item.AuthJson.BasicAuthPassword,
}
ds.Settings["es.tls"] = es.TLS{
SkipTlsVerify: item.HTTPJson.TLS.SkipTlsVerify,
}
ds.Settings["es.version"] = item.SettingsJson["version"]
ds.Settings["es.headers"] = item.HTTPJson.Headers
ds.Settings["es.min_interval"] = item.SettingsJson["min_interval"]
ds.Settings["es.max_shard"] = item.SettingsJson["max_shard"]
ds.Settings["es.enable_write"] = item.SettingsJson["enable_write"]
}
// for opensearch
func osN9eToDatasourceInfo(ds *datasource.DatasourceInfo, item models.Datasource) {
ds.Settings = make(map[string]interface{})
ds.Settings["os.nodes"] = []string{item.HTTPJson.Url}
ds.Settings["os.timeout"] = item.HTTPJson.Timeout
ds.Settings["os.basic"] = es.BasicAuth{
Username: item.AuthJson.BasicAuthUser,
Password: item.AuthJson.BasicAuthPassword,
}
ds.Settings["os.tls"] = es.TLS{
SkipTlsVerify: item.HTTPJson.TLS.SkipTlsVerify,
}
ds.Settings["os.version"] = item.SettingsJson["version"]
ds.Settings["os.headers"] = item.HTTPJson.Headers
ds.Settings["os.min_interval"] = item.SettingsJson["min_interval"]
ds.Settings["os.max_shard"] = item.SettingsJson["max_shard"]
}
func PutDatasources(items []datasource.DatasourceInfo) {
for _, item := range items {
if item.Type == "prometheus" {
continue
}
if item.Type == "loki" {
continue
}
if item.Name == "" {
DsCache.SetStatus(item.Name, fmt.Sprintf("%s cluster name is empty, ignore %+v", time.Now().Format("2006-01-02 15:04:05"), item))
continue
}
typ := strings.ReplaceAll(item.Type, ".logging", "")
ds, err := datasource.GetDatasourceByType(typ, item.Settings)
if err != nil {
DsCache.SetStatus(item.Name, fmt.Sprintf("%s get plugin:%+v fail: %v", time.Now().Format("2006-01-02 15:04:05"), item, err))
continue
}
err = ds.Validate(context.Background())
if err != nil {
DsCache.SetStatus(item.Name, fmt.Sprintf("%s get plugin:%+v fail: %v", time.Now().Format("2006-01-02 15:04:05"), item, err))
continue
}
// initialize the client asynchronously, otherwise datasource sync would be very slow
go DsCache.Put(typ, item.Id, item.Name, ds)
}
}
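Note the type normalization above: a ".logging"-suffixed plugin type resolves to the same registered plugin as its base type. A small sketch of that mapping, where the "elasticsearch.logging" string is an example and settings stands for the map built by esN9eToDatasourceInfo:

typ := strings.ReplaceAll("elasticsearch.logging", ".logging", "") // -> "elasticsearch"
ds, err := datasource.GetDatasourceByType(typ, settings)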

View File

@@ -0,0 +1,303 @@
package clickhouse
import (
"context"
"database/sql"
"errors"
"fmt"
"io"
"strings"
"time"
"github.com/ccfos/nightingale/v6/dskit/sqlbase"
"github.com/ccfos/nightingale/v6/dskit/types"
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/mitchellh/mapstructure"
"github.com/toolkits/pkg/net/httplib"
ckDriver "gorm.io/driver/clickhouse"
"gorm.io/gorm"
)
const (
ckDataSource = "clickhouse://%s:%s@%s?read_timeout=10s"
DefaultLimit = 500
)
type Clickhouse struct {
Nodes []string `json:"ck.nodes" mapstructure:"ck.nodes"`
User string `json:"ck.user" mapstructure:"ck.user"`
Password string `json:"ck.password" mapstructure:"ck.password"`
Timeout int `json:"ck.timeout" mapstructure:"ck.timeout"`
MaxQueryRows int `json:"ck.max_query_rows" mapstructure:"ck.max_query_rows"`
Client *gorm.DB `json:"-"`
ClientByHTTP *sql.DB `json:"-"`
}
func (c *Clickhouse) InitCli() error {
if c.MaxQueryRows == 0 {
c.MaxQueryRows = DefaultLimit
}
if len(c.Nodes) == 0 {
return fmt.Errorf("not found ck shard, please check datasource config")
}
addr := c.Nodes[0]
url := addr
if !strings.HasPrefix(url, "http://") {
url = "http://" + url
}
resp, err := httplib.Get(url).SetTimeout(time.Second * 1).Response()
// ignore HTTP status-code errors here, since the endpoint may not speak HTTP at all
if err != nil {
return err
}
defer resp.Body.Close()
// HTTP protocol
if resp.StatusCode == 200 {
jsonBytes, _ := io.ReadAll(resp.Body)
if len(jsonBytes) > 0 && strings.Contains(strings.ToLower(string(jsonBytes)), "ok.") {
ckconn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{addr},
Auth: clickhouse.Auth{
Username: c.User,
Password: c.Password,
},
Settings: clickhouse.Settings{
"max_execution_time": 60,
},
DialTimeout: 10 * time.Second,
Protocol: clickhouse.HTTP,
})
if ckconn == nil {
return errors.New("db conn failed")
}
c.ClientByHTTP = ckconn
return nil
}
}
db, err := gorm.Open(
ckDriver.New(
ckDriver.Config{
DSN: fmt.Sprintf(ckDataSource,
c.User, c.Password, addr),
DisableDatetimePrecision: true,
DontSupportRenameColumn: true,
SkipInitializeWithVersion: false,
}),
)
if err != nil {
return err
}
c.Client = db
return nil
}
const (
ShowDatabases = "SHOW DATABASES"
ShowTables = "SELECT name FROM system.tables WHERE database = '%s'"
DescTable = "SELECT name,type FROM system.columns WHERE database='%s' AND table = '%s';"
)
func (c *Clickhouse) QueryRows(ctx context.Context, query string) (*sql.Rows, error) {
var (
rows *sql.Rows
err error
)
if c.ClientByHTTP != nil {
rows, err = c.ClientByHTTP.Query(query)
if err != nil {
return nil, err
}
} else if c.Client != nil {
rows, err = c.Client.Raw(query).Rows()
if err != nil {
return nil, err
}
} else {
return nil, fmt.Errorf("clickhouse client is nil")
}
return rows, nil
}
// ShowDatabases lists all databases in Clickhouse
func (c *Clickhouse) ShowDatabases(ctx context.Context) ([]string, error) {
var (
res []string
)
rows, err := c.QueryRows(ctx, ShowDatabases)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var r string
if err := rows.Scan(&r); err != nil {
return nil, err
}
res = append(res, r)
}
return res, nil
}
// ShowTables lists all tables in a given database
func (c *Clickhouse) ShowTables(ctx context.Context, database string) ([]string, error) {
var (
res []string
)
showTables := fmt.Sprintf(ShowTables, database)
rows, err := c.QueryRows(ctx, showTables)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var r string
if err := rows.Scan(&r); err != nil {
return nil, err
}
res = append(res, r)
}
return res, nil
}
// DescribeTable describes the schema of a specified table in Clickhouse
func (c *Clickhouse) DescribeTable(ctx context.Context, query interface{}) ([]*types.ColumnProperty, error) {
var (
ret []*types.ColumnProperty
)
ckQueryParam := new(QueryParam)
if err := mapstructure.Decode(query, ckQueryParam); err != nil {
return nil, err
}
descTable := fmt.Sprintf(DescTable, ckQueryParam.Database, ckQueryParam.Table)
rows, err := c.QueryRows(ctx, descTable)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var column types.ColumnProperty
if err := rows.Scan(&column.Field, &column.Type); err != nil {
return nil, err
}
ret = append(ret, &column)
}
return ret, nil
}
func (c *Clickhouse) ExecQueryBySqlDB(ctx context.Context, sql string) ([]map[string]interface{}, error) {
rows, err := c.QueryRows(ctx, sql)
if err != nil {
return nil, err
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
return nil, err
}
var results []map[string]interface{}
for rows.Next() {
columnValues := make([]interface{}, len(columns))
columnPointers := make([]interface{}, len(columns))
for i := range columnValues {
columnPointers[i] = &columnValues[i]
}
if err := rows.Scan(columnPointers...); err != nil {
continue
}
rowMap := make(map[string]interface{})
for i, colName := range columns {
val := columnValues[i]
bytes, ok := val.([]byte)
if ok {
rowMap[colName] = string(bytes)
} else {
rowMap[colName] = val
}
}
results = append(results, rowMap)
}
return results, nil
}
func (c *Clickhouse) Query(ctx context.Context, query interface{}) ([]map[string]interface{}, error) {
ckQuery := new(QueryParam)
if err := mapstructure.Decode(query, ckQuery); err != nil {
return nil, err
}
// validate the SQL and reject write statements
sqlItem := strings.Split(strings.ToUpper(ckQuery.Sql), " ")
for _, item := range sqlItem {
if _, ok := ckBannedOp[item]; ok {
return nil, fmt.Errorf("operation %s is forbid, only read db, please check your sql", item)
}
}
// count the matching rows first to avoid fetching an oversized result set
err := c.CheckMaxQueryRows(ctx, ckQuery.Sql)
if err != nil {
return nil, err
}
dbRows := make([]map[string]interface{}, 0)
if c.ClientByHTTP != nil {
dbRows, err = c.ExecQueryBySqlDB(ctx, ckQuery.Sql)
} else {
err = c.Client.Raw(ckQuery.Sql).Find(&dbRows).Error
}
if err != nil {
return nil, fmt.Errorf("fetch data failed, sql is %s, err is %s", ckQuery.Sql, err.Error())
}
return dbRows, nil
}
func (c *Clickhouse) CheckMaxQueryRows(ctx context.Context, sql string) error {
subSql := strings.ReplaceAll(sql, ";", "")
subSql = fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", subSql)
dbRows, err := c.ExecQueryBySqlDB(ctx, subSql)
if err != nil {
return fmt.Errorf("fetch data failed, sql is %s, err is %s", subSql, err.Error())
}
if len(dbRows) > 0 {
if count, exists := dbRows[0]["count"]; exists {
v, err := sqlbase.ParseFloat64Value(count)
if err != nil {
return err
}
if v > float64(c.MaxQueryRows) {
return fmt.Errorf("query result rows count %d exceeds the maximum limit %d", int(v), c.MaxQueryRows)
}
}
}
return nil
}
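To make the row-count guard concrete: for the query used in the test below, CheckMaxQueryRows first issues a wrapped COUNT and only then runs the query itself. A sketch of the generated SQL:

sql := "select * from default.student limit 20"
guard := fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", strings.ReplaceAll(sql, ";", ""))
// guard == "SELECT COUNT(*) as count FROM (select * from default.student limit 20) AS subquery;"
// the original query runs only if count <= MaxQueryRows (DefaultLimit, 500)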

View File

@@ -0,0 +1,43 @@
package clickhouse
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
"github.com/ccfos/nightingale/v6/dskit/types"
)
func Test_Timeseries(t *testing.T) {
ck := &Clickhouse{
Nodes: []string{"127.0.0.1:8123"},
User: "default",
Password: "123456",
}
err := ck.InitCli()
if err != nil {
t.Fatal(err)
}
data, err := ck.QueryTimeseries(context.TODO(), &QueryParam{
Sql: `select * from default.student limit 20`,
From: time.Now().Unix() - 300,
To: time.Now().Unix(),
TimeField: "created_at",
TimeFormat: "datetime",
Keys: types.Keys{
LabelKey: "age",
},
})
if err != nil {
t.Fatal(err)
}
bs, err := json.Marshal(data)
if err != nil {
t.Fatal(err)
}
fmt.Println(string(bs))
}

View File

@@ -0,0 +1,58 @@
package clickhouse
import (
"context"
"fmt"
"github.com/ccfos/nightingale/v6/dskit/sqlbase"
"github.com/ccfos/nightingale/v6/dskit/types"
)
const (
TimeFieldFormatEpochMilli = "epoch_millis"
TimeFieldFormatEpochSecond = "epoch_second"
)
// time-series query API
type QueryParam struct {
Limit int `json:"limit" mapstructure:"limit"`
Sql string `json:"sql" mapstructure:"sql"`
Ref string `json:"ref" mapstructure:"ref"`
From int64 `json:"from" mapstructure:"from"`
To int64 `json:"to" mapstructure:"to"`
TimeField string `json:"time_field" mapstructure:"time_field"`
TimeFormat string `json:"time_format" mapstructure:"time_format"`
Keys types.Keys `json:"keys" mapstructure:"keys"`
Database string `json:"database" mapstructure:"database"`
Table string `json:"table" mapstructure:"table"`
}
var (
ckBannedOp = map[string]struct{}{
"CREATE": {},
"INSERT": {},
"ALTER": {},
"REVOKE": {},
"DROP": {},
"RENAME": {},
"ATTACH": {},
"DETACH": {},
"OPTIMIZE": {},
"TRUNCATE": {},
"SET": {},
}
)
func (c *Clickhouse) QueryTimeseries(ctx context.Context, query *QueryParam) ([]types.MetricValues, error) {
if query.Keys.ValueKey == "" {
return nil, fmt.Errorf("valueKey is required")
}
rows, err := c.Query(ctx, query)
if err != nil {
return nil, err
}
// convert the rows into time-series data
return sqlbase.FormatMetricValues(query.Keys, rows, true), nil
}
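A hypothetical caller-side query, assuming ck is an initialized *Clickhouse; the database, table and column names are placeholders:

param := &QueryParam{
	Sql:        "select host, cpu_usage, created_at from metrics.cpu limit 100",
	TimeField:  "created_at",
	TimeFormat: "datetime",
	Keys: types.Keys{
		ValueKey: "cpu_usage", // required: QueryTimeseries rejects an empty ValueKey
		LabelKey: "host",
		TimeKey:  "created_at",
	},
}
series, err := ck.QueryTimeseries(context.Background(), param)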

236
dskit/sqlbase/base.go Normal file
View File

@@ -0,0 +1,236 @@
// @Author: Ciusyan 5/19/24
package sqlbase
import (
"context"
"database/sql"
"fmt"
"strings"
"time"
"github.com/ccfos/nightingale/v6/dskit/types"
"gorm.io/gorm"
)
// NewDB creates a new Gorm DB instance based on the provided gorm.Dialector and configures the connection pool
func NewDB(ctx context.Context, dialector gorm.Dialector, maxIdleConns, maxOpenConns int, connMaxLifetime time.Duration) (*gorm.DB, error) {
// Create a new Gorm DB instance
db, err := gorm.Open(dialector, &gorm.Config{})
if err != nil {
return nil, err
}
// Configure the connection pool
sqlDB, err := db.DB()
if err != nil {
return nil, err
}
sqlDB.SetMaxIdleConns(maxIdleConns)
sqlDB.SetMaxOpenConns(maxOpenConns)
sqlDB.SetConnMaxLifetime(connMaxLifetime)
return db.WithContext(ctx), sqlDB.Ping()
}
// ShowTables retrieves a list of all tables in the specified database
func ShowTables(ctx context.Context, db *gorm.DB, query string) ([]string, error) {
var tables []string
rows, err := db.WithContext(ctx).Raw(query).Rows()
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var table string
if err := rows.Scan(&table); err != nil {
return nil, err
}
tables = append(tables, table)
}
return tables, nil
}
// ShowDatabases retrieves a list of all databases in the connected database server
func ShowDatabases(ctx context.Context, db *gorm.DB, query string) ([]string, error) {
var databases []string
rows, err := db.WithContext(ctx).Raw(query).Rows()
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var database string
if err := rows.Scan(&database); err != nil {
return nil, err
}
databases = append(databases, database)
}
return databases, nil
}
// DescTable describes the schema of a specified table in MySQL or PostgreSQL
func DescTable(ctx context.Context, db *gorm.DB, query string) ([]*types.ColumnProperty, error) {
rows, err := db.WithContext(ctx).Raw(query).Rows()
if err != nil {
return nil, err
}
defer rows.Close()
var columns []*types.ColumnProperty
for rows.Next() {
var (
field string
typ string
null string
key sql.NullString
defaultValue sql.NullString
extra sql.NullString
)
switch db.Dialector.Name() {
case "mysql":
if err := rows.Scan(&field, &typ, &null, &key, &defaultValue, &extra); err != nil {
continue
}
case "postgres", "sqlserver":
if err := rows.Scan(&field, &typ, &null, &defaultValue); err != nil {
continue
}
case "oracle":
if err := rows.Scan(&field, &typ, &null); err != nil {
continue
}
}
// Convert the database-specific type to internal type
type2, indexable := convertDBType(db.Dialector.Name(), typ)
columns = append(columns, &types.ColumnProperty{
Field: field,
Type: typ,
Type2: type2,
Indexable: indexable,
})
}
return columns, nil
}
// ExecQuery executes the specified query and returns the result rows
func ExecQuery(ctx context.Context, db *gorm.DB, sql string) ([]map[string]interface{}, error) {
rows, err := db.WithContext(ctx).Raw(sql).Rows()
if err != nil {
return nil, err
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
return nil, err
}
var results []map[string]interface{}
for rows.Next() {
columnValues := make([]interface{}, len(columns))
columnPointers := make([]interface{}, len(columns))
for i := range columnValues {
columnPointers[i] = &columnValues[i]
}
if err := rows.Scan(columnPointers...); err != nil {
continue
}
rowMap := make(map[string]interface{})
for i, colName := range columns {
val := columnValues[i]
bytes, ok := val.([]byte)
if ok {
rowMap[colName] = string(bytes)
} else {
rowMap[colName] = val
}
}
results = append(results, rowMap)
}
return results, nil
}
// SelectRows selects rows from a specified table based on a given query
func SelectRows(ctx context.Context, db *gorm.DB, table, query string) ([]map[string]interface{}, error) {
sql := fmt.Sprintf("SELECT * FROM %s", table)
if query != "" {
sql += " WHERE " + query
}
return ExecQuery(ctx, db, sql)
}
// convertDBType converts MySQL or PostgreSQL data types to custom internal types and determines if they are indexable
func convertDBType(dialect, dbType string) (string, bool) {
typ := strings.ToLower(dbType)
// Common type conversions
switch {
case strings.HasPrefix(typ, "int"), strings.HasPrefix(typ, "tinyint"),
strings.HasPrefix(typ, "smallint"), strings.HasPrefix(typ, "mediumint"),
strings.HasPrefix(typ, "bigint"), strings.HasPrefix(typ, "serial"),
strings.HasPrefix(typ, "bigserial"):
return types.LogExtractValueTypeLong, true
case strings.HasPrefix(typ, "varchar"), strings.HasPrefix(typ, "text"),
strings.HasPrefix(typ, "char"), strings.HasPrefix(typ, "tinytext"),
strings.HasPrefix(typ, "mediumtext"), strings.HasPrefix(typ, "longtext"),
strings.HasPrefix(typ, "character varying"), strings.HasPrefix(typ, "nvarchar"),
strings.HasPrefix(typ, "nchar"):
return types.LogExtractValueTypeText, true
case strings.HasPrefix(typ, "float"), strings.HasPrefix(typ, "double"),
strings.HasPrefix(typ, "decimal"), strings.HasPrefix(typ, "numeric"),
strings.HasPrefix(typ, "real"), strings.HasPrefix(typ, "double precision"):
return types.LogExtractValueTypeFloat, true
case strings.HasPrefix(typ, "date"), strings.HasPrefix(typ, "datetime"),
strings.HasPrefix(typ, "timestamp"), strings.HasPrefix(typ, "timestamptz"),
strings.HasPrefix(typ, "time"), strings.HasPrefix(typ, "smalldatetime"):
return types.LogExtractValueTypeDate, false
case strings.HasPrefix(typ, "boolean"), strings.HasPrefix(typ, "bit"):
return types.LogExtractValueTypeBool, false
}
// Specific type conversions for MySQL
if dialect == "mysql" {
switch {
default:
return typ, false
}
}
// Specific type conversions for PostgreSQL
if dialect == "postgres" {
switch {
default:
return typ, false
}
}
if dialect == "oracle" {
switch {
default:
return typ, false
}
}
// Can continue to add specific 'dialect' type ...
return typ, false
}
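A minimal caller sketch for NewDB and ShowTables, assuming the glebarez/sqlite driver this PR adds to go.mod (import "github.com/glebarez/sqlite"); the DSN and pool sizes are placeholders:

func openExample(ctx context.Context) ([]string, error) {
	db, err := NewDB(ctx, sqlite.Open("n9e.db"), 10, 100, time.Hour)
	if err != nil {
		return nil, err
	}
	// sqlite has no SHOW TABLES statement; query the catalog instead
	return ShowTables(ctx, db, "SELECT name FROM sqlite_master WHERE type='table'")
}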

250
dskit/sqlbase/timeseries.go Normal file
View File

@@ -0,0 +1,250 @@
// @Author: Ciusyan 5/20/24
package sqlbase
import (
"context"
"crypto/md5"
"encoding/json"
"fmt"
"reflect"
"sort"
"strconv"
"strings"
"time"
"github.com/ccfos/nightingale/v6/dskit/types"
"github.com/prometheus/common/model"
"gorm.io/gorm"
)
type QueryParam struct {
Sql string `json:"sql"`
Keys types.Keys `json:"keys" mapstructure:"keys"`
}
var (
BannedOp = map[string]struct{}{
"CREATE": {},
"INSERT": {},
"UPDATE": {},
"DELETE": {},
"ALTER": {},
"REVOKE": {},
"DROP": {},
"RENAME": {},
"TRUNCATE": {},
"SET": {},
}
)
// Query executes a given SQL query and returns the results
func Query(ctx context.Context, db *gorm.DB, query *QueryParam) ([]map[string]interface{}, error) {
// Validate SQL to prevent write operations if needed
sqlItem := strings.Split(strings.ToUpper(query.Sql), " ")
for _, item := range sqlItem {
if _, ok := BannedOp[item]; ok {
return nil, fmt.Errorf("operation %s is forbidden, only read operations are allowed, please check your SQL", item)
}
}
return ExecQuery(ctx, db, query.Sql)
}
// QueryTimeseries executes a time series data query using the given parameters
func QueryTimeseries(ctx context.Context, db *gorm.DB, query *QueryParam, ignoreDefault ...bool) ([]types.MetricValues, error) {
rows, err := Query(ctx, db, query)
if err != nil {
return nil, err
}
return FormatMetricValues(query.Keys, rows, ignoreDefault...), nil
}
func FormatMetricValues(keys types.Keys, rows []map[string]interface{}, ignoreDefault ...bool) []types.MetricValues {
ignore := false
if len(ignoreDefault) > 0 {
ignore = ignoreDefault[0]
}
keyMap := make(map[string]string)
for _, valueMetric := range strings.Split(keys.ValueKey, " ") {
keyMap[valueMetric] = "value"
}
for _, labelMetric := range strings.Split(keys.LabelKey, " ") {
keyMap[labelMetric] = "label"
}
if keys.TimeKey == "" {
keys.TimeKey = "time"
}
if len(keys.TimeKey) > 0 {
keyMap[keys.TimeKey] = "time"
}
var dataResps []types.MetricValues
dataMap := make(map[string]*types.MetricValues)
for _, row := range rows {
labels := make(map[string]string)
metricValue := make(map[string]float64)
metricTs := make(map[string]float64)
// Process each column based on its designated role (value, label, time)
for k, v := range row {
switch keyMap[k] {
case "value":
val, err := ParseFloat64Value(v)
if err != nil {
continue
}
metricValue[k] = val
case "label":
labels[k] = fmt.Sprintf("%v", v)
case "time":
ts, err := ParseTime(v, keys.TimeFormat)
if err != nil {
continue
}
metricTs[k] = float64(ts.Unix())
default:
// Default to labels for any unrecognized columns
if !ignore {
labels[k] = fmt.Sprintf("%v", v)
}
}
}
// Compile and store the metric values
for metricName, value := range metricValue {
metrics := make(model.Metric)
var labelsStr []string
for k1, v1 := range labels {
metrics[model.LabelName(k1)] = model.LabelValue(v1)
labelsStr = append(labelsStr, fmt.Sprintf("%s=%s", k1, v1))
}
metrics["__name__"] = model.LabelValue(metricName)
labelsStr = append(labelsStr, fmt.Sprintf("__name__=%s", metricName))
// Hash the labels to use as a key
sort.Strings(labelsStr)
labelsStrHash := fmt.Sprintf("%x", md5.Sum([]byte(strings.Join(labelsStr, ","))))
// Append new values to the existing metric, if present
ts, exists := metricTs[keys.TimeKey]
if !exists {
ts = float64(time.Now().Unix()) // Default to current time if not specified
}
valuePair := []float64{ts, value}
if existing, ok := dataMap[labelsStrHash]; ok {
existing.Values = append(existing.Values, valuePair)
} else {
dataResp := types.MetricValues{
Metric: metrics,
Values: [][]float64{valuePair},
}
dataMap[labelsStrHash] = &dataResp
}
}
}
// Convert the map to a slice for the response
for _, v := range dataMap {
sort.Slice(v.Values, func(i, j int) bool { return v.Values[i][0] < v.Values[j][0] }) // Sort by timestamp
dataResps = append(dataResps, *v)
}
return dataResps
}
// ParseFloat64Value attempts to convert an interface{} to float64 using reflection
func ParseFloat64Value(val interface{}) (float64, error) {
v := reflect.ValueOf(val)
switch v.Kind() {
case reflect.Float64, reflect.Float32:
return v.Float(), nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return float64(v.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return float64(v.Uint()), nil
case reflect.String:
return strconv.ParseFloat(v.String(), 64)
case reflect.Slice:
if v.Type().Elem().Kind() == reflect.Uint8 {
return strconv.ParseFloat(string(v.Bytes()), 64)
}
case reflect.Interface:
return ParseFloat64Value(v.Interface())
case reflect.Ptr:
if !v.IsNil() {
return ParseFloat64Value(v.Elem().Interface())
}
case reflect.Struct:
if num, ok := val.(json.Number); ok {
return num.Float64()
}
}
return 0, fmt.Errorf("cannot convert type %T to float64", val)
}
// ParseTime attempts to parse a time value from an interface{} using a specified format
func ParseTime(val interface{}, format string) (time.Time, error) {
v := reflect.ValueOf(val)
switch v.Kind() {
case reflect.String:
str := v.String()
return parseTimeFromString(str, format)
case reflect.Slice:
if v.Type().Elem().Kind() == reflect.Uint8 {
str := string(v.Bytes())
return parseTimeFromString(str, format)
}
case reflect.Int, reflect.Int64:
return time.Unix(v.Int(), 0), nil
case reflect.Float64:
return time.Unix(int64(v.Float()), 0), nil
case reflect.Interface:
return ParseTime(v.Interface(), format)
case reflect.Ptr:
if !v.IsNil() {
return ParseTime(v.Elem().Interface(), format)
}
case reflect.Struct:
if t, ok := val.(time.Time); ok {
return t, nil
}
}
return time.Time{}, fmt.Errorf("invalid time value type: %v", val)
}
func parseTimeFromString(str, format string) (time.Time, error) {
// If a custom time format is provided, use it to parse the string
if format != "" {
parsedTime, err := time.Parse(format, str)
if err == nil {
return parsedTime, nil
}
return time.Time{}, fmt.Errorf("failed to parse time '%s' with format '%s': %v", str, format, err)
}
// Try to parse the string as RFC3339, RFC3339Nano, or Unix timestamp
if parsedTime, err := time.Parse(time.RFC3339, str); err == nil {
return parsedTime, nil
}
if parsedTime, err := time.Parse(time.RFC3339Nano, str); err == nil {
return parsedTime, nil
}
if timestamp, err := strconv.ParseInt(str, 10, 64); err == nil {
return time.Unix(timestamp, 0), nil
}
if timestamp, err := strconv.ParseFloat(str, 64); err == nil {
return time.Unix(int64(timestamp), 0), nil
}
return time.Time{}, fmt.Errorf("failed to parse time '%s'", str)
}
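Putting the helpers together: one row with a value column, a label column and a time column becomes one Prometheus-style series. A minimal sketch, with illustrative column names:

rows := []map[string]interface{}{
	{"host": "web-01", "cpu_usage": 0.42, "time": int64(1715642135)},
}
keys := types.Keys{ValueKey: "cpu_usage", LabelKey: "host", TimeKey: "time"}
series := FormatMetricValues(keys, rows)
// => one MetricValues: Metric {__name__="cpu_usage", host="web-01"},
//    Values [[1715642135, 0.42]]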

View File

@@ -0,0 +1,149 @@
// @Author: Ciusyan 5/17/24
package sqlbase
import (
"encoding/json"
"testing"
"time"
"github.com/ccfos/nightingale/v6/dskit/types"
)
func TestFormatMetricValues(t *testing.T) {
tests := []struct {
name string
keys types.Keys
rows []map[string]interface{}
want []types.MetricValues
}{
{
name: "cases1",
keys: types.Keys{
ValueKey: "grade a_grade",
LabelKey: "id student_name",
TimeKey: "update_time",
TimeFormat: "2006-01-02 15:04:05",
},
rows: []map[string]interface{}{
{
"id": "10007",
"grade": 20003,
"student_name": "邵子韬",
"a_grade": 69,
"update_time": "2024-05-14 10:00:00",
},
{
"id": "10007",
"grade": 20003,
"student_name": "邵子韬",
"a_grade": 69,
"update_time": "2024-05-14 10:05:00",
},
{
"id": "10007",
"grade": 20003,
"student_name": "邵子韬",
"a_grade": 69,
"update_time": "2024-05-14 10:10:00",
},
{
"id": "10008",
"grade": 20004,
"student_name": "Ciusyan",
"a_grade": 100,
"update_time": "2024-05-14 12:00:00",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := FormatMetricValues(tt.keys, tt.rows)
for _, g := range got {
t.Log(g)
}
})
}
}
func TestParseFloat64Value(t *testing.T) {
ptr := func(val float64) *float64 {
return &val
}
tests := []struct {
name string
input interface{}
want float64
wantErr bool
}{
{"float64", 1.23, 1.23, false},
{"float32", float32(1.23), float64(float32(1.23)), false},
{"int", 123, 123, false},
{"int64", int64(123), 123, false},
{"uint", uint(123), 123, false},
{"uint64", uint64(123), 123, false},
{"string", "1.23", 1.23, false},
{"[]byte", []byte("1.23"), 1.23, false},
{"json.Number", json.Number("1.23"), 1.23, false},
{"interface", interface{}(1.23), 1.23, false},
{"pointer", ptr(1.23), 1.23, false},
{"invalid string", "abc", 0, true},
{"invalid type", struct{}{}, 0, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseFloat64Value(tt.input)
if (err != nil) != tt.wantErr {
t.Errorf("parseFloat64Value() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("parseFloat64Value() = %v, want %v", got, tt.want)
}
})
}
}
func TestParseTime(t *testing.T) {
ptrTime := func(t time.Time) *time.Time {
return &t
}
tests := []struct {
name string
input interface{}
format string
want time.Time
wantErr bool
}{
{"RFC3339", "2024-05-14T12:34:56Z", "", time.Date(2024, 5, 14, 12, 34, 56, 0, time.UTC), false},
{"RFC3339Nano", "2024-05-14T12:34:56.789Z", "", time.Date(2024, 5, 14, 12, 34, 56, 789000000, time.UTC), false},
{"Unix timestamp int", int64(1715642135), "", time.Unix(1715642135, 0), false},
{"Unix timestamp float64", 1715642135.0, "", time.Unix(int64(1715642135), 0), false},
{"custom format", "14/05/2024", "02/01/2006", time.Date(2024, 5, 14, 0, 0, 0, 0, time.UTC), false},
{"slice", []byte("2024-05-14T12:34:56Z"), "", time.Date(2024, 5, 14, 12, 34, 56, 0, time.UTC), false},
{"interface", interface{}("2024-05-14T12:34:56Z"), "", time.Date(2024, 5, 14, 12, 34, 56, 0, time.UTC), false},
{"pointer", ptrTime(time.Date(2024, 5, 14, 12, 34, 56, 0, time.UTC)), "", time.Date(2024, 5, 14, 12, 34, 56, 0, time.UTC), false},
{"invalid format", "14-05-2024", "02/01/2006", time.Time{}, true},
{"invalid type", struct{}{}, "", time.Time{}, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseTime(tt.input, tt.format)
if (err != nil) != tt.wantErr {
t.Errorf("ParseTime() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !got.Equal(tt.want) {
t.Errorf("ParseTime() = %v, want %v", got, tt.want)
}
})
}
}

207
dskit/tdengine/tdengine.go Normal file
View File

@@ -0,0 +1,207 @@
package tdengine
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"net"
"net/http"
"strings"
"time"
"github.com/ccfos/nightingale/v6/dskit/types"
"github.com/ccfos/nightingale/v6/pkg/tlsx"
"github.com/toolkits/pkg/logger"
)
type Tdengine struct {
Addr string `json:"tdengine.addr" mapstructure:"tdengine.addr"`
Basic *TDengineBasicAuth `json:"tdengine.basic" mapstructure:"tdengine.basic"`
Token string `json:"tdengine.token" mapstructure:"tdengine.token"`
Timeout int64 `json:"tdengine.timeout" mapstructure:"tdengine.timeout"`
DialTimeout int64 `json:"tdengine.dial_timeout" mapstructure:"tdengine.dial_timeout"`
MaxIdleConnsPerHost int `json:"tdengine.max_idle_conns_per_host" mapstructure:"tdengine.max_idle_conns_per_host"`
Headers map[string]string `json:"tdengine.headers" mapstructure:"tdengine.headers"`
SkipTlsVerify bool `json:"tdengine.skip_tls_verify" mapstructure:"tdengine.skip_tls_verify"`
tlsx.ClientConfig
header map[string][]string `json:"-"`
client *http.Client `json:"-"`
}
type TDengineBasicAuth struct {
User string `json:"tdengine.user" mapstructure:"tdengine.user"`
Password string `json:"tdengine.password" mapstructure:"tdengine.password"`
IsEncrypt bool `json:"tdengine.is_encrypt" mapstructure:"tdengine.is_encrypt"`
}
type APIResponse struct {
Code int `json:"code"`
ColumnMeta [][]interface{} `json:"column_meta"`
Data [][]interface{} `json:"data"`
Rows int `json:"rows"`
}
type QueryParam struct {
Database string `json:"database"`
Table string `json:"table"`
}
func (tc *Tdengine) InitCli() {
tc.client = &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
DisableCompression: true,
},
}
tc.header = map[string][]string{
"Connection": {"keep-alive"},
}
for k, v := range tc.Headers {
kv := strings.Split(v, ":")
if len(kv) != 2 {
continue
}
tc.header[k] = []string{v}
}
if tc.Basic != nil {
basic := base64.StdEncoding.EncodeToString([]byte(tc.Basic.User + ":" + tc.Basic.Password))
tc.header["Authorization"] = []string{fmt.Sprintf("Basic %s", basic)}
}
}
func (tc *Tdengine) QueryTable(query string) (APIResponse, error) {
var apiResp APIResponse
req, err := http.NewRequest("POST", tc.Addr+"/rest/sql", strings.NewReader(query))
if err != nil {
return apiResp, err
}
for k, v := range tc.header {
req.Header[k] = v
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
resp, err := tc.client.Do(req)
if err != nil {
return apiResp, err
}
defer resp.Body.Close()
// cap the response body at 10MB
maxSize := int64(10 * 1024 * 1024) // 10MB
limitedReader := http.MaxBytesReader(nil, resp.Body, maxSize)
if resp.StatusCode != http.StatusOK {
return apiResp, fmt.Errorf("HTTP error, status: %s", resp.Status)
}
err = json.NewDecoder(limitedReader).Decode(&apiResp)
if err != nil {
if strings.Contains(err.Error(), "http: request body too large") {
return apiResp, fmt.Errorf("response body exceeds 10MB limit")
}
return apiResp, err
}
return apiResp, nil
}
func (tc *Tdengine) ShowDatabases(context.Context) ([]string, error) {
var databases []string
data, err := tc.QueryTable("show databases")
if err != nil {
return databases, err
}
for _, row := range data.Data {
databases = append(databases, row[0].(string))
}
return databases, nil
}
func (tc *Tdengine) ShowTables(ctx context.Context, database string) ([]string, error) {
var tables []string
sql := fmt.Sprintf("show %s", database)
data, err := tc.QueryTable(sql)
if err != nil {
return tables, err
}
for _, row := range data.Data {
tables = append(tables, row[0].(string))
}
return tables, nil
}
func (tc *Tdengine) DescribeTable(ctx context.Context, query interface{}) ([]*types.ColumnProperty, error) {
var columns []*types.ColumnProperty
queryMap, ok := query.(map[string]string)
if !ok {
return nil, fmt.Errorf("invalid query")
}
sql := fmt.Sprintf("select * from %s.%s limit 1", queryMap["database"], queryMap["table"])
data, err := tc.QueryTable(sql)
if err != nil {
return columns, err
}
for _, row := range data.ColumnMeta {
var colType string
switch t := row[1].(type) {
case float64:
// TDengine v2 returns numeric type codes; map them to type names
switch int(t) {
case 1:
colType = "BOOL"
case 2:
colType = "TINYINT"
case 3:
colType = "SMALLINT"
case 4:
colType = "INT"
case 5:
colType = "BIGINT"
case 6:
colType = "FLOAT"
case 7:
colType = "DOUBLE"
case 8:
colType = "BINARY"
case 9:
colType = "TIMESTAMP"
case 10:
colType = "NCHAR"
default:
colType = "UNKNOWN"
}
case string:
// TDengine v3 returns the type name as a string directly
colType = t
default:
logger.Warningf("unexpected column type format: %v", row[1])
colType = "UNKNOWN"
}
column := &types.ColumnProperty{
Field: row[0].(string),
Type: colType,
}
columns = append(columns, column)
}
return columns, nil
}
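A minimal caller sketch, assuming a local taosAdapter on its default REST port; the address and credentials are placeholders:

td := &Tdengine{
	Addr:  "http://127.0.0.1:6041",
	Basic: &TDengineBasicAuth{User: "root", Password: "taosdata"},
}
td.InitCli() // builds the HTTP client and the Basic-Auth header
dbs, err := td.ShowDatabases(context.Background())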

51
dskit/types/timeseries.go Normal file
View File

@@ -0,0 +1,51 @@
package types
import (
"bytes"
"fmt"
"strconv"
"github.com/prometheus/common/model"
)
// time-series data
type MetricValues struct {
Metric model.Metric `json:"metric"`
Values [][]float64 `json:"values"`
}
type HistogramValues struct {
Total int64 `json:"total"`
Values [][]float64 `json:"values"`
}
// instant (single-point) values
type AggregateValues struct {
Labels map[string]string `json:"labels"`
Values map[string]float64 `json:"values"`
}
// String renders the metric and its value pairs as text
func (m *MetricValues) String() string {
var buf bytes.Buffer
buf.WriteString(fmt.Sprintf("Metric: %+v ", m.Metric))
buf.WriteString("Values: ")
for _, v := range m.Values {
buf.WriteString(" [")
for i, ts := range v {
if i > 0 {
buf.WriteString(", ")
}
buf.WriteString(strconv.FormatFloat(ts, 'f', -1, 64))
}
buf.WriteString("] ")
}
return buf.String()
}
type Keys struct {
ValueKey string `json:"valueKey" mapstructure:"valueKey"` // multiple columns separated by spaces
LabelKey string `json:"labelKey" mapstructure:"labelKey"` // multiple columns separated by spaces
TimeKey string `json:"timeKey" mapstructure:"timeKey"`
TimeFormat string `json:"timeFormat" mapstructure:"timeFormat"` // time layout passed to ParseTime; empty means auto-detect
}
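Because ValueKey and LabelKey accept several space-separated columns, a single row can fan out into multiple series, as TestFormatMetricValues earlier in this diff exercises. A small illustration:

keys := Keys{
	ValueKey:   "grade a_grade",   // two value columns -> two series per row
	LabelKey:   "id student_name", // both columns become labels
	TimeKey:    "update_time",
	TimeFormat: "2006-01-02 15:04:05",
}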

19
dskit/types/types.go Normal file
View File

@@ -0,0 +1,19 @@
package types
const (
LogExtractValueTypeLong = "long"
LogExtractValueTypeFloat = "float"
LogExtractValueTypeText = "text"
LogExtractValueTypeDate = "date"
LogExtractValueTypeBool = "bool"
LogExtractValueTypeObject = "object"
LogExtractValueTypeArray = "array"
LogExtractValueTypeJSON = "json"
)
type ColumnProperty struct {
Field string `json:"field"`
Type string `json:"type"`
Type2 string `json:"type2,omitempty"` // field_property.Type
Indexable bool `json:"indexable"` // whether the column can be indexed
}

View File

@@ -50,7 +50,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
@@ -73,14 +73,14 @@ DefaultRoles = ["Standard"]
OpenRSA = false
[DB]
# mysql postgres sqlite
DBType = "sqlite"
# postgres: host=%s port=%s user=%s dbname=%s password=%s sslmode=%s
# postgres: DSN="host=127.0.0.1 port=5432 user=root dbname=n9e_v6 password=1234 sslmode=disable"
# sqlite: DSN="/path/to/filename.db"
DSN = "root:1234@tcp(127.0.0.1:3306)/n9e_v6?charset=utf8mb4&parseTime=True&loc=Local&allowNativePasswords=true"
# mysql: DSN="root:1234@tcp(localhost:3306)/n9e_v6?charset=utf8mb4&parseTime=True&loc=Local"
DSN = "n9e.db"
# enable debug mode or not
Debug = false
# mysql postgres sqlite
DBType = "mysql"
# unit: s
MaxLifetime = 7200
# max open connections
@@ -98,8 +98,8 @@ Address = "127.0.0.1:6379"
# DB = 0
# UseTLS = false
# TLSMinVersion = "1.2"
# standalone cluster sentinel
RedisType = "standalone"
# standalone cluster sentinel miniredis
RedisType = "miniredis"
# Mastername for sentinel type
# MasterName = "mymaster"
# SentinelUsername = ""
@@ -138,6 +138,9 @@ ForceUseServerTS = true
# [Pushgw.WriterOpt]
# QueueMaxSize = 1000000
# QueuePopSize = 1000
# AllQueueMaxSize = 1000000
# fresh time, unit ms
# AllQueueMaxSizeInterval = 200
[[Pushgw.Writers]]
# Url = "http://127.0.0.1:8480/insert/0/prometheus/api/v1/write"

View File

@@ -54,7 +54,7 @@ Enable = true
# user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
[HTTP.APIForService]
Enable = true
Enable = false
[HTTP.APIForService.BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"

98
go.mod
View File

@@ -1,9 +1,12 @@
module github.com/ccfos/nightingale/v6
go 1.18
go 1.22
require (
github.com/BurntSushi/toml v0.3.1
github.com/BurntSushi/toml v1.4.0
github.com/ClickHouse/clickhouse-go/v2 v2.23.2
github.com/VictoriaMetrics/metricsql v0.81.1
github.com/bitly/go-simplejson v0.5.1
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/dgrijalva/jwt-go v3.2.0+incompatible
@@ -11,44 +14,85 @@ require (
github.com/flashcatcloud/ibex v1.3.5
github.com/gin-contrib/pprof v1.4.0
github.com/gin-gonic/gin v1.9.1
github.com/glebarez/sqlite v1.11.0
github.com/go-ldap/ldap/v3 v3.4.4
github.com/gogo/protobuf v1.3.2
github.com/golang-jwt/jwt v3.2.2+incompatible
github.com/golang/protobuf v1.5.3
github.com/golang/protobuf v1.5.4
github.com/golang/snappy v0.0.4
github.com/google/uuid v1.3.0
github.com/google/uuid v1.6.0
github.com/hashicorp/go-version v1.6.0
github.com/jinzhu/copier v0.4.0
github.com/json-iterator/go v1.1.12
github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7
github.com/mailru/easyjson v0.7.7
github.com/mattn/go-isatty v0.0.19
github.com/mitchellh/mapstructure v1.5.0
github.com/mojocn/base64Captcha v1.3.6
github.com/olivere/elastic/v7 v7.0.32
github.com/pelletier/go-toml/v2 v2.0.8
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.16.0
github.com/prometheus/common v0.44.0
github.com/prometheus/client_golang v1.20.5
github.com/prometheus/common v0.60.1
github.com/prometheus/prometheus v0.47.1
github.com/rakyll/statik v0.1.7
github.com/redis/go-redis/v9 v9.0.2
github.com/spaolacci/murmur3 v1.1.0
github.com/tidwall/gjson v1.14.0
github.com/stretchr/testify v1.9.0
github.com/tidwall/gjson v1.14.2
github.com/toolkits/pkg v1.3.8
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1
golang.org/x/oauth2 v0.10.0
golang.org/x/exp v0.0.0-20231006140011-7918f672742d
golang.org/x/oauth2 v0.23.0
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df
gopkg.in/yaml.v2 v2.4.0
gorm.io/driver/clickhouse v0.6.1
gorm.io/driver/mysql v1.4.4
gorm.io/driver/postgres v1.4.5
gorm.io/driver/postgres v1.5.11
gorm.io/driver/sqlite v1.5.5
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde
gorm.io/gorm v1.25.10
)
require (
github.com/ClickHouse/ch-go v0.61.5 // indirect
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/go-faster/city v1.0.1 // indirect
github.com/go-faster/errors v0.7.1 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/paulmach/orb v0.11.1 // indirect
github.com/pierrec/lz4/v4 v4.1.21 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
go.opentelemetry.io/otel v1.32.0 // indirect
go.opentelemetry.io/otel/trace v1.32.0 // indirect
)
require (
github.com/VictoriaMetrics/metrics v1.34.0 // indirect
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/glebarez/go-sqlite v1.21.2 // indirect
github.com/jackc/pgx/v5 v5.7.1 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/valyala/fastrand v1.1.0 // indirect
github.com/valyala/histogram v1.2.0 // indirect
github.com/yuin/gopher-lua v1.1.1 // indirect
golang.org/x/sync v0.10.0 // indirect
modernc.org/libc v1.22.5 // indirect
modernc.org/mathutil v1.5.0 // indirect
modernc.org/memory v1.5.0 // indirect
modernc.org/sqlite v1.23.1 // indirect
)
require (
github.com/Azure/go-ntlmssp v0.0.0-20220621081337-cb9428e4ac1e // indirect
github.com/alicebob/miniredis/v2 v2.33.0
github.com/beorn7/perks v1.0.1 // indirect
github.com/bytedance/sonic v1.9.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/dennwc/varint v1.0.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
@@ -66,42 +110,36 @@ require (
github.com/goccy/go-json v0.10.2 // indirect
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 // indirect
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
github.com/jackc/chunkreader/v2 v2.0.1 // indirect
github.com/jackc/pgconn v1.13.0 // indirect
github.com/jackc/pgio v1.0.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgproto3/v2 v2.3.1 // indirect
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b // indirect
github.com/jackc/pgtype v1.12.0 // indirect
github.com/jackc/pgx/v4 v4.17.2 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/klauspost/cpuid/v2 v2.2.5 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
github.com/mattn/go-sqlite3 v1.14.17 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/procfs v0.11.0 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/robfig/cron/v3 v3.0.1
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/match v1.1.1
github.com/tidwall/pretty v1.2.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/automaxprocs v1.5.2 // indirect
golang.org/x/arch v0.3.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/image v0.18.0 // indirect
golang.org/x/net v0.23.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.16.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.33.0 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
replace golang.org/x/exp v0.0.0-20231006140011-7918f672742d => golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1

337
go.sum
View File

@@ -1,35 +1,52 @@
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible h1:HzKLt3kIwMm4KeJYTdx9EbjRYTySD/t8i1Ee/W5EGXw=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 h1:8q4SaHjFsClSvuVne0ID/5Ka8u3fcIHyqkLjcFpNRHQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0 h1:vcYCAze6p19qBW7MhZybIsqD8sMV8js0NyQM8JDnVtg=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0/go.mod h1:OQeznEEkTZ9OrhHJoDD8ZDq51FHgXjqtP9z6bEwBq9U=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
github.com/Azure/go-ntlmssp v0.0.0-20220621081337-cb9428e4ac1e h1:NeAW1fUYUEWhft7pkxDf6WoUvEZJ/uOKsvtpjLnn8MU=
github.com/Azure/go-ntlmssp v0.0.0-20220621081337-cb9428e4ac1e/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 h1:OBhqkivkhkMqLPymWEppkm7vgPQY2XsHoEkaMQ0AdZY=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0/go.mod h1:kgDmCTgBzIEPFElEF+FK0SdjAor06dRq2Go927dnQ6o=
github.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/ClickHouse/ch-go v0.61.5 h1:zwR8QbYI0tsMiEcze/uIMK+Tz1D3XZXLdNrlaOpeEI4=
github.com/ClickHouse/ch-go v0.61.5/go.mod h1:s1LJW/F/LcFs5HJnuogFMta50kKDO0lf9zzfrbl0RQg=
github.com/ClickHouse/clickhouse-go/v2 v2.23.2 h1:+DAKPMnxLS7pduQZsrJc8OhdLS2L9MfDEJ2TS+hpYDM=
github.com/ClickHouse/clickhouse-go/v2 v2.23.2/go.mod h1:aNap51J1OM3yxQJRgM+AlP/MPkGBCL8A74uQThoQhR0=
github.com/VictoriaMetrics/metrics v1.34.0 h1:0i8k/gdOJdSoZB4Z9pikVnVQXfhcIvnG7M7h2WaQW2w=
github.com/VictoriaMetrics/metrics v1.34.0/go.mod h1:r7hveu6xMdUACXvB8TYdAj8WEsKzWB0EkpJN+RDtOf8=
github.com/VictoriaMetrics/metricsql v0.81.1 h1:1gpqI3Mwru1tCM8nZiKxBG0P+DNkjlRwLhRPII3cuho=
github.com/VictoriaMetrics/metricsql v0.81.1/go.mod h1:1g4hdCwlbJZ851PU9VN65xy9Rdlzupo6fx3SNZ8Z64U=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a h1:HbKu58rmZpUGpz5+4FfNmIU+FmZg2P3Xaj2v2bfNWmk=
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
github.com/alicebob/miniredis/v2 v2.33.0 h1:uvTF0EDeu9RLnUEG27Db5I68ESoIxTiXbNUiji6lZrA=
github.com/alicebob/miniredis/v2 v2.33.0/go.mod h1:MhP4a3EU7aENRi9aO+tHfTBZicLqQevyi/DJpoj6mi0=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/aws/aws-sdk-go v1.44.302 h1:ST3ko6GrJKn3Xi+nAvxjG3uk/V1pW8KC52WLeIxqqNk=
github.com/aws/aws-sdk-go v1.44.302/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bitly/go-simplejson v0.5.1 h1:xgwPbetQScXt1gh9BmoJ6j9JMr3TElvuIyjR8pgdoow=
github.com/bitly/go-simplejson v0.5.1/go.mod h1:YOPVLzCfwK14b4Sff3oP1AmGhI9T9Vsg84etUnlyp+Q=
github.com/bsm/ginkgo/v2 v2.5.0 h1:aOAnND1T40wEdAtkGSkvSICWeQ8L3UASX7YVCqQx+eQ=
github.com/bsm/ginkgo/v2 v2.5.0/go.mod h1:AiKlXPm7ItEHNc/2+OkrNG4E0ITzojb9/xWzvQ9XZ9w=
github.com/bsm/gomega v1.20.0 h1:JhAwLmtRzXFTx2AkALSLa8ijZafntmhSoU63Ok18Uq8=
github.com/bsm/gomega v1.20.0/go.mod h1:JifAceMQ4crZIWYUKrlGcmbN3bqHogVTADMD2ATsbwk=
github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1O2AihPM=
github.com/bytedance/sonic v1.9.1 h1:6iJ6NqdoxCDr6mbY8h18oSO+cShGSMRGCEo7F2h0x8s=
github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
github.com/coreos/go-oidc v2.2.1+incompatible h1:mh48q/BqXqgjVHpy2ZY7WnWAbenxRjsz9N1i1YxjHAk=
github.com/coreos/go-oidc v2.2.1+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -41,6 +58,8 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumC
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/expr-lang/expr v1.16.1 h1:Na8CUcMdyGbnNpShY7kzcHCU7WqxuL+hnxgHZ4vaz/A=
github.com/expr-lang/expr v1.16.1/go.mod h1:uCkhfG+x7fcZ5A5sXHKuQ07jGZRl6J0FCAaf2k4PtVQ=
github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8=
@@ -49,6 +68,8 @@ github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/flashcatcloud/ibex v1.3.5 h1:8GOOf5+aJT0TP/MC6izz7CO5JKJSdKVFBwL0vQp93Nc=
github.com/flashcatcloud/ibex v1.3.5/go.mod h1:T8hbMUySK2q6cXUaYp0AUVeKkU9Od2LjzwmB5lmTRBM=
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
github.com/garyburd/redigo v1.6.2/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
@@ -59,14 +80,20 @@ github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm
github.com/gin-gonic/gin v1.8.1/go.mod h1:ji8BvRH1azfM+SYow9zQ6SZMvR8qOMZHmsCuWR9tTTk=
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
github.com/glebarez/go-sqlite v1.21.2 h1:3a6LFC4sKahUunAmynQKLZceZCOzUthkRkEAl9gAXWo=
github.com/glebarez/go-sqlite v1.21.2/go.mod h1:sfxdZyhQjTM2Wry3gVYWaW072Ri1WMdWJi0k6+3382k=
github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
github.com/go-asn1-ber/asn1-ber v1.5.4 h1:vXT6d/FNDiELJnLb6hGNa309LMsrCoYFvpwHDF0+Y1A=
github.com/go-asn1-ber/asn1-ber v1.5.4/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6vO3VcTw=
github.com/go-faster/errors v0.7.1 h1:MkJTnDoEdi9pDabt1dpWf7AA8/BaSYZqibYyhZ20AYg=
github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-ldap/ldap/v3 v3.4.4 h1:qPjipEpt+qDa6SI/h1fzuGWoRUY+qqQ9sOZq67/PYUs=
github.com/go-ldap/ldap/v3 v3.4.4/go.mod h1:fe1MsuN5eJJ1FeLT/LEBVdWfNWKh459R7aXgXtJC+aI=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
@@ -83,83 +110,44 @@ github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg
github.com/go-playground/validator/v10 v10.14.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-sql-driver/mysql v1.6.0 h1:BCTh4TKNUYmOmMUcQ3IipzF5prigylS7XXjEkfCHuOE=
github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/goccy/go-json v0.9.7/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt v3.2.2+incompatible h1:IfV12K8xAKAnZqdXVzCZ+TOjboZ2keLg81eXfW3O+oY=
github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 h1:DACJavvAHhabrF08vX0COfcOBJRhZ8lUbR+ZWIs0Y5g=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8 h1:n6vlPhxsA+BW/XsS5+uqi7GyzaLa5MH7qlSLBZtRdiA=
github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8/go.mod h1:Jh3hGz2jkYak8qXPD19ryItVnUgpgeqzdkY/D0EaeuA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd h1:PpuIBO5P3e9hpqBD0O/HjhShYuM6XE0i/lbE6J94kww=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek=
github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA=
github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE=
github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s=
github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o=
github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY=
github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI=
github.com/jackc/pgconn v1.13.0 h1:3L1XMNV2Zvca/8BYhzcRFS70Lr0WlDg16Di6SFGAbys=
github.com/jackc/pgconn v1.13.0/go.mod h1:AnowpAqO4CMIIJNZl2VJp+KrkAZciAkhEl0W0JIobpI=
github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE=
github.com/jackc/pgmock v0.0.0-20201204152224-4fe30f7445fd/go.mod h1:hrBW0Enj2AZTNpt/7Y5rr2xe/9Mn757Wtb2xeBzPv2c=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65 h1:DadwsjnMwFjfWc9y5Wi/+Zz7xoE5ALHsRQlOctkOiHc=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65/go.mod h1:5R2h2EEX+qri8jOWMbJCtaPWkrrNc7OHwsp2TCqp7ak=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgproto3 v1.1.0/go.mod h1:eR5FA3leWg7p9aeAqi37XOTgTIbkABlvcPB3E5rlc78=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod h1:bhq50y+xrl9n5mRYyCBFKkpRVTLYJVWeCc+mEAI3yXA=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.3.1 h1:nwj7qwf0S+Q7ISFfBndqeLwSwxs+4DPsbRFjECT1Y4Y=
github.com/jackc/pgproto3/v2 v2.3.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b h1:C8S2+VttkHFdOOCXJe+YGfa4vHYwlt4Zx+IVXQ97jYg=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
github.com/jackc/pgtype v1.8.1-0.20210724151600-32e20a603178/go.mod h1:C516IlIV9NKqfsMCXTdChteoXmwgUceqaLfjg2e3NlM=
github.com/jackc/pgtype v1.12.0 h1:Dlq8Qvcch7kiehm8wPGIW0W3KsCCHJnRacKW0UM8n5w=
github.com/jackc/pgtype v1.12.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs=
github.com/jackc/pgx/v4 v4.17.2 h1:0Ut0rpeKwvIVbMQ1KbMBU4h6wxehBI535LK6Flheh8E=
github.com/jackc/pgx/v4 v4.17.2/go.mod h1:lcxIZN44yMIrWI78a5CpucdD14hX0SBDbNRvjDBItsw=
github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.1 h1:x7SYsPBYDkHDksogeSmZZ5xzThcTgRz++I5E+ePFUcs=
github.com/jackc/pgx/v5 v5.7.1/go.mod h1:e7O26IywZZ+naJtWWos6i6fvWK+29etgITqrqHLfoZA=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
@@ -168,53 +156,47 @@ github.com/jinzhu/now v1.1.4/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
github.com/klauspost/cpuid/v2 v2.2.5/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7 h1:SWlt7BoQNASbhTUD0Oy5yysI2seJ7vWuGUp///OM4TM=
github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7/go.mod h1:Y2SaZf2Rzd0pXkLVhLlCiAXFCLSXAIbTKDivVgff/AM=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.17 h1:mCRHCLDUBXgpKAqIKsaAaAsrAlbkeomtRFKXh2L6YIM=
github.com/mattn/go-sqlite3 v1.14.17/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -222,63 +204,72 @@ github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9G
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mojocn/base64Captcha v1.3.6 h1:gZEKu1nsKpttuIAQgWHO+4Mhhls8cAKyiV2Ew03H+Tw=
github.com/mojocn/base64Captcha v1.3.6/go.mod h1:i5CtHvm+oMbj1UzEPXaA8IH/xHFZ3DGY3Wh3dBpZ28E=
github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olivere/elastic/v7 v7.0.32 h1:R7CXvbu8Eq+WlsLgxmKVKPox0oOwAE/2T9Si5BnvK6E=
github.com/olivere/elastic/v7 v7.0.32/go.mod h1:c7PVmLe3Fxq77PIfY/bZmxY/TAamBhCzZ8xDOE09a9k=
github.com/paulmach/orb v0.11.1 h1:3koVegMC4X/WeiXYz9iswopaTwMem53NzTJuTF20JzU=
github.com/paulmach/orb v0.11.1/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
github.com/pelletier/go-toml/v2 v2.0.1/go.mod h1:r9LEWfGN8R5k0VXJ+0BkIe7MYkRdwZOjgMj2KwnJFUo=
github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pquerna/cachecontrol v0.1.0 h1:yJMy84ti9h/+OEWa752kBTKv4XC30OtVVHYv/8cTqKc=
github.com/pquerna/cachecontrol v0.1.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQnrHV5K9mBcUI=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.60.1 h1:FUas6GcOw66yB/73KC+BOZoFJmbo/1pojoILArPAaSc=
github.com/prometheus/common v0.60.1/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw=
github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
github.com/prometheus/procfs v0.11.0 h1:5EAgkfkMl659uZPbe9AS2N68a7Cc1TJbPEuGzFuRbyk=
github.com/prometheus/procfs v0.11.0/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/prometheus v0.47.1 h1:bd2LiZyxzHn9Oo2Ei4eK2D86vz/L/OiqR1qYo0XmMBo=
github.com/prometheus/prometheus v0.47.1/go.mod h1:J/bmOSjgH7lFxz2gZhrWEZs2i64vMS+HIuZfmYNhJ/M=
github.com/rakyll/statik v0.1.7 h1:OF3QCZUuyPxuGEP7B4ypUa7sB/iHtqOTDYZXGM8KOdQ=
github.com/rakyll/statik v0.1.7/go.mod h1:AlZONWzMtEnMs7W4e/1LURLiI49pIMmp6V9Unghqrcc=
github.com/redis/go-redis/v9 v9.0.2 h1:BA426Zqe/7r56kCcvxYLWe1mkaz71LKF77GwgFzSxfE=
github.com/redis/go-redis/v9 v9.0.2/go.mod h1:/xDTe9EF1LM61hek62Poq2nzQSGj0xSrEtEHbBQevps=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/robfig/go-cache v0.0.0-20130306151617-9fc39e0dbf62/go.mod h1:65XQgovT59RWatovFwnwocoUxiI/eENTnOY5GK3STuY=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.2.0 h1:abSATXmQEYyShuxI4/vyW3tV1MrKAJzCZ/0zLUXYbsQ=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
@@ -287,11 +278,13 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/tidwall/gjson v1.14.0 h1:6aeJ0bzojgWLa82gDQHcx3S0Lr/O51I9bJ5nv6JFx5w=
github.com/tidwall/gjson v1.14.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tidwall/gjson v1.14.2 h1:6BBkirS0rAHjumnjHF6qgy5d2YAJ1TLIaFE2lzfOLqo=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/toolkits/pkg v1.3.8 h1:2yamC20c5mHRtbcGiLY99Lm/2mVitFn6onE8KKvMT1o=
@@ -302,63 +295,55 @@ github.com/ugorji/go v1.2.7/go.mod h1:nF9osbDWLy6bDVv/Rtoh6QgnvNDpmCalQV5urGCCS6
github.com/ugorji/go/codec v1.2.7/go.mod h1:WGN1fab3R1fzQlVQTkfxVtIBhWDRqOviHU95kRgeqEY=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/valyala/fastrand v1.1.0 h1:f+5HkLW4rsgzdNoleUOB69hyT9IlD2ZQh9GyDMfb5G8=
github.com/valyala/fastrand v1.1.0/go.mod h1:HWqCzkrkg6QXT8V2EXWvXCoow7vLwOFN002oeRzjapQ=
github.com/valyala/histogram v1.2.0 h1:wyYGAZZt3CpwUiIb9AU/Zbllg1llXyrtApRS815OLoQ=
github.com/valyala/histogram v1.2.0/go.mod h1:Hb4kBwb4UxsaNbbbh+RRz8ZR6pdodR57tzWUS3BUzXY=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U=
go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg=
go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM=
go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/automaxprocs v1.4.0/go.mod h1:/mTEdr7LvHhs0v7mjdxDreTz1OG5zdZGqgOnhWiR/+Q=
go.uber.org/automaxprocs v1.5.2 h1:2LxUOGiR3O6tw8ui5sZa2LAaHnsviZdVOUZw4fvbnME=
go.uber.org/automaxprocs v1.5.2/go.mod h1:eRbA25aqJrxAbsLO0xy5jVwPt7FQnRgjW+efnwa1WM0=
go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.9.0/go.mod h1:yrmDGqONDYtNj3tH8X9dzUun2m2lzPa9ngI6/RUPGR0=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1 h1:MGwJjxBy0HJshjDNfLsYO8xppfqWlA5ZT9OhtUUhTNw=
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/image v0.13.0/go.mod h1:6mmbMOeV28HuMTgA6OSRkdXKYw/t5W9Uwn2Yv1r3Yxk=
golang.org/x/image v0.18.0 h1:jGzIakQa/ZXI1I0Fxvaa9W7yP25TqT6cHIHn+6CqvSQ=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
@@ -367,27 +352,20 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8=
golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -400,18 +378,17 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
@@ -419,35 +396,24 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc h1:2gGKlE2+asNV9m7xrywl36YYNnBG5ZQ0r/BOOxqPpmk=
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc/go.mod h1:m7x9LTH6d71AHyAX77c9yqWCCa3UKHcVEj9y7hAtKDk=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -457,7 +423,6 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df h1:n7WqCuqOuCbNr617RXOY0AWRXxgwEyPp2z+p0+hgMuE=
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df/go.mod h1:LRQQ+SO6ZHR7tOkpBDuZnXENFzX8qRjMDMyPD6BRkCw=
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
gopkg.in/square/go-jose.v2 v2.6.0 h1:NGk74WTnPKBNUhNzQX7PYcTLUjoq7mzKk2OKbvwk2iI=
gopkg.in/square/go-jose.v2 v2.6.0/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -467,15 +432,23 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/clickhouse v0.6.1 h1:t7JMB6sLBXxN8hEO6RdzCbJCwq/jAEVZdwXlmQs1Sd4=
gorm.io/driver/clickhouse v0.6.1/go.mod h1:riMYpJcGZ3sJ/OAZZ1rEP1j/Y0H6cByOAnwz7fo2AyM=
gorm.io/driver/mysql v1.4.4 h1:MX0K9Qvy0Na4o7qSC/YI7XxqUw5KDw01umqgID+svdQ=
gorm.io/driver/mysql v1.4.4/go.mod h1:BCg8cKI+R0j/rZRQxeKis/forqRwRSYOR8OM3Wo6hOM=
gorm.io/driver/postgres v1.4.5 h1:mTeXTTtHAgnS9PgmhN2YeUbazYpLhUI1doLnw42XUZc=
gorm.io/driver/postgres v1.4.5/go.mod h1:GKNQYSJ14qvWkvPwXljMGehpKrhlDNsqYRr5HnYGncg=
gorm.io/driver/postgres v1.5.11 h1:ubBVAfbKEUld/twyKZ0IYn9rSQh448EdelLYk9Mv314=
gorm.io/driver/postgres v1.5.11/go.mod h1:DX3GReXH+3FPWGrrgffdvCk3DQ1dwDPdmbenSkweRGI=
gorm.io/driver/sqlite v1.5.5 h1:7MDMtUZhV065SilG62E0MquljeArQZNfJnjd9i9gx3E=
gorm.io/driver/sqlite v1.5.5/go.mod h1:6NgQ7sQWAIFsPrJJl1lSNSu2TABh0ZZ/zm5fosATavE=
gorm.io/gorm v1.23.8/go.mod h1:l2lP/RyAtc1ynaTjFksBde/O8v9oOGIApu2/xRitmZk=
gorm.io/gorm v1.24.1-0.20221019064659-5dd2bb482755/go.mod h1:DVrVomtaYTbqs7gB/x2uVvqnXzv0nqjB396B8cG4dBA=
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde h1:9DShaph9qhkIYw7QF91I/ynrr4cOO2PZra2PFD7Mfeg=
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
gorm.io/gorm v1.25.10 h1:dQpO+33KalOA+aFYGlK+EfxcI5MbO7EP2yYygwh9h+s=
gorm.io/gorm v1.25.10/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
modernc.org/libc v1.22.5 h1:91BNch/e5B0uPbJFgqbxXuOnxBQjlS//icfQEGmvyjE=
modernc.org/libc v1.22.5/go.mod h1:jj+Z7dTNX8fBScMVNRAYZ/jF91K8fdT2hYMThc3YjBY=
modernc.org/mathutil v1.5.0 h1:rV0Ko/6SfM+8G+yKiyI830l3Wuz1zRutdslNoQ0kfiQ=
modernc.org/mathutil v1.5.0/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/memory v1.5.0 h1:N+/8c5rE6EqugZwHii4IFsaJ7MUhoWX07J5tC/iI5Ds=
modernc.org/memory v1.5.0/go.mod h1:PkUhL0Mugw21sHPeskwZW4D6VscE/GQJOnIpCnW6pSU=
modernc.org/sqlite v1.23.1 h1:nrSBg4aRQQwq59JpvGEQ15tNxoO5pX/kUjcRNwSAGQM=
modernc.org/sqlite v1.23.1/go.mod h1:OrDj17Mggn6MhE+iPbBNf7RGKODDE9NFT0f3EwDzJqk=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


@@ -1,6 +1,6 @@
## Appdynamics
## AppDynamics
Appdynamics collection plugin, for collecting Appdynamics data
AppDynamics collection plugin, for collecting AppDynamics data
## Configuration


@@ -0,0 +1,8 @@
[[instances]]
urls = [
# "http://localhost:8053/xml/v3",
]
gather_memory_contexts = true
gather_views = true
timeout = "5s"
# labels={app="bind"}

Binary file not shown (image preview, 6.9 KiB).


@@ -0,0 +1,13 @@
Forked from [telegraf/bind](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/bind)
Configuration example:
```
[[instances]]
urls = [
#"http://localhost:8053/xml/v3",
]
timeout = "5s"
gather_memory_contexts = true
gather_views = true
```


@@ -0,0 +1,3 @@
## canal
canal exposes a Prometheus-format metrics endpoint by default ([Prometheus-QuickStart](https://github.com/alibaba/canal/wiki/Prometheus-QuickStart)), so it can be scraped directly with the [prometheus plugin](https://flashcat.cloud/docs/content/flashcat-monitor/categraf/plugin/prometheus); a sketch follows.
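A minimal sketch of such a prometheus plugin instance, patterned on the Doris example later in this diff. The port 11112 is an assumption taken from canal's Prometheus-QuickStart guide; adjust the URL and labels to your deployment.

```toml
[[instances]]
# canal metrics endpoint; 11112 is the pull port suggested in the
# Prometheus-QuickStart guide (assumed here -- adjust as needed)
urls = [
    "http://127.0.0.1:11112/metrics"
]
url_label_key = "instance"
url_label_value = "{{.Host}}"
# illustrative job label; pick whatever your dashboards expect
labels = { job = "canal" }
```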


@@ -1,3 +1,4 @@
interval="5m"
[[instances]]
## Amazon Region
# list of region and endpoint, see https://docs.aws.amazon.com/general/latest/gr/cw_region.html
@@ -40,17 +41,13 @@
## particular metric, that metric will not be returned by the CloudWatch API
## and will not be collected by Telegraf.
#
## Requested CloudWatch aggregation Period (required)
## Must be a multiple of 60s.
period = "5m"
## Collection Delay (required)
## Must account for metrics availability via CloudWatch API
delay = "5m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = "5m"
## Requested CloudWatch aggregation Period (required)
## Must be a multiple of 60s.
period = "5m"
## Recommended if "delay" and "period" are both within 3 hours of request
## time. Invalid values will be ignored. Recently Active feature will only


@@ -71,18 +71,14 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Note that if a period is configured that is smaller than the minimum for a
## particular metric, that metric will not be returned by the CloudWatch API
## and will not be collected by Categraf.
#
## Requested CloudWatch aggregation Period (required)
## Must be a multiple of 60s.
period = "5m"
## Collection Delay (required)
## Must account for metrics availability via CloudWatch API
delay = "5m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = "5m"
## Requested CloudWatch aggregation Period (required)
## Must be a multiple of 60s.
period = "5m"
## Recommended if "delay" and "period" are both within 3 hours of request
## time. Invalid values will be ignored. Recently Active feature will only
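Read together, these comments pin down how the three timing settings relate: `period` is the aggregation window requested from CloudWatch (a multiple of 60s), `delay` holds collection back until CloudWatch actually has the datapoints, and `interval` should be a multiple of `period` so pulls neither gap nor overlap. A minimal consistent block using the 5m defaults from the hunks above (illustrative, not the full template):

```toml
[[instances]]
# aggregation window requested from CloudWatch; multiple of 60s
period = "5m"
# wait before requesting a window, so the datapoints exist
delay = "5m"
# pull cadence; keep it a multiple of period to avoid gaps/overlap
interval = "5m"
```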

File diff suppressed because it is too large.


@@ -0,0 +1,22 @@
# doris_fe
[[instances]]
# FE metrics endpoint
urls = [
"http://127.0.0.1:8030/metrics"
]
url_label_key = "instance"
url_label_value = "{{.Host}}"
# group and job labels for the FE service; the built-in dashboard variables reference them, adjust as needed
labels = { group = "fe", job = "doris_cluster01" }
# doris_be
[[instances]]
# BE metrics endpoint
urls = [
"http://127.0.0.1:8040/metrics"
]
url_label_key = "instance"
url_label_value = "{{.Host}}"
# group and job labels for the BE service; the built-in dashboard variables reference them, adjust as needed
labels = { group = "be", job = "doris_cluster01" }

File diff suppressed because it is too large.


@@ -0,0 +1,26 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 27.6.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="图层_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 30 30" style="enable-background:new 0 0 30 30;" xml:space="preserve">
<style type="text/css">
.st0{fill:#00A5CA;}
.st1{fill:#3ACA9B;}
.st2{fill:#405AAD;}
</style>
<g>
<g>
<g>
<path class="st0" d="M17.4,4.6l-3.3-3.3c-0.4-0.4-0.9-0.8-1.5-1C12.1,0.1,11.5,0,11,0C9.9,0,8.8,0.4,8,1.2C7.6,1.6,7.3,2,7.1,2.6
c-0.3,0.5-0.4,1-0.4,1.6S6.8,5.3,7,5.9c0.2,0.5,0.5,1,0.9,1.4l5.9,5.9c0.1,0.1,0.3,0.2,0.5,0.2s0.3-0.1,0.5-0.2l2.6-2.6
C17.6,10.5,20.2,7.4,17.4,4.6z"/>
<path class="st1" d="M22.8,9.8c-0.6-0.6-1.3-1.2-1.9-1.9l0,0c0,0.1,0,0.1,0,0.2c-0.2,1.4-0.9,2.7-2,3.7
c-3.4,3.4-6.9,6.9-10.3,10.3l-0.5,0.5c-0.7,0.6-1.2,1.5-1.3,2.4c-0.1,0.7,0,1.3,0.2,2c0.2,0.6,0.5,1.2,1,1.7
c0.4,0.4,0.9,0.8,1.4,1c0.5,0.2,1.1,0.3,1.7,0.3c1.3,0,2-0.2,3-1.1c3.9-3.8,7.8-7.7,10.8-10.6c1.4-1.4,1.7-3.7,0.7-5.2
C24.8,11.8,23.8,10.8,22.8,9.8z"/>
<path class="st2" d="M3.8,7.8v14.5c0,0.2,0,0.3,0.1,0.4C4,22.8,4.1,22.9,4.3,23c0.1,0,0.3,0,0.5,0c0.2,0,0.3-0.1,0.4-0.2l7.3-7.3
c0.1-0.1,0.2-0.3,0.2-0.5s-0.1-0.4-0.2-0.5L5.2,7.2C5.1,7.1,5,7.1,4.9,7C4.8,7,4.7,7,4.6,7C4.4,7,4.2,7.1,4,7.2
C3.9,7.4,3.8,7.6,3.8,7.8z"/>
</g>
</g>
</g>
</svg>



@@ -0,0 +1,39 @@
# Doris
Every Doris process exposes a `/metrics` endpoint, through which it serves monitoring data in the Prometheus protocol.
## Collection configuration
Configure categraf's `conf/input.prometheus/prometheus.toml`. Since Doris exposes Prometheus-protocol metrics, categraf's prometheus plugin is all that is needed to collect them.
```toml
# doris_fe
[[instances]]
urls = [
"http://127.0.0.1:8030/metrics"
]
url_label_key = "instance"
url_label_value = "{{.Host}}"
labels = { group = "fe", job = "doris_cluster01" }
# doris_be
[[instances]]
urls = [
"http://127.0.0.1:8040/metrics"
]
url_label_key = "instance"
url_label_value = "{{.Host}}"
labels = { group = "be", job = "doris_cluster01" }
```
## Alert rules
Nightingale ships with built-in alert rules for Doris; clone them into your own business group to use them.
## Dashboards
Nightingale ships with a built-in Doris dashboard; clone it into your own business group to use it.

Some files were not shown because too many files have changed in this diff.