Compare commits


1 Commit

Author: Ulric Qin
SHA1: 0003b33e99
Message: code refactor notify plugin
Date: 2022-07-22 17:56:25 +08:00
68 changed files with 2019 additions and 3634 deletions


@@ -22,20 +22,20 @@
## Highlighted Features
- **Out of the box**
  - Supports many deployment options (Docker, Helm Chart, cloud services); combines data collection, alerting, and visualization in one product; ships with built-in dashboards, quick views, and alert rule templates that work right after import, **dramatically cutting the cost of building, learning, and operating a cloud-native monitoring system**
- **Professional alerting**
  - Visual alert configuration and management, rich alert rules, configurable mute and subscription rules, multiple notification channels, alert self-healing, alert event management, and more
- **Cloud native**
  - Quickly builds an enterprise-grade cloud-native monitoring system in a turnkey fashion; supports collectors such as [**Categraf**](https://github.com/flashcatcloud/categraf), Telegraf, and Grafana-agent, databases such as Prometheus, VictoriaMetrics, M3DB, and ElasticSearch, and importing Grafana dashboards, **integrating seamlessly with the cloud-native ecosystem**
- **High performance, high availability**
  - Thanks to Nightingale's multi-data-source management engine and the excellent architectural design on the engine side, backed by high-performance time-series databases, it handles collection, storage, and alert analysis for hundreds of millions of time series while saving substantial cost;
  - All Nightingale components scale horizontally with no single point of failure; the project is deployed at thousands of companies and has been hardened by demanding production use. Several top internet companies run Nightingale clusters of a hundred machines, processing hundreds of millions of time series;
- **Flexible scaling, centralized management**
  - Nightingale can run on a 1-core 1 GB cloud host, be deployed as a cluster across hundreds of machines, or run on Kubernetes; time-series databases, alerting engines, and other components can also be pushed down to individual data centers and regions, combining edge deployment with centralized, unified management, **solving the problem of fragmented data and missing unified views**
  - Supports deployment options such as Docker and Helm Chart; ships with built-in dashboards, quick views, and alert rule templates that work right after import; an active, professional community keeps iterating and folding more best practices into the product
- **Broad compatibility**
  - Supports collectors such as [Categraf](https://github.com/flashcatcloud/categraf), Telegraf, and Grafana-agent, time-series databases such as Prometheus, VictoriaMetrics, and M3DB, and integrates with Grafana, fitting seamlessly into the cloud-native ecosystem
  - Combines data collection, visualization, alerting, and analysis in one product, tightly integrated with the cloud-native ecosystem, providing out-of-the-box enterprise-grade monitoring, analysis, and alerting;
- **Open community**
  - Hosted by the [CCF Open Source Development Committee](https://www.ccf.org.cn/kyfzwyh/), with sustained investment from [**Flashcat**](https://flashcat.cloud) and many other companies, the active participation of thousands of community users, and the project's clear positioning, all of which keep the Nightingale open-source community healthy for the long run. An active, professional community keeps iterating and folding more best practices into the product
  - Hosted by the [CCF Open Source Development Committee](https://www.ccf.org.cn/kyfzwyh/), with sustained investment from [Flashcat](https://flashcat.cloud), the active participation of thousands of community users, and the project's clear positioning, all of which keep the Nightingale open-source community healthy for the long run;
- **High performance**
  - Thanks to Nightingale's multi-data-source management engine and the excellent architectural design on the engine side, backed by high-performance time-series databases, it handles collection, storage, and alert analysis for hundreds of millions of time series while saving substantial cost;
- **High availability**
  - All Nightingale components scale horizontally with no single point of failure; the project is deployed at thousands of companies and has been hardened by demanding production use. Several top internet companies run Nightingale clusters of a hundred machines, processing billions of time series;
- **Flexible scaling**
  - Nightingale can run on a 1-core 1 GB cloud host, be deployed as a cluster across hundreds of machines, or run on Kubernetes; time-series databases, alerting engines, and other components can also be pushed down to individual data centers and regions, combining edge deployment with centralized management
> If you are using Prometheus and have one or more of the following needs, we recommend a seamless upgrade to Nightingale:
> If you are using Prometheus and have one or more of the following needs, we recommend upgrading to Nightingale:
- Prometheus, Alertmanager, Grafana, and other components are fairly fragmented, lack a unified view, and are not usable out of the box;
- Managing Prometheus and Alertmanager by editing configuration files has a steep learning curve and makes collaboration hard;
@@ -50,7 +50,7 @@
> If you are using [Open-Falcon](https://github.com/open-falcon/falcon-plus), we recommend upgrading to Nightingale all the more:
- For a detailed comparison of Open-Falcon and Nightingale, see [Ten characteristics and trends of cloud-native monitoring](https://mp.weixin.qq.com/s?__biz=MzkzNjI5OTM5Nw==&mid=2247483738&idx=1&sn=e8bdbb974a2cd003c1abcc2b5405dd18&chksm=c2a19fb0f5d616a63185cd79277a79a6b80118ef2185890d0683d2bb20451bd9303c78d083c5#rd).
> We recommend [Categraf](https://github.com/flashcatcloud/categraf) as the preferred collection agent for monitoring data:
@@ -65,28 +65,28 @@
## Screenshots
<img src="doc/img/intro.gif" width="480">
<img src="doc/img/intro.gif" width="680">
## Architecture
<img src="doc/img/arch-product.png" width="480">
<img src="doc/img/arch-product.png" width="680">
Nightingale can receive monitoring data reported by all kinds of agents (e.g. [Categraf](https://github.com/flashcatcloud/categraf), telegraf, grafana-agent, Prometheus) and write it to several popular time-series databases (Prometheus, M3DB, VictoriaMetrics, Thanos, TDEngine, and more). It lets you configure alert rules, mute rules, and subscription rules, browse monitoring data, trigger alert self-healing (automatically calling a webhook or running a script when an alert fires), and store, manage, and view historical alert events in groups.
<img src="doc/img/arch-system.png" width="480">
<img src="doc/img/arch-system.png" width="680">
The design of Nightingale v5 is deliberately simple; its core is two modules, server and webapi. webapi is stateless, sits at the center, serves frontend requests, and writes user configuration to the database. server is the alerting engine and data-forwarding module; it usually lives alongside a time-series database, one server set per TSDB, and each set can run as a single instance or as a cluster. server receives data reported by Categraf, Telegraf, Grafana-Agent, Datadog-Agent, and Falcon-Plugins, writes it to the backend TSDB, periodically syncs alert rules from the database, and queries the TSDB to evaluate alerts. Each server set depends on one redis.
<img src="doc/img/install-vm.png" width="480">
<img src="doc/img/install-vm.png" width="680">
If a single-node time-series database (such as Prometheus) becomes a performance bottleneck or offers poor disaster recovery, we recommend [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics): its architecture is simple, its performance excellent, and it is easy to deploy and operate; see the architecture diagram above. For more detail on VictoriaMetrics, see its [website](https://victoriametrics.com/).
If a single-node Prometheus lacks the performance or disaster recovery you need, we recommend [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics): its architecture is simple, its performance excellent, and it is easy to deploy and operate; see the architecture diagram above. For more detail on VictoriaMetrics, see its [website](https://victoriametrics.com/).
## Community
For an open-source project to stay alive, it needs an open governance structure and a steady stream of developers and users. We are committed to building an open, neutral governance structure and to attracting more developers from companies, universities, and elsewhere who care about cloud-native monitoring, to build a vibrant Nightingale community together. For the draft Nightingale project and community governance structure, see [COMMUNITY GOVERNANCE](./doc/community-governance.md).
For an open-source project to stay alive, it needs an open governance structure and a steady stream of developers and users. We are committed to building an open, neutral governance structure and to attracting more developers from companies, universities, and elsewhere who care about cloud-native monitoring, to build a vibrant Nightingale community together. For the draft Nightingale project and community governance structure, see **[COMMUNITY GOVERNANCE](./doc/community-governance.md)**.
**We welcome you to join the Nightingale project and community in any way you like, including but not limited to:**
- Adding to and improving the documentation => [n9e.github.io](https://n9e.github.io/)
@@ -119,4 +119,4 @@
## Contact Us
We recommend following the Nightingale WeChat official account to get timely product and community updates:
<img src="doc/img/n9e-vx-new.png" width="120">
<img src="doc/img/n9e-vx-new.png" width="180">


@@ -1,5 +0,0 @@
## Active Contributors
- [xiaoziv](https://github.com/xiaoziv)
- [tanxiao1990](https://github.com/tanxiao1990)
- [bbaobelief](https://github.com/bbaobelief)


@@ -1,5 +0,0 @@
## Committers
- [YeningQin](https://github.com/710leo)
- [FeiKong](https://github.com/kongfei605)
- [XiaqingDai](https://github.com/jsers)


@@ -34,12 +34,13 @@ A Committer takes on one or more of the following responsibilities:
Committers are recorded and published in the **[COMMITTERS](./committers.md)** list, receive an electronic certificate issued by the **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)**, and enjoy the various rights and benefits of the Nightingale open-source community.
### Project Management Committee (PMC)
### PMC Member
> The PMC, acting as a single body, manages and leads the Nightingale project and bears full responsibility for its development. PMC matters are recorded and published in the file [PMC](./pmc.md).
> PMC members are elected from among Contributors or Committers. They have write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) repository, an email address with the ccf.org.cn suffix (coming soon), voting rights on Nightingale community matters, and the right to nominate Committer candidates. The PMC, acting as a single body, bears full responsibility for the development of the project. PMC members are recorded and published in the **[PMC](./pmc.md)** list.
- PMC members are elected from among Contributors or Committers. They have write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) repository, an email address with the ccf.org.cn suffix (coming soon), voting rights on Nightingale community matters, and the right to nominate Committer candidates.
- The PMC Chair is appointed by the **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)** from among the PMC members. The Chair is the communication bridge between the CCF ODC and the PMC and carries out specific project-management duties.
### PMC Chair
> The PMC Chair is an appointed role: the **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)** appoints the Chair from among the PMC members. The PMC, acting as a single body, manages and leads the Nightingale project. The Chair is the communication bridge between the CCF ODC and the PMC and carries out specific project-management duties.
## Communication
1. We recommend using the mailing list for feedback and suggestions (to be announced);
@@ -70,4 +71,4 @@ Committers are recorded and published in the **[COMMITTERS](./committers.md)** list and…
2. Before asking a question, please search the existing [github issues](https://github.com/ccfos/nightingale/issues)
3. We prefer questions asked as github issues: [report a problem here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml) | [suggest a feature here](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
Finally, we recommend joining the WeChat group for open-ended discussion (first add [UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) as a friend with the note "Nightingale group + name + company"; the developer team and professional, helpful members answer questions there)


@@ -1,5 +0,0 @@
## Contributors
<a href="https://github.com/ccfos/nightingale/graphs/contributors">
<img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>


@@ -1,5 +0,0 @@
## End Users
- [China Mobile](https://github.com/ccfos/nightingale/issues/897#issuecomment-1086573166)
- [inke](https://github.com/ccfos/nightingale/issues/897#issuecomment-1099840636)
- [Founder Securities](https://github.com/ccfos/nightingale/issues/897#issuecomment-1110492461)


@@ -1,7 +0,0 @@
## PMC Chair
- [laiwei](https://github.com/laiwei)
## PMC Member
- [UlricQin](https://github.com/UlricQin)


@@ -1,5 +1,4 @@
FROM python:2.7.8-slim
#FROM python:2
FROM python:2
#FROM ubuntu:21.04
WORKDIR /app


@@ -1,4 +1,4 @@
FROM --platform=$BUILDPLATFORM python:2.7.8-slim
FROM --platform=$BUILDPLATFORM python:2
WORKDIR /app


@@ -43,9 +43,3 @@ basic_auth_pass = ""
timeout = 5000
dial_timeout = 2500
max_idle_conns_per_host = 100
[http]
enable = false
address = ":9100"
print_access = false
run_mode = "release"


@@ -80,7 +80,7 @@ services:
sh -c "/wait && /app/ibex server"
nwebapi:
image: flashcatcloud/nightingale:latest
image: ulric2019/nightingale:5.9.4
container_name: nwebapi
hostname: nwebapi
restart: always
@@ -108,7 +108,7 @@ services:
sh -c "/wait && /app/n9e webapi"
nserver:
image: flashcatcloud/nightingale:latest
image: ulric2019/nightingale:5.9.4
container_name: nserver
hostname: nserver
restart: always
@@ -136,7 +136,7 @@ services:
sh -c "/wait && /app/n9e server"
categraf:
image: "flashcatcloud/categraf:latest"
image: "flashcatcloud/categraf:v0.1.9"
container_name: "categraf"
hostname: "categraf01"
restart: always
@@ -150,7 +150,7 @@ services:
- /:/hostfs
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "9100:9100/tcp"
- "8094:8094/tcp"
networks:
- nightingale
depends_on:


@@ -52,7 +52,7 @@ insert into user_group_member(group_id, user_id) values(1, 1);
CREATE TABLE `configs` (
`id` bigint unsigned not null auto_increment,
`ckey` varchar(191) not null,
`cval` varchar(4096) not null default '',
`cval` varchar(1024) not null default '',
PRIMARY KEY (`id`),
UNIQUE KEY (`ckey`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
@@ -402,7 +402,6 @@ CREATE TABLE `alert_cur_event` (
`notify_cur_number` int not null default 0 comment '',
`target_ident` varchar(191) not null default '' comment 'target ident, also in tags',
`target_note` varchar(191) not null default '' comment 'target note',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` varchar(255) not null,
`tags` varchar(1024) not null default '' comment 'merge data_tags rule_tags, split by ,,',
@@ -437,7 +436,6 @@ CREATE TABLE `alert_his_event` (
`notify_cur_number` int not null default 0 comment '',
`target_ident` varchar(191) not null default '' comment 'target ident, also in tags',
`target_note` varchar(191) not null default '' comment 'target note',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` varchar(255) not null,
`recover_time` bigint not null default 0,
@@ -500,13 +498,3 @@ CREATE TABLE `task_record`
KEY (`create_at`, `group_id`),
KEY (`create_by`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
CREATE TABLE `alerting_engines`
(
`id` int unsigned NOT NULL AUTO_INCREMENT,
`instance` varchar(128) not null default '' comment 'instance identification, e.g. 10.9.0.9:9090',
`cluster` varchar(128) not null default '' comment 'target reader cluster',
`clock` bigint not null,
PRIMARY KEY (`id`),
UNIQUE KEY (`instance`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;


@@ -436,7 +436,6 @@ CREATE TABLE alert_cur_event (
notify_cur_number int4 not null default 0,
target_ident varchar(191) NOT NULL DEFAULT ''::character varying,
target_note varchar(191) NOT NULL DEFAULT ''::character varying,
first_trigger_time int8,
trigger_time int8 NOT NULL,
trigger_value varchar(255) NOT NULL,
tags varchar(1024) NOT NULL DEFAULT ''::character varying,
@@ -488,7 +487,6 @@ CREATE TABLE alert_his_event (
notify_cur_number int4 not null default 0,
target_ident varchar(191) NOT NULL DEFAULT ''::character varying,
target_note varchar(191) NOT NULL DEFAULT ''::character varying,
first_trigger_time int8,
trigger_time int8 NOT NULL,
trigger_value varchar(255) NOT NULL,
recover_time int8 NOT NULL DEFAULT 0,


@@ -174,10 +174,4 @@ Address = "http://ibex:10090"
BasicAuthUser = "ibex"
BasicAuthPass = "ibex"
# unit: ms
Timeout = 3000
[TargetMetrics]
TargetUp = '''max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)'''
LoadPerCore = '''max(max_over_time(system_load_norm_1{ident=~"(%s)"}[%dm])) by (ident)'''
MemUtil = '''100-max(max_over_time(mem_available_percent{ident=~"(%s)"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''
Timeout = 3000
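
The `[TargetMetrics]` entries removed here are PromQL templates with printf-style placeholders: `%s` takes a regex alternation of target idents and `%d` a lookback window in minutes. As a hedged sketch of how such a template would be instantiated (the ident list, window, and surrounding code are illustrative, not taken from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// targetUpTmpl is the TargetUp query template from the config above:
// %s receives a "|"-joined regex of idents, %d a window in minutes.
const targetUpTmpl = `max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)`

func main() {
	idents := []string{"host01", "host02"} // illustrative idents
	promql := fmt.Sprintf(targetUpTmpl, strings.Join(idents, "|"), 5)
	fmt.Println(promql)
	// max(max_over_time(target_up{ident=~"(host01|host02)"}[5m])) by (ident)
}
```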

File diff suppressed because it is too large.


@@ -1,383 +1,131 @@
zh:
cpu_usage_idle: CPU空闲率(单位:%
cpu_usage_active: CPU使用率(单位:%
cpu_usage_system: CPU内核态时间占比(单位:%
cpu_usage_user: CPU用户态时间占比(单位:%
cpu_usage_nice: 低优先级用户态CPU时间占比也就是进程nice值被调整为1-19之间的CPU时间。这里注意nice可取值范围是-20到19数值越大优先级反而越低(单位:%
cpu_usage_iowait: CPU等待I/O的时间占比(单位:%
cpu_usage_irq: CPU处理中断的时间占比(单位:%
cpu_usage_softirq: CPU处理软中断的时间占比(单位:%
cpu_usage_steal: 在虚拟机环境下有该指标表示CPU被其他虚拟机争用的时间占比超过20就表示争抢严重(单位:%
cpu_usage_guest: 通过虚拟化运行其他操作系统的时间,也就是运行虚拟机的CPU时间占比(单位:%
cpu_usage_guest_nice: 以低优先级运行虚拟机的时间占比(单位:%
cpu_usage_idle: CPU空闲率单位%
cpu_usage_active: CPU使用率(单位:%
cpu_usage_system: CPU内核态时间占比(单位:%
cpu_usage_user: CPU用户态时间占比(单位:%
cpu_usage_nice: 低优先级用户态CPU时间占比也就是进程nice值被调整为1-19之间的CPU时间。这里注意nice可取值范围是-20到19数值越大优先级反而越低(单位:%
cpu_usage_iowait: CPU等待I/O的时间占比(单位:%
cpu_usage_irq: CPU处理硬中断的时间占比(单位:%
cpu_usage_softirq: CPU处理中断的时间占比(单位:%
cpu_usage_steal: 在虚拟机环境下有该指标表示CPU被其他虚拟机争用的时间占比超过20就表示争抢严重(单位:%
cpu_usage_guest: 通过虚拟化运行其他操作系统的时间也就是运行虚拟机的CPU时间占比(单位:%
cpu_usage_guest_nice: 以低优先级运行虚拟机的时间占比(单位:%
disk_free: 硬盘分区剩余量单位byte
disk_used: 硬盘分区使用量单位byte
disk_used_percent: 硬盘分区使用率(单位:%
disk_total: 硬盘分区总量单位byte
disk_inodes_free: 硬盘分区inode剩余量
disk_inodes_used: 硬盘分区inode使用量
disk_inodes_total: 硬盘分区inode总量
disk_free: 硬盘分区剩余量单位byte
disk_used: 硬盘分区使用量单位byte
disk_used_percent: 硬盘分区使用率(单位:%
disk_total: 硬盘分区总量单位byte
disk_inodes_free: 硬盘分区inode剩余量
disk_inodes_used: 硬盘分区inode使用量
disk_inodes_total: 硬盘分区inode总量
diskio_io_time: 从设备视角来看I/O请求总时间队列中有I/O请求就计数单位毫秒counter类型需要用函数求rate才有使用价值
diskio_iops_in_progress: 已经分配给设备驱动且尚未完成的IO请求不包含在队列中但尚未分配给设备驱动的IO请求gauge类型
diskio_merged_reads: 相邻读请求merge读的次数counter类型
diskio_merged_writes: 相邻写请求merge写的次数counter类型
diskio_read_bytes: 读取的byte数量counter类型需要用函数求rate才有使用价值
diskio_read_time: 读请求总时间单位毫秒counter类型需要用函数求rate才有使用价值
diskio_reads: 读请求次数counter类型需要用函数求rate才有使用价值
diskio_weighted_io_time: 从I/O请求视角来看I/O等待总时间如果同时有多个I/O请求时间会叠加单位毫秒
diskio_write_bytes: 写入的byte数量counter类型需要用函数求rate才有使用价值
diskio_write_time: 写请求总时间单位毫秒counter类型需要用函数求rate才有使用价值
diskio_writes: 写请求次数counter类型需要用函数求rate才有使用价值
diskio_io_time: 从设备视角来看I/O请求总时间队列中有I/O请求就计数单位毫秒counter类型需要用函数求rate才有使用价值
diskio_iops_in_progress: 已经分配给设备驱动且尚未完成的IO请求不包含在队列中但尚未分配给设备驱动的IO请求gauge类型
diskio_merged_reads: 相邻读请求merge读的次数counter类型
diskio_merged_writes: 相邻写请求merge写的次数counter类型
diskio_read_bytes: 读取的byte数量counter类型需要用函数求rate才有使用价值
diskio_read_time: 读请求总时间单位毫秒counter类型需要用函数求rate才有使用价值
diskio_reads: 读请求次数counter类型需要用函数求rate才有使用价值
diskio_weighted_io_time: 从I/O请求视角来看I/O等待总时间如果同时有多个I/O请求时间会叠加单位毫秒
diskio_write_bytes: 写入的byte数量counter类型需要用函数求rate才有使用价值
diskio_write_time: 写请求总时间单位毫秒counter类型需要用函数求rate才有使用价值
diskio_writes: 写请求次数counter类型需要用函数求rate才有使用价值
kernel_boot_time: 内核启动时间
kernel_context_switches: 内核上下文切换次数
kernel_entropy_avail: linux系统内部的熵池
kernel_interrupts: 内核中断次数
kernel_processes_forked: fork的进程数
kernel_boot_time: 内核启动时间
kernel_context_switches: 内核上下文切换次数
kernel_entropy_avail: linux系统内部的熵池
kernel_interrupts: 内核中断次数
kernel_processes_forked: fork的进程数
mem_active: 活跃使用的内存总数(包括cache和buffer内存)
mem_available: 应用程序可用内存数
mem_available_percent: 内存剩余百分比(0~100)
mem_buffered: 用来给文件做缓冲大小
mem_cached: 被高速缓冲存储器cache memory用的内存的大小等于 diskcache minus SwapCache
mem_commit_limit: 根据超额分配比率('vm.overcommit_ratio'这是当前在系统上分配可用的内存总量这个限制只是在模式2('vm.overcommit_memory')时启用
mem_committed_as: 目前在系统上分配的内存量。是所有进程申请的内存的总和
mem_dirty: 等待被写回到磁盘的内存大小
mem_free: 空闲内存数
mem_high_free: 未被使用的高位内存大小
mem_high_total: 高位内存总大小Highmem是指所有内存高于860MB的物理内存,Highmem区域供用户程序使用或用于页面缓存。该区域不是直接映射到内核空间。内核必须使用不同的手法使用该段内存
mem_huge_page_size: 每个大页的大小
mem_huge_pages_free: 池中尚未分配的 HugePages 数量
mem_huge_pages_total: 预留HugePages的总个数
mem_inactive: 空闲的内存数(包括free和avalible的内存)
mem_low_free: 未被使用的低位大小
mem_low_total: 低位内存总大小,低位可以达到高位内存一样的作用,而且它还能够被内核用来记录一些自己的数据结构
mem_mapped: 设备和文件等映射的大小
mem_page_tables: 管理内存分页页面的索引表的大小
mem_shared: 多个进程共享的内存总额
mem_slab: 内核数据结构缓存的大小,可以减少申请和释放内存带来的消耗
mem_sreclaimable: 可收回Slab的大小
mem_sunreclaim: 不可收回Slab的大小SUnreclaim+SReclaimableSlab
mem_swap_cached: 被高速缓冲存储器cache memory用的交换空间的大小已经被交换出来的内存但仍然被存放在swapfile中。用来在需要的时候很快的被替换而不需要再次打开I/O端口
mem_swap_free: 未被使用交换空间的大小
mem_swap_total: 交换空间的总大小
mem_total: 内存总数
mem_used: 已用内存数
mem_used_percent: 已用内存数百分比(0~100)
mem_vmalloc_chunk: 最大的连续未被使用的vmalloc区域
mem_vmalloc_totalL: 可以vmalloc虚拟内存大小
mem_vmalloc_used: vmalloc已使用的虚拟内存大小
mem_write_back: 正在被写回到磁盘的内存大小
mem_write_back_tmp: FUSE用于临时写回缓冲区的内存
mem_active: 活跃使用的内存总数(包括cache和buffer内存)
mem_available: 应用程序可用内存数
mem_available_percent: 内存剩余百分比(0~100)
mem_buffered: 用来给文件做缓冲大小
mem_cached: 被高速缓冲存储器cache memory用的内存的大小等于 diskcache minus SwapCache
mem_commit_limit: 根据超额分配比率('vm.overcommit_ratio'这是当前在系统上分配可用的内存总量这个限制只是在模式2('vm.overcommit_memory')时启用
mem_committed_as: 目前在系统上分配的内存量。是所有进程申请的内存的总和
mem_dirty: 等待被写回到磁盘的内存大小
mem_free: 空闲内存数
mem_high_free: 未被使用的高位内存大小
mem_high_total: 高位内存总大小Highmem是指所有内存高于860MB的物理内存,Highmem区域供用户程序使用或用于页面缓存。该区域不是直接映射到内核空间。内核必须使用不同的手法使用该段内存
mem_huge_page_size: 每个大页的大小
mem_huge_pages_free: 池中尚未分配的 HugePages 数量
mem_huge_pages_total: 预留HugePages的总个数
mem_inactive: 空闲的内存数(包括free和avalible的内存)
mem_low_free: 未被使用的低位大小
mem_low_total: 低位内存总大小,低位可以达到高位内存一样的作用,而且它还能够被内核用来记录一些自己的数据结构
mem_mapped: 设备和文件等映射的大小
mem_page_tables: 管理内存分页页面的索引表的大小
mem_shared: 多个进程共享的内存总额
mem_slab: 内核数据结构缓存的大小,可以减少申请和释放内存带来的消耗
mem_sreclaimable: 可收回Slab的大小
mem_sunreclaim: 不可收回Slab的大小SUnreclaim+SReclaimableSlab
mem_swap_cached: 被高速缓冲存储器cache memory用的交换空间的大小已经被交换出来的内存但仍然被存放在swapfile中。用来在需要的时候很快的被替换而不需要再次打开I/O端口
mem_swap_free: 未被使用交换空间的大小
mem_swap_total: 交换空间的总大小
mem_total: 内存总数
mem_used: 已用内存数
mem_used_percent: 已用内存数百分比(0~100)
mem_vmalloc_chunk: 最大的连续未被使用的vmalloc区域
mem_vmalloc_totalL: 可以vmalloc虚拟内存大小
mem_vmalloc_used: vmalloc已使用的虚拟内存大小
mem_write_back: 正在被写回到磁盘的内存大小
mem_write_back_tmp: FUSE用于临时写回缓冲区的内存
net_bytes_recv: 网卡收包总数(bytes)
net_bytes_sent: 网卡发包总数(bytes)
net_drop_in: 网卡收丢包数量
net_drop_out: 网卡发丢包数量
net_err_in: 网卡收包错误数量
net_err_out: 网卡发包错误数量
net_packets_recv: 网卡收包数量
net_packets_sent: 网卡发包数量
net_bytes_recv: 网卡收包总数(bytes)
net_bytes_sent: 网卡发包总数(bytes)
net_drop_in: 网卡收丢包数量
net_drop_out: 网卡发丢包数量
net_err_in: 网卡收包错误数量
net_err_out: 网卡发包错误数量
net_packets_recv: 网卡收包数量
net_packets_sent: 网卡发包数量
netstat_tcp_established: ESTABLISHED状态的网络链接数
netstat_tcp_fin_wait1: FIN_WAIT1状态的网络链接数
netstat_tcp_fin_wait2: FIN_WAIT2状态的网络链接数
netstat_tcp_last_ack: LAST_ACK状态的网络链接数
netstat_tcp_listen: LISTEN状态的网络链接数
netstat_tcp_syn_recv: SYN_RECV状态的网络链接数
netstat_tcp_syn_sent: SYN_SENT状态的网络链接数
netstat_tcp_time_wait: TIME_WAIT状态的网络链接数
netstat_udp_socket: UDP状态的网络链接数
netstat_tcp_established: ESTABLISHED状态的网络链接数
netstat_tcp_fin_wait1: FIN_WAIT1状态的网络链接数
netstat_tcp_fin_wait2: FIN_WAIT2状态的网络链接数
netstat_tcp_last_ack: LAST_ACK状态的网络链接数
netstat_tcp_listen: LISTEN状态的网络链接数
netstat_tcp_syn_recv: SYN_RECV状态的网络链接数
netstat_tcp_syn_sent: SYN_SENT状态的网络链接数
netstat_tcp_time_wait: TIME_WAIT状态的网络链接数
netstat_udp_socket: UDP状态的网络链接数
#[ping]
ping_percent_packet_loss: ping数据包丢失百分比(%)
ping_result_code: ping返回码('0','1')
processes_blocked: 不可中断的睡眠状态下的进程数('U','D','L')
processes_dead: 回收中的进程数('X')
processes_idle: 挂起的空闲进程数('I')
processes_paging: 分页进程数('P')
processes_running: 运行中的进程数('R')
processes_sleeping: 可中断进程数('S')
processes_stopped: 暂停状态进程数('T')
processes_total: 总进程数
processes_total_threads: 总线程数
processes_unknown: 未知状态进程数
processes_zombies: 僵尸态进程数('Z')
processes_blocked: 不可中断的睡眠状态下的进程数('U','D','L')
processes_dead: 回收中的进程数('X')
processes_idle: 挂起的空闲进程数('I')
processes_paging: 分页进程数('P')
processes_running: 运行中的进程数('R')
processes_sleeping: 可中断进程数('S')
processes_stopped: 暂停状态进程数('T')
processes_total: 总进程数
processes_total_threads: 总线程数
processes_unknown: 未知状态进程数
processes_zombies: 僵尸态进程数('Z')
swap_used_percent: Swap空间换出数据量
swap_used_percent: Swap空间换出数据量
system_load1: 1分钟平均load值
system_load5: 5分钟平均load值
system_load15: 15分钟平均load值
system_n_users: 用户数
system_n_cpus: CPU核数
system_uptime: 系统启动时间
system_load1: 1分钟平均load值
system_load5: 5分钟平均load值
system_load15: 15分钟平均load值
system_n_users: 用户
system_n_cpus: CPU核数
system_uptime: 系统启动时间
nginx_accepts: 自nginx启动起,与客户端建立过得连接总数
nginx_active: 当前nginx正在处理的活动连接数,等于Reading/Writing/Waiting总和
nginx_handled: 自nginx启动起,处理过的客户端连接总数
nginx_reading: 正在读取HTTP请求头部的连接总
nginx_requests: 自nginx启动起,处理过的客户端请求总数,由于存在HTTP Keep-Alive请求,该值会大于handled值
nginx_upstream_check_fall: upstream_check模块检测到后端失败的次数
nginx_upstream_check_rise: upstream_check模块对后端的检测次数
nginx_upstream_check_status_code: 后端upstream的状态,up为1,down为0
nginx_waiting: 开启 keep-alive 的情况下,这个值等于 active (reading+writing), 意思就是 Nginx 已经处理完正在等候下一次请求指令的驻留连接
nginx_writing: 正在向客户端发送响应的连接总数
nginx_accepts: 自nginx启动起,与客户端建立过得连接总数
nginx_active: 当前nginx正在处理的活动连接数,等于Reading/Writing/Waiting总和
nginx_handled: 自nginx启动起,处理过的客户端连接总数
nginx_reading: 正在读取HTTP请求头部的连接总数
nginx_requests: 自nginx启动起,处理过的客户端请求总数,由于存在HTTP Keep-Alive请求,该值会大于handled值
nginx_upstream_check_fall: upstream_check模块检测到后端失败的次数
nginx_upstream_check_rise: upstream_check模块对后端的检测次数
nginx_upstream_check_status_code: 后端upstream的状态,up为1,down为0
nginx_waiting: 开启 keep-alive 的情况下,这个值等于 active (reading+writing), 意思就是 Nginx 已经处理完正在等候下一次请求指令的驻留连接
nginx_writing: 正在向客户端发送响应的连接总数
http_response_content_length: HTTP消息实体的传输长度
http_response_http_response_code: http响应状态码
http_response_response_time: http响应用时
http_response_result_code: url探测结果0为正常否则url无法访问
# [aws cloudwatch rds]
cloudwatch_aws_rds_bin_log_disk_usage_average: rds 磁盘使用平均值
cloudwatch_aws_rds_bin_log_disk_usage_maximum: rds 磁盘使用量最大值
cloudwatch_aws_rds_bin_log_disk_usage_minimum: rds binlog 磁盘使用量最低
cloudwatch_aws_rds_bin_log_disk_usage_sample_count: rds binlog 磁盘使用情况样本计数
cloudwatch_aws_rds_bin_log_disk_usage_sum: rds binlog 磁盘使用总和
cloudwatch_aws_rds_burst_balance_average: rds 突发余额平均值
cloudwatch_aws_rds_burst_balance_maximum: rds 突发余额最大值
cloudwatch_aws_rds_burst_balance_minimum: rds 突发余额最低
cloudwatch_aws_rds_burst_balance_sample_count: rds 突发平衡样本计数
cloudwatch_aws_rds_burst_balance_sum: rds 突发余额总和
cloudwatch_aws_rds_cpu_utilization_average: rds cpu 利用率平均值
cloudwatch_aws_rds_cpu_utilization_maximum: rds cpu 利用率最大值
cloudwatch_aws_rds_cpu_utilization_minimum: rds cpu 利用率最低
cloudwatch_aws_rds_cpu_utilization_sample_count: rds cpu 利用率样本计数
cloudwatch_aws_rds_cpu_utilization_sum: rds cpu 利用率总和
cloudwatch_aws_rds_database_connections_average: rds 数据库连接平均值
cloudwatch_aws_rds_database_connections_maximum: rds 数据库连接数最大值
cloudwatch_aws_rds_database_connections_minimum: rds 数据库连接最小
cloudwatch_aws_rds_database_connections_sample_count: rds 数据库连接样本数
cloudwatch_aws_rds_database_connections_sum: rds 数据库连接总和
cloudwatch_aws_rds_db_load_average: rds db 平均负载
cloudwatch_aws_rds_db_load_cpu_average: rds db 负载 cpu 平均值
cloudwatch_aws_rds_db_load_cpu_maximum: rds db 负载 cpu 最大值
cloudwatch_aws_rds_db_load_cpu_minimum: rds db 负载 cpu 最小值
cloudwatch_aws_rds_db_load_cpu_sample_count: rds db 加载 CPU 样本数
cloudwatch_aws_rds_db_load_cpu_sum: rds db 加载cpu总和
cloudwatch_aws_rds_db_load_maximum: rds 数据库负载最大值
cloudwatch_aws_rds_db_load_minimum: rds 数据库负载最小值
cloudwatch_aws_rds_db_load_non_cpu_average: rds 加载非 CPU 平均值
cloudwatch_aws_rds_db_load_non_cpu_maximum: rds 加载非 cpu 最大值
cloudwatch_aws_rds_db_load_non_cpu_minimum: rds 加载非 cpu 最小值
cloudwatch_aws_rds_db_load_non_cpu_sample_count: rds 加载非 cpu 样本计数
cloudwatch_aws_rds_db_load_non_cpu_sum: rds 加载非cpu总和
cloudwatch_aws_rds_db_load_sample_count: rds db 加载样本计数
cloudwatch_aws_rds_db_load_sum: rds db 负载总和
cloudwatch_aws_rds_disk_queue_depth_average: rds 磁盘队列深度平均值
cloudwatch_aws_rds_disk_queue_depth_maximum: rds 磁盘队列深度最大值
cloudwatch_aws_rds_disk_queue_depth_minimum: rds 磁盘队列深度最小值
cloudwatch_aws_rds_disk_queue_depth_sample_count: rds 磁盘队列深度样本计数
cloudwatch_aws_rds_disk_queue_depth_sum: rds 磁盘队列深度总和
cloudwatch_aws_rds_ebs_byte_balance__average: rds ebs 字节余额平均值
cloudwatch_aws_rds_ebs_byte_balance__maximum: rds ebs 字节余额最大值
cloudwatch_aws_rds_ebs_byte_balance__minimum: rds ebs 字节余额最低
cloudwatch_aws_rds_ebs_byte_balance__sample_count: rds ebs 字节余额样本数
cloudwatch_aws_rds_ebs_byte_balance__sum: rds ebs 字节余额总和
cloudwatch_aws_rds_ebsio_balance__average: rds ebsio 余额平均值
cloudwatch_aws_rds_ebsio_balance__maximum: rds ebsio 余额最大值
cloudwatch_aws_rds_ebsio_balance__minimum: rds ebsio 余额最低
cloudwatch_aws_rds_ebsio_balance__sample_count: rds ebsio 平衡样本计数
cloudwatch_aws_rds_ebsio_balance__sum: rds ebsio 余额总和
cloudwatch_aws_rds_free_storage_space_average: rds 免费存储空间平均
cloudwatch_aws_rds_free_storage_space_maximum: rds 最大可用存储空间
cloudwatch_aws_rds_free_storage_space_minimum: rds 最低可用存储空间
cloudwatch_aws_rds_free_storage_space_sample_count: rds 可用存储空间样本数
cloudwatch_aws_rds_free_storage_space_sum: rds 免费存储空间总和
cloudwatch_aws_rds_freeable_memory_average: rds 可用内存平均值
cloudwatch_aws_rds_freeable_memory_maximum: rds 最大可用内存
cloudwatch_aws_rds_freeable_memory_minimum: rds 最小可用内存
cloudwatch_aws_rds_freeable_memory_sample_count: rds 可释放内存样本数
cloudwatch_aws_rds_freeable_memory_sum: rds 可释放内存总和
cloudwatch_aws_rds_lvm_read_iops_average: rds lvm 读取 iops 平均值
cloudwatch_aws_rds_lvm_read_iops_maximum: rds lvm 读取 iops 最大值
cloudwatch_aws_rds_lvm_read_iops_minimum: rds lvm 读取 iops 最低
cloudwatch_aws_rds_lvm_read_iops_sample_count: rds lvm 读取 iops 样本计数
cloudwatch_aws_rds_lvm_read_iops_sum: rds lvm 读取 iops 总和
cloudwatch_aws_rds_lvm_write_iops_average: rds lvm 写入 iops 平均值
cloudwatch_aws_rds_lvm_write_iops_maximum: rds lvm 写入 iops 最大值
cloudwatch_aws_rds_lvm_write_iops_minimum: rds lvm 写入 iops 最低
cloudwatch_aws_rds_lvm_write_iops_sample_count: rds lvm 写入 iops 样本计数
cloudwatch_aws_rds_lvm_write_iops_sum: rds lvm 写入 iops 总和
cloudwatch_aws_rds_network_receive_throughput_average: rds 网络接收吞吐量平均
cloudwatch_aws_rds_network_receive_throughput_maximum: rds 网络接收吞吐量最大值
cloudwatch_aws_rds_network_receive_throughput_minimum: rds 网络接收吞吐量最小值
cloudwatch_aws_rds_network_receive_throughput_sample_count: rds 网络接收吞吐量样本计数
cloudwatch_aws_rds_network_receive_throughput_sum: rds 网络接收吞吐量总和
cloudwatch_aws_rds_network_transmit_throughput_average: rds 网络传输吞吐量平均值
cloudwatch_aws_rds_network_transmit_throughput_maximum: rds 网络传输吞吐量最大
cloudwatch_aws_rds_network_transmit_throughput_minimum: rds 网络传输吞吐量最小值
cloudwatch_aws_rds_network_transmit_throughput_sample_count: rds 网络传输吞吐量样本计数
cloudwatch_aws_rds_network_transmit_throughput_sum: rds 网络传输吞吐量总和
cloudwatch_aws_rds_read_iops_average: rds 读取 iops 平均值
cloudwatch_aws_rds_read_iops_maximum: rds 最大读取 iops
cloudwatch_aws_rds_read_iops_minimum: rds 读取 iops 最低
cloudwatch_aws_rds_read_iops_sample_count: rds 读取 iops 样本计数
cloudwatch_aws_rds_read_iops_sum: rds 读取 iops 总和
cloudwatch_aws_rds_read_latency_average: rds 读取延迟平均值
cloudwatch_aws_rds_read_latency_maximum: rds 读取延迟最大值
cloudwatch_aws_rds_read_latency_minimum: rds 最小读取延迟
cloudwatch_aws_rds_read_latency_sample_count: rds 读取延迟样本计数
cloudwatch_aws_rds_read_latency_sum: rds 读取延迟总和
cloudwatch_aws_rds_read_throughput_average: rds 读取吞吐量平均值
cloudwatch_aws_rds_read_throughput_maximum: rds 最大读取吞吐量
cloudwatch_aws_rds_read_throughput_minimum: rds 最小读取吞吐量
cloudwatch_aws_rds_read_throughput_sample_count: rds 读取吞吐量样本计数
cloudwatch_aws_rds_read_throughput_sum: rds 读取吞吐量总和
cloudwatch_aws_rds_swap_usage_average: rds 交换使用平均值
cloudwatch_aws_rds_swap_usage_maximum: rds 交换使用最大值
cloudwatch_aws_rds_swap_usage_minimum: rds 交换使用量最低
cloudwatch_aws_rds_swap_usage_sample_count: rds 交换使用示例计数
cloudwatch_aws_rds_swap_usage_sum: rds 交换使用总和
cloudwatch_aws_rds_write_iops_average: rds 写入 iops 平均值
cloudwatch_aws_rds_write_iops_maximum: rds 写入 iops 最大值
cloudwatch_aws_rds_write_iops_minimum: rds 写入 iops 最低
cloudwatch_aws_rds_write_iops_sample_count: rds 写入 iops 样本计数
cloudwatch_aws_rds_write_iops_sum: rds 写入 iops 总和
cloudwatch_aws_rds_write_latency_average: rds 写入延迟平均值
cloudwatch_aws_rds_write_latency_maximum: rds 最大写入延迟
cloudwatch_aws_rds_write_latency_minimum: rds 写入延迟最小值
cloudwatch_aws_rds_write_latency_sample_count: rds 写入延迟样本计数
cloudwatch_aws_rds_write_latency_sum: rds 写入延迟总和
cloudwatch_aws_rds_write_throughput_average: rds 写入吞吐量平均值
cloudwatch_aws_rds_write_throughput_maximum: rds 最大写入吞吐量
cloudwatch_aws_rds_write_throughput_minimum: rds 写入吞吐量最小值
cloudwatch_aws_rds_write_throughput_sample_count: rds 写入吞吐量样本计数
cloudwatch_aws_rds_write_throughput_sum: rds 写入吞吐量总和
en:
cpu_usage_idle: "CPU idle rate(unit%)"
cpu_usage_active: "CPU usage rate(unit%)"
cpu_usage_system: "CPU kernel state time proportion(unit%)"
cpu_usage_user: "CPU user attitude time proportion(unit%)"
cpu_usage_nice: "The proportion of low priority CPU time, that is, the process NICE value is adjusted to the CPU time between 1-19. Note here that the value range of NICE is -20 to 19, the larger the value, the lower the priority, the lower the priority(unit%)"
cpu_usage_iowait: "CPU waiting for I/O time proportion(unit%)"
cpu_usage_irq: "CPU processing hard interrupt time proportion(unit%)"
cpu_usage_softirq: "CPU processing soft interrupt time proportion(unit%)"
cpu_usage_steal: "In the virtual machine environment, there is this indicator, which means that the CPU is used by other virtual machines for the proportion of time.(unit%)"
cpu_usage_guest: "The time to run other operating systems by virtualization, that is, the proportion of CPU time running the virtual machine(unit%)"
cpu_usage_guest_nice: "The proportion of time to run the virtual machine at low priority(unit%)"
disk_free: "The remaining amount of the hard disk partition (unit: byte)"
disk_used: "Hard disk partitional use (unit: byte)"
disk_used_percent: "Hard disk partitional use rate (unit:%)"
disk_total: "Total amount of hard disk partition (unit: byte)"
disk_inodes_free: "Hard disk partition INODE remaining amount"
disk_inodes_used: "Hard disk partition INODE usage amount"
disk_inodes_total: "The total amount of hard disk partition INODE"
diskio_io_time: "From the perspective of the device perspective, the total time of I/O request, the I/O request in the queue is count (unit: millisecond), the counter type, you need to use the function to find the value"
diskio_iops_in_progress: "IO requests that have been assigned to device -driven and have not yet been completed, not included in the queue but not yet assigned to the device -driven IO request, Gauge type"
diskio_merged_reads: "The number of times of adjacent reading request Merge, the counter type"
diskio_merged_writes: "The number of times the request Merge writes, the counter type"
diskio_read_bytes: "The number of byte reads, the counter type, you need to use the function to find the Rate to use the value"
diskio_read_time: "The total time of reading request (unit: millisecond), the counter type, you need to use the function to find the Rate to have the value of use"
diskio_reads: "Read the number of requests, the counter type, you need to use the function to find the Rate to use the value"
diskio_weighted_io_time: "From the perspective of the I/O request perspective, I/O wait for the total time. If there are multiple I/O requests at the same time, the time will be superimposed (unit: millisecond)"
diskio_write_bytes: "The number of bytes written, the counter type, you need to use the function to find the Rate to use the value"
diskio_write_time: "The total time of the request (unit: millisecond), the counter type, you need to use the function to find the rate to have the value of use"
diskio_writes: "Write the number of requests, the counter type, you need to use the function to find the rate to use value"
kernel_boot_time: "Kernel startup time"
kernel_context_switches: "Number of kernel context switching times"
kernel_entropy_avail: "Entropy pool inside the Linux system"
kernel_interrupts: "Number of kernel interruption"
kernel_processes_forked: "ForK's process number"
mem_active: "The total number of memory (including Cache and BUFFER memory)"
mem_available: "Application can use memory numbers"
mem_available_percent: "Memory remaining percentage (0 ~ 100)"
mem_buffered: "Used to make buffer size for the file"
mem_cached: "The size of the memory used by the cache memory (equal to diskcache minus Swap Cache )"
mem_commit_limit: "According to the over allocation ratio ('vm.overCommit _ Ratio'), this is the current total memory that can be allocated on the system."
mem_committed_as: "Currently allocated on the system. It is the sum of the memory of all process applications"
mem_dirty: "Waiting to be written back to the memory size of the disk"
mem_free: "Senior memory number"
mem_high_free: "Unused high memory size"
mem_high_total: "The total memory size of the high memory (Highmem refers to all the physical memory that is higher than 860 MB of memory, the HighMem area is used for user programs, or for page cache. This area is not directly mapped to the kernel space. The kernels must use different methods to use this section of memory. )"
mem_huge_page_size: "The size of each big page"
mem_huge_pages_free: "The number of Huge Pages in the pool that have not been allocated"
mem_huge_pages_total: "Reserve the total number of Huge Pages"
mem_inactive: "Free memory (including the memory of free and avalible)"
mem_low_free: "Unused low size"
mem_low_total: "The total size of the low memory memory can achieve the same role of high memory, and it can be used by the kernel to record some of its own data structure"
mem_mapped: "The size of the mapping of equipment and files"
mem_page_tables: "The size of the index table of the management of the memory paging page"
mem_shared: "The total memory shared by multiple processes"
mem_slab: "The size of the kernel data structure cache can reduce the consumption of application and release memory"
mem_sreclaimable: "The size of the SLAB can be recovered"
mem_sunreclaim: "The size of the SLAB cannot be recovered(SUnreclaim+SReclaimableSlab)"
mem_swap_cached: "The size of the swap space used by the cache memory (cache memory), the memory that has been swapped out, but is still stored in the swapfile. Used to be quickly replaced when needed without opening the I/O port again"
mem_swap_free: "The size of the switching space is not used"
mem_swap_total: "The total size of the exchange space"
mem_total: "Total memory"
mem_used: "Memory number"
mem_used_percent: "The memory has been used by several percentage (0 ~ 100)"
mem_vmalloc_chunk: "The largest continuous unused vmalloc area"
mem_vmalloc_totalL: "You can vmalloc virtual memory size"
mem_vmalloc_used: "Vmalloc's virtual memory size"
mem_write_back: "The memory size of the disk is being written back to the disk"
mem_write_back_tmp: "Fuse is used to temporarily write back the memory of the buffer area"
net_bytes_recv: "The total number of packaging of the network card (bytes)"
net_bytes_sent: "Total number of network cards (bytes)"
net_drop_in: "The number of packets for network cards"
net_drop_out: "The number of packets issued by the network card"
net_err_in: "The number of incorrect packets of the network card"
net_err_out: "Number of incorrect number of network cards"
net_packets_recv: "Net card collection quantity"
net_packets_sent: "Number of network card issuance"
netstat_tcp_established: "ESTABLISHED status network link number"
netstat_tcp_fin_wait1: "FIN _ WAIT1 status network link number"
netstat_tcp_fin_wait2: "FIN _ WAIT2 status number of network links"
netstat_tcp_last_ack: "LAST_ ACK status number of network links"
netstat_tcp_listen: "Number of network links in Listen status"
netstat_tcp_syn_recv: "SYN _ RECV status number of network links"
netstat_tcp_syn_sent: "SYN _ SENT status number of network links"
netstat_tcp_time_wait: "Time _ WAIT status network link number"
netstat_udp_socket: "Number of network links in UDP status"
processes_blocked: "The number of processes in the unreprudible sleep state('U','D','L')"
processes_dead: "Number of processes in recycling('X')"
processes_idle: "Number of idle processes hanging('I')"
processes_paging: "Number of paging processes('P')"
processes_running: "Number of processes during operation('R')"
processes_sleeping: "Can interrupt the number of processes('S')"
processes_stopped: "Pushing status process number('T')"
processes_total: "Total process number"
processes_total_threads: "Number of threads"
processes_unknown: "Unknown status process number"
processes_zombies: "Number of zombies('Z')"
swap_used_percent: "SWAP space replace the data volume"
system_load1: "1 minute average load value"
system_load5: "5 minutes average load value"
system_load15: "15 minutes average load value"
system_n_users: "User number"
system_n_cpus: "CPU nuclear number"
system_uptime: "System startup time"
nginx_accepts: "Since Nginx started, the total number of connections has been established with the client"
nginx_active: "The current number of activity connections that Nginx is being processed is equal to Reading/Writing/Waiting"
nginx_handled: "Starting from Nginx, the total number of client connections that have been processed"
nginx_reading: "Reading the total number of connections on the http request header"
nginx_requests: "Since nginx is started, the total number of client requests processed, due to the existence of HTTP Krrp - Alive requests, this value will be greater than the handled value"
nginx_upstream_check_fall: "UPStream_CHECK module detects the number of back -end failures"
nginx_upstream_check_rise: "UPSTREAM _ Check module to detect the number of back -end"
nginx_upstream_check_status_code: "The state of the backstream is 1, and the down is 0"
nginx_waiting: "When keep-alive is enabled, this value is equal to active (reading+writing), which means that Nginx has processed the resident connection that is waiting for the next request command"
nginx_writing: "The total number of connections to send a response to the client"
http_response_content_length: "HTTP message entity transmission length"
http_response_http_response_code: "http response status code"
http_response_response_time: "When http ring application"
http_response_result_code: "URL detection result 0 is normal, otherwise the URL cannot be accessed"
http_response_content_length: HTTP消息实体的传输长度
http_response_http_response_code: http响应状态码
http_response_response_time: http响应用时
http_response_result_code: url探测结果0为正常否则url无法访问
# [mysqld_exporter]
mysql_global_status_uptime: The number of seconds that the server has been up.(Gauge)
@@ -489,7 +237,7 @@ redis_last_key_groups_scrape_duration_milliseconds: Duration of the last key gro
redis_last_slow_execution_duration_seconds: The amount of time needed for last slow execution, in seconds.
redis_latest_fork_seconds: The amount of time needed for last fork, in seconds.
redis_lazyfree_pending_objects: The number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB and FLUSHALL with the ASYNC option).
redis_master_repl_offset: The server's current replication offset.
redis_mem_clients_normal: Memory used by normal clients.(Gauge)
redis_mem_clients_slaves: Memory used by replica clients - Starting Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas don't trigger an increase of memory usage.
redis_mem_fragmentation_bytes: Delta between used_memory_rss and used_memory. Note that when the total fragmentation bytes is low (few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of an issue.
@@ -622,6 +370,8 @@ node_load15: cpu load 15m
# MEM
# 内核态
# 用户追踪已从交换区获取但尚未修改的页面的内存
node_memory_SwapCached_bytes: Memory that keeps track of pages that have been fetched from swap but not yet been modified
# 内核用于缓存数据结构供自己使用的内存
node_memory_Slab_bytes: Memory used by the kernel to cache data structures for its own use
# slab中可回收的部分
@@ -683,7 +433,7 @@ node_memory_SwapTotal_bytes: Memory information field SwapTotal_bytes
node_memory_SwapFree_bytes: Memory information field SwapFree_bytes
# DISK
node_filesystem_avail_bytes: Filesystem space available to non-root users in byte
node_filesystem_files_free: Filesystem space available to non-root users in byte
node_filesystem_free_bytes: Filesystem free space in bytes
node_filesystem_size_bytes: Filesystem size in bytes
node_filesystem_files_free: Filesystem total free file nodes
@@ -729,7 +479,7 @@ kafka_consumer_lag_millis: Current approximation of consumer lag for a ConsumerG
kafka_topic_partition_under_replicated_partition: 1 if Topic/Partition is under Replicated
# [zookeeper_exporter]
zk_znode_count: The total count of znodes stored
zk_ephemerals_count: The number of Ephemerals nodes
zk_watch_count: The number of watchers setup over Zookeeper nodes.
zk_approximate_data_size: Size of data in bytes that a zookeeper server has in its data tree
@@ -741,4 +491,4 @@ zk_open_file_descriptor_count: Number of file descriptors that a zookeeper serve
zk_max_file_descriptor_count: Maximum number of file descriptors that a zookeeper server can open
zk_avg_latency: Average time in milliseconds for requests to be processed
zk_min_latency: Minimum time in milliseconds for a request to be processed
zk_max_latency: Maximum time in milliseconds for a request to be processed


@@ -7,6 +7,13 @@ import (
"github.com/tidwall/gjson"
)
// a caller can be invoked for alerting notifications by implementing this interface
type inter interface {
Descript() string
Notify([]byte)
NotifyMaintainer([]byte)
}
// N9EPlugin implements the interface above
type N9EPlugin struct {
Name string
@@ -41,6 +48,6 @@ func (n *N9EPlugin) NotifyMaintainer(bs []byte) {
// will be loaded for alertingCall; the first letter must be capitalized so the symbol is exported
var N9eCaller = N9EPlugin{
Name: "N9EPlugin",
Description: "Notify by lib",
Description: "Notification by lib",
BuildAt: time.Now().Local().Format("2006/01/02 15:04:05"),
}
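
The `inter` interface above defines the notify-plugin contract: `Descript`, `Notify`, and `NotifyMaintainer`, plus an exported package-level variable for the loader to find. Below is a minimal sketch of a custom plugin against that contract; the `stdoutPlugin` type and its behavior are invented for illustration, and only the three methods and the exported-variable convention come from the diff:

```go
package main

import (
	"fmt"
	"time"
)

// inter mirrors the notify-plugin interface shown in the diff above.
type inter interface {
	Descript() string
	Notify([]byte)
	NotifyMaintainer([]byte)
}

// stdoutPlugin is a hypothetical plugin that just prints alert payloads.
type stdoutPlugin struct {
	Name    string
	BuildAt string
}

func (p *stdoutPlugin) Descript() string {
	return fmt.Sprintf("%s built at %s", p.Name, p.BuildAt)
}

// Notify receives the serialized alert event.
func (p *stdoutPlugin) Notify(bs []byte) {
	fmt.Printf("alert event: %s\n", string(bs))
}

// NotifyMaintainer receives events destined for maintainers.
func (p *stdoutPlugin) NotifyMaintainer(bs []byte) {
	fmt.Printf("maintainer event: %s\n", string(bs))
}

// Exported symbol, mirroring N9eCaller above: the first letter must be
// capitalized so the plugin loader can see it.
var StdoutCaller = stdoutPlugin{
	Name:    "StdoutPlugin",
	BuildAt: time.Now().Local().Format("2006/01/02 15:04:05"),
}

func main() {
	var c inter = &StdoutCaller
	c.Notify([]byte(`{"rule_name":"demo"}`))
}
```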


@@ -1,193 +0,0 @@
import json
import yaml
'''
Convert prometheus/vmalert rules into n9e rules.
Supports k8s rule ConfigMaps.
'''
rule_file = 'rules.yaml'
def convert_interval(interval):
if interval.endswith('s') or interval.endswith('S'):
return int(interval[:-1])
if interval.endswith('m') or interval.endswith('M'):
return int(interval[:-1]) * 60
if interval.endswith('h') or interval.endswith('H'):
return int(interval[:-1]) * 60 * 60
if interval.endswith('d') or interval.endswith('D'):
return int(interval[:-1]) * 60 * 60 * 24
return int(interval)
def convert_alert(rule, interval):
name = rule['alert']
prom_ql = rule['expr']
if 'for' in rule:
prom_for_duration = convert_interval(rule['for'])
else:
prom_for_duration = 0
prom_eval_interval = convert_interval(interval)
note = ''
if 'annotations' in rule:
for v in rule['annotations'].values():
note = v
break
append_tags = []
severity = 2
if 'labels' in rule:
for k, v in rule['labels'].items():
if k != 'severity':
append_tags.append('{}={}'.format(k, v))
continue
if v == 'critical':
severity = 1
elif v == 'info':
severity = 3
# elif v == 'warning':
# severity = 2
n9e_alert_rule = {
"name": name,
"note": note,
"severity": severity,
"disabled": 0,
"prom_for_duration": prom_for_duration,
"prom_ql": prom_ql,
"prom_eval_interval": prom_eval_interval,
"enable_stime": "00:00",
"enable_etime": "23:59",
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [],
"notify_repeat_step": 60,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": append_tags
}
return n9e_alert_rule
def convert_record(rule, interval):
name = rule['record']
prom_ql = rule['expr']
prom_eval_interval = convert_interval(interval)
note = ''
append_tags = []
if 'labels' in rule:
for k, v in rule['labels'].items():
append_tags.append('{}={}'.format(k, v))
n9e_record_rule = {
"name": name,
"note": note,
"disabled": 0,
"prom_ql": prom_ql,
"prom_eval_interval": prom_eval_interval,
"append_tags": append_tags
}
return n9e_record_rule
'''
example of rule group file
---
groups:
- name: example
rules:
- alert: HighRequestLatency
expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
for: 10m
labels:
severity: page
annotations:
summary: High request latency
'''
def deal_group(group):
"""
parse single prometheus/vmalert rule group
"""
alert_rules = []
record_rules = []
for rule_segment in group['groups']:
if 'interval' in rule_segment:
interval = rule_segment['interval']
else:
interval = '15s'
for rule in rule_segment['rules']:
if 'alert' in rule:
alert_rules.append(convert_alert(rule, interval))
else:
record_rules.append(convert_record(rule, interval))
return alert_rules, record_rules
'''
example of k8s rule configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rulefiles-0
data:
etcdrules.yaml: |
groups:
- name: etcd
rules:
- alert: etcdInsufficientMembers
annotations:
message: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value}}).'
expr: sum(up{job=~".*etcd.*"} == bool 1) by (job) < ((count(up{job=~".*etcd.*"})
by (job) + 1) / 2)
for: 3m
labels:
severity: critical
'''
def deal_configmap(rule_configmap):
"""
parse rule configmap from k8s
"""
all_record_rules = []
all_alert_rules = []
for _, rule_group_str in rule_configmap['data'].items():
rule_group = yaml.load(rule_group_str, Loader=yaml.FullLoader)
alert_rules, record_rules = deal_group(rule_group)
all_alert_rules.extend(alert_rules)
all_record_rules.extend(record_rules)
return all_alert_rules, all_record_rules
def main():
with open(rule_file, 'r') as f:
rule_config = yaml.load(f, Loader=yaml.FullLoader)
# if the file is a configmap from k8s, use the following instead
# alert_rules, record_rules = deal_configmap(rule_config)
alert_rules, record_rules = deal_group(rule_config)
with open("alert-rules.json", 'w') as fw:
json.dump(alert_rules, fw, indent=2, ensure_ascii=False)
with open("record-rules.json", 'w') as fw:
json.dump(record_rules, fw, indent=2, ensure_ascii=False)
if __name__ == '__main__':
main()


@@ -13,9 +13,6 @@ EngineDelay = 120
DisableUsageReport = false
# config | database
ReaderFrom = "config"
[Log]
# log write dir
Dir = "logs"
@@ -158,8 +155,15 @@ BasicAuthUser = ""
BasicAuthPass = ""
# timeout settings, unit: ms
Timeout = 30000
DialTimeout = 3000
MaxIdleConnsPerHost = 100
DialTimeout = 10000
TLSHandshakeTimeout = 30000
ExpectContinueTimeout = 1000
IdleConnTimeout = 90000
# time duration, unit: ms
KeepAlive = 30000
MaxConnsPerHost = 0
MaxIdleConns = 100
MaxIdleConnsPerHost = 10
[WriterOpt]
# queue channel count
@@ -186,12 +190,6 @@ KeepAlive = 30000
MaxConnsPerHost = 0
MaxIdleConns = 100
MaxIdleConnsPerHost = 100
# [[Writers.WriteRelabels]]
# Action = "replace"
# SourceLabels = ["__address__"]
# Regex = "([^:]+)(?::\\d+)?"
# Replacement = "$1:80"
# TargetLabel = "__address__"
# [[Writers]]
# Url = "http://127.0.0.1:7201/api/v1/prom/remote/write"


@@ -1,26 +0,0 @@
# Alert message template files
The variables available in templates correspond to the `AlertCurEvent` object
For template syntax, see [html/template](https://pkg.go.dev/html/template)
## How to add a metric-details URL to an alert template
Assume the web address is http://127.0.0.1:18000/; in practice, substitute your own web address
Add the following lines to the alert template:
* dingtalk / wecom / feishu
```markdown
[Metric details](http://127.0.0.1:18000/metric/explorer?promql={{ .PromQl | escape }})
```
* mailbody
```html
<tr>
<th>Metric details:</th>
<td>
<a href="http://127.0.0.1:18000/metric/explorer?promql={{ .PromQl | escape }}" target="_blank">View</a>
</td>
</tr>
```
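
The templates above are rendered by Go's template engine and pipe `.PromQl` through an `escape` helper. Below is a minimal sketch of how such a helper could be registered, under the assumption (mine, not confirmed by these docs) that `escape` amounts to URL query escaping via `url.QueryEscape`; the event shape and wiring are illustrative. The docs reference html/template, and text/template shares the same FuncMap API:

```go
package main

import (
	"net/url"
	"os"
	"text/template"
)

func main() {
	// Hypothetical event: only the PromQl field is taken from the docs above.
	event := struct{ PromQl string }{PromQl: `cpu_usage_active > 80`}

	// Assumption: "escape" URL-escapes the expression for the explorer link.
	funcs := template.FuncMap{"escape": url.QueryEscape}

	tpl := template.Must(template.New("dingtalk").Funcs(funcs).Parse(
		"[Metric details](http://127.0.0.1:18000/metric/explorer?promql={{ .PromQl | escape }})\n"))

	_ = tpl.Execute(os.Stdout, event)
}
```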


@@ -4,9 +4,6 @@ RunMode = "release"
# # custom i18n dict config
# I18N = "./etc/i18n.json"
# # custom i18n request header key
# I18NHeaderKey = "X-Language"
# metrics descriptions
MetricsYamlFile = "./etc/metrics.yaml"
@@ -201,10 +198,4 @@ Address = "http://127.0.0.1:10090"
BasicAuthUser = "ibex"
BasicAuthPass = "ibex"
# unit: ms
Timeout = 3000
[TargetMetrics]
TargetUp = '''max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)'''
LoadPerCore = '''max(max_over_time(system_load_norm_1{ident=~"(%s)"}[%dm])) by (ident)'''
MemUtil = '''100-max(max_over_time(mem_available_percent{ident=~"(%s)"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''
Timeout = 3000


@@ -46,7 +46,6 @@ type AlertCurEvent struct {
LastEvalTime int64 `json:"last_eval_time" gorm:"-"` // for notify.py: time of the previous evaluation
LastSentTime int64 `json:"last_sent_time" gorm:"-"` // time of the last notification
NotifyCurNumber int `json:"notify_cur_number"` // notify: current number
FirstTriggerTime int64 `json:"first_trigger_time"` // first trigger time of the consecutive alerts
}
func (e *AlertCurEvent) TableName() string {
@@ -181,7 +180,6 @@ func (e *AlertCurEvent) ToHis() *AlertHisEvent {
RecoverTime: recoverTime,
LastEvalTime: e.LastEvalTime,
NotifyCurNumber: e.NotifyCurNumber,
FirstTriggerTime: e.FirstTriggerTime,
}
}


@@ -38,8 +38,7 @@ type AlertHisEvent struct {
LastEvalTime int64 `json:"last_eval_time"`
Tags string `json:"-"`
TagsJSON []string `json:"tags" gorm:"-"`
NotifyCurNumber int `json:"notify_cur_number"` // notify: current number
FirstTriggerTime int64 `json:"first_trigger_time"` // first trigger time of the consecutive alerts
NotifyCurNumber int `json:"notify_cur_number"` // notify: current number
}
func (e *AlertHisEvent) TableName() string {


@@ -13,10 +13,10 @@ import (
type TagFilter struct {
Key string `json:"key"` // tag key
Func string `json:"func"` // `==` | `=~` | `in` | `!=` | `!~` | `not in`
Func string `json:"func"` // == | =~ | in
Value string `json:"value"` // tag value
Regexp *regexp.Regexp // parse value to regexp if func = '=~' or '!~'
Vset map[string]struct{} // parse value to regexp if func = 'in' or 'not in'
Regexp *regexp.Regexp // parse value to regexp if func = '=~'
Vset map[string]struct{} // parse value to regexp if func = 'in'
}
type AlertMute struct {
@@ -71,7 +71,7 @@ func (m *AlertMute) Verify() error {
}
if m.Etime <= m.Btime {
return fmt.Errorf("oops... etime(%d) <= btime(%d)", m.Etime, m.Btime)
return fmt.Errorf("Oops... etime(%d) <= btime(%d)", m.Etime, m.Btime)
}
if err := m.Parse(); err != nil {


@@ -336,7 +336,7 @@ func AlertRuleGetsByCluster(cluster string) ([]*AlertRule, error) {
return lr, err
}
func AlertRulesGetsBy(prods []string, query, algorithm string) ([]*AlertRule, error) {
func AlertRulesGetsBy(prods []string, query string) ([]*AlertRule, error) {
session := DB().Where("prod in (?)", prods)
if query != "" {
@@ -347,10 +347,6 @@ func AlertRulesGetsBy(prods []string, query, algorithm string) ([]*AlertRule, er
}
}
if algorithm != "" {
session = session.Where("algorithm = ?", algorithm)
}
var lst []*AlertRule
err := session.Find(&lst).Error
if err == nil {


@@ -88,12 +88,12 @@ func (s *AlertSubscribe) Parse() error {
}
for i := 0; i < len(s.ITags); i++ {
if s.ITags[i].Func == "=~" || s.ITags[i].Func == "!~" {
if s.ITags[i].Func == "=~" {
s.ITags[i].Regexp, err = regexp.Compile(s.ITags[i].Value)
if err != nil {
return err
}
} else if s.ITags[i].Func == "in" || s.ITags[i].Func == "not in" {
} else if s.ITags[i].Func == "in" {
arr := strings.Fields(s.ITags[i].Value)
s.ITags[i].Vset = make(map[string]struct{})
for j := 0; j < len(arr); j++ {
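
For the `in` func, `Parse` splits the value on whitespace with `strings.Fields` and stores the tokens in `Vset`, so matching becomes a map lookup. A hedged sketch of what applying a parsed filter amounts to; the `match` helper is an illustration, not the project's actual matcher:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// TagFilter mirrors the struct from the diff above (post-change funcs only).
type TagFilter struct {
	Key    string
	Func   string // == | =~ | in
	Value  string
	Regexp *regexp.Regexp
	Vset   map[string]struct{}
}

// match shows how a parsed filter would be applied to one tag value.
func (f *TagFilter) match(tagValue string) bool {
	switch f.Func {
	case "==":
		return tagValue == f.Value
	case "=~":
		return f.Regexp.MatchString(tagValue)
	case "in":
		_, ok := f.Vset[tagValue]
		return ok
	}
	return false
}

func main() {
	// Parse step for "in", as in AlertSubscribe.Parse: strings.Fields -> Vset.
	f := &TagFilter{Key: "service", Func: "in", Value: "api web"}
	f.Vset = make(map[string]struct{})
	for _, v := range strings.Fields(f.Value) {
		f.Vset[v] = struct{}{}
	}
	fmt.Println(f.match("web"), f.match("db")) // true false
}
```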


@@ -1,81 +0,0 @@
package models
import "time"
type AlertingEngines struct {
Id int64 `json:"id" gorm:"primaryKey"`
Instance string `json:"instance"`
Cluster string `json:"cluster"` // reader cluster
Clock int64 `json:"clock"`
}
func (e *AlertingEngines) TableName() string {
return "alerting_engines"
}
// UpdateCluster: in the UI, users assign each n9e-server the target cluster it should be associated with
func (e *AlertingEngines) UpdateCluster(c string) error {
e.Cluster = c
return DB().Model(e).Select("cluster").Updates(e).Error
}
// AlertingEngineGetCluster returns the cluster name for a given instance name
func AlertingEngineGetCluster(instance string) (string, error) {
var objs []AlertingEngines
err := DB().Where("instance=?", instance).Find(&objs).Error
if err != nil {
return "", err
}
if len(objs) == 0 {
return "", nil
}
return objs[0].Cluster, nil
}
// AlertingEngineGets fetches the list; users need to see all n9e-server instances on the page and then assign a cluster to each
func AlertingEngineGets(where string, args ...interface{}) ([]*AlertingEngines, error) {
var objs []*AlertingEngines
var err error
session := DB().Order("instance")
if where == "" {
err = session.Find(&objs).Error
} else {
err = session.Where(where, args...).Find(&objs).Error
}
return objs, err
}
func AlertingEngineGetsInstances(where string, args ...interface{}) ([]string, error) {
var arr []string
var err error
session := DB().Model(new(AlertingEngines)).Order("instance")
if where == "" {
err = session.Pluck("instance", &arr).Error
} else {
err = session.Where(where, args...).Pluck("instance", &arr).Error
}
return arr, err
}
func AlertingEngineHeartbeat(instance string) error {
var total int64
err := DB().Model(new(AlertingEngines)).Where("instance=?", instance).Count(&total).Error
if err != nil {
return err
}
if total == 0 {
// insert
err = DB().Create(&AlertingEngines{
Instance: instance,
Clock: time.Now().Unix(),
}).Error
} else {
// update
err = DB().Model(new(AlertingEngines)).Where("instance=?", instance).Update("clock", time.Now().Unix()).Error
}
return err
}


@@ -71,20 +71,6 @@ func (b *Board) Del() error {
})
}
func BoardGetByID(id int64) (*Board, error) {
var lst []*Board
err := DB().Where("id = ?", id).Find(&lst).Error
if err != nil {
return nil, err
}
if len(lst) == 0 {
return nil, nil
}
return lst[0], nil
}
// BoardGet for detail page
func BoardGet(where string, args ...interface{}) (*Board, error) {
var lst []*Board


@@ -83,15 +83,14 @@ func (re *RecordingRule) Add() error {
return err
}
// recording rules with duplicate names occur in real-world scenarios, so no duplicate check is needed
//exists, err := RecordingRuleExists(0, re.GroupId, re.Cluster, re.Name)
//if err != nil {
// return err
//}
//
//if exists {
// return errors.New("RecordingRule already exists")
//}
exists, err := RecordingRuleExists(0, re.GroupId, re.Cluster, re.Name)
if err != nil {
return err
}
if exists {
return errors.New("RecordingRule already exists")
}
now := time.Now().Unix()
re.CreateAt = now
@@ -101,16 +100,15 @@ func (re *RecordingRule) Add() error {
}
func (re *RecordingRule) Update(ref RecordingRule) error {
// recording rules with duplicate names occur in real-world scenarios, so no duplicate check is needed
//if re.Name != ref.Name {
// exists, err := RecordingRuleExists(re.Id, re.GroupId, re.Cluster, ref.Name)
// if err != nil {
// return err
// }
// if exists {
// return errors.New("RecordingRule already exists")
// }
//}
if re.Name != ref.Name {
exists, err := RecordingRuleExists(re.Id, re.GroupId, re.Cluster, ref.Name)
if err != nil {
return err
}
if exists {
return errors.New("RecordingRule already exists")
}
}
ref.FE2DB()
ref.Id = re.Id


@@ -1,198 +0,0 @@
package models
import (
"crypto/md5"
"fmt"
"regexp"
"sort"
"strings"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/prompb"
)
const (
Replace Action = "replace"
Keep Action = "keep"
Drop Action = "drop"
HashMod Action = "hashmod"
LabelMap Action = "labelmap"
LabelDrop Action = "labeldrop"
LabelKeep Action = "labelkeep"
Lowercase Action = "lowercase"
Uppercase Action = "uppercase"
)
type Action string
type Regexp struct {
*regexp.Regexp
}
type RelabelConfig struct {
SourceLabels model.LabelNames
Separator string
Regex interface{}
Modulus uint64
TargetLabel string
Replacement string
Action Action
}
func Process(labels []*prompb.Label, cfgs ...*RelabelConfig) []*prompb.Label {
for _, cfg := range cfgs {
labels = relabel(labels, cfg)
if labels == nil {
return nil
}
}
return labels
}
func getValue(ls []*prompb.Label, name model.LabelName) string {
for _, l := range ls {
if l.Name == string(name) {
return l.Value
}
}
return ""
}
type LabelBuilder struct {
LabelSet map[string]string
}
func newBuilder(ls []*prompb.Label) *LabelBuilder {
lset := make(map[string]string, len(ls))
for _, l := range ls {
lset[l.Name] = l.Value
}
return &LabelBuilder{LabelSet: lset}
}
func (l *LabelBuilder) set(k, v string) *LabelBuilder {
if v == "" {
return l.del(k)
}
l.LabelSet[k] = v
return l
}
func (l *LabelBuilder) del(ns ...string) *LabelBuilder {
for _, n := range ns {
delete(l.LabelSet, n)
}
return l
}
func (l *LabelBuilder) labels() []*prompb.Label {
ls := make([]*prompb.Label, 0, len(l.LabelSet))
if len(l.LabelSet) == 0 {
return ls
}
for k, v := range l.LabelSet {
ls = append(ls, &prompb.Label{
Name: k,
Value: v,
})
}
sort.Slice(ls, func(i, j int) bool {
return ls[i].Name > ls[j].Name
})
return ls
}
func relabel(lset []*prompb.Label, cfg *RelabelConfig) []*prompb.Label {
values := make([]string, 0, len(cfg.SourceLabels))
for _, ln := range cfg.SourceLabels {
values = append(values, getValue(lset, ln))
}
regx := cfg.Regex.(Regexp)
val := strings.Join(values, cfg.Separator)
lb := newBuilder(lset)
switch cfg.Action {
case Drop:
if regx.MatchString(val) {
return nil
}
case Keep:
if !regx.MatchString(val) {
return nil
}
case Replace:
indexes := regx.FindStringSubmatchIndex(val)
if indexes == nil {
break
}
target := model.LabelName(regx.ExpandString([]byte{}, cfg.TargetLabel, val, indexes))
if !target.IsValid() {
lb.del(cfg.TargetLabel)
break
}
res := regx.ExpandString([]byte{}, cfg.Replacement, val, indexes)
if len(res) == 0 {
lb.del(cfg.TargetLabel)
break
}
lb.set(string(target), string(res))
case Lowercase:
lb.set(cfg.TargetLabel, strings.ToLower(val))
case Uppercase:
lb.set(cfg.TargetLabel, strings.ToUpper(val))
case HashMod:
mod := sum64(md5.Sum([]byte(val))) % cfg.Modulus
lb.set(cfg.TargetLabel, fmt.Sprintf("%d", mod))
case LabelMap:
for _, l := range lset {
if regx.MatchString(l.Name) {
res := regx.ReplaceAllString(l.Name, cfg.Replacement)
lb.set(res, l.Value)
}
}
case LabelDrop:
for _, l := range lset {
if regx.MatchString(l.Name) {
lb.del(l.Name)
}
}
case LabelKeep:
for _, l := range lset {
if !regx.MatchString(l.Name) {
lb.del(l.Name)
}
}
default:
panic(fmt.Errorf("relabel: unknown relabel action type %q", cfg.Action))
}
return lb.labels()
}
func sum64(hash [md5.Size]byte) uint64 {
var s uint64
for i, b := range hash {
shift := uint64((md5.Size - i - 1) * 8)
s |= uint64(b) << shift
}
return s
}
func NewRegexp(s string) (Regexp, error) {
regex, err := regexp.Compile("^(?:" + s + ")$")
return Regexp{Regexp: regex}, err
}
func MustNewRegexp(s string) Regexp {
re, err := NewRegexp(s)
if err != nil {
panic(err)
}
return re
}
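
A minimal usage sketch of the relabel helpers above (the labels, config, and `exampleRelabelDrop` are hypothetical, not part of this commit; assumes the package's imports plus fmt): one Drop rule that discards any series whose env label starts with "dev".

```go
// Hypothetical example showing Process with a single Drop rule.
func exampleRelabelDrop() {
	cfg := &RelabelConfig{
		SourceLabels: model.LabelNames{"env"},
		Separator:    ";",
		Regex:        MustNewRegexp("dev.*"), // anchored as ^(?:dev.*)$
		Action:       Drop,
	}
	labels := []*prompb.Label{
		{Name: "__name__", Value: "cpu_usage_idle"},
		{Name: "env", Value: "dev-01"},
	}
	if Process(labels, cfg) == nil {
		fmt.Println("series dropped") // "dev-01" matched dev.*, so Process returned nil
	}
}
```

Process applies the configs in order and returns nil as soon as a Drop or Keep rule filters the series out, so callers can treat a nil result as "discard this series".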

View File

@@ -20,11 +20,6 @@ type Target struct {
TagsJSON []string `json:"tags" gorm:"-"`
TagsMap map[string]string `json:"-" gorm:"-"` // internal use, append tags to series
UpdateAt int64 `json:"update_at"`
TargetUp float64 `json:"target_up" gorm:"-"`
LoadPerCore float64 `json:"load_per_core" gorm:"-"`
MemUtil float64 `json:"mem_util" gorm:"-"`
DiskUtil float64 `json:"disk_util" gorm:"-"`
}
func (t *Target) TableName() string {

View File

@@ -450,21 +450,6 @@ func (u *User) BusiGroups(limit int, query string, all ...bool) ([]BusiGroup, er
var lst []BusiGroup
if u.IsAdmin() || (len(all) > 0 && all[0]) {
err := session.Where("name like ?", "%"+query+"%").Find(&lst).Error
if err != nil {
return lst, err
}
if len(lst) == 0 && len(query) > 0 {
// Hidden feature, rarely advertised: the query may actually be an ident, so if the SQL above found nothing, try looking it up as an ident.
var t *Target
t, err = TargetGet("ident=?", query)
if err != nil {
return lst, err
}
err = DB().Order("name").Limit(limit).Where("id=?", t.GroupId).Find(&lst).Error
}
return lst, err
}
@@ -483,22 +468,6 @@ func (u *User) BusiGroups(limit int, query string, all ...bool) ([]BusiGroup, er
}
err = session.Where("id in ?", busiGroupIds).Where("name like ?", "%"+query+"%").Find(&lst).Error
if err != nil {
return nil, err
}
if len(lst) == 0 && len(query) > 0 {
var t *Target
t, err = TargetGet("ident=?", query)
if err != nil {
return lst, err
}
if slice.ContainsInt64(busiGroupIds, t.GroupId) {
err = DB().Order("name").Limit(limit).Where("id=?", t.GroupId).Find(&lst).Error
}
}
return lst, err
}

View File

@@ -6,11 +6,9 @@ import (
"io/ioutil"
"net/http"
"time"
"github.com/toolkits/pkg/logger"
)
func PostJSON(url string, timeout time.Duration, v interface{}, retries ...int) (response []byte, code int, err error) {
func PostJSON(url string, timeout time.Duration, v interface{}) (response []byte, code int, err error) {
var bs []byte
bs, err = json.Marshal(v)
@@ -28,29 +26,7 @@ func PostJSON(url string, timeout time.Duration, v interface{}, retries ...int)
req.Header.Set("Content-Type", "application/json")
var resp *http.Response
if len(retries) > 0 {
for i := 0; i < retries[0]; i++ {
resp, err = client.Do(req)
if err == nil {
break
}
tryagain := ""
if i+1 < retries[0] {
tryagain = " try again"
}
logger.Warningf("failed to curl %s error: %s"+tryagain, url, err)
if i+1 < retries[0] {
time.Sleep(time.Millisecond * 200)
}
}
} else {
resp, err = client.Do(req)
}
resp, err = client.Do(req)
if err != nil {
return
}
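
With the retries variadic dropped here, callers that still need retry behavior have to wrap PostJSON themselves. A minimal caller-side sketch under that assumption (retryPostJSON is a hypothetical helper, not part of this commit; assumes the time and poster packages are imported):

```go
// Hypothetical caller-side retry wrapper around the simplified poster.PostJSON.
func retryPostJSON(url string, timeout time.Duration, v interface{}, retries int) (response []byte, code int, err error) {
	for i := 0; i < retries; i++ {
		response, code, err = poster.PostJSON(url, timeout, v)
		if err == nil {
			return
		}
		if i+1 < retries {
			time.Sleep(200 * time.Millisecond) // brief pause before the next attempt
		}
	}
	return
}
```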

View File

@@ -12,26 +12,25 @@ import (
// ClientConfig represents the standard client TLS config.
type ClientConfig struct {
TLSCA string `toml:"tls_ca"`
TLSCert string `toml:"tls_cert"`
TLSKey string `toml:"tls_key"`
TLSKeyPwd string `toml:"tls_key_pwd"`
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
ServerName string `toml:"tls_server_name"`
TLSMinVersion string `toml:"tls_min_version"`
TLSMaxVersion string `toml:"tls_max_version"`
TLSCA string
TLSCert string
TLSKey string
TLSKeyPwd string
InsecureSkipVerify bool
ServerName string
TLSMinVersion string
}
// ServerConfig represents the standard server TLS config.
type ServerConfig struct {
TLSCert string `toml:"tls_cert"`
TLSKey string `toml:"tls_key"`
TLSKeyPwd string `toml:"tls_key_pwd"`
TLSAllowedCACerts []string `toml:"tls_allowed_cacerts"`
TLSCipherSuites []string `toml:"tls_cipher_suites"`
TLSMinVersion string `toml:"tls_min_version"`
TLSMaxVersion string `toml:"tls_max_version"`
TLSAllowedDNSNames []string `toml:"tls_allowed_dns_names"`
TLSCert string
TLSKey string
TLSKeyPwd string
TLSAllowedCACerts []string
TLSCipherSuites []string
TLSMinVersion string
TLSMaxVersion string
TLSAllowedDNSNames []string
}
// TLSConfig returns a tls.Config, may be nil without error if TLS is not
@@ -71,16 +70,6 @@ func (c *ClientConfig) TLSConfig() (*tls.Config, error) {
tlsConfig.MinVersion = tls.VersionTLS13
}
if c.TLSMaxVersion == "1.0" {
tlsConfig.MaxVersion = tls.VersionTLS10
} else if c.TLSMaxVersion == "1.1" {
tlsConfig.MaxVersion = tls.VersionTLS11
} else if c.TLSMaxVersion == "1.2" {
tlsConfig.MaxVersion = tls.VersionTLS12
} else if c.TLSMaxVersion == "1.3" {
tlsConfig.MaxVersion = tls.VersionTLS13
}
return tlsConfig, nil
}

View File

@@ -2,13 +2,11 @@ package tplx
import (
"html/template"
"net/url"
"regexp"
"strings"
)
var TemplateFuncMap = template.FuncMap{
"escape": url.PathEscape,
"unescaped": Unescaped,
"urlconvert": Urlconvert,
"timeformat": Timeformat,

View File

@@ -66,7 +66,7 @@ func SendDingtalk(message DingtalkMessage) {
}
}
res, code, err := poster.PostJSON(ur, time.Second*5, body, 3)
res, code, err := poster.PostJSON(ur, time.Second*5, body)
if err != nil {
logger.Errorf("dingtalk_sender: result=fail url=%s code=%d error=%v response=%s", ur, code, err, string(res))
} else {

View File

@@ -42,7 +42,7 @@ func SendFeishu(message FeishuMessage) {
},
}
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
res, code, err := poster.PostJSON(url, time.Second*5, body)
if err != nil {
logger.Errorf("feishu_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {

View File

@@ -31,7 +31,7 @@ func SendWecom(message WecomMessage) {
},
}
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
res, code, err := poster.PostJSON(url, time.Second*5, body)
if err != nil {
logger.Errorf("wecom_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {

View File

@@ -14,7 +14,6 @@ import (
"github.com/gin-gonic/gin"
"github.com/koding/multiconfig"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/notifier"
"github.com/didi/nightingale/v5/src/pkg/httpx"
"github.com/didi/nightingale/v5/src/pkg/logx"
@@ -70,10 +69,6 @@ func MustLoad(fpaths ...string) {
C.EngineDelay = 120
}
if C.ReaderFrom == "" {
C.ReaderFrom = "config"
}
if C.Heartbeat.IP == "" {
// auto detect
// C.Heartbeat.IP = fmt.Sprint(GetOutboundIP())
@@ -85,11 +80,7 @@ func MustLoad(fpaths ...string) {
os.Exit(1)
}
if strings.Contains(hostname, "localhost") {
fmt.Println("Warning! hostname contains substring localhost, setting a more unique hostname is recommended")
}
C.Heartbeat.IP = hostname
C.Heartbeat.IP = hostname + "+" + fmt.Sprint(os.Getpid())
// if C.Heartbeat.IP == "" {
// fmt.Println("heartbeat ip auto got is blank")
@@ -152,33 +143,6 @@ func MustLoad(fpaths ...string) {
C.WriterOpt.QueueCount = 100
}
for _, write := range C.Writers {
for _, relabel := range write.WriteRelabels {
regex, ok := relabel.Regex.(string)
if !ok {
log.Println("Regex field must be a string")
os.Exit(1)
}
if regex == "" {
regex = "(.*)"
}
relabel.Regex = models.MustNewRegexp(regex)
if relabel.Separator == "" {
relabel.Separator = ";"
}
if relabel.Action == "" {
relabel.Action = "replace"
}
if relabel.Replacement == "" {
relabel.Replacement = "$1"
}
}
}
fmt.Println("heartbeat.ip:", C.Heartbeat.IP)
fmt.Printf("heartbeat.interval: %dms\n", C.Heartbeat.Interval)
})
@@ -191,7 +155,6 @@ type Config struct {
AnomalyDataApi []string
EngineDelay int64
DisableUsageReport bool
ReaderFrom string
Log logx.Config
HTTP httpx.Config
BasicAuth gin.Accounts
@@ -212,9 +175,15 @@ type ReaderOptions struct {
BasicAuthUser string
BasicAuthPass string
Timeout int64
DialTimeout int64
Timeout int64
DialTimeout int64
TLSHandshakeTimeout int64
ExpectContinueTimeout int64
IdleConnTimeout int64
KeepAlive int64
MaxConnsPerHost int
MaxIdleConns int
MaxIdleConnsPerHost int
Headers []string
@@ -237,8 +206,6 @@ type WriterOptions struct {
MaxIdleConnsPerHost int
Headers []string
WriteRelabels []*models.RelabelConfig
}
type WriterGlobalOpt struct {
@@ -318,7 +285,7 @@ func (c *Config) IsDebugMode() bool {
// Get preferred outbound ip of this machine
func GetOutboundIP() net.IP {
conn, err := net.Dial("udp", "223.5.5.5:80")
conn, err := net.Dial("udp", "8.8.8.8:80")
if err != nil {
fmt.Println("auto get outbound ip fail:", err)
os.Exit(1)

View File

@@ -32,7 +32,7 @@ func callback(event *models.AlertCurEvent) {
url = "http://" + url
}
resp, code, err := poster.PostJSON(url, 5*time.Second, event, 3)
resp, code, err := poster.PostJSON(url, 5*time.Second, event)
if err != nil {
logger.Errorf("event_callback(rule_id=%d url=%s) fail, resp: %s, err: %v, code: %d", event.RuleId, url, string(resp), err, code)
} else {

View File

@@ -2,18 +2,15 @@ package engine
import (
"context"
"fmt"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/server/common/sender"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
func Start(ctx context.Context) error {
err := reloadTpls()
err := initTpls()
if err != nil {
return err
}
@@ -28,28 +25,9 @@ func Start(ctx context.Context) error {
go sender.StartEmailSender()
go initReporter(func(em map[ErrorType]uint64) {
if len(em) == 0 {
return
}
title := fmt.Sprintf("server %s has some errors, please check server logs for detail", config.C.Heartbeat.IP)
msg := ""
for k, v := range em {
msg += fmt.Sprintf("error: %s, count: %d\n", k, v)
}
notifyToMaintainer(title, msg)
})
return nil
}
func Reload() {
err := reloadTpls()
if err != nil {
logger.Error("engine reload err:", err)
}
}
func reportQueueSize() {
for {
time.Sleep(time.Second)

View File

@@ -6,7 +6,7 @@ import (
)
// If the optional clock parameter is passed, use the time it represents; otherwise take TriggerTime from the event's fields
func IsMuted(event *models.AlertCurEvent, clock ...int64) bool {
func isMuted(event *models.AlertCurEvent, clock ...int64) bool {
mutes, has := memsto.AlertMuteCache.Gets(event.GroupId)
if !has || len(mutes) == 0 {
return false

View File

@@ -10,7 +10,6 @@ import (
"os/exec"
"path"
"strings"
"sync"
"time"
"github.com/pkg/errors"
@@ -30,12 +29,9 @@ import (
"github.com/didi/nightingale/v5/src/storage"
)
var (
tpls map[string]*template.Template
rwLock sync.RWMutex
)
var tpls = make(map[string]*template.Template)
func reloadTpls() error {
func initTpls() error {
if config.C.Alerting.TemplatesDir == "" {
config.C.Alerting.TemplatesDir = path.Join(runner.Cwd, "etc", "template")
}
@@ -60,7 +56,6 @@ func reloadTpls() error {
return errors.New("no tpl files under " + config.C.Alerting.TemplatesDir)
}
tmpTpls := make(map[string]*template.Template)
for i := 0; i < len(tplFiles); i++ {
tplpath := path.Join(config.C.Alerting.TemplatesDir, tplFiles[i])
@@ -69,12 +64,9 @@ func reloadTpls() error {
return errors.WithMessage(err, "failed to parse tpl: "+tplpath)
}
tmpTpls[tplFiles[i]] = tpl
tpls[tplFiles[i]] = tpl
}
rwLock.Lock()
tpls = tmpTpls
rwLock.Unlock()
return nil
}
@@ -86,9 +78,6 @@ type Notice struct {
func genNotice(event *models.AlertCurEvent) Notice {
// build notice body with templates
ntpls := make(map[string]string)
rwLock.RLock()
defer rwLock.RUnlock()
for filename, tpl := range tpls {
var body bytes.Buffer
if err := tpl.Execute(&body, event); err != nil {

View File

@@ -2,6 +2,7 @@ package engine
import (
"encoding/json"
"runtime"
"time"
"github.com/didi/nightingale/v5/src/models"
@@ -19,30 +20,20 @@ type MaintainMessage struct {
Content string `json:"content"`
}
// notify the maintainer to handle the error
func notifyToMaintainer(title, msg string) {
logger.Errorf("notifyToMaintainer, msg: %s", msg)
users := memsto.UserCache.GetMaintainerUsers()
if len(users) == 0 {
func notifyMaintainerWithPlugin(e error, title, triggerTime string, users []*models.User) {
if !config.C.Alerting.CallPlugin.Enable {
return
}
triggerTime := time.Now().Format("2006/01/02 - 15:04:05")
notifyMaintainerWithPlugin(title, msg, triggerTime, users)
notifyMaintainerWithBuiltin(title, msg, triggerTime, users)
}
func notifyMaintainerWithPlugin(title, msg, triggerTime string, users []*models.User) {
if !config.C.Alerting.CallPlugin.Enable {
if runtime.GOOS == "windows" {
logger.Errorf("call notify plugin on unsupported os: %s", runtime.GOOS)
return
}
stdinBytes, err := json.Marshal(MaintainMessage{
Tos: users,
Title: title,
Content: "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime,
Content: "Title: " + title + "\nContent: " + e.Error() + "\nTime: " + triggerTime,
})
if err != nil {
@@ -54,7 +45,22 @@ func notifyMaintainerWithPlugin(title, msg, triggerTime string, users []*models.
logger.Debugf("notify maintainer with plugin done")
}
func notifyMaintainerWithBuiltin(title, msg, triggerTime string, users []*models.User) {
// notify the maintainer to handle the error
func notifyToMaintainer(e error, title string) {
logger.Errorf("notifyToMaintainer, title:%s, error:%v", title, e)
users := memsto.UserCache.GetMaintainerUsers()
if len(users) == 0 {
return
}
triggerTime := time.Now().Format("2006/01/02 - 15:04:05")
notifyMaintainerWithPlugin(e, title, triggerTime, users)
notifyMaintainerWithBuiltin(e, title, triggerTime, users)
}
func notifyMaintainerWithBuiltin(e error, title, triggerTime string, users []*models.User) {
if len(config.C.Alerting.NotifyBuiltinChannels) == 0 {
return
}
@@ -104,13 +110,13 @@ func notifyMaintainerWithBuiltin(title, msg, triggerTime string, users []*models
if len(emailset) == 0 {
continue
}
content := "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime
content := "Title: " + title + "\nContent: " + e.Error() + "\nTime: " + triggerTime
sender.WriteEmail(title, content, StringSetKeys(emailset))
case "dingtalk":
if len(dingtalkset) == 0 {
continue
}
content := "**Title: **" + title + "\n**Content: **" + msg + "\n**Time: **" + triggerTime
content := "**Title: **" + title + "\n**Content: **" + e.Error() + "\n**Time: **" + triggerTime
sender.SendDingtalk(sender.DingtalkMessage{
Title: title,
Text: content,
@@ -121,7 +127,7 @@ func notifyMaintainerWithBuiltin(title, msg, triggerTime string, users []*models
if len(wecomset) == 0 {
continue
}
content := "**Title: **" + title + "\n**Content: **" + msg + "\n**Time: **" + triggerTime
content := "**Title: **" + title + "\n**Content: **" + e.Error() + "\n**Time: **" + triggerTime
sender.SendWecom(sender.WecomMessage{
Text: content,
Tokens: StringSetKeys(wecomset),
@@ -131,7 +137,7 @@ func notifyMaintainerWithBuiltin(title, msg, triggerTime string, users []*models
continue
}
content := "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime
content := "Title: " + title + "\nContent: " + e.Error() + "\nTime: " + triggerTime
sender.SendFeishu(sender.FeishuMessage{
Text: content,
AtMobiles: phones,

View File

@@ -1,65 +0,0 @@
package engine
import (
"sync"
"time"
)
type ErrorType string
// register new error here
const (
QueryPrometheusError ErrorType = "QueryPrometheusError"
RuntimeError ErrorType = "RuntimeError"
)
type reporter struct {
sync.Mutex
em map[ErrorType]uint64
cb func(em map[ErrorType]uint64)
}
var rp reporter
func initReporter(cb func(em map[ErrorType]uint64)) {
rp = reporter{cb: cb, em: make(map[ErrorType]uint64)}
rp.Start()
}
func Report(errorType ErrorType) {
rp.report(errorType)
}
func (r *reporter) reset() map[ErrorType]uint64 {
r.Lock()
defer r.Unlock()
if len(r.em) == 0 {
return nil
}
oem := r.em
r.em = make(map[ErrorType]uint64)
return oem
}
func (r *reporter) report(errorType ErrorType) {
r.Lock()
defer r.Unlock()
if count, has := r.em[errorType]; has {
r.em[errorType] = count + 1
} else {
r.em[errorType] = 1
}
}
func (r *reporter) Start() {
for {
select {
case <-time.After(time.Minute):
cur := r.reset()
if cur != nil {
r.cb(cur)
}
}
}
}
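
A minimal sketch of how the reporter above is wired, mirroring the engine Start() wiring elsewhere in this diff (exampleReporterWiring is hypothetical and assumes fmt is imported): register a flush callback once at startup, then count errors from anywhere in the engine; counts are flushed to the callback once a minute.

```go
// Hypothetical wiring example. initReporter's Start() loop never returns,
// so it is launched in a goroutine at startup, before any Report calls.
func exampleReporterWiring() {
	go initReporter(func(em map[ErrorType]uint64) {
		for k, v := range em {
			fmt.Printf("error: %s, count: %d\n", k, v)
		}
	})

	// later, anywhere in the engine:
	Report(QueryPrometheusError) // flushed to the callback within a minute
}
```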

View File

@@ -3,15 +3,15 @@ package engine
import (
"context"
"fmt"
"log"
"math/rand"
"sort"
"strings"
"sync"
"time"
"github.com/didi/nightingale/v5/src/server/writer"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/net/httplib"
"github.com/toolkits/pkg/str"
"github.com/didi/nightingale/v5/src/models"
@@ -60,88 +60,25 @@ func filterRules() {
}
Workers.Build(mines)
RuleEvalForExternal.Build()
}
type RuleEval struct {
rule *models.AlertRule
fires *AlertCurEventMap
pendings *AlertCurEventMap
fires map[string]*models.AlertCurEvent
pendings map[string]*models.AlertCurEvent
quit chan struct{}
}
type AlertCurEventMap struct {
sync.RWMutex
Data map[string]*models.AlertCurEvent
}
func (a *AlertCurEventMap) SetAll(data map[string]*models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data = data
}
func (a *AlertCurEventMap) Set(key string, value *models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data[key] = value
}
func (a *AlertCurEventMap) Get(key string) (*models.AlertCurEvent, bool) {
a.RLock()
defer a.RUnlock()
event, exists := a.Data[key]
return event, exists
}
func (a *AlertCurEventMap) UpdateLastEvalTime(key string, lastEvalTime int64) {
a.Lock()
defer a.Unlock()
event, exists := a.Data[key]
if !exists {
return
}
event.LastEvalTime = lastEvalTime
}
func (a *AlertCurEventMap) Delete(key string) {
a.Lock()
defer a.Unlock()
delete(a.Data, key)
}
func (a *AlertCurEventMap) Keys() []string {
a.RLock()
defer a.RUnlock()
keys := make([]string, 0, len(a.Data))
for k := range a.Data {
keys = append(keys, k)
}
return keys
}
func (a *AlertCurEventMap) GetAll() map[string]*models.AlertCurEvent {
a.RLock()
defer a.RUnlock()
return a.Data
}
func NewAlertCurEventMap() *AlertCurEventMap {
return &AlertCurEventMap{
Data: make(map[string]*models.AlertCurEvent),
}
}
func (r *RuleEval) Stop() {
func (r RuleEval) Stop() {
logger.Infof("rule_eval:%d stopping", r.RuleID())
close(r.quit)
}
func (r *RuleEval) RuleID() int64 {
func (r RuleEval) RuleID() int64 {
return r.rule.Id
}
func (r *RuleEval) Start() {
func (r RuleEval) Start() {
logger.Infof("rule_eval:%d started", r.RuleID())
for {
select {
@@ -150,7 +87,7 @@ func (r *RuleEval) Start() {
return
default:
r.Work()
logger.Debugf("rule executed, rule_eval:%d", r.RuleID())
logger.Debugf("rule executedrule_id=%d", r.RuleID())
interval := r.rule.PromEvalInterval
if interval <= 0 {
interval = 10
@@ -160,18 +97,18 @@ func (r *RuleEval) Start() {
}
}
func (r *RuleEval) Work() {
type AnomalyPoint struct {
Data model.Matrix `json:"data"`
Err string `json:"error"`
}
func (r RuleEval) Work() {
promql := strings.TrimSpace(r.rule.PromQl)
if promql == "" {
logger.Errorf("rule_eval:%d promql is blank", r.RuleID())
return
}
if reader.Client == nil {
logger.Error("reader.Client is nil")
return
}
var value model.Value
var err error
if r.rule.Algorithm == "" {
@@ -179,8 +116,7 @@ func (r *RuleEval) Work() {
value, warnings, err = reader.Client.Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("rule_eval:%d promql:%s, error:%v", r.RuleID(), promql, err)
//notifyToMaintainer(err, "failed to query prometheus")
Report(QueryPrometheusError)
notifyToMaintainer(err, "failed to query prometheus")
return
}
@@ -188,18 +124,34 @@ func (r *RuleEval) Work() {
logger.Errorf("rule_eval:%d promql:%s, warnings:%v", r.RuleID(), promql, warnings)
return
}
logger.Debugf("rule_eval:%d promql:%s, value:%v", r.RuleID(), promql, value)
} else {
var res AnomalyPoint
count := len(config.C.AnomalyDataApi)
for _, i := range rand.Perm(count) {
url := fmt.Sprintf("%s?rid=%d", config.C.AnomalyDataApi[i], r.rule.Id)
err = httplib.Get(url).SetTimeout(time.Duration(3000) * time.Millisecond).ToJSON(&res)
if err != nil {
logger.Errorf("curl %s fail: %v", url, err)
continue
}
if res.Err != "" {
logger.Errorf("curl %s fail: %s", url, res.Err)
continue
}
value = res.Data
logger.Debugf("curl %s get: %+v", url, res.Data)
}
}
r.Judge(conv.ConvertVectors(value))
r.judge(conv.ConvertVectors(value))
}
type WorkersType struct {
rules map[string]*RuleEval
rules map[string]RuleEval
recordRules map[string]RecordingRuleEval
}
var Workers = &WorkersType{rules: make(map[string]*RuleEval), recordRules: make(map[string]RecordingRuleEval)}
var Workers = &WorkersType{rules: make(map[string]RuleEval), recordRules: make(map[string]RecordingRuleEval)}
func (ws *WorkersType) Build(rids []int64) {
rules := make(map[string]*models.AlertRule)
@@ -245,13 +197,12 @@ func (ws *WorkersType) Build(rids []int64) {
elst[i].DB2Mem()
firemap[elst[i].Hash] = elst[i]
}
fires := NewAlertCurEventMap()
fires.SetAll(firemap)
re := &RuleEval{
re := RuleEval{
rule: rules[hash],
quit: make(chan struct{}),
fires: fires,
pendings: NewAlertCurEventMap(),
fires: firemap,
pendings: make(map[string]*models.AlertCurEvent),
}
go re.Start()
@@ -306,31 +257,20 @@ func (ws *WorkersType) BuildRe(rids []int64) {
}
}
func (r *RuleEval) Judge(vectors []conv.Vector) {
now := time.Now().Unix()
alertingKeys, ruleExists := r.MakeNewEvent("inner", now, vectors)
if !ruleExists {
return
}
// handle recovered events
r.recoverRule(alertingKeys, now)
}
func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (map[string]struct{}, bool) {
func (r RuleEval) judge(vectors []conv.Vector) {
// Some of the rule's settings may have changed (alert receivers, callbacks, etc.).
// Such changes do not trigger a worker restart, but they do affect alert handling,
// so fetch the latest rule from memsto.AlertRuleCache and overwrite it here.
curRule := memsto.AlertRuleCache.Get(r.rule.Id)
if curRule == nil {
return map[string]struct{}{}, false
return
}
r.rule = curRule
count := len(vectors)
alertingKeys := make(map[string]struct{})
now := time.Now().Unix()
for i := 0; i < count; i++ {
// compute hash
hash := str.MD5(fmt.Sprintf("%d_%s", r.rule.Id, vectors[i].Key))
@@ -338,7 +278,6 @@ func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (
// rule disabled in this time span?
if isNoneffective(vectors[i].Timestamp, r.rule) {
logger.Debugf("event_disabled: rule_eval:%d rule:%v timestamp:%d", r.rule.Id, r.rule, vectors[i].Timestamp)
continue
}
@@ -359,17 +298,14 @@ func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (
// handle target note
targetIdent, has := vectors[i].Labels["ident"]
targetNote := ""
targetCluster := ""
if has {
target, exists := memsto.TargetCache.Get(string(targetIdent))
if exists {
targetNote = target.Note
targetCluster = target.Cluster
// For alert events that carry an ident, check whether the ident's business group matches the rule's.
// If the rule is scoped to its own BG only, machines in other BGs must not alert on this rule.
if r.rule.EnableInBG == 1 && target.GroupId != r.rule.GroupId {
logger.Debugf("event_enable_in_bg: rule_eval:%d", r.rule.Id)
continue
}
}
@@ -388,7 +324,7 @@ func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (
}
// isMuted only needs TriggerTime, RuleName and TagsMap
if IsMuted(event) {
if isMuted(event) {
logger.Infof("event_muted: rule_id=%d %s", r.rule.Id, vectors[i].Key)
continue
}
@@ -396,7 +332,7 @@ func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (
tagsArr := labelMapToArr(tagsMap)
sort.Strings(tagsArr)
event.Cluster = targetCluster
event.Cluster = r.rule.Cluster
event.Hash = hash
event.RuleId = r.rule.Id
event.RuleName = r.rule.Name
@@ -422,15 +358,12 @@ func (r *RuleEval) MakeNewEvent(from string, now int64, vectors []conv.Vector) (
event.Tags = strings.Join(tagsArr, ",,")
event.IsRecovered = false
event.LastEvalTime = now
if from != "inner" {
event.LastEvalTime = event.TriggerTime
}
r.handleNewEvent(event)
}
return alertingKeys, true
// handle recovered events
r.recoverRule(alertingKeys, now)
}
func readableValue(value float64) string {
@@ -454,30 +387,26 @@ func labelMapToArr(m map[string]string) []string {
return labelStrings
}
func (r *RuleEval) handleNewEvent(event *models.AlertCurEvent) {
func (r RuleEval) handleNewEvent(event *models.AlertCurEvent) {
if event.PromForDuration == 0 {
r.fireEvent(event)
return
}
var preTriggerTime int64
preEvent, has := r.pendings.Get(event.Hash)
_, has := r.pendings[event.Hash]
if has {
r.pendings.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
preTriggerTime = preEvent.TriggerTime
r.pendings[event.Hash].LastEvalTime = event.LastEvalTime
} else {
r.pendings.Set(event.Hash, event)
preTriggerTime = event.TriggerTime
r.pendings[event.Hash] = event
}
if event.LastEvalTime-preTriggerTime+int64(event.PromEvalInterval) >= int64(event.PromForDuration) {
if r.pendings[event.Hash].LastEvalTime-r.pendings[event.Hash].TriggerTime+int64(event.PromEvalInterval) >= int64(event.PromForDuration) {
r.fireEvent(event)
}
}
func (r *RuleEval) fireEvent(event *models.AlertCurEvent) {
if fired, has := r.fires.Get(event.Hash); has {
r.fires.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
func (r RuleEval) fireEvent(event *models.AlertCurEvent) {
if fired, has := r.fires[event.Hash]; has {
r.fires[event.Hash].LastEvalTime = event.LastEvalTime
if r.rule.NotifyRepeatStep == 0 {
// Repeat notifications are disabled, so just return; nothing to do.
@@ -489,7 +418,6 @@ func (r *RuleEval) fireEvent(event *models.AlertCurEvent) {
if r.rule.NotifyMaxNumber == 0 {
// Maximum send count; 0 means no limit on the number of notifications, keep sending.
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
r.pushEventToQueue(event)
} else {
// A maximum send count is set, so check how many notifications have been sent and whether the limit has been reached.
@@ -497,7 +425,6 @@ func (r *RuleEval) fireEvent(event *models.AlertCurEvent) {
return
} else {
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
r.pushEventToQueue(event)
}
}
@@ -505,74 +432,62 @@ func (r *RuleEval) fireEvent(event *models.AlertCurEvent) {
}
} else {
event.NotifyCurNumber = 1
event.FirstTriggerTime = event.TriggerTime
r.pushEventToQueue(event)
}
}
func (r *RuleEval) recoverRule(alertingKeys map[string]struct{}, now int64) {
for _, hash := range r.pendings.Keys() {
if _, has := alertingKeys[hash]; has {
continue
}
r.pendings.Delete(hash)
}
for hash, event := range r.fires.GetAll() {
func (r RuleEval) recoverRule(alertingKeys map[string]struct{}, now int64) {
for hash := range r.pendings {
if _, has := alertingKeys[hash]; has {
continue
}
r.recoverEvent(hash, event, now)
delete(r.pendings, hash)
}
for hash, event := range r.fires {
if _, has := alertingKeys[hash]; has {
continue
}
// If an observation period (RecoverDuration) is configured, the event must not recover immediately.
if r.rule.RecoverDuration > 0 && now-event.LastEvalTime < r.rule.RecoverDuration {
continue
}
// No vector crossed the threshold, so assume this vector has recovered.
// There is really no way to tell whether prom had values that simply stayed below the threshold (so nothing was returned), or prom actually lost some points and had no data to return; awkward.
delete(r.fires, hash)
delete(r.pendings, hash)
event.IsRecovered = true
event.LastEvalTime = now
// The recovery may be due to an adjusted promql, so reflect the latest promql in the event, otherwise users get confused.
// In fact any field of the rule may have changed, so refresh them all.
event.RuleName = r.rule.Name
event.RuleNote = r.rule.Note
event.RuleProd = r.rule.Prod
event.RuleAlgo = r.rule.Algorithm
event.Severity = r.rule.Severity
event.PromForDuration = r.rule.PromForDuration
event.PromQl = r.rule.PromQl
event.PromEvalInterval = r.rule.PromEvalInterval
event.Callbacks = r.rule.Callbacks
event.CallbacksJSON = r.rule.CallbacksJSON
event.RunbookUrl = r.rule.RunbookUrl
event.NotifyRecovered = r.rule.NotifyRecovered
event.NotifyChannels = r.rule.NotifyChannels
event.NotifyChannelsJSON = r.rule.NotifyChannelsJSON
event.NotifyGroups = r.rule.NotifyGroups
event.NotifyGroupsJSON = r.rule.NotifyGroupsJSON
r.pushEventToQueue(event)
}
}
func (r *RuleEval) RecoverEvent(hash string, now int64) {
event, has := r.fires.Get(hash)
if !has {
return
}
r.recoverEvent(hash, event, time.Now().Unix())
}
func (r *RuleEval) recoverEvent(hash string, event *models.AlertCurEvent, now int64) {
// If an observation period (RecoverDuration) is configured, the event must not recover immediately.
if r.rule.RecoverDuration > 0 && now-event.LastEvalTime < r.rule.RecoverDuration {
return
}
// No vector crossed the threshold, so assume this vector has recovered.
// There is really no way to tell whether prom had values that simply stayed below the threshold (so nothing was returned), or prom actually lost some points and had no data to return; awkward.
r.fires.Delete(hash)
r.pendings.Delete(hash)
event.IsRecovered = true
event.LastEvalTime = now
// The recovery may be due to an adjusted promql, so reflect the latest promql in the event, otherwise users get confused.
// In fact any field of the rule may have changed, so refresh them all.
event.RuleName = r.rule.Name
event.RuleNote = r.rule.Note
event.RuleProd = r.rule.Prod
event.RuleAlgo = r.rule.Algorithm
event.Severity = r.rule.Severity
event.PromForDuration = r.rule.PromForDuration
event.PromQl = r.rule.PromQl
event.PromEvalInterval = r.rule.PromEvalInterval
event.Callbacks = r.rule.Callbacks
event.CallbacksJSON = r.rule.CallbacksJSON
event.RunbookUrl = r.rule.RunbookUrl
event.NotifyRecovered = r.rule.NotifyRecovered
event.NotifyChannels = r.rule.NotifyChannels
event.NotifyChannelsJSON = r.rule.NotifyChannelsJSON
event.NotifyGroups = r.rule.NotifyGroups
event.NotifyGroupsJSON = r.rule.NotifyGroupsJSON
r.pushEventToQueue(event)
}
func (r *RuleEval) pushEventToQueue(event *models.AlertCurEvent) {
func (r RuleEval) pushEventToQueue(event *models.AlertCurEvent) {
if !event.IsRecovered {
event.LastSentTime = event.LastEvalTime
r.fires.Set(event.Hash, event)
r.fires[event.Hash] = event
}
promstat.CounterAlertsTotal.WithLabelValues(config.C.ClusterName).Inc()
@@ -581,7 +496,6 @@ func (r *RuleEval) pushEventToQueue(event *models.AlertCurEvent) {
logger.Warningf("event_push_queue: queue is full")
}
}
func filterRecordingRules() {
ids := memsto.RecordingRuleCache.GetRuleIds()
@@ -642,11 +556,6 @@ func (r RecordingRuleEval) Work() {
return
}
if reader.Client == nil {
log.Println("reader.Client is nil")
return
}
value, warnings, err := reader.Client.Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("recording_rule_eval:%d promql:%s, error:%v", r.RuleID(), promql, err)
@@ -664,82 +573,3 @@ func (r RecordingRuleEval) Work() {
}
}
}
type RuleEvalForExternalType struct {
sync.RWMutex
rules map[int64]RuleEval
}
var RuleEvalForExternal = RuleEvalForExternalType{rules: make(map[int64]RuleEval)}
func (re *RuleEvalForExternalType) Build() {
rids := memsto.AlertRuleCache.GetRuleIds()
rules := make(map[int64]*models.AlertRule)
for i := 0; i < len(rids); i++ {
rule := memsto.AlertRuleCache.Get(rids[i])
if rule == nil {
continue
}
re.Lock()
rules[rule.Id] = rule
re.Unlock()
}
// stop old
for rid := range re.rules {
if _, has := rules[rid]; !has {
re.Lock()
delete(re.rules, rid)
re.Unlock()
}
}
// start new
re.Lock()
defer re.Unlock()
for rid := range rules {
if _, has := re.rules[rid]; has {
// already exists
continue
}
elst, err := models.AlertCurEventGetByRule(rules[rid].Id)
if err != nil {
logger.Errorf("worker_build: AlertCurEventGetByRule failed: %v", err)
continue
}
firemap := make(map[string]*models.AlertCurEvent)
for i := 0; i < len(elst); i++ {
elst[i].DB2Mem()
firemap[elst[i].Hash] = elst[i]
}
fires := NewAlertCurEventMap()
fires.SetAll(firemap)
newRe := RuleEval{
rule: rules[rid],
quit: make(chan struct{}),
fires: fires,
pendings: NewAlertCurEventMap(),
}
re.rules[rid] = newRe
}
}
func (re *RuleEvalForExternalType) Get(rid int64) (RuleEval, bool) {
rule := memsto.AlertRuleCache.Get(rid)
if rule == nil {
return RuleEval{}, false
}
re.RLock()
defer re.RUnlock()
if ret, has := re.rules[rid]; has {
// already exists
return ret, has
}
return RuleEval{}, false
}

View File

@@ -4,45 +4,57 @@ import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/storage"
)
// local servers
var localss string
func Heartbeat(ctx context.Context) error {
if err := heartbeat(); err != nil {
if err := heartbeat(ctx); err != nil {
fmt.Println("failed to heartbeat:", err)
return err
}
go loopHeartbeat()
go loopHeartbeat(ctx)
return nil
}
func loopHeartbeat() {
func loopHeartbeat(ctx context.Context) {
interval := time.Duration(config.C.Heartbeat.Interval) * time.Millisecond
for {
time.Sleep(interval)
if err := heartbeat(); err != nil {
if err := heartbeat(ctx); err != nil {
logger.Warning(err)
}
}
}
func heartbeat() error {
err := models.AlertingEngineHeartbeat(config.C.Heartbeat.Endpoint)
// hash struct:
// /server/heartbeat/Default -> {
// 10.2.3.4:19000 => $timestamp
// 10.2.3.5:19000 => $timestamp
// }
func redisKey(cluster string) string {
return fmt.Sprintf("/server/heartbeat/%s", cluster)
}
func heartbeat(ctx context.Context) error {
now := time.Now().Unix()
key := redisKey(config.C.ClusterName)
err := storage.Redis.HSet(ctx, key, config.C.Heartbeat.Endpoint, now).Err()
if err != nil {
return err
}
servers, err := ActiveServers()
servers, err := ActiveServers(ctx, config.C.ClusterName)
if err != nil {
return err
}
@@ -57,12 +69,37 @@ func heartbeat() error {
return nil
}
func ActiveServers() ([]string, error) {
cluster, err := models.AlertingEngineGetCluster(config.C.Heartbeat.Endpoint)
func clearDeadServer(ctx context.Context, cluster, endpoint string) {
key := redisKey(cluster)
err := storage.Redis.HDel(ctx, key, endpoint).Err()
if err != nil {
logger.Warningf("failed to hdel %s %s, error: %v", key, endpoint, err)
}
}
func ActiveServers(ctx context.Context, cluster string) ([]string, error) {
ret, err := storage.Redis.HGetAll(ctx, redisKey(cluster)).Result()
if err != nil {
return nil, err
}
// A server that heartbeated within the last 30 seconds is considered alive.
return models.AlertingEngineGetsInstances("cluster = ? and clock > ?", cluster, time.Now().Unix()-30)
now := time.Now().Unix()
dur := int64(20)
actives := make([]string, 0, len(ret))
for endpoint, clockstr := range ret {
clock, err := strconv.ParseInt(clockstr, 10, 64)
if err != nil {
continue
}
if now-clock > dur {
clearDeadServer(ctx, cluster, endpoint)
continue
}
actives = append(actives, endpoint)
}
return actives, nil
}

View File

@@ -1,6 +1,7 @@
package naming
import (
"context"
"sort"
"github.com/didi/nightingale/v5/src/server/config"
@@ -8,7 +9,7 @@ import (
)
func IamLeader() (bool, error) {
servers, err := ActiveServers()
servers, err := ActiveServers(context.Background(), config.C.ClusterName)
if err != nil {
logger.Errorf("failed to get active servers: %v", err)
return false, err

View File

@@ -1,48 +1,34 @@
package reader
import (
"encoding/json"
"fmt"
"net"
"net/http"
"strings"
"time"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/prom"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/prometheus/client_golang/api"
"github.com/toolkits/pkg/logger"
)
var Client prom.API
var LocalPromOption PromOption
func Init() error {
rf := strings.ToLower(strings.TrimSpace(config.C.ReaderFrom))
if rf == "" || rf == "config" {
return initFromConfig()
}
if rf == "database" {
return initFromDatabase()
}
return fmt.Errorf("invalid configuration ReaderFrom: %s", rf)
}
func initFromConfig() error {
opts := config.C.Reader
func Init(opts config.ReaderOptions) error {
cli, err := api.NewClient(api.Config{
Address: opts.Url,
RoundTripper: &http.Transport{
// TLSClientConfig: tlsConfig,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(opts.DialTimeout) * time.Millisecond,
Timeout: time.Duration(opts.DialTimeout) * time.Millisecond,
KeepAlive: time.Duration(opts.KeepAlive) * time.Millisecond,
}).DialContext,
ResponseHeaderTimeout: time.Duration(opts.Timeout) * time.Millisecond,
TLSHandshakeTimeout: time.Duration(opts.TLSHandshakeTimeout) * time.Millisecond,
ExpectContinueTimeout: time.Duration(opts.ExpectContinueTimeout) * time.Millisecond,
MaxConnsPerHost: opts.MaxConnsPerHost,
MaxIdleConns: opts.MaxIdleConns,
MaxIdleConnsPerHost: opts.MaxIdleConnsPerHost,
IdleConnTimeout: time.Duration(opts.IdleConnTimeout) * time.Millisecond,
},
})
@@ -58,114 +44,3 @@ func initFromConfig() error {
return nil
}
func initFromDatabase() error {
go func() {
for {
loadFromDatabase()
time.Sleep(time.Second)
}
}()
return nil
}
type PromOption struct {
Url string
User string
Pass string
Headers []string
Timeout int64
DialTimeout int64
MaxIdleConnsPerHost int
}
func (po *PromOption) Equal(target PromOption) bool {
if po.Url != target.Url {
return false
}
if po.User != target.User {
return false
}
if po.Pass != target.Pass {
return false
}
if po.Timeout != target.Timeout {
return false
}
if po.DialTimeout != target.DialTimeout {
return false
}
if po.MaxIdleConnsPerHost != target.MaxIdleConnsPerHost {
return false
}
if len(po.Headers) != len(target.Headers) {
return false
}
for i := 0; i < len(po.Headers); i++ {
if po.Headers[i] != target.Headers[i] {
return false
}
}
return true
}
func loadFromDatabase() {
cluster, err := models.AlertingEngineGetCluster(config.C.Heartbeat.Endpoint)
if err != nil {
logger.Errorf("failed to get current cluster, error: %v", err)
return
}
ckey := "prom." + cluster + ".option"
cval, err := models.ConfigsGet(ckey)
if err != nil {
logger.Errorf("failed to get ckey: %s, error: %v", ckey, err)
return
}
if cval == "" {
Client = nil
return
}
var po PromOption
err = json.Unmarshal([]byte(cval), &po)
if err != nil {
logger.Errorf("failed to unmarshal PromOption: %s", err)
return
}
if Client == nil || !LocalPromOption.Equal(po) {
cli, err := api.NewClient(api.Config{
Address: po.Url,
RoundTripper: &http.Transport{
// TLSClientConfig: tlsConfig,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(po.DialTimeout) * time.Millisecond,
}).DialContext,
ResponseHeaderTimeout: time.Duration(po.Timeout) * time.Millisecond,
MaxIdleConnsPerHost: po.MaxIdleConnsPerHost,
},
})
if err != nil {
logger.Errorf("failed to NewPromClient: %v", err)
return
}
Client = prom.NewAPI(cli, prom.ClientOptions{
BasicAuthUser: po.User,
BasicAuthPass: po.Pass,
Headers: po.Headers,
})
}
}

View File

@@ -18,7 +18,7 @@ import (
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
func New(version string, reloadFunc func()) *gin.Engine {
func New(version string) *gin.Engine {
gin.SetMode(config.C.RunMode)
loggerMid := aop.Logger()
@@ -37,12 +37,12 @@ func New(version string, reloadFunc func()) *gin.Engine {
r.Use(loggerMid)
}
configRoute(r, version, reloadFunc)
configRoute(r, version)
return r
}
func configRoute(r *gin.Engine, version string, reloadFunc func()) {
func configRoute(r *gin.Engine, version string) {
if config.C.HTTP.PProf {
pprof.Register(r, "/api/debug/pprof")
}
@@ -63,13 +63,8 @@ func configRoute(r *gin.Engine, version string, reloadFunc func()) {
c.String(200, version)
})
r.POST("/-/reload", func(c *gin.Context) {
reloadFunc()
c.String(200, "reload success")
})
r.GET("/servers/active", func(c *gin.Context) {
lst, err := naming.ActiveServers()
lst, err := naming.ActiveServers(c.Request.Context(), config.C.ClusterName)
ginx.NewRender(c).Data(lst, err)
})
@@ -103,8 +98,6 @@ func configRoute(r *gin.Engine, version string, reloadFunc func()) {
service := r.Group("/v1/n9e")
service.POST("/event", pushEventToQueue)
service.POST("/make-event", makeEvent)
service.POST("/judge-event", judgeEvent)
}
func stat() gin.HandlerFunc {

View File

@@ -3,10 +3,8 @@ package router
import (
"fmt"
"strings"
"time"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/engine"
promstat "github.com/didi/nightingale/v5/src/server/stat"
@@ -14,7 +12,6 @@ import (
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/str"
)
func pushEventToQueue(c *gin.Context) {
@@ -39,13 +36,6 @@ func pushEventToQueue(c *gin.Context) {
event.TagsMap[arr[0]] = arr[1]
}
// isMuted only needs TriggerTime, RuleName and TagsMap
if engine.IsMuted(event) {
logger.Infof("event_muted: rule_id=%d %s", event.RuleId, event.Hash)
ginx.NewRender(c).Message(nil)
return
}
if err := event.ParseRuleNote(); err != nil {
event.RuleNote = fmt.Sprintf("failed to parse rule note: %v", err)
}
@@ -72,43 +62,3 @@ func pushEventToQueue(c *gin.Context) {
}
ginx.NewRender(c).Message(nil)
}
type eventForm struct {
Alert bool `json:"alert"`
Vectors []conv.Vector `json:"vectors"`
RuleId int64 `json:"rule_id"`
}
func judgeEvent(c *gin.Context) {
var form eventForm
ginx.BindJSON(c, &form)
re, exists := engine.RuleEvalForExternal.Get(form.RuleId)
if !exists {
ginx.Bomb(200, "rule not exists")
}
re.Judge(form.Vectors)
ginx.NewRender(c).Message(nil)
}
func makeEvent(c *gin.Context) {
var events []*eventForm
ginx.BindJSON(c, &events)
now := time.Now().Unix()
for i := 0; i < len(events); i++ {
re, exists := engine.RuleEvalForExternal.Get(events[i].RuleId)
logger.Debugf("handle event:%+v exists:%v", events[i], exists)
if !exists {
ginx.Bomb(200, "rule not exists")
}
if events[i].Alert {
go re.MakeNewEvent("http", now, events[i].Vectors)
} else {
for _, vector := range events[i].Vectors {
hash := str.MD5(fmt.Sprintf("%d_%s", events[i].RuleId, vector.Key))
go re.RecoverEvent(hash, now)
}
}
}
ginx.NewRender(c).Message(nil)
}

View File

@@ -12,7 +12,6 @@ import (
"github.com/gin-gonic/gin"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/prompb"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/server/common"
"github.com/didi/nightingale/v5/src/server/config"
@@ -157,7 +156,6 @@ func handleOpenTSDB(c *gin.Context) {
}
if err != nil {
logger.Debugf("opentsdb msg format error: %s", err.Error())
c.String(400, err.Error())
return
}
@@ -172,20 +170,12 @@ func handleOpenTSDB(c *gin.Context) {
for i := 0; i < len(arr); i++ {
if err := arr[i].Clean(ts); err != nil {
logger.Debugf("opentsdb msg clean error: %s", err.Error())
if fail == 0 {
msg = fmt.Sprintf("%s , Error clean: %s", msg, err.Error())
}
fail++
continue
}
pt, err := arr[i].ToProm()
if err != nil {
logger.Debugf("opentsdb msg to tsdb error: %s", err.Error())
if fail == 0 {
msg = fmt.Sprintf("%s , Error toprom: %s", msg, err.Error())
}
fail++
continue
}
@@ -212,10 +202,6 @@ func handleOpenTSDB(c *gin.Context) {
idents.Idents.MSet(ids)
}
if fail > 0 {
logger.Debugf("opentsdb msg process error , msg is : %s", string(bs))
}
c.JSON(200, gin.H{
"succ": succ,
"fail": fail,

View File

@@ -38,11 +38,6 @@ func queryPromql(c *gin.Context) {
var f promqlForm
ginx.BindJSON(c, &f)
if reader.Client == nil {
c.String(500, "reader.Client is nil")
return
}
value, warnings, err := reader.Client.Query(c.Request.Context(), f.PromQL, time.Now())
if err != nil {
c.String(500, "promql:%s error:%v", f.PromQL, err)

View File

@@ -9,7 +9,6 @@ import (
"syscall"
"github.com/toolkits/pkg/i18n"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/pkg/httpx"
"github.com/didi/nightingale/v5/src/pkg/logx"
@@ -76,7 +75,6 @@ EXIT:
break EXIT
case syscall.SIGHUP:
// reload configuration?
reload()
default:
break EXIT
}
@@ -125,7 +123,7 @@ func (s Server) initialize() (func(), error) {
}
// init prometheus remote reader
if err = reader.Init(); err != nil {
if err = reader.Init(config.C.Reader); err != nil {
return fns.Ret(), err
}
@@ -145,7 +143,7 @@ func (s Server) initialize() (func(), error) {
stat.Init()
// init http server
r := router.New(s.Version, reload)
r := router.New(s.Version)
httpClean := httpx.Init(config.C.HTTP, r)
fns.Add(httpClean)
@@ -175,9 +173,3 @@ func (fs *Functions) Ret() func() {
}
}
}
func reload() {
logger.Info("start reload configs")
engine.Reload()
logger.Info("reload configs finished")
}

View File

@@ -30,10 +30,6 @@ type Usage struct {
}
func getSamples() (float64, error) {
if reader.Client == nil {
return 0, fmt.Errorf("reader.Client is nil")
}
value, warns, err := reader.Client.Query(context.Background(), request, time.Now())
if err != nil {
return 0, err
@@ -59,7 +55,10 @@ func Report() {
}
func report() {
sps, _ := getSamples()
sps, err := getSamples()
if err != nil {
return
}
hostname, err := os.Hostname()
if err != nil {

View File

@@ -9,7 +9,6 @@ import (
"net/http"
"time"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/golang/protobuf/proto"
"github.com/golang/snappy"
@@ -25,28 +24,11 @@ type WriterType struct {
Client api.Client
}
func (w WriterType) writeRelabel(items []*prompb.TimeSeries) []*prompb.TimeSeries {
ritems := make([]*prompb.TimeSeries, 0, len(items))
for _, item := range items {
lbls := models.Process(item.Labels, w.Opts.WriteRelabels...)
if len(lbls) == 0 {
continue
}
ritems = append(ritems, item)
}
return ritems
}
func (w WriterType) Write(index int, items []*prompb.TimeSeries, headers ...map[string]string) {
if len(items) == 0 {
return
}
items = w.writeRelabel(items)
if len(items) == 0 {
return
}
start := time.Now()
defer func() {
promstat.ForwardDuration.WithLabelValues(config.C.ClusterName, fmt.Sprint(index)).Observe(time.Since(start).Seconds())

View File

@@ -14,7 +14,6 @@ import (
"github.com/didi/nightingale/v5/src/pkg/logx"
"github.com/didi/nightingale/v5/src/pkg/oidcc"
"github.com/didi/nightingale/v5/src/pkg/ormx"
"github.com/didi/nightingale/v5/src/pkg/tls"
"github.com/didi/nightingale/v5/src/storage"
)
@@ -78,7 +77,6 @@ func MustLoad(fpaths ...string) {
type Config struct {
RunMode string
I18N string
I18NHeaderKey string
AdminRole string
MetricsYamlFile string
BuiltinAlertsDir string
@@ -99,7 +97,6 @@ type Config struct {
Clusters []ClusterOptions
Ibex Ibex
OIDC oidcc.Config
TargetMetrics map[string]string
}
type ClusterOptions struct {
@@ -113,9 +110,7 @@ type ClusterOptions struct {
Timeout int64
DialTimeout int64
UseTLS bool
tls.ClientConfig
KeepAlive int64
MaxIdleConnsPerHost int
}

View File

@@ -3,43 +3,28 @@ package config
import (
"path"
cmap "github.com/orcaman/concurrent-map"
"github.com/toolkits/pkg/file"
"github.com/toolkits/pkg/runner"
)
// metricDesc: since the map is fully loaded before it is ever read, there is no need for a concurrent map in the metric desc store.
type metricDesc struct {
CommonDesc map[string]string `yaml:",inline" json:"common"`
Zh map[string]string `yaml:"zh" json:"zh"`
En map[string]string `yaml:"en" json:"en"`
}
var MetricDesc metricDesc
// GetMetricDesc: if the metric is not registered, an empty string is returned.
func GetMetricDesc(lang, metric string) string {
var m map[string]string
if lang == "zh" {
m = MetricDesc.Zh
} else {
m = MetricDesc.En
}
if m != nil {
if desc, has := m[metric]; has {
return desc
}
}
return MetricDesc.CommonDesc[metric]
}
var Metrics = cmap.New()
func loadMetricsYaml() error {
fp := C.MetricsYamlFile
if fp == "" {
fp = path.Join(runner.Cwd, "etc", "metrics.yaml")
}
fp := path.Join(runner.Cwd, "etc", "metrics.yaml")
if !file.IsExist(fp) {
return nil
}
return file.ReadYaml(fp, &MetricDesc)
nmap := make(map[string]string)
err := file.ReadYaml(fp, &nmap)
if err != nil {
return err
}
for key, val := range nmap {
Metrics.Set(key, val)
}
return nil
}

View File

@@ -11,7 +11,6 @@ import (
"sync"
"time"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/prom"
"github.com/didi/nightingale/v5/src/webapi/config"
"github.com/prometheus/client_golang/api"
@@ -30,44 +29,10 @@ type ClustersType struct {
mutex *sync.RWMutex
}
type PromOption struct {
Url string
User string
Pass string
Headers []string
Timeout int64
DialTimeout int64
MaxIdleConnsPerHost int
}
func (cs *ClustersType) Put(name string, cluster *ClusterType) {
cs.mutex.Lock()
defer cs.mutex.Unlock()
cs.datas[name] = cluster
// Also write the config into the DB so that n9e-server can read it directly from there.
po := PromOption{
Url: cluster.Opts.Prom,
User: cluster.Opts.BasicAuthUser,
Pass: cluster.Opts.BasicAuthPass,
Headers: cluster.Opts.Headers,
Timeout: cluster.Opts.Timeout,
DialTimeout: cluster.Opts.DialTimeout,
MaxIdleConnsPerHost: cluster.Opts.MaxIdleConnsPerHost,
}
bs, err := json.Marshal(po)
if err != nil {
logger.Fatal("failed to marshal PromOption:", err)
return
}
key := "prom." + name + ".option"
err = models.ConfigsSet(key, string(bs))
if err != nil {
logger.Fatal("failed to set PromOption ", key, " to database, error: ", err)
}
cs.mutex.Unlock()
}
func (cs *ClustersType) Get(name string) (*ClusterType, bool) {
@@ -100,9 +65,6 @@ func initClustersFromConfig() error {
for i := 0; i < len(opts); i++ {
cluster := newClusterByOption(opts[i])
if cluster == nil {
continue
}
Clusters.Put(opts[i].Name, cluster)
}
@@ -203,17 +165,7 @@ func loadClustersFromAPI() {
MaxIdleConnsPerHost: 32,
}
if strings.HasPrefix(opt.Prom, "https") {
opt.UseTLS = true
opt.InsecureSkipVerify = true
}
cluster := newClusterByOption(opt)
if cluster == nil {
continue
}
Clusters.Put(item.Name, cluster)
Clusters.Put(item.Name, newClusterByOption(opt))
continue
}
}
@@ -221,6 +173,7 @@ func loadClustersFromAPI() {
func newClusterByOption(opt config.ClusterOptions) *ClusterType {
transport := &http.Transport{
// TLSClientConfig: tlsConfig,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(opt.DialTimeout) * time.Millisecond,
@@ -229,15 +182,6 @@ func newClusterByOption(opt config.ClusterOptions) *ClusterType {
MaxIdleConnsPerHost: opt.MaxIdleConnsPerHost,
}
if opt.UseTLS {
tlsConfig, err := opt.TLSConfig()
if err != nil {
logger.Errorf("new cluster %s fail: %v", opt.Name, err)
return nil
}
transport.TLSClientConfig = tlsConfig
}
cli, err := api.NewClient(api.Config{
Address: opt.Prom,
RoundTripper: transport,
@@ -245,7 +189,6 @@ func newClusterByOption(opt config.ClusterOptions) *ClusterType {
if err != nil {
logger.Errorf("new client fail: %v", err)
return nil
}
cluster := &ClusterType{

View File

@@ -31,25 +31,6 @@ func stat() gin.HandlerFunc {
}
}
func languageDetector() gin.HandlerFunc {
headerKey := config.C.I18NHeaderKey
return func(c *gin.Context) {
if headerKey != "" {
lang := c.GetHeader(headerKey)
if lang != "" {
if strings.HasPrefix(lang, "*") || strings.HasPrefix(lang, "zh") {
c.Request.Header.Set("X-Language", "zh")
} else if strings.HasPrefix(lang, "en") {
c.Request.Header.Set("X-Language", "en")
} else {
c.Request.Header.Set("X-Language", lang)
}
}
}
c.Next()
}
}
func New(version string) *gin.Engine {
gin.SetMode(config.C.RunMode)
@@ -60,7 +41,6 @@ func New(version string) *gin.Engine {
r := gin.New()
r.Use(stat())
r.Use(languageDetector())
r.Use(aop.Recovery())
// whether print access log
@@ -118,11 +98,14 @@ func configRoute(r *gin.Engine, version string) {
pages := r.Group(pagesPrefix)
{
if config.C.AnonymousAccess.PromQuerier {
pages.Any("/prometheus/*url", prometheusProxy)
pages.POST("/query-range-batch", promBatchQueryRange)
} else {
pages.Any("/prometheus/*url", auth(), prometheusProxy)
pages.POST("/query-range-batch", auth(), promBatchQueryRange)
}
@@ -196,7 +179,6 @@ func configRoute(r *gin.Engine, version string) {
pages.POST("/busi-group/:id/board/:bid/clone", auth(), user(), perm("/dashboards/add"), bgrw(), boardClone)
pages.GET("/board/:bid", auth(), user(), boardGet)
pages.GET("/board/:bid/pure", boardPureGet)
pages.PUT("/board/:bid", auth(), user(), perm("/dashboards/put"), boardPut)
pages.PUT("/board/:bid/configs", auth(), user(), perm("/dashboards/put"), boardPutConfigs)
pages.DELETE("/boards", auth(), user(), perm("/dashboards/del"), boardDel)

View File

@@ -26,10 +26,10 @@ func alertRuleGets(c *gin.Context) {
}
func alertRulesGetByService(c *gin.Context) {
prods := strings.Split(ginx.QueryStr(c, "prods", ""), ",")
prods := strings.Fields(ginx.QueryStr(c, "prods", ""))
query := ginx.QueryStr(c, "query", "")
algorithm := ginx.QueryStr(c, "algorithm", "")
ars, err := models.AlertRulesGetsBy(prods, query, algorithm)
ars, err := models.AlertRulesGetsBy(prods, query)
if err == nil {
cache := make(map[int64]*models.UserGroup)
for i := 0; i < len(ars); i++ {

View File

@@ -51,17 +51,6 @@ func boardGet(c *gin.Context) {
ginx.NewRender(c).Data(board, nil)
}
func boardPureGet(c *gin.Context) {
board, err := models.BoardGetByID(ginx.UrlParamInt64(c, "bid"))
ginx.Dangerous(err)
if board == nil {
ginx.Bomb(http.StatusNotFound, "No such dashboard")
}
ginx.NewRender(c).Data(board, nil)
}
// bgrwCheck
func boardDel(c *gin.Context) {
var f idsForm

View File

@@ -69,12 +69,6 @@ func busiGroupMemberAdd(c *gin.Context) {
username := c.MustGet("username").(string)
targetbg := c.MustGet("busi_group").(*models.BusiGroup)
for i := 0; i < len(members); i++ {
if members[i].BusiGroupId != targetbg.Id {
ginx.Bomb(http.StatusBadRequest, "business group id invalid")
}
}
ginx.NewRender(c).Message(targetbg.AddMembers(members, username))
}
@@ -85,12 +79,6 @@ func busiGroupMemberDel(c *gin.Context) {
username := c.MustGet("username").(string)
targetbg := c.MustGet("busi_group").(*models.BusiGroup)
for i := 0; i < len(members); i++ {
if members[i].BusiGroupId != targetbg.Id {
ginx.Bomb(http.StatusBadRequest, "business group id invalid")
}
}
ginx.NewRender(c).Message(targetbg.DelMembers(members, username))
}

View File

@@ -3,7 +3,6 @@ package router
import (
"fmt"
"net/http"
"strconv"
"strings"
"time"
@@ -32,7 +31,6 @@ func loginPost(c *gin.Context) {
if config.C.LDAP.Enable {
user, err = models.LdapLogin(f.Username, f.Password)
if err != nil {
logger.Debugf("ldap login failed: %v username: %s", err, f.Username)
ginx.NewRender(c).Message(err)
return
}
@@ -117,24 +115,6 @@ func refreshPost(c *gin.Context) {
return
}
userid, err := strconv.ParseInt(strings.Split(userIdentity, "-")[0], 10, 64)
if err != nil {
ginx.NewRender(c, http.StatusUnauthorized).Message("failed to parse user_identity from jwt")
return
}
u, err := models.UserGetById(userid)
if err != nil {
ginx.NewRender(c, http.StatusInternalServerError).Message("failed to query user by id")
return
}
if u == nil {
// user already deleted
ginx.NewRender(c, http.StatusUnauthorized).Message("user already deleted")
return
}
// Delete the previous Refresh Token
err = deleteAuth(c.Request.Context(), refreshUuid)
if err != nil {

View File

@@ -1,14 +1,35 @@
package router
import (
"path"
"github.com/gin-gonic/gin"
"github.com/toolkits/pkg/file"
"github.com/toolkits/pkg/ginx"
"github.com/toolkits/pkg/runner"
"github.com/didi/nightingale/v5/src/webapi/config"
)
func metricsDescGetFile(c *gin.Context) {
c.JSON(200, config.MetricDesc)
fp := config.C.MetricsYamlFile
if fp == "" {
fp = path.Join(runner.Cwd, "etc", "metrics.yaml")
}
if !file.IsExist(fp) {
c.String(404, "%s not found", fp)
return
}
ret := make(map[string]string)
err := file.ReadYaml(fp, &ret)
if err != nil {
c.String(500, err.Error())
return
}
c.JSON(200, ret)
}
// The frontend sends an array of metric names; the backend looks up their descriptions and returns them as a map.
@@ -17,8 +38,13 @@ func metricsDescGetMap(c *gin.Context) {
ginx.BindJSON(c, &arr)
ret := make(map[string]string)
for _, key := range arr {
ret[key] = config.GetMetricDesc(c.GetHeader("X-Language"), key)
for i := 0; i < len(arr); i++ {
desc, has := config.Metrics.Get(arr[i])
if !has {
ret[arr[i]] = ""
} else {
ret[arr[i]] = desc.(string)
}
}
ginx.NewRender(c).Data(ret, nil)

View File

@@ -59,7 +59,7 @@ func proxyAuth() gin.HandlerFunc {
return func(c *gin.Context) {
user := handleProxyUser(c)
c.Set("userid", user.Id)
c.Set("username", user.Username)
c.Set("username", user)
c.Next()
}
}
@@ -119,6 +119,7 @@ func jwtMock() gin.HandlerFunc {
"refresh_token": "",
}, nil)
c.Abort()
return
}
}

View File

@@ -32,15 +32,21 @@ type batchQueryForm struct {
func promBatchQueryRange(c *gin.Context) {
xcluster := c.GetHeader("X-Cluster")
if xcluster == "" {
ginx.Bomb(http.StatusBadRequest, "header(X-Cluster) is blank")
c.String(500, "X-Cluster is blank")
return
}
var f batchQueryForm
ginx.Dangerous(c.BindJSON(&f))
err := c.BindJSON(&f)
if err != nil {
c.String(500, err.Error())
return
}
cluster, exist := prom.Clusters.Get(xcluster)
if !exist {
ginx.Bomb(http.StatusBadRequest, "cluster(%s) not found", xcluster)
c.String(http.StatusBadRequest, "cluster(%s) not found", xcluster)
return
}
var lst []model.Value
@@ -53,12 +59,15 @@ func promBatchQueryRange(c *gin.Context) {
}
resp, _, err := cluster.PromClient.QueryRange(context.Background(), item.Query, r)
ginx.Dangerous(err)
if err != nil {
c.String(500, err.Error())
return
}
lst = append(lst, resp)
}
ginx.NewRender(c).Data(lst, nil)
c.JSON(200, lst)
}
func prometheusProxy(c *gin.Context) {

View File

@@ -1,27 +1,21 @@
package router
import (
"context"
"fmt"
"net/http"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/ginx"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/webapi/config"
"github.com/didi/nightingale/v5/src/webapi/prom"
)
func targetGets(c *gin.Context) {
bgid := ginx.QueryInt64(c, "bgid", -1)
query := ginx.QueryStr(c, "query", "")
limit := ginx.QueryInt(c, "limit", 30)
mins := ginx.QueryInt(c, "mins", 2)
clusters := queryClusters(c)
total, err := models.TargetTotal(bgid, clusters, query)
@@ -32,60 +26,8 @@ func targetGets(c *gin.Context) {
if err == nil {
cache := make(map[int64]*models.BusiGroup)
targetsMap := make(map[string]*models.Target)
for i := 0; i < len(list); i++ {
ginx.Dangerous(list[i].FillGroup(cache))
targetsMap[list[i].Cluster+list[i].Ident] = list[i]
}
now := time.Now()
// query LoadPerCore / MemUtil / TargetUp / DiskUsedPercent from prometheus
// map key: cluster, map value: ident list
targets := make(map[string][]string)
for i := 0; i < len(list); i++ {
targets[list[i].Cluster] = append(targets[list[i].Cluster], list[i].Ident)
}
for cluster := range targets {
cc, has := prom.Clusters.Get(cluster)
if !has {
continue
}
targetArr := targets[cluster]
if len(targetArr) == 0 {
continue
}
targetRe := strings.Join(targetArr, "|")
valuesMap := make(map[string]map[string]float64)
for metric, ql := range config.C.TargetMetrics {
promql := fmt.Sprintf(ql, targetRe, mins)
values, err := instantQuery(context.Background(), cc, promql, now)
ginx.Dangerous(err)
valuesMap[metric] = values
}
// handle values
for metric, values := range valuesMap {
for ident := range values {
mapkey := cluster + ident
if t, has := targetsMap[mapkey]; has {
switch metric {
case "LoadPerCore":
t.LoadPerCore = values[ident]
case "MemUtil":
t.MemUtil = values[ident]
case "TargetUp":
t.TargetUp = values[ident]
case "DiskUtil":
t.DiskUtil = values[ident]
}
}
}
}
}
}
@@ -95,29 +37,6 @@ func targetGets(c *gin.Context) {
}, nil)
}
func instantQuery(ctx context.Context, c *prom.ClusterType, promql string, ts time.Time) (map[string]float64, error) {
ret := make(map[string]float64)
val, warnings, err := c.PromClient.Query(ctx, promql, ts)
if err != nil {
return ret, err
}
if len(warnings) > 0 {
return ret, fmt.Errorf("instant query occur warnings, promql: %s, warnings: %v", promql, warnings)
}
vectors := conv.ConvertVectors(val)
for i := range vectors {
ident, has := vectors[i].Labels["ident"]
if has {
ret[string(ident)] = vectors[i].Value
}
}
return ret, nil
}
func targetGetTags(c *gin.Context) {
idents := ginx.QueryStr(c, "idents")
idents = strings.ReplaceAll(idents, ",", " ")