Compare commits

..

1 commit

Author SHA1 Message Date
Ulric Qin
92163f44e2 add configuration ForceUseServerTS 2022-08-22 23:22:37 +08:00
130 changed files with 2221 additions and 9849 deletions


@@ -1,6 +1,7 @@
<p align="center">
<a href="https://github.com/ccfos/nightingale">
<img src="doc/img/nightingale_logo_h.png" alt="nightingale - cloud native monitoring" width="240" /></a>
<img src="doc/img/ccf-n9e.png" alt="nightingale - cloud native monitoring" width="240" /></a>
<p align="center">Nightingale is an open-source cloud-native monitoring system with an all-in-one design, enterprise-grade features, and an out-of-the-box product experience. We recommend upgrading your Prometheus + AlertManager + Grafana stack to Nightingale.</p>
</p>
<p align="center">
@@ -10,23 +11,14 @@
<a href="https://hub.docker.com/u/flashcatcloud">
<img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
<img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
<img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
<img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
<img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
<a href="https://github.com/ccfos/nightingale/graphs/contributors">
<img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
<img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
</p>
<p align="center">
<b>All-in-one</b> open-source cloud-native monitoring system <br/>
<b>Out of the box</b>: data collection, visualization, and alerting in one <br/>
We recommend upgrading your <b>Prometheus + AlertManager + Grafana</b> stack to Nightingale!
</p>
[English](./README_EN.md) | [中文](./README.md)
## Highlighted Features
- **Out of the box**
@@ -34,59 +26,60 @@
- **Professional alerting**
  - Visual alert configuration and management, rich alert rules, mute and subscription rules, multiple notification channels, alert self-healing, alert event management, and more;
- **Cloud native**
  - Build an enterprise-grade cloud-native monitoring stack in a turnkey fashion. Supports multiple collectors such as [**Categraf**](https://github.com/flashcatcloud/categraf), Telegraf, and Grafana-agent, and multiple databases such as Prometheus, VictoriaMetrics, M3DB, and ElasticSearch; Grafana dashboards can be imported; **integrates seamlessly with the cloud-native ecosystem**;
- **High performance, high availability**
  - Thanks to Nightingale's multi-data-source management engine and the solid architecture of its alerting engine, backed by high-performance time-series databases, it can cover collection, storage, and alert analysis for hundreds of millions of time series while saving substantial cost;
  - All Nightingale components scale horizontally with no single point of failure. Deployed at over a thousand companies and hardened by demanding production use; several leading internet companies run Nightingale clusters of a hundred machines, handling hundreds of millions of time series;
- **Flexible extension, centralized management**
  - Nightingale can be deployed on a 1-core, 1 GB cloud host, across a cluster of hundreds of machines, or inside Kubernetes. Time-series databases, alert engines, and other components can also sink down to individual data centers and regions, combining edge deployment with centralized management, **solving fragmented data and the lack of a unified view**;
- **Open community**
  - Hosted by the [Open Source Development Committee of the China Computer Federation](https://www.ccf.org.cn/kyfzwyh/), with sustained investment from [**Flashcat**](https://flashcat.cloud) and many other companies, the active participation of thousands of community users, and the project's clear positioning, the community is set up for healthy, long-term growth; active, professional users keep iterating and distilling more best practices into the product;
**If you are using Prometheus and have one or more of the following needs, we recommend a seamless upgrade to Nightingale:**
- Prometheus, Alertmanager, Grafana, and the rest of the stack are fragmented, lacking a unified view and an out-of-the-box experience;
- Managing Prometheus and Alertmanager by editing configuration files has a steep learning curve and makes collaboration hard;
- Your data volume has outgrown what your Prometheus cluster can scale to;
- You run multiple Prometheus clusters in production and face high management and usage costs;
**If you are using Zabbix and have the following scenarios, we recommend upgrading to Nightingale:**
- The volume of monitoring data is too large and you want a more scalable solution;
- The learning curve is steep, and you want better collaboration efficiency in a multi-user, multi-team setting;
- Under microservice and cloud-native architectures, monitoring data has a changeable life cycle and high-cardinality dimensions that the Zabbix data model does not fit well;
> For more on how Zabbix and Nightingale compare, see [Zabbix vs. Nightingale](https://flashcat.cloud/blog/zabbx-vs-nightingale/)
**If you are using [Open-Falcon](https://github.com/open-falcon/falcon-plus), we recommend upgrading to Nightingale:**
- For a detailed introduction to Open-Falcon and Nightingale, see [Ten Characteristics and Trends of Cloud-Native Monitoring](http://flashcat.cloud/blog/10-trends-of-cloudnative-monitoring/)
**We recommend [Categraf](https://github.com/flashcatcloud/categraf) as the preferred collector of monitoring data:**
- [Categraf](https://github.com/flashcatcloud/categraf) is Nightingale's default collector. Built on an open plugin mechanism and an all-in-one design, it collects metrics, logs, traces, and events. Beyond system-level metrics such as CPU, memory, and network, it integrates collection capabilities for many open-source components and supports the Kubernetes ecosystem, shipping with matching dashboards and alert rules out of the box.
## Getting Started
[Documentation (international)](https://n9e.github.io/) | [Documentation (China)](http://n9e.flashcat.cloud/)
- [Quick install](https://mp.weixin.qq.com/s/iEC4pfL1TgjMDOWYh8H-FA)
- [Detailed docs](https://n9e.github.io/)
- [Community sharing](https://n9e.github.io/docs/prologue/share/)
## Screenshots
https://user-images.githubusercontent.com/792850/216888712-2565fcea-9df5-47bd-a49e-d60af9bd76e8.mp4
<img src="doc/img/intro.gif" width="480">
## Architecture
<img src="doc/img/arch-product.png" width="480">
Nightingale receives monitoring data reported by various collectors (such as [Categraf](https://github.com/flashcatcloud/categraf), Telegraf, Grafana-agent, and Prometheus) and writes it to popular time-series databases (Prometheus, M3DB, VictoriaMetrics, Thanos, TDengine, and more). It provides configuration of alert rules, mute rules, and subscription rules; viewing of monitoring data; an alert self-healing mechanism (automatically calling a webhook or running a script after an alert fires); and storage, management, and grouped views of historical alert events.
<img src="doc/img/arch-system.png" width="480">
The design of Nightingale v5 is deliberately simple, centered on two modules: server and webapi. webapi is stateless and deployed centrally; it serves frontend requests and writes user configuration to the database. server is the alerting engine and data-forwarding module; it usually lives alongside the time-series database, one server set per TSDB, and each set can run as a single instance or as a cluster. server accepts data reported by Categraf, Telegraf, Grafana-Agent, Datadog-Agent, and Falcon plugins, writes it to the backend TSDB, periodically syncs alert rules from the database, and queries the TSDB to evaluate alerts. Each server set depends on one Redis.
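The two-module layout described above can be sketched as a minimal compose file. This is a hypothetical illustration only: the image tag, binary path, and subcommand form are assumptions, so consult the repository's own docker-compose for real values.

```yaml
# Hypothetical minimal deployment of the v5 layout: one stateless webapi,
# one server set, one Redis. Image tag and command paths are assumptions.
services:
  redis:
    image: redis:6
  webapi:
    image: flashcatcloud/nightingale:latest
    command: ["/app/n9e", "webapi"]   # stateless, serves the frontend
  server:
    image: flashcatcloud/nightingale:latest
    command: ["/app/n9e", "server"]   # alert engine + data forwarding
    depends_on:
      - redis                         # each server set needs one Redis
```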
<img src="doc/img/install-vm.png" width="480">
If a single-node time-series database (such as Prometheus) hits a performance bottleneck or lacks disaster recovery, we recommend [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics). VictoriaMetrics has a simple architecture, excellent performance, and is easy to deploy and operate; the architecture is shown above. For more detail, see the VictoriaMetrics [website](https://victoriametrics.com/).
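Concretely, switching to a single-node VictoriaMetrics mostly means repointing the read and write endpoints in the `[Reader]` and `[[Writers]]` sections that appear later in this changeset. A sketch, where the host name is an assumption about your deployment and 8428 is VictoriaMetrics' default HTTP port:

```toml
[Reader]
# vmsingle serves the Prometheus-compatible query API on its HTTP port
Url = "http://victoriametrics:8428"

[[Writers]]
# vmsingle accepts Prometheus remote write at /api/v1/write
Url = "http://victoriametrics:8428/api/v1/write"
```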
@@ -103,15 +96,16 @@ https://user-images.githubusercontent.com/792850/216888712-2565fcea-9df5-47bd-a4
**Respecting, recognizing, and recording every contributor's work** is the first guiding principle of the Nightingale open-source community. We advocate **asking questions effectively**: it respects developers' time and adds to the community's shared knowledge:
- Before asking, check the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq)
- Before asking, search existing [GitHub issues](https://github.com/ccfos/nightingale/issues)
- We recommend asking via a GitHub issue first: [report a bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml) | [suggest a feature](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
- Finally, we recommend joining the WeChat group for open-ended discussion (first add [UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) as a friend, with the note "夜莺加群 + name + company"; the developer team and professional, helpful community members answer questions there)
## Who is using Nightingale
Register your usage and share your experience in **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)**.
## Stargazers over time
[![Stargazers over time](https://starchart.cc/ccfos/nightingale.svg)](https://starchart.cc/ccfos/nightingale)
## Contributors


@@ -3,5 +3,3 @@
- [xiaoziv](https://github.com/xiaoziv)
- [tanxiao1990](https://github.com/tanxiao1990)
- [bbaobelief](https://github.com/bbaobelief)
- [freedomkk-qfeng](https://github.com/freedomkk-qfeng)
- [lsy1990](https://github.com/lsy1990)


@@ -1,36 +1,29 @@
[Nightingale](https://github.com/ccfos/nightingale) is an open-source cloud-native monitoring system originally designed and developed at Didi. Since being open-sourced in March 2020, its strong product design, flexible architecture, and clear positioning have made it one of the most active enterprise-grade cloud-native monitoring projects in China. As of this writing (August 2022), it has shipped more than **70** releases on [GitHub](https://github.com/ccfos/nightingale), earned over **5K** stars, and attracted more than **80** code contributors. Rapid iteration keeps growing its user base across many industries.
Furthermore, on May 11, 2022, Nightingale was formally donated to the CCF Open Source Development Committee ([CCF ODC](https://www.ccf.org.cn/kyfzwyh/)), becoming the first open-source project the committee accepted after its founding.
An open-source project stays vital through an open governance structure and a continuous stream of contributors. Having joined the CCF open-source family, Nightingale can, with the federation's support, combine the needs of cloud native, observability, and localization, establish an open and neutral governance structure, and grow a more professional and vibrant developer community.
**Today we formally publish the governance structure of the Nightingale open-source community and announce the related appointments and community honors. We look forward to walking the open-source road together.**
# Nightingale Open-Source Community Structure
### User
> Any individual, company, or organization is welcome to use Nightingale, report bugs, file feature requests, and help one another. We recommend using [GitHub issues](https://github.com/ccfos/nightingale/issues) to track bugs and manage requests.
Community users who register their usage in **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)** and share their experience are automatically added to the **[END USERS](./end-users.md)** list and receive the community's **VIP Support**.
### Contributor
> Every user is welcome to take part in and contribute to the Nightingale community, including but not limited to:
1. Actively joining discussions in [GitHub issues](https://github.com/ccfos/nightingale/issues) and community activities;
1. Submitting code patches;
1. Translating, revising, supplementing, and improving the [documentation](https://n9e.github.io);
1. Sharing experience with Nightingale and actively evangelizing it;
1. Submitting suggestions or criticism;
Anyone who has **5** PRs merged into [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) within a year, or whose other contributions are unanimously recognized by the **Project Management Committee**, is automatically added to the **[ACTIVE CONTRIBUTORS](./active-contributors.md)** list and receives an electronic certificate issued by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)**, along with certain rights and benefits in the community.
Every Contributor who has had a PR merged into [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) or has made an important contribution is permanently recorded in the [CONTRIBUTORS](https://github.com/ccfos/nightingale/blob/main/doc/contributors.md) list.
### Committer
> A Committer is a contributor with write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) repository, holding an email address with the ccf.org.cn suffix (pending launch). In principle a Committer can decide on their own whether a patch is merged into the Nightingale repository, but the Project Management Committee has the final say.
Committers take on one or more of the following duties:
- Responding actively to issues;
@@ -38,43 +31,43 @@ Committers take on one or more of the following duties:
- Attending regular developer meetings and actively discussing project planning and technical proposals;
- Representing the Nightingale community at technical conferences and giving talks;
Committers are recorded and published in the **[COMMITTERS](./committers.md)** list and receive an electronic certificate issued by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)**, along with the community's various rights and benefits.
### PMC (Project Management Committee)
> The Project Management Committee is the body that manages and leads the Nightingale project and is fully responsible for its development. PMC matters are recorded and published in [PMC](./pmc.md).
- PMC Members are elected from Contributors or Committers. They have write access to the [CCFOS/NIGHTINGALE](https://github.com/ccfos/nightingale) repository, hold email addresses with the ccf.org.cn suffix (pending launch), have voting rights on Nightingale community matters, and may nominate Committer candidates.
- The PMC Chair is appointed by **[CCF ODC](https://www.ccf.org.cn/kyfzwyh/)** from among the PMC Members. The Chair is the communication bridge between CCF ODC and the PMC and performs specific project-management duties.
## Communication
1. We recommend the mailing list for feedback and suggestions (to be announced);
2. We recommend [GitHub issues](https://github.com/ccfos/nightingale/issues) for tracking bugs and managing requests;
3. We recommend [GitHub milestones](https://github.com/ccfos/nightingale/milestones) for managing project progress and planning;
4. We recommend Tencent Meeting for regular project meetings (meeting ID to be announced);
## Documentation
1. We recommend [GitHub Pages](https://n9e.github.io) for accumulating documentation;
2. We recommend the [GitLink wiki](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq) for accumulating the FAQ;
## Operation
1. We regularly hold meetings between users, contributors, and PMC members to discuss development goals, proposals, and progress, as well as the merit and priority of feature requests;
2. We regularly organize meetups (online and offline) to create a good environment for users to exchange experience, and distill the content into the documentation site;
3. We regularly hold the Nightingale developer conference to share best user stories, sync annual development goals and plans, and discuss new technical directions;
## Guiding Principle
**Respect, recognize, and record every contributor's work.**
## Asking Questions
Following the principle of **respecting, recognizing, and recording every contributor's work**, we advocate **asking questions effectively**: it respects developers' time and adds to the community's shared knowledge:
1. Before asking, check the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq)
2. Before asking, search existing [GitHub issues](https://github.com/ccfos/nightingale/issues)
3. We recommend asking via a GitHub issue first: [report a bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&template=bug_report.yml) | [suggest a feature](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
Finally, we recommend joining the WeChat group for open-ended discussion (first add [UlricGO](https://www.gitlink.org.cn/UlricQin/gist/tree/master/self.jpeg) as a friend, with the note "夜莺加群 + name + company"; the developer team and professional, helpful community members answer questions there)

Binary file not shown (image removed; 118 KiB before)

Binary file not shown (image removed; 131 KiB before)

Binary file not shown (image removed; 146 KiB before)


@@ -1,7 +1,7 @@
### PMC Chair
## PMC Chair
- [laiwei](https://github.com/laiwei)
### PMC Co-Chair
- [UlricQin](https://github.com/UlricQin)
## PMC Member
### PMC Member
- [UlricQin](https://github.com/UlricQin)


@@ -48,4 +48,4 @@ max_idle_conns_per_host = 100
enable = false
address = ":9100"
print_access = false
run_mode = "release"


@@ -36,7 +36,7 @@ services:
- nightingale
prometheus:
image: prom/prometheus:v2.37.5
image: prom/prometheus
container_name: prometheus
hostname: prometheus
restart: always
@@ -53,7 +53,7 @@ services:
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/usr/share/prometheus/console_libraries"
- "--web.console.templates=/usr/share/prometheus/consoles"
- "--web.enable-remote-write-receiver"
- "--enable-feature=remote-write-receiver"
- "--query.lookback-delta=2m"
ibex:
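The two receiver flags swapped in this hunk are the same switch spelled for different Prometheus versions, so the valid spelling depends on the image tag pinned above (the version boundary below is a best-effort recollection of the Prometheus release notes, not something stated in this changeset):

```yaml
command:
  # Prometheus >= 2.33 exposes the stable flag:
  - "--web.enable-remote-write-receiver"
  # Older releases used the feature-flag spelling instead:
  # - "--enable-feature=remote-write-receiver"
```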


@@ -41,12 +41,10 @@ CREATE TABLE `user_group` (
insert into user_group(id, name, create_at, create_by, update_at, update_by) values(1, 'demo-root-group', unix_timestamp(now()), 'root', unix_timestamp(now()), 'root');
CREATE TABLE `user_group_member` (
`id` bigint unsigned not null auto_increment,
`group_id` bigint unsigned not null,
`user_id` bigint unsigned not null,
KEY (`group_id`),
KEY (`user_id`),
PRIMARY KEY(`id`)
KEY (`user_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
insert into user_group_member(group_id, user_id) values(1, 1);
@@ -72,12 +70,10 @@ insert into `role`(name, note) values('Standard', 'Ordinary user role');
insert into `role`(name, note) values('Guest', 'Readonly user role');
CREATE TABLE `role_operation`(
`id` bigint unsigned not null auto_increment,
`role_name` varchar(128) not null,
`operation` varchar(191) not null,
KEY (`role_name`),
KEY (`operation`),
PRIMARY KEY(`id`)
KEY (`operation`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
-- Admin is special, who has no concrete operation but can do anything.
@@ -165,16 +161,13 @@ CREATE TABLE `board` (
`id` bigint unsigned not null auto_increment,
`group_id` bigint not null default 0 comment 'busi group id',
`name` varchar(191) not null,
`ident` varchar(200) not null default '',
`tags` varchar(255) not null comment 'split by space',
`public` tinyint(1) not null default 0 comment '0:false 1:true',
`create_at` bigint not null default 0,
`create_by` varchar(64) not null default '',
`update_at` bigint not null default 0,
`update_by` varchar(64) not null default '',
PRIMARY KEY (`id`),
UNIQUE KEY (`group_id`, `name`),
KEY(`ident`)
UNIQUE KEY (`group_id`, `name`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
-- for dashboard new version
@@ -246,9 +239,9 @@ CREATE TABLE `alert_rule` (
`prom_for_duration` int not null comment 'prometheus for, unit:s',
`prom_ql` text not null comment 'promql',
`prom_eval_interval` int not null comment 'evaluate interval',
`enable_stime` char(255) not null default '00:00',
`enable_etime` char(255) not null default '23:59',
`enable_days_of_week` varchar(255) not null default '' comment 'eg: "0 1 2 3 4 5 6 ; 0 1 2"',
`enable_stime` char(5) not null default '00:00',
`enable_etime` char(5) not null default '23:59',
`enable_days_of_week` varchar(32) not null default '' comment 'split by space: 0 1 2 3 4 5 6',
`enable_in_bg` tinyint(1) not null default 0 comment '1: only this bg 0: global',
`notify_recovered` tinyint(1) not null comment 'whether notify when recovery',
`notify_channels` varchar(255) not null default '' comment 'split by space: sms voice email dingtalk wecom',
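The three `enable_*` columns tightened in this hunk encode the rule's active window. A sketch of how they can be evaluated; the 0-means-Sunday convention is an assumption inferred from the column comment, and the real engine is Go code not shown here:

```python
from datetime import datetime

def rule_enabled(now: datetime,
                 stime: str = "00:00",
                 etime: str = "23:59",
                 days_of_week: str = "0 1 2 3 4 5 6") -> bool:
    """Is an alert rule inside its enabled window at `now`?"""
    dow = str((now.weekday() + 1) % 7)  # map Python's Mon=0 to the assumed Sun=0
    hhmm = now.strftime("%H:%M")        # "HH:MM" compares correctly as a string
    return dow in days_of_week.split() and stime <= hhmm <= etime

# 2022-08-22 (the commit date) was a Monday:
print(rule_enabled(datetime(2022, 8, 22, 10, 30)))                      # True
print(rule_enabled(datetime(2022, 8, 22, 10, 30), days_of_week="0 6"))  # False
```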
@@ -272,18 +265,14 @@ CREATE TABLE `alert_mute` (
`id` bigint unsigned not null auto_increment,
`group_id` bigint not null default 0 comment 'busi group id',
`prod` varchar(255) not null default '',
`note` varchar(1024) not null default '',
`cate` varchar(128) not null,
`cluster` varchar(128) not null,
`tags` varchar(4096) not null default '' comment 'json,map,tagkey->regexp|value',
`cause` varchar(255) not null default '',
`btime` bigint not null default 0 comment 'begin time',
`etime` bigint not null default 0 comment 'end time',
`disabled` tinyint(1) not null default 0 comment '0:enabled 1:disabled',
`create_at` bigint not null default 0,
`create_by` varchar(64) not null default '',
`update_at` bigint not null default 0,
`update_by` varchar(64) not null default '',
PRIMARY KEY (`id`),
KEY (`create_at`),
KEY (`group_id`)
@@ -291,8 +280,6 @@ CREATE TABLE `alert_mute` (
CREATE TABLE `alert_subscribe` (
`id` bigint unsigned not null auto_increment,
`name` varchar(255) not null default '',
`disabled` tinyint(1) not null default 0 comment '0:enabled 1:disabled',
`group_id` bigint not null default 0 comment 'busi group id',
`cate` varchar(128) not null,
`cluster` varchar(128) not null,
@@ -366,7 +353,7 @@ CREATE TABLE `recording_rule` (
`cluster` varchar(128) not null,
`name` varchar(255) not null comment 'new metric name',
`note` varchar(255) not null comment 'rule note',
`disabled` tinyint(1) not null default 0 comment '0:enabled 1:disabled',
`disabled` tinyint(1) not null comment '0:enabled 1:disabled',
`prom_ql` varchar(8192) not null comment 'promql',
`prom_eval_interval` int not null comment 'evaluate interval',
`append_tags` varchar(255) default '' comment 'split by space: service=n9e mod=api',
@@ -525,5 +512,6 @@ CREATE TABLE `alerting_engines`
`instance` varchar(128) not null default '' comment 'instance identification, e.g. 10.9.0.9:9090',
`cluster` varchar(128) not null default '' comment 'target reader cluster',
`clock` bigint not null,
PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
PRIMARY KEY (`id`),
UNIQUE KEY (`instance`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
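One consequence of the new `UNIQUE KEY (instance)`: heartbeat registration can be written as an idempotent upsert rather than insert-then-deduplicate. A MySQL sketch using the column names from the schema above; the actual registration code is Go and not shown in this changeset:

```sql
-- With UNIQUE KEY (instance), re-registering the same engine refreshes
-- its clock instead of inserting a duplicate row.
INSERT INTO alerting_engines (instance, cluster, clock)
VALUES ('10.9.0.9:9090', 'Default', UNIX_TIMESTAMP(NOW()))
ON DUPLICATE KEY UPDATE
  cluster = VALUES(cluster),
  clock   = VALUES(clock);
```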


@@ -43,11 +43,9 @@ CREATE INDEX user_group_update_at_idx ON user_group (update_at);
insert into user_group(id, name, create_at, create_by, update_at, update_by) values(1, 'demo-root-group', date_part('epoch',current_timestamp)::int, 'root', date_part('epoch',current_timestamp)::int, 'root');
CREATE TABLE user_group_member (
id bigserial,
group_id bigint not null,
user_id bigint not null
) ;
ALTER TABLE user_group_member ADD CONSTRAINT user_group_member_pk PRIMARY KEY (id);
CREATE INDEX user_group_member_group_id_idx ON user_group_member (group_id);
CREATE INDEX user_group_member_user_id_idx ON user_group_member (user_id);
@@ -74,11 +72,9 @@ insert into role(name, note) values('Standard', 'Ordinary user role');
insert into role(name, note) values('Guest', 'Readonly user role');
CREATE TABLE role_operation(
id bigserial,
role_name varchar(128) not null,
operation varchar(191) not null
) ;
ALTER TABLE role_operation ADD CONSTRAINT role_operation_pk PRIMARY KEY (id);
CREATE INDEX role_operation_role_name_idx ON role_operation (role_name);
CREATE INDEX role_operation_operation_idx ON role_operation (operation);
@@ -198,9 +194,7 @@ CREATE TABLE board (
id bigserial not null ,
group_id bigint not null default 0 ,
name varchar(191) not null,
ident varchar(200) not null default '',
tags varchar(255) not null ,
public smallint not null default 0,
create_at bigint not null default 0,
create_by varchar(64) not null default '',
update_at bigint not null default 0,
@@ -210,8 +204,6 @@ ALTER TABLE board ADD CONSTRAINT board_pk PRIMARY KEY (id);
ALTER TABLE board ADD CONSTRAINT board_un UNIQUE (group_id,"name");
COMMENT ON COLUMN board.group_id IS 'busi group id';
COMMENT ON COLUMN board.tags IS 'split by space';
COMMENT ON COLUMN board.public IS '0:false 1:true';
CREATE INDEX board_ident_idx ON board (ident);
-- for dashboard new version
CREATE TABLE board_payload (
@@ -267,7 +259,6 @@ CREATE INDEX chart_share_create_at_idx ON chart_share (create_at);
CREATE TABLE alert_rule (
id bigserial NOT NULL,
group_id int8 NOT NULL DEFAULT 0,
cate varchar(128) not null default '' ,
"cluster" varchar(128) NOT NULL,
"name" varchar(255) NOT NULL,
note varchar(1024) NOT NULL,
@@ -323,19 +314,14 @@ COMMENT ON COLUMN alert_rule.append_tags IS 'split by space: service=n9e mod=api
CREATE TABLE alert_mute (
id bigserial,
group_id bigint not null default 0 ,
cate varchar(128) not null default '' ,
prod varchar(255) NOT NULL DEFAULT '' ,
note varchar(1024) not null default '',
cluster varchar(128) not null,
tags varchar(4096) not null default '' ,
cause varchar(255) not null default '',
btime bigint not null default 0 ,
etime bigint not null default 0 ,
disabled smallint not null default 0 ,
create_at bigint not null default 0,
create_by varchar(64) not null default '',
update_at bigint not null default 0,
update_by varchar(64) not null default ''
create_by varchar(64) not null default ''
) ;
ALTER TABLE alert_mute ADD CONSTRAINT alert_mute_pk PRIMARY KEY (id);
CREATE INDEX alert_mute_create_at_idx ON alert_mute (create_at);
@@ -344,14 +330,10 @@ COMMENT ON COLUMN alert_mute.group_id IS 'busi group id';
COMMENT ON COLUMN alert_mute.tags IS 'json,map,tagkey->regexp|value';
COMMENT ON COLUMN alert_mute.btime IS 'begin time';
COMMENT ON COLUMN alert_mute.etime IS 'end time';
COMMENT ON COLUMN alert_mute.disabled IS '0:enabled 1:disabled';
CREATE TABLE alert_subscribe (
id bigserial,
"name" varchar(255) NOT NULL default '',
disabled int2 NOT NULL default 0 ,
group_id bigint not null default 0 ,
cate varchar(128) not null default '' ,
cluster varchar(128) not null,
rule_id bigint not null default 0,
tags jsonb not null ,
@@ -368,7 +350,6 @@ CREATE TABLE alert_subscribe (
ALTER TABLE alert_subscribe ADD CONSTRAINT alert_subscribe_pk PRIMARY KEY (id);
CREATE INDEX alert_subscribe_group_id_idx ON alert_subscribe (group_id);
CREATE INDEX alert_subscribe_update_at_idx ON alert_subscribe (update_at);
COMMENT ON COLUMN alert_subscribe.disabled IS '0:enabled 1:disabled';
COMMENT ON COLUMN alert_subscribe.group_id IS 'busi group id';
COMMENT ON COLUMN alert_subscribe.tags IS 'json,map,tagkey->regexp|value';
COMMENT ON COLUMN alert_subscribe.redefine_severity IS 'is redefine severity?';
@@ -435,7 +416,6 @@ insert into alert_aggr_view(name, rule, cate) values('By RuleName', 'field:rule_
CREATE TABLE alert_cur_event (
id bigserial NOT NULL,
cate varchar(128) not null default '' ,
"cluster" varchar(128) NOT NULL,
group_id int8 NOT NULL,
group_name varchar(255) NOT NULL DEFAULT ''::character varying,
@@ -489,7 +469,6 @@ COMMENT ON COLUMN alert_cur_event.tags IS 'merge data_tags rule_tags, split by ,
CREATE TABLE alert_his_event (
id bigserial NOT NULL,
is_recovered int2 NOT NULL,
cate varchar(128) not null default '' ,
"cluster" varchar(128) NOT NULL,
group_id int8 NOT NULL,
group_name varchar(255) NOT NULL DEFAULT ''::character varying,
@@ -610,5 +589,6 @@ CREATE TABLE alerting_engines
clock bigint not null
) ;
ALTER TABLE alerting_engines ADD CONSTRAINT alerting_engines_pk PRIMARY KEY (id);
ALTER TABLE alerting_engines ADD CONSTRAINT alerting_engines_un UNIQUE (instance);
COMMENT ON COLUMN alerting_engines.instance IS 'instance identification, e.g. 10.9.0.9:9090';
COMMENT ON COLUMN alerting_engines.cluster IS 'target reader cluster';


@@ -23,11 +23,6 @@ class Sender(object):
def send_feishu(cls, payload):
# already done in go code
pass
@classmethod
def send_mm(cls, payload):
# already done in go code
pass
@classmethod
def send_sms(cls, payload):


@@ -29,7 +29,6 @@ func (n *N9EPlugin) Notify(bs []byte) {
"dingtalk_robot_token",
"wecom_robot_token",
"feishu_robot_token",
"telegram_robot_token",
}
for _, ch := range channels {
if ret := gjson.GetBytes(bs, ch); ret.Exists() {
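The Go loop above probes each known contact key in the raw event with gjson. The same lookup in Python, for readers following the script-based notify path; the payload shape here is a simplified assumption, and `telegram_robot_token` is the key this changeset appears to drop from the list:

```python
import json

# Contact keys mirrored from the Go snippet above (after the removal).
CHANNELS = ["dingtalk_robot_token", "wecom_robot_token", "feishu_robot_token"]

def extract_tokens(raw: bytes) -> dict:
    """Return the robot tokens present in a JSON event payload."""
    event = json.loads(raw)
    return {ch: event[ch] for ch in CHANNELS if event.get(ch)}

payload = json.dumps({"wecom_robot_token": "abc123", "other": 1}).encode()
print(extract_tokens(payload))  # {'wecom_robot_token': 'abc123'}
```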


@@ -9,12 +9,7 @@ ClusterName = "Default"
BusiGroupLabelKey = "busigroup"
# sleep x seconds, then start judge engine
EngineDelay = 60
DisableUsageReport = true
# config | database
ReaderFrom = "config"
EngineDelay = 120
[Log]
# log write dir
@@ -73,12 +68,10 @@ InsecureSkipVerify = true
Batch = 5
[Alerting]
# timeout settings, unit: ms, default: 30000ms
Timeout=30000
TemplatesDir = "./etc/template"
NotifyConcurrency = 10
# use builtin go code notify
NotifyBuiltinChannels = ["email", "dingtalk", "wecom", "feishu", "feishucard","mm", "telegram"]
NotifyBuiltinChannels = ["email", "dingtalk", "wecom", "feishu"]
[Alerting.CallScript]
# built in sending capability in go code
@@ -90,8 +83,7 @@ ScriptPath = "./etc/script/notify.py"
Enable = false
# use a plugin via `go build -buildmode=plugin -o notify.so`
PluginPath = "./etc/script/notify.so"
# The first letter must be capitalized to be exported
Caller = "N9eCaller"
Caller = "n9eCaller"
[Alerting.RedisPub]
Enable = false
@@ -109,7 +101,7 @@ Headers = ["Content-Type", "application/json", "X-From", "N9E"]
[NoData]
Metric = "target_up"
# unit: second
Interval = 120
Interval = 15
[Ibex]
# callback: ${ibex}/${tplid}/${host}
@@ -144,7 +136,7 @@ MaxIdleConns = 50
# table prefix
TablePrefix = ""
# enable auto migrate or not
# EnableAutoMigrate = false
EnableAutoMigrate = false
[Reader]
# prometheus base url
@@ -155,18 +147,23 @@ BasicAuthUser = ""
BasicAuthPass = ""
# timeout settings, unit: ms
Timeout = 30000
DialTimeout = 3000
MaxIdleConnsPerHost = 100
DialTimeout = 10000
TLSHandshakeTimeout = 30000
ExpectContinueTimeout = 1000
IdleConnTimeout = 90000
# time duration, unit: ms
KeepAlive = 30000
MaxConnsPerHost = 0
MaxIdleConns = 100
MaxIdleConnsPerHost = 10
[WriterOpt]
# queue channel count
QueueCount = 1000
QueueCount = 100
# queue max size
QueueMaxSize = 1000000
QueueMaxSize = 200000
# once pop samples number from queue
QueuePopSize = 1000
# metric or ident
ShardingKey = "ident"
QueuePopSize = 2000
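A sketch of what `ShardingKey = "ident"` implies for the write queues above: samples from the same host hash to the same queue, which preserves per-ident ordering. The hash function here is illustrative, not the engine's actual one, and 1000 is one of the QueueCount values shown in this hunk:

```python
from zlib import crc32

QUEUE_COUNT = 1000  # one of the QueueCount values in the diff above

def pick_queue(ident: str, queue_count: int = QUEUE_COUNT) -> int:
    """Map a host ident to a stable queue index."""
    return crc32(ident.encode()) % queue_count

# The same ident always lands in the same queue:
print(pick_queue("host-01") == pick_queue("host-01"))  # True
```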
[[Writers]]
Url = "http://prometheus:9090/api/v1/write"
@@ -175,8 +172,8 @@ BasicAuthUser = ""
# Basic auth password
BasicAuthPass = ""
# timeout settings, unit: ms
Timeout = 10000
DialTimeout = 3000
Timeout = 30000
DialTimeout = 10000
TLSHandshakeTimeout = 30000
ExpectContinueTimeout = 1000
IdleConnTimeout = 90000
@@ -185,12 +182,6 @@ KeepAlive = 30000
MaxConnsPerHost = 0
MaxIdleConns = 100
MaxIdleConnsPerHost = 100
# [[Writers.WriteRelabels]]
# Action = "replace"
# SourceLabels = ["__address__"]
# Regex = "([^:]+)(?::\\d+)?"
# Replacement = "$1:80"
# TargetLabel = "__address__"
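What the commented-out `WriteRelabels` example would do, traced in Python. The regex and replacement are copied from the config; the full-string anchoring mirrors how Prometheus-style relabel rules match:

```python
import re

# Regex and replacement from the commented [[Writers.WriteRelabels]] block:
ADDRESS_RE = re.compile(r"([^:]+)(?::\d+)?")

def relabel_address(address: str) -> str:
    """Drop any port from __address__ and force :80, as the rule would."""
    m = ADDRESS_RE.fullmatch(address)  # relabel regexes match the whole string
    return m.expand(r"\1:80") if m else address

print(relabel_address("10.1.2.3:9100"))  # 10.1.2.3:80
print(relabel_address("10.1.2.3"))       # 10.1.2.3:80
```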
# [[Writers]]
# Url = "http://m3db:7201/api/v1/prom/remote/write"


@@ -1,15 +0,0 @@
{{ if .IsRecovered }}
**告警集群:** {{.Cluster}}
**级别状态:** S{{.Severity}} Recovered
**告警名称:** {{.RuleName}}
**恢复时间:** {{timeformat .LastEvalTime}}
**告警描述:** **服务已恢复**
{{- else }}
**告警集群:** {{.Cluster}}
**级别状态:** S{{.Severity}} Triggered
**告警名称:** {{.RuleName}}
**触发时间:** {{timeformat .TriggerTime}}
**发送时间:** {{timestamp}}
**触发时值:** {{.TriggerValue}}
{{if .RuleNote }}**告警描述:** **{{.RuleNote}}**{{end}}
{{- end -}}


@@ -1,7 +0,0 @@
**级别状态**: {{if .IsRecovered}}<font color="info">S{{.Severity}} Recovered</font>{{else}}<font color="warning">S{{.Severity}} Triggered</font>{{end}}
**规则标题**: {{.RuleName}}{{if .RuleNote}}
**规则备注**: {{.RuleNote}}{{end}}
**监控指标**: {{.TagsJSON}}
{{if .IsRecovered}}**恢复时间**{{timeformat .LastEvalTime}}{{else}}**触发时间**: {{timeformat .TriggerTime}}
**触发时值**: {{.TriggerValue}}{{end}}
**发送时间**: {{timestamp}}


@@ -4,21 +4,12 @@ RunMode = "release"
# # custom i18n dict config
# I18N = "./etc/i18n.json"
# # custom i18n request header key
# I18NHeaderKey = "X-Language"
# metrics descriptions
MetricsYamlFile = "./etc/metrics.yaml"
BuiltinAlertsDir = "./etc/alerts"
BuiltinDashboardsDir = "./etc/dashboards"
# config | api
ClustersFrom = "config"
# using when ClustersFrom = "api"
ClustersFromAPIs = []
[[NotifyChannels]]
Label = "邮箱"
# do not change Key
@@ -39,21 +30,6 @@ Label = "飞书机器人"
# do not change Key
Key = "feishu"
[[NotifyChannels]]
Label = "飞书机器人消息卡片"
# do not change Key
Key = "feishucard"
[[NotifyChannels]]
Label = "mm bot"
# do not change Key
Key = "mm"
[[NotifyChannels]]
Label = "telegram机器人"
# do not change Key
Key = "telegram"
[[ContactKeys]]
Label = "Wecom Robot Token"
# do not change Key
@@ -69,16 +45,6 @@ Label = "Feishu Robot Token"
# do not change Key
Key = "feishu_robot_token"
[[ContactKeys]]
Label = "MatterMost Webhook URL"
# do not change Key
Key = "mm_webhook_url"
[[ContactKeys]]
Label = "Telegram Robot Token"
# do not change Key
Key = "telegram_robot_token"
[Log]
# log write dir
Dir = "logs"
@@ -126,13 +92,6 @@ AccessExpired = 1500
RefreshExpired = 10080
RedisKeyPrefix = "/jwt/"
[ProxyAuth]
# if proxy auth enabled, jwt auth is disabled
Enable = false
# username key in http proxy header
HeaderUserNameKey = "X-User-Name"
DefaultRoles = ["Standard"]
[BasicAuth]
user001 = "ccc26da7b9aba533cbb263a36c07dcc5"
@@ -162,20 +121,6 @@ Nickname = "cn"
Phone = "mobile"
Email = "mail"
[OIDC]
Enable = false
RedirectURL = "http://n9e.com/callback"
SsoAddr = "http://sso.example.org"
ClientId = ""
ClientSecret = ""
CoverAttributes = true
DefaultRoles = ["Standard"]
[OIDC.Attributes]
Nickname = "nickname"
Phone = "phone_number"
Email = "email"
[Redis]
# address, ip:port
Address = "redis:6379"
@@ -200,7 +145,7 @@ MaxIdleConns = 50
# table prefix
TablePrefix = ""
# enable auto migrate or not
# EnableAutoMigrate = false
EnableAutoMigrate = false
[[Clusters]]
# Prometheus cluster name
@@ -213,7 +158,14 @@ BasicAuthUser = ""
BasicAuthPass = ""
# timeout settings, unit: ms
Timeout = 30000
DialTimeout = 3000
DialTimeout = 10000
TLSHandshakeTimeout = 30000
ExpectContinueTimeout = 1000
IdleConnTimeout = 90000
# time duration, unit: ms
KeepAlive = 30000
MaxConnsPerHost = 0
MaxIdleConns = 100
MaxIdleConnsPerHost = 100
[Ibex]
@@ -228,4 +180,4 @@ Timeout = 3000
TargetUp = '''max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)'''
LoadPerCore = '''max(max_over_time(system_load_norm_1{ident=~"(%s)"}[%dm])) by (ident)'''
MemUtil = '''100-max(max_over_time(mem_available_percent{ident=~"(%s)"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''
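The `TargetUp`/`LoadPerCore`/`MemUtil`/`DiskUtil` entries above are Go format strings rather than literal PromQL: `%s` receives a regex alternation of target idents and `%dm` a lookback window in minutes. A minimal sketch of that substitution (the `|`-joined ident list is an assumption inferred from the `=~` regex matcher):

```python
# Sketch of how the %s / %dm placeholders in the queries above might be
# filled in; the "|"-joined ident regex is an assumption.
TARGET_UP = 'max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)'

def build_query(tpl, idents, window_minutes):
    # %s -> regex alternation of idents, %d -> lookback window in minutes
    return tpl % ("|".join(idents), window_minutes)

print(build_query(TARGET_UP, ["host-a", "host-b"], 2))
# -> max(max_over_time(target_up{ident=~"(host-a|host-b)"}[2m])) by (ident)
```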

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,6 +1,4 @@
zh:
ip_conntrack_count: 连接跟踪表条目总数(单位:int, count)
ip_conntrack_max: 连接跟踪表最大容量(单位:int, size)
cpu_usage_idle: CPU空闲率(单位:%)
cpu_usage_active: CPU使用率(单位:%)
cpu_usage_system: CPU内核态时间占比(单位:%)
@@ -252,8 +250,6 @@ zh:
cloudwatch_aws_rds_write_throughput_sum: rds 写入吞吐量总和
en:
ip_conntrack_count: the number of entries in the conntrack table (unit: int, count)
ip_conntrack_max: the max capacity of the conntrack table (unit: int, size)
cpu_usage_idle: "CPU idle rate (unit: %)"
cpu_usage_active: "CPU usage rate (unit: %)"
cpu_usage_system: "CPU kernel state time proportion (unit: %)"


@@ -24,11 +24,6 @@ class Sender(object):
# already done in go code
pass
@classmethod
def send_mm(cls, payload):
# already done in go code
pass
@classmethod
def send_sms(cls, payload):
users = payload.get('event').get("notify_users_obj")


@@ -23,7 +23,6 @@ func (n *N9EPlugin) Notify(bs []byte) {
"dingtalk_robot_token",
"wecom_robot_token",
"feishu_robot_token",
"telegram_robot_token",
}
for _, ch := range channels {
if ret := gjson.GetBytes(bs, ch); ret.Exists() {


@@ -1,92 +0,0 @@
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import sys
import json
import requests
class Sender(object):
@classmethod
def send_email(cls, payload):
# already done in go code
pass
@classmethod
def send_wecom(cls, payload):
# already done in go code
pass
@classmethod
def send_dingtalk(cls, payload):
# already done in go code
pass
@classmethod
def send_feishu(cls, payload):
users = payload.get('event').get("notify_users_obj")
tokens = {}
phones = {}
for u in users:
if u.get("phone"):
phones[u.get("phone")] = 1
contacts = u.get("contacts")
if contacts.get("feishu_robot_token", ""):
tokens[contacts.get("feishu_robot_token", "")] = 1
headers = {
"Content-Type": "application/json;charset=utf-8",
"Host": "open.feishu.cn"
}
for t in tokens:
url = "https://open.feishu.cn/open-apis/bot/v2/hook/{}".format(t)
body = {
"msg_type": "text",
"content": {
"text": payload.get('tpls').get("feishu.tpl", "feishu.tpl not found")
},
"at": {
"atMobiles": list(phones.keys()),
"isAtAll": False
}
}
response = requests.post(url, headers=headers, data=json.dumps(body))
print(f"notify_feishu: token={t} status_code={response.status_code} response_text={response.text}")
@classmethod
def send_mm(cls, payload):
# already done in go code
pass
@classmethod
def send_sms(cls, payload):
pass
@classmethod
def send_voice(cls, payload):
pass
def main():
payload = json.load(sys.stdin)
with open(".payload", 'w') as f:
f.write(json.dumps(payload, indent=4))
for ch in payload.get('event').get('notify_channels'):
send_func_name = "send_{}".format(ch.strip())
if not hasattr(Sender, send_func_name):
print("function: {} not found".format(send_func_name))
continue
send_func = getattr(Sender, send_func_name)
send_func(payload)
def hello():
print("hello nightingale")
if __name__ == "__main__":
if len(sys.argv) == 1:
main()
elif sys.argv[1] == "hello":
hello()
else:
print("I am confused")
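The `main()` loop above dispatches on `"send_" + channel` via `getattr`, so wiring in a new channel only requires one more classmethod on `Sender`. A hypothetical sketch below follows that convention; the contact key `slack_webhook_url` and template name `slack.tpl` are assumed names, not part of the repository:

```python
class Sender(object):
    # Hypothetical extra channel following the send_<channel> convention used
    # by main(); "slack_webhook_url" and "slack.tpl" are assumed names.
    @classmethod
    def send_slack(cls, payload):
        users = payload.get('event', {}).get('notify_users_obj', [])
        # deduplicate webhook URLs across users, skipping users without one
        hooks = sorted({u.get('contacts', {}).get('slack_webhook_url')
                        for u in users
                        if u.get('contacts', {}).get('slack_webhook_url')})
        text = payload.get('tpls', {}).get('slack.tpl', 'slack.tpl not found')
        # a real sender would POST `text` to each webhook; returned here for clarity
        return [(hook, text) for hook in hooks]

payload = {
    'event': {'notify_channels': 'slack',
              'notify_users_obj': [
                  {'contacts': {'slack_webhook_url': 'https://hooks.example.com/T1'}},
                  {'contacts': {}}]},
    'tpls': {'slack.tpl': 'S2 Triggered: cpu_usage_active'},
}
print(Sender.send_slack(payload))
# -> [('https://hooks.example.com/T1', 'S2 Triggered: cpu_usage_active')]
```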


@@ -9,16 +9,13 @@ ClusterName = "Default"
BusiGroupLabelKey = "busigroup"
# sleep x seconds, then start judge engine
EngineDelay = 30
EngineDelay = 120
DisableUsageReport = true
DisableUsageReport = false
# config | database
ReaderFrom = "config"
# if true, target tags can rewrite labels defined in categraf config file
LabelRewrite = false
[Log]
# log write dir
Dir = "logs"
@@ -76,12 +73,10 @@ InsecureSkipVerify = true
Batch = 5
[Alerting]
# timeout settings, unit: ms, default: 30000ms
Timeout=30000
TemplatesDir = "./etc/template"
NotifyConcurrency = 10
# use builtin go code notify
NotifyBuiltinChannels = ["email", "dingtalk", "wecom", "feishu", "feishucard","mm", "telegram"]
NotifyBuiltinChannels = ["email", "dingtalk", "wecom", "feishu"]
[Alerting.CallScript]
# built in sending capability in go code
@@ -112,7 +107,7 @@ Headers = ["Content-Type", "application/json", "X-From", "N9E"]
[NoData]
Metric = "target_up"
# unit: second
Interval = 120
Interval = 15
[Ibex]
# callback: ${ibex}/${tplid}/${host}
@@ -135,8 +130,6 @@ Address = "127.0.0.1:6379"
RedisType = "standalone"
# Mastername for sentinel type
# MasterName = "mymaster"
# SentinelUsername = ""
# SentinelPassword = ""
[DB]
# postgres: host=%s port=%s user=%s dbname=%s password=%s sslmode=%s
@@ -168,28 +161,13 @@ Timeout = 30000
DialTimeout = 3000
MaxIdleConnsPerHost = 100
# [[Readers]]
# ClusterName = "Default"
# prometheus base url
# Url = "http://127.0.0.1:9090"
# Basic auth username
# BasicAuthUser = ""
# Basic auth password
# BasicAuthPass = ""
# timeout settings, unit: ms
# Timeout = 30000
# DialTimeout = 3000
# MaxIdleConnsPerHost = 100
[WriterOpt]
# queue channel count
QueueCount = 1000
QueueCount = 100
# queue max size
QueueMaxSize = 1000000
QueueMaxSize = 200000
# once pop samples number from queue
QueuePopSize = 1000
# metric or ident
ShardingKey = "ident"
QueuePopSize = 2000
[[Writers]]
Url = "http://127.0.0.1:9090/api/v1/write"
@@ -198,7 +176,6 @@ BasicAuthUser = ""
# Basic auth password
BasicAuthPass = ""
# timeout settings, unit: ms
Headers = ["X-From", "n9e"]
Timeout = 10000
DialTimeout = 3000
TLSHandshakeTimeout = 30000


@@ -1,15 +0,0 @@
{{ if .IsRecovered }}
**告警集群:** {{.Cluster}}
**级别状态:** S{{.Severity}} Recovered
**告警名称:** {{.RuleName}}
**恢复时间:** {{timeformat .LastEvalTime}}
**告警描述:** **服务已恢复**
{{- else }}
**告警集群:** {{.Cluster}}
**级别状态:** S{{.Severity}} Triggered
**告警名称:** {{.RuleName}}
**触发时间:** {{timeformat .TriggerTime}}
**发送时间:** {{timestamp}}
**触发时值:** {{.TriggerValue}}
{{if .RuleNote }}**告警描述:** **{{.RuleNote}}**{{end}}
{{- end -}}


@@ -1,7 +0,0 @@
级别状态: S{{.Severity}} {{if .IsRecovered}}Recovered{{else}}Triggered{{end}}
规则名称: {{.RuleName}}{{if .RuleNote}}
规则备注: {{.RuleNote}}{{end}}
监控指标: {{.TagsJSON}}
{{if .IsRecovered}}恢复时间:{{timeformat .LastEvalTime}}{{else}}触发时间: {{timeformat .TriggerTime}}
触发时值: {{.TriggerValue}}{{end}}
发送时间: {{timestamp}}


@@ -1,9 +0,0 @@
**级别状态**: {{if .IsRecovered}}<font color="info">S{{.Severity}} Recovered</font>{{else}}<font color="warning">S{{.Severity}} Triggered</font>{{end}}
**规则标题**: {{.RuleName}}{{if .RuleNote}}
**规则备注**: {{.RuleNote}}{{end}}{{if .TargetIdent}}
**监控对象**: {{.TargetIdent}}{{end}}
**监控指标**: {{.TagsJSON}}{{if not .IsRecovered}}
**触发时值**: {{.TriggerValue}}{{end}}
{{if .IsRecovered}}**恢复时间**: {{timeformat .LastEvalTime}}{{else}}**首次触发时间**: {{timeformat .FirstTriggerTime}}{{end}}
{{$time_duration := sub now.Unix .FirstTriggerTime }}{{if .IsRecovered}}{{$time_duration = sub .LastEvalTime .FirstTriggerTime }}{{end}}**持续时长**: {{humanizeDurationInterface $time_duration}}
**发送时间**: {{timestamp}}
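The 持续时长 (duration) line in the template above subtracts `FirstTriggerTime` from either `now` or `LastEvalTime`, depending on whether the event has recovered. The same arithmetic in plain Python, as a sketch of the template logic rather than the Go implementation:

```python
import time

def alert_duration(first_trigger_time, is_recovered, last_eval_time=None, now=None):
    # mirrors: $time_duration := sub now.Unix .FirstTriggerTime, overridden
    # with sub .LastEvalTime .FirstTriggerTime once the event has recovered
    end = last_eval_time if is_recovered else (now or int(time.time()))
    return end - first_trigger_time

print(alert_duration(1661100000, True, last_eval_time=1661103600))  # -> 3600
print(alert_duration(1661100000, False, now=1661100120))            # -> 120
```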


@@ -1,9 +1,7 @@
**级别状态**: {{if .IsRecovered}}<font color="info">S{{.Severity}} Recovered</font>{{else}}<font color="warning">S{{.Severity}} Triggered</font>{{end}}
**规则标题**: {{.RuleName}}{{if .RuleNote}}
**规则备注**: {{.RuleNote}}{{end}}{{if .TargetIdent}}
**监控对象**: {{.TargetIdent}}{{end}}
**监控指标**: {{.TagsJSON}}{{if not .IsRecovered}}
**规则备注**: {{.RuleNote}}{{end}}
**监控指标**: {{.TagsJSON}}
{{if .IsRecovered}}**恢复时间**: {{timeformat .LastEvalTime}}{{else}}**触发时间**: {{timeformat .TriggerTime}}
**触发时值**: {{.TriggerValue}}{{end}}
{{if .IsRecovered}}**恢复时间**: {{timeformat .LastEvalTime}}{{else}}**首次触发时间**: {{timeformat .FirstTriggerTime}}{{end}}
{{$time_duration := sub now.Unix .FirstTriggerTime }}{{if .IsRecovered}}{{$time_duration = sub .LastEvalTime .FirstTriggerTime }}{{end}}**持续时长**: {{humanizeDurationInterface $time_duration}}
**发送时间**: {{timestamp}}


@@ -39,21 +39,6 @@ Label = "飞书机器人"
# do not change Key
Key = "feishu"
[[NotifyChannels]]
Label = "飞书机器人消息卡片"
# do not change Key
Key = "feishucard"
[[NotifyChannels]]
Label = "mm bot"
# do not change Key
Key = "mm"
[[NotifyChannels]]
Label = "telegram机器人"
# do not change Key
Key = "telegram"
[[ContactKeys]]
Label = "Wecom Robot Token"
# do not change Key
@@ -69,16 +54,6 @@ Label = "Feishu Robot Token"
# do not change Key
Key = "feishu_robot_token"
[[ContactKeys]]
Label = "MatterMost Webhook URL"
# do not change Key
Key = "mm_webhook_url"
[[ContactKeys]]
Label = "Telegram Robot Token"
# do not change Key
Key = "telegram_robot_token"
[Log]
# log write dir
Dir = "logs"
@@ -164,7 +139,6 @@ Email = "mail"
[OIDC]
Enable = false
DisplayName = "OIDC登录"
RedirectURL = "http://n9e.com/callback"
SsoAddr = "http://sso.example.org"
ClientId = ""
@@ -177,54 +151,6 @@ Nickname = "nickname"
Phone = "phone_number"
Email = "email"
[CAS]
Enable = false
DisplayName = "CAS登录"
SsoAddr = "https://cas.example.com/cas/"
RedirectURL = "http://127.0.0.1:18000/callback/cas"
CoverAttributes = false
# cas user default roles
DefaultRoles = ["Standard"]
[CAS.Attributes]
Nickname = "nickname"
Phone = "phone_number"
Email = "email"
[OAuth]
Enable = false
DisplayName = "OAuth2登录"
RedirectURL = "http://127.0.0.1:18000/callback/oauth"
SsoAddr = "https://sso.example.com/oauth2/authorize"
TokenAddr = "https://sso.example.com/oauth2/token"
UserInfoAddr = "https://api.example.com/api/v1/user/info"
# "header" "querystring" "formdata"
TranTokenMethod = "header"
ClientId = ""
ClientSecret = ""
CoverAttributes = true
DefaultRoles = ["Standard"]
UserinfoIsArray = false
UserinfoPrefix = "data"
Scopes = ["profile", "email", "phone"]
[OAuth.Attributes]
# Username must be defined
Username = "username"
Nickname = "nickname"
Phone = "phone_number"
Email = "email"
# example
# # nested : UserinfoIsArray=false, UserinfoPrefix="data"
# # {"data":{"username":"123456","nickname":"姓名"},"code":0,"message":"ok"}
# # nested and array : UserinfoIsArray=true, UserinfoPrefix="data"
# # {"data":[{"username":"123456","nickname":"姓名"}],"code":0,"message":"ok"}
# # flat : UserinfoIsArray=false, UserinfoPrefix=""
# # {"username":"123456","nickname":"姓名"}
# # flat and array : UserinfoIsArray=true, UserinfoPrefix=""
# # [{"username":"123456","nickname":"姓名"}]
[Redis]
# address, ip:port or ip1:port,ip2:port for cluster and sentinel(SentinelAddrs)
Address = "127.0.0.1:6379"
@@ -237,8 +163,6 @@ Address = "127.0.0.1:6379"
RedisType = "standalone"
# Mastername for sentinel type
# MasterName = "mymaster"
# SentinelUsername = ""
# SentinelPassword = ""
[DB]
DSN="root:1234@tcp(127.0.0.1:3306)/n9e_v5?charset=utf8mb4&parseTime=True&loc=Local&allowNativePasswords=true"
@@ -270,7 +194,6 @@ BasicAuthPass = ""
Timeout = 30000
DialTimeout = 3000
MaxIdleConnsPerHost = 100
Headers = ["X-From", "n9e"]
[Ibex]
Address = "http://127.0.0.1:10090"
@@ -284,4 +207,4 @@ Timeout = 3000
TargetUp = '''max(max_over_time(target_up{ident=~"(%s)"}[%dm])) by (ident)'''
LoadPerCore = '''max(max_over_time(system_load_norm_1{ident=~"(%s)"}[%dm])) by (ident)'''
MemUtil = '''100-max(max_over_time(mem_available_percent{ident=~"(%s)"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''
DiskUtil = '''max(max_over_time(disk_used_percent{ident=~"(%s)", path="/"}[%dm])) by (ident)'''

go.mod

@@ -6,9 +6,9 @@ require (
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/gin-contrib/pprof v1.3.0
github.com/gin-gonic/gin v1.7.7
github.com/gin-gonic/gin v1.7.4
github.com/go-ldap/ldap/v3 v3.4.1
github.com/go-redis/redis/v9 v9.0.0-rc.1
github.com/go-redis/redis/v8 v8.11.3
github.com/gogo/protobuf v1.3.2
github.com/golang-jwt/jwt v3.2.2+incompatible
github.com/golang/protobuf v1.5.2
@@ -16,7 +16,6 @@ require (
github.com/google/uuid v1.3.0
github.com/json-iterator/go v1.1.12
github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7
github.com/mailru/easyjson v0.7.7
github.com/mattn/go-isatty v0.0.12
github.com/orcaman/concurrent-map v0.0.0-20210501183033-44dafcb38ecc
github.com/pkg/errors v0.9.1
@@ -24,7 +23,7 @@ require (
github.com/prometheus/common v0.32.1
github.com/prometheus/prometheus v2.5.0+incompatible
github.com/tidwall/gjson v1.14.0
github.com/toolkits/pkg v1.3.2
github.com/toolkits/pkg v1.2.9
github.com/urfave/cli/v2 v2.3.0
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df
@@ -59,7 +58,6 @@ require (
github.com/jackc/pgx/v4 v4.13.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/leodido/go-urn v1.2.0 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -74,10 +72,10 @@ require (
github.com/tidwall/pretty v1.2.0 // indirect
github.com/ugorji/go/codec v1.1.7 // indirect
go.uber.org/automaxprocs v1.4.0 // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d // indirect
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/appengine v1.6.6 // indirect
google.golang.org/genproto v0.0.0-20211007155348-82e027067bd4 // indirect
google.golang.org/grpc v1.41.0 // indirect

go.sum

@@ -89,7 +89,9 @@ github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/garyburd/redigo v1.6.2/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gin-contrib/pprof v1.3.0 h1:G9eK6HnbkSqDZBYbzG4wrjCsA4e+cvYAHUZw6W+W9K0=
@@ -97,9 +99,8 @@ github.com/gin-contrib/pprof v1.3.0/go.mod h1:waMjT1H9b179t3CxuG1cV3DHpga6ybizwf
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.6.2/go.mod h1:75u5sXoLsGZoRN5Sgbi1eraJ4GU3++wFwWzhwvtwp4M=
github.com/gin-gonic/gin v1.7.4 h1:QmUZXrvJ9qZ3GfWvQ+2wnW/1ePrTEJqPKMYEU3lD/DM=
github.com/gin-gonic/gin v1.7.4/go.mod h1:jD2toBW3GZUr5UMcdrwQA10I7RuaFOl/SGeDjXkfUtY=
github.com/gin-gonic/gin v1.7.7 h1:3DoBmSbJbZAWqXJC3SLjAPfutPJJRN1U5pALB7EeTTs=
github.com/gin-gonic/gin v1.7.7/go.mod h1:axIBovoeJpVj8S3BwE0uPMTeReE4+AfFtqpqaZ1qq1U=
github.com/go-asn1-ber/asn1-ber v1.5.1 h1:pDbRAunXzIUXfx4CB2QJFv5IuPiuoW+sWvr/Us009o8=
github.com/go-asn1-ber/asn1-ber v1.5.1/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
@@ -122,11 +123,12 @@ github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+
github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GOhaH6EGOAJShg8Id5JGkI=
github.com/go-playground/validator/v10 v10.4.1 h1:pH2c5ADXtd66mxoE0Zm9SUhxE20r7aM3F26W0hOn+GE=
github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
github.com/go-redis/redis/v9 v9.0.0-rc.1 h1:/+bS+yeUnanqAbuD3QwlejzQZ+4eqgfUtFTG4b+QnXs=
github.com/go-redis/redis/v9 v9.0.0-rc.1/go.mod h1:8et+z03j0l8N+DvsVnclzjf3Dl/pFHgRk+2Ct1qw66A=
github.com/go-redis/redis/v8 v8.11.3 h1:GCjoYp8c+yQTJfc0n69iwSiHjvuAdruxl7elnZCxgt8=
github.com/go-redis/redis/v8 v8.11.3/go.mod h1:xNJ9xDG09FsIPwh3bWdk+0oDWHbtF9rPN0F/oD9XeKc=
github.com/go-sql-driver/mysql v1.6.0 h1:BCTh4TKNUYmOmMUcQ3IipzF5prigylS7XXjEkfCHuOE=
github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -175,7 +177,8 @@ github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -196,6 +199,7 @@ github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
@@ -246,8 +250,6 @@ github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.2 h1:eVKgfIdy9b6zbWBMgFpfDPoAMifwSZagU9HmEU6zgiI=
github.com/jinzhu/now v1.1.2/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -280,8 +282,6 @@ github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
@@ -299,9 +299,17 @@ github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9G
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/gomega v1.21.1 h1:OB/euWYIExnPBohllTicTHmGTrMaqJ67nIu80j0/uEM=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.15.0 h1:WjP/FQ/sk43MRmnEcT+MlDw2TFvkrXlprrPST/IudjU=
github.com/onsi/gomega v1.15.0/go.mod h1:cIuvLEne0aoVhAgh/O6ac0Op8WWw9H6eYCriF+tEHG0=
github.com/orcaman/concurrent-map v0.0.0-20210501183033-44dafcb38ecc h1:Ak86L+yDSOzKFa7WM5bf5itSOo1e3Xh8bm5YCMUXIjQ=
github.com/orcaman/concurrent-map v0.0.0-20210501183033-44dafcb38ecc/go.mod h1:Lu3tH6HLW3feq74c2GC+jIMS/K2CFcDWnWD9XkenwhI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -365,16 +373,16 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/tidwall/gjson v1.14.0 h1:6aeJ0bzojgWLa82gDQHcx3S0Lr/O51I9bJ5nv6JFx5w=
github.com/tidwall/gjson v1.14.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/toolkits/pkg v1.3.2 h1:elEW//SWOO956RQymAwcxBHGBKhrvCUAfXDo8wAkmJs=
github.com/toolkits/pkg v1.3.2/go.mod h1:PvTBg/UxazPgBz6VaCM7FM7kJldjfVrsuN6k4HT/VuY=
github.com/toolkits/pkg v1.2.9 h1:zGlrJDl+2sMBoxBRIoMtAwvKmW5wctuji2+qHCecMKk=
github.com/toolkits/pkg v1.2.9/go.mod h1:ZUsQAOoaR99PSbes+RXSirvwmtd6+XIUvizCmrjfUYc=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v1.1.7 h1:2SvQaVZ1ouYrrKKwoSk2pzd4A9evlKJb9oTL+OaLUSs=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
@@ -416,9 +424,8 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s39J9CbLn7Cc5a7IC5rwsMQ=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -451,6 +458,7 @@ golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -474,6 +482,7 @@ golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
@@ -481,9 +490,10 @@ golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d h1:20cMwl2fHAzkJMEA+8J4JgqBQcQGzbisXo31MIeenXI=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -503,6 +513,7 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -516,8 +527,11 @@ golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -539,15 +553,15 @@ golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -558,9 +572,8 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -610,12 +623,14 @@ golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roY
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
@@ -711,12 +726,14 @@ gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df h1:n7WqCuqOuCbNr617RXOY0AWRXxgwEyPp2z+p0+hgMuE=
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df/go.mod h1:LRQQ+SO6ZHR7tOkpBDuZnXENFzX8qRjMDMyPD6BRkCw=
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
gopkg.in/square/go-jose.v2 v2.6.0 h1:NGk74WTnPKBNUhNzQX7PYcTLUjoq7mzKk2OKbvwk2iI=
gopkg.in/square/go-jose.v2 v2.6.0/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -726,8 +743,8 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gorm.io/driver/mysql v1.1.2 h1:OofcyE2lga734MxwcCW9uB4mWNXMr50uaGRVwQL2B0M=
gorm.io/driver/mysql v1.1.2/go.mod h1:4P/X9vSc3WTrhTLZ259cpFd6xKNYiSSdSZngkSBGIMM=
gorm.io/driver/postgres v1.1.1 h1:tWLmqYCyaoh89fi7DhM6QggujrOnmfo3H98AzgNAAu0=

View File

@@ -34,11 +34,6 @@ func newWebapiCmd() *cli.Command {
Aliases: []string{"c"},
Usage: "specify configuration file(.json,.yaml,.toml)",
},
&cli.StringFlag{
Name: "key",
Aliases: []string{"k"},
Usage: "specify the secret key for configuration file field encryption",
},
},
Action: func(c *cli.Context) error {
printEnv()
@@ -48,9 +43,6 @@ func newWebapiCmd() *cli.Command {
opts = append(opts, webapi.SetConfigFile(c.String("conf")))
}
opts = append(opts, webapi.SetVersion(version.VERSION))
if c.String("key") != "" {
opts = append(opts, webapi.SetKey(c.String("key")))
}
webapi.Run(opts...)
return nil
@@ -68,11 +60,6 @@ func newServerCmd() *cli.Command {
Aliases: []string{"c"},
Usage: "specify configuration file(.json,.yaml,.toml)",
},
&cli.StringFlag{
Name: "key",
Aliases: []string{"k"},
Usage: "specify the secret key for configuration file field encryption",
},
},
Action: func(c *cli.Context) error {
printEnv()
@@ -82,9 +69,6 @@ func newServerCmd() *cli.Command {
opts = append(opts, server.SetConfigFile(c.String("conf")))
}
opts = append(opts, server.SetVersion(version.VERSION))
if c.String("key") != "" {
opts = append(opts, server.SetKey(c.String("key")))
}
server.Run(opts...)
return nil

View File

@@ -3,9 +3,9 @@ package models
import (
"bytes"
"fmt"
"html/template"
"strconv"
"strings"
"text/template"
"github.com/didi/nightingale/v5/src/pkg/tplx"
)
@@ -63,11 +63,10 @@ type AggrRule struct {
Value string
}
func (e *AlertCurEvent) ParseRule(field string) error {
f := e.GetField(field)
f = strings.TrimSpace(f)
func (e *AlertCurEvent) ParseRuleNote() error {
e.RuleNote = strings.TrimSpace(e.RuleNote)
if f == "" {
if e.RuleNote == "" {
return nil
}
@@ -76,8 +75,8 @@ func (e *AlertCurEvent) ParseRule(field string) error {
"{{$value := .TriggerValue}}",
}
text := strings.Join(append(defs, f), "")
t, err := template.New(fmt.Sprint(e.RuleId)).Funcs(template.FuncMap(tplx.TemplateFuncMap)).Parse(text)
text := strings.Join(append(defs, e.RuleNote), "")
t, err := template.New(fmt.Sprint(e.RuleId)).Funcs(tplx.TemplateFuncMap).Parse(text)
if err != nil {
return err
}
@@ -88,13 +87,7 @@ func (e *AlertCurEvent) ParseRule(field string) error {
return err
}
if field == "rule_name" {
e.RuleName = body.String()
}
if field == "rule_note" {
e.RuleNote = body.String()
}
e.RuleNote = body.String()
return nil
}
@@ -140,8 +133,6 @@ func (e *AlertCurEvent) GetField(field string) string {
return fmt.Sprint(e.RuleId)
case "rule_name":
return e.RuleName
case "rule_note":
return e.RuleNote
case "severity":
return fmt.Sprint(e.Severity)
case "runbook_url":
@@ -420,9 +411,9 @@ func AlertCurEventGetByIds(ids []int64) ([]*AlertCurEvent, error) {
return lst, err
}
func AlertCurEventGetByRuleIdAndCluster(ruleId int64, cluster string) ([]*AlertCurEvent, error) {
func AlertCurEventGetByRule(ruleId int64) ([]*AlertCurEvent, error) {
var lst []*AlertCurEvent
err := DB().Where("rule_id=? and cluster=?", ruleId, cluster).Find(&lst).Error
err := DB().Where("rule_id=?", ruleId).Find(&lst).Error
return lst, err
}

View File

@@ -22,7 +22,6 @@ type TagFilter struct {
type AlertMute struct {
Id int64 `json:"id" gorm:"primaryKey"`
GroupId int64 `json:"group_id"`
Note string `json:"note"`
Cate string `json:"cate"`
Prod string `json:"prod"` // product empty means n9e
Cluster string `json:"cluster"` // take effect by clusters, separated by space
@@ -30,11 +29,8 @@ type AlertMute struct {
Cause string `json:"cause"`
Btime int64 `json:"btime"`
Etime int64 `json:"etime"`
Disabled int `json:"disabled"` // 0: enabled, 1: disabled
CreateBy string `json:"create_by"`
UpdateBy string `json:"update_by"`
CreateAt int64 `json:"create_at"`
UpdateAt int64 `json:"update_at"`
ITags []TagFilter `json:"-" gorm:"-"` // inner tags
}
@@ -42,24 +38,6 @@ func (m *AlertMute) TableName() string {
return "alert_mute"
}
func AlertMuteGetById(id int64) (*AlertMute, error) {
return AlertMuteGet("id=?", id)
}
func AlertMuteGet(where string, args ...interface{}) (*AlertMute, error) {
var lst []*AlertMute
err := DB().Where(where, args...).Find(&lst).Error
if err != nil {
return nil, err
}
if len(lst) == 0 {
return nil, nil
}
return lst[0], nil
}
func AlertMuteGets(prods []string, bgid int64, query string) (lst []AlertMute, err error) {
session := DB().Where("group_id = ? and prod in (?)", bgid, prods)
@@ -136,31 +114,10 @@ func (m *AlertMute) Add() error {
if err := m.Verify(); err != nil {
return err
}
now := time.Now().Unix()
m.CreateAt = now
m.UpdateAt = now
m.CreateAt = time.Now().Unix()
return Insert(m)
}
func (m *AlertMute) Update(arm AlertMute) error {
arm.Id = m.Id
arm.GroupId = m.GroupId
arm.CreateAt = m.CreateAt
arm.CreateBy = m.CreateBy
arm.UpdateAt = time.Now().Unix()
err := arm.Verify()
if err != nil {
return err
}
return DB().Model(m).Select("*").Updates(arm).Error
}
func (m *AlertMute) UpdateFieldsMap(fields map[string]interface{}) error {
return DB().Model(m).Updates(fields).Error
}
func AlertMuteDel(ids []int64) error {
if len(ids) == 0 {
return nil
@@ -169,20 +126,13 @@ func AlertMuteDel(ids []int64) error {
}
func AlertMuteStatistics(cluster string) (*Statistics, error) {
// clean expired first
buf := int64(30)
err := DB().Where("etime < ?", time.Now().Unix()-buf).Delete(new(AlertMute)).Error
if err != nil {
return nil, err
}
session := DB().Model(&AlertMute{}).Select("count(*) as total", "max(update_at) as last_updated")
session := DB().Model(&AlertMute{}).Select("count(*) as total", "max(create_at) as last_updated")
if cluster != "" {
session = session.Where("(cluster like ? or cluster = ?)", "%"+cluster+"%", ClusterAll)
}
var stats []*Statistics
err = session.Find(&stats).Error
err := session.Find(&stats).Error
if err != nil {
return nil, err
}
@@ -191,6 +141,13 @@ func AlertMuteStatistics(cluster string) (*Statistics, error) {
}
func AlertMuteGetsByCluster(cluster string) ([]*AlertMute, error) {
// clean expired first
buf := int64(30)
err := DB().Where("etime < ?", time.Now().Unix()+buf).Delete(new(AlertMute)).Error
if err != nil {
return nil, err
}
// get my cluster's mutes
session := DB().Model(&AlertMute{})
if cluster != "" {
@@ -199,15 +156,10 @@ func AlertMuteGetsByCluster(cluster string) ([]*AlertMute, error) {
var lst []*AlertMute
var mlst []*AlertMute
err := session.Find(&lst).Error
err = session.Find(&lst).Error
if err != nil {
return nil, err
}
if cluster == "" {
return lst, nil
}
for _, m := range lst {
if MatchCluster(m.Cluster, cluster) {
mlst = append(mlst, m)

View File

@@ -14,50 +14,45 @@ import (
)
type AlertRule struct {
Id int64 `json:"id" gorm:"primaryKey"`
GroupId int64 `json:"group_id"` // busi group id
Cate string `json:"cate"` // alert rule cate (prometheus|elasticsearch)
Cluster string `json:"cluster"` // take effect by clusters, separated by space
Name string `json:"name"` // rule name
Note string `json:"note"` // will be sent in notification
Prod string `json:"prod"` // product empty means n9e
Algorithm string `json:"algorithm"` // algorithm (''|holtwinters), empty means threshold
AlgoParams string `json:"-" gorm:"algo_params"` // params algorithm need
AlgoParamsJson interface{} `json:"algo_params" gorm:"-"` //
Delay int `json:"delay"` // Time (in seconds) to delay evaluation
Severity int `json:"severity"` // 1: Emergency 2: Warning 3: Notice
Disabled int `json:"disabled"` // 0: enabled, 1: disabled
PromForDuration int `json:"prom_for_duration"` // prometheus for, unit:s
PromQl string `json:"prom_ql"` // just one ql
PromEvalInterval int `json:"prom_eval_interval"` // unit:s
EnableStime string `json:"-"` // split by space: "00:00 10:00 12:00"
EnableStimeJSON string `json:"enable_stime" gorm:"-"` // for fe
EnableStimesJSON []string `json:"enable_stimes" gorm:"-"` // for fe
EnableEtime string `json:"-"` // split by space: "00:00 10:00 12:00"
EnableEtimeJSON string `json:"enable_etime" gorm:"-"` // for fe
EnableEtimesJSON []string `json:"enable_etimes" gorm:"-"` // for fe
EnableDaysOfWeek string `json:"-"` // eg: "0 1 2 3 4 5 6 ; 0 1 2"
EnableDaysOfWeekJSON []string `json:"enable_days_of_week" gorm:"-"` // for fe
EnableDaysOfWeeksJSON [][]string `json:"enable_days_of_weeks" gorm:"-"` // for fe
EnableInBG int `json:"enable_in_bg"` // 0: global 1: enable one busi-group
NotifyRecovered int `json:"notify_recovered"` // whether notify when recovery
NotifyChannels string `json:"-"` // split by space: sms voice email dingtalk wecom
NotifyChannelsJSON []string `json:"notify_channels" gorm:"-"` // for fe
NotifyGroups string `json:"-"` // split by space: 233 43
NotifyGroupsObj []UserGroup `json:"notify_groups_obj" gorm:"-"` // for fe
NotifyGroupsJSON []string `json:"notify_groups" gorm:"-"` // for fe
NotifyRepeatStep int `json:"notify_repeat_step"` // notify repeat interval, unit: min
NotifyMaxNumber int `json:"notify_max_number"` // notify: max number
RecoverDuration int64 `json:"recover_duration"` // unit: s
Callbacks string `json:"-"` // split by space: http://a.com/api/x http://a.com/api/y'
CallbacksJSON []string `json:"callbacks" gorm:"-"` // for fe
RunbookUrl string `json:"runbook_url"` // sop url
AppendTags string `json:"-"` // split by space: service=n9e mod=api
AppendTagsJSON []string `json:"append_tags" gorm:"-"` // for fe
CreateAt int64 `json:"create_at"`
CreateBy string `json:"create_by"`
UpdateAt int64 `json:"update_at"`
UpdateBy string `json:"update_by"`
Id int64 `json:"id" gorm:"primaryKey"`
GroupId int64 `json:"group_id"` // busi group id
Cate string `json:"cate"` // alert rule cate (prometheus|elasticsearch)
Cluster string `json:"cluster"` // take effect by clusters, separated by space
Name string `json:"name"` // rule name
Note string `json:"note"` // will be sent in notification
Prod string `json:"prod"` // product empty means n9e
Algorithm string `json:"algorithm"` // algorithm (''|holtwinters), empty means threshold
AlgoParams string `json:"-" gorm:"algo_params"` // params algorithm need
AlgoParamsJson interface{} `json:"algo_params" gorm:"-"` //
Delay int `json:"delay"` // Time (in seconds) to delay evaluation
Severity int `json:"severity"` // 1: Emergency 2: Warning 3: Notice
Disabled int `json:"disabled"` // 0: enabled, 1: disabled
PromForDuration int `json:"prom_for_duration"` // prometheus for, unit:s
PromQl string `json:"prom_ql"` // just one ql
PromEvalInterval int `json:"prom_eval_interval"` // unit:s
EnableStime string `json:"enable_stime"` // e.g. 00:00
EnableEtime string `json:"enable_etime"` // e.g. 23:59
EnableDaysOfWeek string `json:"-"` // split by space: 0 1 2 3 4 5 6
EnableDaysOfWeekJSON []string `json:"enable_days_of_week" gorm:"-"` // for fe
EnableInBG int `json:"enable_in_bg"` // 0: global 1: enable one busi-group
NotifyRecovered int `json:"notify_recovered"` // whether notify when recovery
NotifyChannels string `json:"-"` // split by space: sms voice email dingtalk wecom
NotifyChannelsJSON []string `json:"notify_channels" gorm:"-"` // for fe
NotifyGroups string `json:"-"` // split by space: 233 43
NotifyGroupsObj []UserGroup `json:"notify_groups_obj" gorm:"-"` // for fe
NotifyGroupsJSON []string `json:"notify_groups" gorm:"-"` // for fe
NotifyRepeatStep int `json:"notify_repeat_step"` // notify repeat interval, unit: min
NotifyMaxNumber int `json:"notify_max_number"` // notify: max number
RecoverDuration int64 `json:"recover_duration"` // unit: s
Callbacks string `json:"-"` // split by space: http://a.com/api/x http://a.com/api/y'
CallbacksJSON []string `json:"callbacks" gorm:"-"` // for fe
RunbookUrl string `json:"runbook_url"` // sop url
AppendTags string `json:"-"` // split by space: service=n9e mod=api
AppendTagsJSON []string `json:"append_tags" gorm:"-"` // for fe
CreateAt int64 `json:"create_at"`
CreateBy string `json:"create_by"`
UpdateAt int64 `json:"update_at"`
UpdateBy string `json:"update_by"`
}
func (ar *AlertRule) TableName() string {
@@ -229,30 +224,7 @@ func (ar *AlertRule) FillNotifyGroups(cache map[int64]*UserGroup) error {
}
func (ar *AlertRule) FE2DB() error {
if len(ar.EnableStimesJSON) > 0 {
ar.EnableStime = strings.Join(ar.EnableStimesJSON, " ")
ar.EnableEtime = strings.Join(ar.EnableEtimesJSON, " ")
} else {
ar.EnableStime = ar.EnableStimeJSON
ar.EnableEtime = ar.EnableEtimeJSON
}
if len(ar.EnableDaysOfWeeksJSON) > 0 {
for i := 0; i < len(ar.EnableDaysOfWeeksJSON); i++ {
if len(ar.EnableDaysOfWeeksJSON) == 1 {
ar.EnableDaysOfWeek = strings.Join(ar.EnableDaysOfWeeksJSON[i], " ")
} else {
if i == len(ar.EnableDaysOfWeeksJSON)-1 {
ar.EnableDaysOfWeek += strings.Join(ar.EnableDaysOfWeeksJSON[i], " ")
} else {
ar.EnableDaysOfWeek += strings.Join(ar.EnableDaysOfWeeksJSON[i], " ") + ";"
}
}
}
} else {
ar.EnableDaysOfWeek = strings.Join(ar.EnableDaysOfWeekJSON, " ")
}
ar.EnableDaysOfWeek = strings.Join(ar.EnableDaysOfWeekJSON, " ")
ar.NotifyChannels = strings.Join(ar.NotifyChannelsJSON, " ")
ar.NotifyGroups = strings.Join(ar.NotifyGroupsJSON, " ")
ar.Callbacks = strings.Join(ar.CallbacksJSON, " ")
@@ -267,21 +239,7 @@ func (ar *AlertRule) FE2DB() error {
}
func (ar *AlertRule) DB2FE() {
ar.EnableStimesJSON = strings.Fields(ar.EnableStime)
ar.EnableEtimesJSON = strings.Fields(ar.EnableEtime)
if len(ar.EnableEtimesJSON) > 0 {
ar.EnableStimeJSON = ar.EnableStimesJSON[0]
ar.EnableEtimeJSON = ar.EnableEtimesJSON[0]
}
cache := strings.Split(ar.EnableDaysOfWeek, ";")
for i := 0; i < len(cache); i++ {
ar.EnableDaysOfWeeksJSON = append(ar.EnableDaysOfWeeksJSON, strings.Fields(cache[i]))
}
if len(ar.EnableDaysOfWeeksJSON) > 0 {
ar.EnableDaysOfWeekJSON = ar.EnableDaysOfWeeksJSON[0]
}
ar.EnableDaysOfWeekJSON = strings.Fields(ar.EnableDaysOfWeek)
ar.NotifyChannelsJSON = strings.Fields(ar.NotifyChannels)
ar.NotifyGroupsJSON = strings.Fields(ar.NotifyGroups)
ar.CallbacksJSON = strings.Fields(ar.Callbacks)
@@ -467,38 +425,3 @@ func AlertRuleStatistics(cluster string) (*Statistics, error) {
return stats[0], nil
}
func (ar *AlertRule) IsPrometheusRule() bool {
return ar.Algorithm == "" && (ar.Cate == "" || strings.ToLower(ar.Cate) == "prometheus")
}
func (ar *AlertRule) GenerateNewEvent() *AlertCurEvent {
event := &AlertCurEvent{}
ar.UpdateEvent(event)
return event
}
func (ar *AlertRule) UpdateEvent(event *AlertCurEvent) {
if event == nil {
return
}
event.GroupId = ar.GroupId
event.Cate = ar.Cate
event.RuleId = ar.Id
event.RuleName = ar.Name
event.RuleNote = ar.Note
event.RuleProd = ar.Prod
event.RuleAlgo = ar.Algorithm
event.Severity = ar.Severity
event.PromForDuration = ar.PromForDuration
event.PromQl = ar.PromQl
event.PromEvalInterval = ar.PromEvalInterval
event.Callbacks = ar.Callbacks
event.CallbacksJSON = ar.CallbacksJSON
event.RunbookUrl = ar.RunbookUrl
event.NotifyRecovered = ar.NotifyRecovered
event.NotifyChannels = ar.NotifyChannels
event.NotifyChannelsJSON = ar.NotifyChannelsJSON
event.NotifyGroups = ar.NotifyGroups
event.NotifyGroupsJSON = ar.NotifyGroupsJSON
}

View File

@@ -13,8 +13,6 @@ import (
type AlertSubscribe struct {
Id int64 `json:"id" gorm:"primaryKey"`
Name string `json:"name"` // AlertSubscribe name
Disabled int `json:"disabled"` // 0: enabled, 1: disabled
GroupId int64 `json:"group_id"`
Cate string `json:"cate"`
Cluster string `json:"cluster"` // take effect by clusters, seperated by space
@@ -57,10 +55,6 @@ func AlertSubscribeGet(where string, args ...interface{}) (*AlertSubscribe, erro
return lst[0], nil
}
func (s *AlertSubscribe) IsDisabled() bool {
return s.Disabled == 1
}
func (s *AlertSubscribe) Verify() error {
if s.Cluster == "" {
return errors.New("cluster invalid")
@@ -238,11 +232,6 @@ func AlertSubscribeGetsByCluster(cluster string) ([]*AlertSubscribe, error) {
if err != nil {
return nil, err
}
if cluster == "" {
return lst, nil
}
for _, s := range lst {
if MatchCluster(s.Cluster, cluster) {
slst = append(slst, s)
@@ -250,30 +239,3 @@ func AlertSubscribeGetsByCluster(cluster string) ([]*AlertSubscribe, error) {
}
return slst, err
}
func (s *AlertSubscribe) MatchCluster(cluster string) bool {
if s.Cluster == ClusterAll {
return true
}
clusters := strings.Fields(s.Cluster)
for _, c := range clusters {
if c == cluster {
return true
}
}
return false
}
func (s *AlertSubscribe) ModifyEvent(event *AlertCurEvent) {
if s.RedefineSeverity == 1 {
event.Severity = s.NewSeverity
}
if s.RedefineChannels == 1 {
event.NotifyChannels = s.NewChannels
event.NotifyChannelsJSON = strings.Fields(s.NewChannels)
}
event.NotifyGroups = s.UserGroupIds
event.NotifyGroupsJSON = strings.Fields(s.UserGroupIds)
}

View File

@@ -1,9 +1,6 @@
package models
import (
"fmt"
"time"
)
import "time"
type AlertingEngines struct {
Id int64 `json:"id" gorm:"primaryKey"`
@@ -18,62 +15,23 @@ func (e *AlertingEngines) TableName() string {
// UpdateCluster: on the page, users assign each n9e-server the target cluster it should be associated with
func (e *AlertingEngines) UpdateCluster(c string) error {
count, err := Count(DB().Model(&AlertingEngines{}).Where("id<>? and instance=? and cluster=?", e.Id, e.Instance, c))
if err != nil {
return err
}
if count > 0 {
return fmt.Errorf("instance %s and cluster %s already exists", e.Instance, c)
}
e.Cluster = c
return DB().Model(e).Select("cluster").Updates(e).Error
}
func AlertingEngineAdd(instance, cluster string) error {
count, err := Count(DB().Model(&AlertingEngines{}).Where("instance=? and cluster=?", instance, cluster))
if err != nil {
return err
}
if count > 0 {
return fmt.Errorf("instance %s and cluster %s already exists", instance, cluster)
}
err = DB().Create(&AlertingEngines{
Instance: instance,
Cluster: cluster,
Clock: time.Now().Unix(),
}).Error
return err
}
func AlertingEngineDel(ids []int64) error {
if len(ids) == 0 {
return nil
}
return DB().Where("id in ?", ids).Delete(new(AlertingEngines)).Error
}
// AlertingEngineGetCluster returns the cluster name for the given instance name
func AlertingEngineGetClusters(instance string) ([]string, error) {
func AlertingEngineGetCluster(instance string) (string, error) {
var objs []AlertingEngines
err := DB().Where("instance=?", instance).Find(&objs).Error
if err != nil {
return []string{}, err
return "", err
}
if len(objs) == 0 {
return []string{}, nil
}
var clusters []string
for i := 0; i < len(objs); i++ {
clusters = append(clusters, objs[i].Cluster)
return "", nil
}
return clusters, nil
return objs[0].Cluster, nil
}
// AlertingEngineGets pulls the list data: users need to see all n9e-server instances on the page and assign a cluster to each
@@ -114,9 +72,9 @@ func AlertingEngineGetsInstances(where string, args ...interface{}) ([]string, e
return arr, err
}
func AlertingEngineHeartbeatWithCluster(instance, cluster string) error {
func AlertingEngineHeartbeat(instance string) error {
var total int64
err := DB().Model(new(AlertingEngines)).Where("instance=? and cluster=?", instance, cluster).Count(&total).Error
err := DB().Model(new(AlertingEngines)).Where("instance=?", instance).Count(&total).Error
if err != nil {
return err
}
@@ -125,20 +83,12 @@ func AlertingEngineHeartbeatWithCluster(instance, cluster string) error {
// insert
err = DB().Create(&AlertingEngines{
Instance: instance,
Cluster: cluster,
Clock: time.Now().Unix(),
}).Error
} else {
// updates
fields := map[string]interface{}{"clock": time.Now().Unix()}
err = DB().Model(new(AlertingEngines)).Where("instance=? and cluster=?", instance, cluster).Updates(fields).Error
// update
err = DB().Model(new(AlertingEngines)).Where("instance=?", instance).Update("clock", time.Now().Unix()).Error
}
return err
}
func AlertingEngineHeartbeat(instance string) error {
fields := map[string]interface{}{"clock": time.Now().Unix()}
err := DB().Model(new(AlertingEngines)).Where("instance=?", instance).Updates(fields).Error
return err
}

View File

@@ -13,14 +13,12 @@ type Board struct {
Id int64 `json:"id" gorm:"primaryKey"`
GroupId int64 `json:"group_id"`
Name string `json:"name"`
Ident string `json:"ident"`
Tags string `json:"tags"`
CreateAt int64 `json:"create_at"`
CreateBy string `json:"create_by"`
UpdateAt int64 `json:"update_at"`
UpdateBy string `json:"update_by"`
Configs string `json:"configs" gorm:"-"`
Public int `json:"public"` // 0: false, 1: true
}
func (b *Board) TableName() string {
@@ -39,36 +37,11 @@ func (b *Board) Verify() error {
return nil
}
func (b *Board) CanRenameIdent(ident string) (bool, error) {
if ident == "" {
return true, nil
}
cnt, err := Count(DB().Model(b).Where("ident=? and id <> ?", ident, b.Id))
if err != nil {
return false, err
}
return cnt == 0, nil
}
func (b *Board) Add() error {
if err := b.Verify(); err != nil {
return err
}
if b.Ident != "" {
// ident duplicate check
cnt, err := Count(DB().Model(b).Where("ident=?", b.Ident))
if err != nil {
return err
}
if cnt > 0 {
return errors.New("Ident duplicate")
}
}
now := time.Now().Unix()
b.CreateAt = now
b.UpdateAt = now

View File

@@ -74,62 +74,7 @@ func ConfigsSet(ckey, cval string) error {
return err
}
func ConfigGet(id int64) (*Configs, error) {
var objs []*Configs
err := DB().Where("id=?", id).Find(&objs).Error
if len(objs) == 0 {
return nil, nil
}
return objs[0], err
}
func ConfigsGets(prefix string, limit, offset int) ([]*Configs, error) {
var objs []*Configs
session := DB()
if prefix != "" {
session = session.Where("ckey like ?", prefix+"%")
}
err := session.Order("id desc").Limit(limit).Offset(offset).Find(&objs).Error
return objs, err
}
func (c *Configs) Add() error {
num, err := Count(DB().Model(&Configs{}).Where("ckey=?", c.Ckey))
if err != nil {
return errors.WithMessage(err, "failed to count configs")
}
if num > 0 {
return errors.WithMessage(err, "key is exists")
}
// insert
err = DB().Create(&Configs{
Ckey: c.Ckey,
Cval: c.Cval,
}).Error
return err
}
func (c *Configs) Update() error {
num, err := Count(DB().Model(&Configs{}).Where("id<>? and ckey=?", c.Id, c.Ckey))
if err != nil {
return errors.WithMessage(err, "failed to count configs")
}
if num > 0 {
return errors.WithMessage(err, "key is exists")
}
err = DB().Model(&Configs{}).Where("id=?", c.Id).Updates(c).Error
return err
}
func ConfigsDel(ids []int64) error {
return DB().Where("id in ?", ids).Delete(&Configs{}).Error
}
func ConfigsGetsByKey(ckeys []string) (map[string]string, error) {
func ConfigsGets(ckeys []string) (map[string]string, error) {
var objs []Configs
err := DB().Where("ckey in ?", ckeys).Find(&objs).Error
if err != nil {

View File

@@ -7,8 +7,6 @@ import (
"time"
"github.com/pkg/errors"
"github.com/tidwall/gjson"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/slice"
"github.com/toolkits/pkg/str"
"gorm.io/gorm"
@@ -18,22 +16,6 @@ import (
"github.com/didi/nightingale/v5/src/webapi/config"
)
const (
Dingtalk = "dingtalk"
Wecom = "wecom"
Feishu = "feishu"
FeishuCard = "feishucard"
Mm = "mm"
Telegram = "telegram"
Email = "email"
DingtalkKey = "dingtalk_robot_token"
WecomKey = "wecom_robot_token"
FeishuKey = "feishu_robot_token"
MmKey = "mm_webhook_url"
TelegramKey = "telegram_robot_token"
)
type User struct {
Id int64 `json:"id" gorm:"primaryKey"`
Username string `json:"username"`
@@ -113,7 +95,7 @@ func (u *User) Update(selectField interface{}, selectFields ...interface{}) erro
return err
}
return DB().Model(u).Select(selectField, selectFields...).Updates(u).Error
return DB().Model(u).Select(selectField, selectFields).Updates(u).Error
}
func (u *User) UpdateAllFields() error {
@@ -480,10 +462,6 @@ func (u *User) BusiGroups(limit int, query string, all ...bool) ([]BusiGroup, er
return lst, err
}
if t == nil {
return lst, nil
}
err = DB().Order("name").Limit(limit).Where("id=?", t.GroupId).Find(&lst).Error
}
@@ -530,23 +508,6 @@ func (u *User) UserGroups(limit int, query string) ([]UserGroup, error) {
var lst []UserGroup
if u.IsAdmin() {
err := session.Where("name like ?", "%"+query+"%").Find(&lst).Error
if err != nil {
return lst, err
}
if len(lst) == 0 && len(query) > 0 {
// Hidden feature, not usually advertised. The query may actually be a username: since the SQL above matched nothing, try looking it up as a user
user, err := UserGetByUsername(query)
if user == nil {
return lst, err
}
var ids []int64
ids, err = MyGroupIds(user.Id)
if err != nil || len(ids) == 0 {
return lst, err
}
lst, err = UserGroupGetByIds(ids)
}
return lst, err
}
@@ -564,57 +525,3 @@ func (u *User) UserGroups(limit int, query string) ([]UserGroup, error) {
err = session.Where("name like ?", "%"+query+"%").Find(&lst).Error
return lst, err
}
func (u *User) ExtractToken(key string) (string, bool) {
bs, err := u.Contacts.MarshalJSON()
if err != nil {
logger.Errorf("handle_notice: failed to marshal contacts: %v", err)
return "", false
}
switch key {
case Dingtalk:
ret := gjson.GetBytes(bs, DingtalkKey)
return ret.String(), ret.Exists()
case Wecom:
ret := gjson.GetBytes(bs, WecomKey)
return ret.String(), ret.Exists()
case Feishu:
ret := gjson.GetBytes(bs, FeishuKey)
return ret.String(), ret.Exists()
case FeishuCard:
ret := gjson.GetBytes(bs, FeishuKey)
return ret.String(), ret.Exists()
case Mm:
ret := gjson.GetBytes(bs, MmKey)
return ret.String(), ret.Exists()
case Telegram:
ret := gjson.GetBytes(bs, TelegramKey)
return ret.String(), ret.Exists()
case Email:
return u.Email, u.Email != ""
default:
return "", false
}
}
func (u *User) ExtractAllToken() map[string]string {
ret := make(map[string]string)
if u.Email != "" {
ret[Email] = u.Email
}
bs, err := u.Contacts.MarshalJSON()
if err != nil {
logger.Errorf("handle_notice: failed to marshal contacts: %v", err)
return ret
}
ret[Dingtalk] = gjson.GetBytes(bs, DingtalkKey).String()
ret[Wecom] = gjson.GetBytes(bs, WecomKey).String()
ret[Feishu] = gjson.GetBytes(bs, FeishuKey).String()
ret[FeishuCard] = gjson.GetBytes(bs, FeishuKey).String()
ret[Mm] = gjson.GetBytes(bs, MmKey).String()
ret[Telegram] = gjson.GetBytes(bs, TelegramKey).String()
return ret
}

View File

@@ -1,169 +0,0 @@
package cas
import (
"bytes"
"context"
"net/url"
"strings"
"time"
"github.com/didi/nightingale/v5/src/storage"
"github.com/google/uuid"
"github.com/toolkits/pkg/cas"
"github.com/toolkits/pkg/logger"
)
type Config struct {
Enable bool
SsoAddr string
LoginPath string
RedirectURL string
DisplayName string
CoverAttributes bool
Attributes struct {
Nickname string
Phone string
Email string
}
DefaultRoles []string
}
type ssoClient struct {
config Config
ssoAddr string
callbackAddr string
displayName string
attributes struct {
nickname string
phone string
email string
}
}
var (
cli ssoClient
)
func Init(cf Config) {
if !cf.Enable {
return
}
cli = ssoClient{}
cli.config = cf
cli.ssoAddr = cf.SsoAddr
cli.callbackAddr = cf.RedirectURL
cli.displayName = cf.DisplayName
cli.attributes.nickname = cf.Attributes.Nickname
cli.attributes.phone = cf.Attributes.Phone
cli.attributes.email = cf.Attributes.Email
}
func GetDisplayName() string {
return cli.displayName
}
// Authorize return the cas authorize location and state
func Authorize(redirect string) (string, string, error) {
state := uuid.New().String()
ctx := context.Background()
err := storage.Redis.Set(ctx, wrapStateKey(state), redirect, time.Duration(300*time.Second)).Err()
if err != nil {
return "", "", err
}
return cli.genRedirectURL(state), state, nil
}
func fetchRedirect(ctx context.Context, state string) (string, error) {
return storage.Redis.Get(ctx, wrapStateKey(state)).Result()
}
func deleteRedirect(ctx context.Context, state string) error {
return storage.Redis.Del(ctx, wrapStateKey(state)).Err()
}
func wrapStateKey(key string) string {
return "n9e_cas_" + key
}
func (cli *ssoClient) genRedirectURL(state string) string {
var buf bytes.Buffer
ssoAddr, err := url.Parse(cli.config.SsoAddr)
if cli.config.LoginPath == "" {
if strings.Contains(cli.config.SsoAddr, "p3") {
ssoAddr.Path = "login"
} else {
ssoAddr.Path = "cas/login"
}
} else {
ssoAddr.Path = cli.config.LoginPath
}
if err != nil {
logger.Error(err)
return buf.String()
}
buf.WriteString(ssoAddr.String())
v := url.Values{
"service": {cli.callbackAddr},
}
if strings.Contains(cli.ssoAddr, "?") {
buf.WriteByte('&')
} else {
buf.WriteByte('?')
}
buf.WriteString(v.Encode())
return buf.String()
}
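The URL assembly above can be sketched in isolation: parse the SSO base address, default the path to `cas/login`, then append the callback as the CAS `service` parameter with `url.Values`. The addresses below are invented for the demo.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildCASLoginURL mirrors the genRedirectURL logic: pick the login path,
// then append the callback as the CAS "service" query parameter.
func buildCASLoginURL(ssoAddr, loginPath, callback string) (string, error) {
	u, err := url.Parse(ssoAddr)
	if err != nil {
		return "", err
	}
	if loginPath == "" {
		loginPath = "cas/login"
	}
	u.Path = loginPath

	v := url.Values{"service": {callback}}
	sep := "?"
	if strings.Contains(u.String(), "?") {
		sep = "&"
	}
	return u.String() + sep + v.Encode(), nil
}

func main() {
	out, _ := buildCASLoginURL("https://cas.example.com", "", "https://n9e.example.com/callback")
	fmt.Println(out)
}
```

Note that `v.Encode()` percent-encodes the callback, so the CAS server receives it as a single `service` value.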
type CallbackOutput struct {
Redirect string `json:"redirect"`
Msg string `json:"msg"`
AccessToken string `json:"accessToken"`
Username string `json:"username"`
Nickname string `json:"nickname"`
Phone string `json:"phone"`
Email string `json:"email"`
}
func ValidateServiceTicket(ctx context.Context, ticket, state string) (ret *CallbackOutput, err error) {
casUrl, err := url.Parse(cli.config.SsoAddr)
if err != nil {
logger.Error(err)
return
}
serviceUrl, err := url.Parse(cli.callbackAddr)
if err != nil {
logger.Error(err)
return
}
resOptions := &cas.RestOptions{
CasURL: casUrl,
ServiceURL: serviceUrl,
}
resCli := cas.NewRestClient(resOptions)
authRet, err := resCli.ValidateServiceTicket(cas.ServiceTicket(ticket))
if err != nil {
logger.Errorf("Ticket Validating Failed: %s", err)
return
}
ret = &CallbackOutput{}
ret.Username = authRet.User
ret.Nickname = authRet.Attributes.Get(cli.attributes.nickname)
logger.Debugf("CAS Authentication Response's Attributes--[Nickname]: %s", ret.Nickname)
ret.Email = authRet.Attributes.Get(cli.attributes.email)
logger.Debugf("CAS Authentication Response's Attributes--[Email]: %s", ret.Email)
ret.Phone = authRet.Attributes.Get(cli.attributes.phone)
logger.Debugf("CAS Authentication Response's Attributes--[Phone]: %s", ret.Phone)
ret.Redirect, err = fetchRedirect(ctx, state)
if err != nil {
logger.Debugf("get redirect err:%v state:%s", err, state)
}
err = deleteRedirect(ctx, state)
if err != nil {
logger.Debugf("delete redirect err:%v state:%s", err, state)
}
return
}


@@ -1,225 +0,0 @@
package oauth2x
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"net/http"
"time"
"github.com/didi/nightingale/v5/src/storage"
"github.com/toolkits/pkg/logger"
"github.com/google/uuid"
jsoniter "github.com/json-iterator/go"
"golang.org/x/oauth2"
)
type ssoClient struct {
config oauth2.Config
ssoAddr string
userInfoAddr string
TranTokenMethod string
callbackAddr string
displayName string
coverAttributes bool
attributes struct {
username string
nickname string
phone string
email string
}
userinfoIsArray bool
userinfoPrefix string
}
type Config struct {
Enable bool
DisplayName string
RedirectURL string
SsoAddr string
TokenAddr string
UserInfoAddr string
TranTokenMethod string
ClientId string
ClientSecret string
CoverAttributes bool
Attributes struct {
Username string
Nickname string
Phone string
Email string
}
DefaultRoles []string
UserinfoIsArray bool
UserinfoPrefix string
Scopes []string
}
var (
cli ssoClient
)
func Init(cf Config) {
if !cf.Enable {
return
}
cli.ssoAddr = cf.SsoAddr
cli.userInfoAddr = cf.UserInfoAddr
cli.TranTokenMethod = cf.TranTokenMethod
cli.callbackAddr = cf.RedirectURL
cli.displayName = cf.DisplayName
cli.coverAttributes = cf.CoverAttributes
cli.attributes.username = cf.Attributes.Username
cli.attributes.nickname = cf.Attributes.Nickname
cli.attributes.phone = cf.Attributes.Phone
cli.attributes.email = cf.Attributes.Email
cli.userinfoIsArray = cf.UserinfoIsArray
cli.userinfoPrefix = cf.UserinfoPrefix
cli.config = oauth2.Config{
ClientID: cf.ClientId,
ClientSecret: cf.ClientSecret,
Endpoint: oauth2.Endpoint{
AuthURL: cf.SsoAddr,
TokenURL: cf.TokenAddr,
},
RedirectURL: cf.RedirectURL,
Scopes: cf.Scopes,
}
}
func GetDisplayName() string {
return cli.displayName
}
func wrapStateKey(key string) string {
return "n9e_oauth_" + key
}
// Authorize returns the SSO authorize location with state
func Authorize(redirect string) (string, error) {
state := uuid.New().String()
ctx := context.Background()
err := storage.Redis.Set(ctx, wrapStateKey(state), redirect, 300*time.Second).Err()
if err != nil {
return "", err
}
return cli.config.AuthCodeURL(state), nil
}
func fetchRedirect(ctx context.Context, state string) (string, error) {
return storage.Redis.Get(ctx, wrapStateKey(state)).Result()
}
func deleteRedirect(ctx context.Context, state string) error {
return storage.Redis.Del(ctx, wrapStateKey(state)).Err()
}
// Callback exchanges the code for an access token and the user info
func Callback(ctx context.Context, code, state string) (*CallbackOutput, error) {
ret, err := exchangeUser(code)
if err != nil {
return nil, fmt.Errorf("illegal user: %v", err)
}
ret.Redirect, err = fetchRedirect(ctx, state)
if err != nil {
logger.Errorf("get redirect err:%v code:%s state:%s", err, code, state)
}
err = deleteRedirect(ctx, state)
if err != nil {
logger.Errorf("delete redirect err:%v code:%s state:%s", err, code, state)
}
return ret, nil
}
type CallbackOutput struct {
Redirect string `json:"redirect"`
Msg string `json:"msg"`
AccessToken string `json:"accessToken"`
Username string `json:"username"`
Nickname string `json:"nickname"`
Phone string `json:"phone"`
Email string `json:"email"`
}
func exchangeUser(code string) (*CallbackOutput, error) {
ctx := context.Background()
oauth2Token, err := cli.config.Exchange(ctx, code)
if err != nil {
return nil, fmt.Errorf("failed to exchange token: %s", err)
}
userInfo, err := getUserInfo(cli.userInfoAddr, oauth2Token.AccessToken, cli.TranTokenMethod)
if err != nil {
logger.Errorf("failed to get user info: %s", err)
return nil, fmt.Errorf("failed to get user info: %s", err)
}
return &CallbackOutput{
AccessToken: oauth2Token.AccessToken,
Username: getUserinfoField(userInfo, cli.userinfoIsArray, cli.userinfoPrefix, cli.attributes.username),
Nickname: getUserinfoField(userInfo, cli.userinfoIsArray, cli.userinfoPrefix, cli.attributes.nickname),
Phone: getUserinfoField(userInfo, cli.userinfoIsArray, cli.userinfoPrefix, cli.attributes.phone),
Email: getUserinfoField(userInfo, cli.userinfoIsArray, cli.userinfoPrefix, cli.attributes.email),
}, nil
}
func getUserInfo(userInfoAddr, accessToken string, TranTokenMethod string) ([]byte, error) {
var req *http.Request
if TranTokenMethod == "formdata" {
body := bytes.NewBuffer([]byte("access_token=" + accessToken))
r, err := http.NewRequest("POST", userInfoAddr, body)
if err != nil {
return nil, err
}
r.Header.Add("Content-Type", "application/x-www-form-urlencoded")
req = r
} else if TranTokenMethod == "querystring" {
r, err := http.NewRequest("GET", userInfoAddr+"?access_token="+accessToken, nil)
if err != nil {
return nil, err
}
r.Header.Add("Authorization", "Bearer "+accessToken)
req = r
} else {
r, err := http.NewRequest("GET", userInfoAddr, nil)
if err != nil {
return nil, err
}
r.Header.Add("Authorization", "Bearer "+accessToken)
req = r
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return nil, err
}
body, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
// propagate the read error instead of swallowing it with nil, nil
return nil, err
}
return body, nil
}
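The three token-delivery modes above (formdata, querystring, and the default Bearer header) can be rebuilt standalone and exercised against a local `httptest` server; the token value is invented. In the querystring mode the original also adds a Bearer header, which is omitted here for brevity.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
)

// requestUserInfo rebuilds the three delivery modes: "formdata" posts the
// token in an urlencoded body, "querystring" appends it to the URL, and
// anything else sends an Authorization: Bearer header.
func requestUserInfo(userInfoAddr, accessToken, mode string) (*http.Request, error) {
	switch mode {
	case "formdata":
		body := strings.NewReader(url.Values{"access_token": {accessToken}}.Encode())
		req, err := http.NewRequest("POST", userInfoAddr, body)
		if err != nil {
			return nil, err
		}
		req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
		return req, nil
	case "querystring":
		return http.NewRequest("GET", userInfoAddr+"?access_token="+accessToken, nil)
	default:
		req, err := http.NewRequest("GET", userInfoAddr, nil)
		if err != nil {
			return nil, err
		}
		req.Header.Add("Authorization", "Bearer "+accessToken)
		return req, nil
	}
}

func main() {
	// Throwaway server that reports where it found the token.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.ParseForm()
		fmt.Fprintf(w, "form=%s query=%s auth=%s",
			r.PostFormValue("access_token"), r.URL.Query().Get("access_token"),
			r.Header.Get("Authorization"))
	}))
	defer srv.Close()

	for _, mode := range []string{"formdata", "querystring", "bearer"} {
		req, _ := requestUserInfo(srv.URL, "tok123", mode)
		resp, _ := http.DefaultClient.Do(req)
		b, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(mode, string(b))
	}
}
```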
func getUserinfoField(input []byte, isArray bool, prefix, field string) string {
if prefix == "" {
if isArray {
return jsoniter.Get(input, 0).Get(field).ToString()
} else {
return jsoniter.Get(input, field).ToString()
}
} else {
if isArray {
return jsoniter.Get(input, prefix, 0).Get(field).ToString()
} else {
return jsoniter.Get(input, prefix).Get(field).ToString()
}
}
}
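`getUserinfoField` uses jsoniter's path-style lookups; the same shape can be approximated with only the standard library, which also makes the prefix/array combinations easy to see. `getField` below is a hypothetical stdlib analogue, not the project's API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// getField approximates getUserinfoField: decode into generic maps/slices,
// optionally descend through a prefix key, optionally take the first array
// element, then read the target field.
func getField(input []byte, isArray bool, prefix, field string) string {
	var doc interface{}
	if err := json.Unmarshal(input, &doc); err != nil {
		return ""
	}
	if prefix != "" {
		m, ok := doc.(map[string]interface{})
		if !ok {
			return ""
		}
		doc = m[prefix]
	}
	if isArray {
		arr, ok := doc.([]interface{})
		if !ok || len(arr) == 0 {
			return ""
		}
		doc = arr[0]
	}
	if m, ok := doc.(map[string]interface{}); ok {
		return fmt.Sprint(m[field])
	}
	return ""
}

func main() {
	payload := []byte(`{"data":[{"email":"ops@example.com"}]}`)
	fmt.Println(getField(payload, true, "data", "email"))
}
```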


@@ -20,7 +20,6 @@ type ssoClient struct {
ssoAddr string
callbackAddr string
coverAttributes bool
displayName string
attributes struct {
username string
nickname string
@@ -31,7 +30,6 @@ type ssoClient struct {
type Config struct {
Enable bool
DisplayName string
RedirectURL string
SsoAddr string
ClientId string
@@ -61,7 +59,6 @@ func Init(cf Config) {
cli.attributes.nickname = cf.Attributes.Nickname
cli.attributes.phone = cf.Attributes.Phone
cli.attributes.email = cf.Attributes.Email
cli.displayName = cf.DisplayName
provider, err := oidc.NewProvider(context.Background(), cf.SsoAddr)
if err != nil {
log.Fatal(err)
@@ -80,10 +77,6 @@ func Init(cf Config) {
}
}
func GetDisplayName() string {
return cli.displayName
}
func wrapStateKey(key string) string {
return "n9e_oidc_" + key
}


@@ -1,100 +0,0 @@
package secu
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"encoding/base64"
"strings"
)
// BASE64StdEncode encodes src as a standard base64 string
func BASE64StdEncode(src []byte) string {
return base64.StdEncoding.EncodeToString(src)
}
// BASE64StdDecode decodes a standard base64 string
func BASE64StdDecode(src string) ([]byte, error) {
dst, err := base64.StdEncoding.DecodeString(src)
if err != nil {
return nil, err
}
return dst, nil
}
func PKCS7Padding(ciphertext []byte, blockSize int) []byte {
padding := blockSize - len(ciphertext)%blockSize
padtext := bytes.Repeat([]byte{byte(padding)}, padding)
return append(ciphertext, padtext...)
}
func PKCS7UnPadding(originData []byte) []byte {
length := len(originData)
unpadding := int(originData[length-1])
return originData[:(length - unpadding)]
}
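The two helpers above implement PKCS7: every input is padded up to a full block, and the pad byte's value equals the pad length, which is how unpadding knows how much to strip. A standalone round trip (the helpers are restated locally so the demo compiles on its own):

```go
package main

import (
	"bytes"
	"fmt"
)

// pkcs7Pad fills the last block; the pad byte value equals the pad length.
func pkcs7Pad(data []byte, blockSize int) []byte {
	padding := blockSize - len(data)%blockSize
	return append(data, bytes.Repeat([]byte{byte(padding)}, padding)...)
}

// pkcs7Unpad reads the last byte to learn how many pad bytes to drop.
func pkcs7Unpad(data []byte) []byte {
	n := int(data[len(data)-1])
	return data[:len(data)-n]
}

func main() {
	padded := pkcs7Pad([]byte("hello"), 16)
	fmt.Println(len(padded), padded[len(padded)-1]) // 5 bytes -> 11 pad bytes of value 11
	fmt.Println(string(pkcs7Unpad(padded)))
}
```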
// AesEncrypt performs AES-CBC encryption
func AesEncrypt(origData, key []byte) ([]byte, error) {
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
// pad the plaintext to a full block
blockSize := block.BlockSize()
padOrigData := PKCS7Padding(origData, blockSize)
// initialize the CBC encrypter; key[:blockSize] doubles as the IV
blockMode := cipher.NewCBCEncrypter(block, key[:blockSize])
crypted := make([]byte, len(padOrigData))
// encrypt
blockMode.CryptBlocks(crypted, padOrigData)
return crypted, nil
}
// AesDecrypt performs AES-CBC decryption
func AesDecrypt(crypted, key []byte) ([]byte, error) {
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
blockSize := block.BlockSize()
blockMode := cipher.NewCBCDecrypter(block, key[:blockSize])
origData := make([]byte, len(crypted))
// decrypt
blockMode.CryptBlocks(origData, crypted)
// strip the PKCS7 padding
origData = PKCS7UnPadding(origData)
return origData, nil
}
// DealWithDecrypt decrypts a config property when it carries the {{cipher}} prefix
func DealWithDecrypt(src string, key string) (string, error) {
// a {{cipher}} prefix marks an encrypted property; decode and decrypt it first
if strings.HasPrefix(src, "{{cipher}}") {
data := strings.TrimPrefix(src, "{{cipher}}")
decodeData, err := BASE64StdDecode(data)
if err != nil {
return src, err
}
// decrypt
origin, err := AesDecrypt(decodeData, []byte(key))
if err != nil {
return src, err
}
// return the plaintext
return string(origin), nil
} else {
return src, nil
}
}
// DealWithEncrypt encrypts a config property and prepends the {{cipher}} prefix
func DealWithEncrypt(src string, key string) (string, error) {
encrypted, err := AesEncrypt([]byte(src), []byte(key))
if err != nil {
return src, err
}
data := BASE64StdEncode(encrypted)
return "{{cipher}}" + data, nil
}


@@ -4,7 +4,6 @@ import (
"fmt"
"html/template"
"math"
"reflect"
"regexp"
"strconv"
"time"
@@ -34,10 +33,6 @@ func Timestamp(pattern ...string) string {
return time.Now().Format(defp)
}
func Now() time.Time {
return time.Now()
}
func Args(args ...interface{}) map[string]interface{} {
result := make(map[string]interface{})
for i, a := range args {
@@ -100,27 +95,11 @@ func Humanize1024(s string) string {
return fmt.Sprintf("%.4g%s", v, prefix)
}
func ToString(v interface{}) string {
return fmt.Sprint(v)
}
func HumanizeDuration(s string) string {
v, err := strconv.ParseFloat(s, 64)
if err != nil {
return s
}
return HumanizeDurationFloat64(v)
}
func HumanizeDurationInterface(i interface{}) string {
f, err := ToFloat64(i)
if err != nil {
return ToString(i)
}
return HumanizeDurationFloat64(f)
}
func HumanizeDurationFloat64(v float64) string {
if math.IsNaN(v) || math.IsInf(v, 0) {
return fmt.Sprintf("%.4g", v)
}
@@ -176,179 +155,3 @@ func HumanizePercentageH(s string) string {
}
return fmt.Sprintf("%.2f%%", v)
}
// Add returns the sum of a and b.
func Add(a, b interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() + bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() + int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) + bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() + bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() + float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() + float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", av, a)
}
}
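The pattern behind `Add` (and its siblings below) is a double switch on `reflect.Kind`: signed, unsigned, and float operands each promote the other side to a common type. A condensed sketch of just the int/float cases, with a hypothetical `addNumeric` name:

```go
package main

import (
	"fmt"
	"reflect"
)

// addNumeric condenses the kind-switch used by Add: inspect both operands
// with reflect and promote to float64 when either side is a float.
func addNumeric(a, b interface{}) (interface{}, error) {
	av, bv := reflect.ValueOf(a), reflect.ValueOf(b)
	switch av.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		if bv.Kind() == reflect.Float32 || bv.Kind() == reflect.Float64 {
			return float64(av.Int()) + bv.Float(), nil
		}
		return av.Int() + bv.Int(), nil
	case reflect.Float32, reflect.Float64:
		if bv.Kind() == reflect.Float32 || bv.Kind() == reflect.Float64 {
			return av.Float() + bv.Float(), nil
		}
		return av.Float() + float64(bv.Int()), nil
	}
	return nil, fmt.Errorf("add: unsupported type %T", a)
}

func main() {
	fmt.Println(addNumeric(2, 40))  // int + int stays int64
	fmt.Println(addNumeric(1, 0.5)) // mixed operands promote to float64
}
```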
// Subtract returns the difference of b from a.
func Subtract(a, b interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() - bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() - int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) - bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() - bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() - float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() - float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", av, a)
}
}
// Multiply returns the product of a and b.
func Multiply(a, b interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() * bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() * int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) * bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() * bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() * float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() * float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", av, a)
}
}
// Divide returns the division of b from a.
func Divide(a, b interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() / bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() / int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) / bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() / bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() / float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() / float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", av, a)
}
}


@@ -1,73 +0,0 @@
package tplx
import (
"fmt"
"strconv"
)
// ToFloat64 convert interface to float64
func ToFloat64(val interface{}) (float64, error) {
switch v := val.(type) {
case string:
if f, err := strconv.ParseFloat(v, 64); err == nil {
return f, nil
}
// try int
if i, err := strconv.ParseInt(v, 0, 64); err == nil {
return float64(i), nil
}
// try bool
b, err := strconv.ParseBool(v)
if err == nil {
if b {
return 1, nil
} else {
return 0, nil
}
}
if v == "Yes" || v == "yes" || v == "YES" || v == "Y" || v == "ON" || v == "on" || v == "On" || v == "ok" || v == "up" {
return 1, nil
}
if v == "No" || v == "no" || v == "NO" || v == "N" || v == "OFF" || v == "off" || v == "Off" || v == "fail" || v == "err" || v == "down" {
return 0, nil
}
return 0, fmt.Errorf("unparseable value %v", v)
case float64:
return v, nil
case uint64:
return float64(v), nil
case uint32:
return float64(v), nil
case uint16:
return float64(v), nil
case uint8:
return float64(v), nil
case uint:
return float64(v), nil
case int64:
return float64(v), nil
case int32:
return float64(v), nil
case int16:
return float64(v), nil
case int8:
return float64(v), nil
case bool:
if v {
return 1, nil
} else {
return 0, nil
}
case int:
return float64(v), nil
case float32:
return float64(v), nil
default:
return strconv.ParseFloat(fmt.Sprint(v), 64)
}
}
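The string branch of `ToFloat64` above is a fallback chain: try float, then integer (base-0, so hex and octal work), then bool, then a small set of yes/no style words. A condensed sketch of that chain, under a hypothetical `parseLoose` name:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseLoose: float, then int, then bool, then yes/no style keywords.
func parseLoose(s string) (float64, error) {
	if f, err := strconv.ParseFloat(s, 64); err == nil {
		return f, nil
	}
	if i, err := strconv.ParseInt(s, 0, 64); err == nil {
		return float64(i), nil
	}
	if b, err := strconv.ParseBool(s); err == nil {
		if b {
			return 1, nil
		}
		return 0, nil
	}
	switch s {
	case "yes", "on", "up", "ok":
		return 1, nil
	case "no", "off", "down", "fail":
		return 0, nil
	}
	return 0, fmt.Errorf("unparseable value %v", s)
}

func main() {
	for _, s := range []string{"3.5", "0x10", "true", "up"} {
		v, _ := parseLoose(s)
		fmt.Println(v)
	}
}
```

Ordering matters: `ParseFloat` runs first so `"1"` and `"1.5"` stay numeric, and the keyword table only catches values nothing else could parse.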


@@ -8,27 +8,20 @@ import (
)
var TemplateFuncMap = template.FuncMap{
"escape": url.PathEscape,
"unescaped": Unescaped,
"urlconvert": Urlconvert,
"timeformat": Timeformat,
"timestamp": Timestamp,
"args": Args,
"reReplaceAll": ReReplaceAll,
"match": regexp.MatchString,
"toUpper": strings.ToUpper,
"toLower": strings.ToLower,
"contains": strings.Contains,
"humanize": Humanize,
"humanize1024": Humanize1024,
"humanizeDuration": HumanizeDuration,
"humanizeDurationInterface": HumanizeDurationInterface,
"humanizePercentage": HumanizePercentage,
"humanizePercentageH": HumanizePercentageH,
"add": Add,
"sub": Subtract,
"mul": Multiply,
"div": Divide,
"now": Now,
"toString": ToString,
"escape": url.PathEscape,
"unescaped": Unescaped,
"urlconvert": Urlconvert,
"timeformat": Timeformat,
"timestamp": Timestamp,
"args": Args,
"reReplaceAll": ReReplaceAll,
"match": regexp.MatchString,
"toUpper": strings.ToUpper,
"toLower": strings.ToLower,
"contains": strings.Contains,
"humanize": Humanize,
"humanize1024": Humanize1024,
"humanizeDuration": HumanizeDuration,
"humanizePercentage": HumanizePercentage,
"humanizePercentageH": HumanizePercentageH,
}
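Helpers registered in a `FuncMap` like the one above become callable from notification templates. A minimal sketch of that wiring; the alert fields are invented, and `humanizePercentageH` here takes a float directly (the original parses a string first):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
	"strings"
)

// render registers two helpers and executes a template against alert data.
func render(severity string, usage float64) string {
	funcs := template.FuncMap{
		"toUpper": strings.ToUpper,
		// float variant of humanizePercentageH for the demo
		"humanizePercentageH": func(v float64) string { return fmt.Sprintf("%.2f%%", v) },
	}
	tpl := template.Must(template.New("msg").Funcs(funcs).Parse(
		`{{toUpper .Severity}}: disk usage {{humanizePercentageH .Usage}}`))
	var buf bytes.Buffer
	tpl.Execute(&buf, map[string]interface{}{"Severity": severity, "Usage": usage})
	return buf.String()
}

func main() {
	fmt.Println(render("warning", 93.456))
}
```

`Funcs` must be called before `Parse`, otherwise the parser rejects the unknown function names.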


@@ -1,9 +1,7 @@
package conv
import (
"fmt"
"math"
"strings"
"github.com/prometheus/common/model"
)
@@ -15,12 +13,6 @@ type Vector struct {
Value float64 `json:"value"`
}
func (v *Vector) ReadableValue() string {
ret := fmt.Sprintf("%.5f", v.Value)
ret = strings.TrimRight(ret, "0")
return strings.TrimRight(ret, ".")
}
func ConvertVectors(value model.Value) (lst []Vector) {
if value == nil {
return


@@ -96,9 +96,6 @@ func labelsToLabelsProto(labels model.Metric, rule *models.RecordingRule) (resul
}
result = append(result, nameLs)
for k, v := range labels {
if k == LabelName {
continue
}
if model.LabelNameRE.MatchString(string(k)) {
result = append(result, &prompb.Label{
Name: string(k),


@@ -12,17 +12,13 @@ func AppendLabels(pt *prompb.TimeSeries, target *models.Target) {
return
}
labelKeys := make(map[string]int)
labelKeys := make(map[string]struct{})
for j := 0; j < len(pt.Labels); j++ {
labelKeys[pt.Labels[j].Name] = j
labelKeys[pt.Labels[j].Name] = struct{}{}
}
for key, value := range target.TagsMap {
if index, has := labelKeys[key]; has {
// overwrite labels
if config.C.LabelRewrite {
pt.Labels[index].Value = value
}
if _, has := labelKeys[key]; has {
continue
}
@@ -36,7 +32,7 @@ func AppendLabels(pt *prompb.TimeSeries, target *models.Target) {
if _, has := labelKeys[config.C.BusiGroupLabelKey]; has {
return
}
// attach the business group name to the series as a tag
if target.GroupId > 0 && len(config.C.BusiGroupLabelKey) > 0 {
bg := memsto.BusiGroupCache.GetByBusiGroupId(target.GroupId)
if bg == nil {


@@ -1,16 +1,21 @@
package sender
import (
"html/template"
"net/url"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
"github.com/toolkits/pkg/logger"
)
type DingtalkMessage struct {
Title string
Text string
AtMobiles []string
Tokens []string
}
type dingtalkMarkdown struct {
Title string `json:"title"`
Text string `json:"text"`
@@ -27,91 +32,45 @@ type dingtalk struct {
At dingtalkAt `json:"at"`
}
type DingtalkSender struct {
tpl *template.Template
}
func (ds *DingtalkSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
func SendDingtalk(message DingtalkMessage) {
ats := make([]string, len(message.AtMobiles))
for i := 0; i < len(message.AtMobiles); i++ {
ats[i] = "@" + message.AtMobiles[i]
}
urls, ats := ds.extract(ctx.Users)
if len(urls) == 0 {
return
}
message := BuildTplMessage(ds.tpl, ctx.Event)
for i := 0; i < len(message.Tokens); i++ {
u, err := url.Parse(message.Tokens[i])
if err != nil {
logger.Errorf("dingtalk_sender: failed to parse token error=%v", err)
}
for _, url := range urls {
var body dingtalk
// NoAt in url
if strings.Contains(url, "noat=1") {
body = dingtalk{
Msgtype: "markdown",
Markdown: dingtalkMarkdown{
Title: ctx.Rule.Name,
Text: message,
},
v, err := url.ParseQuery(u.RawQuery)
if err != nil {
logger.Errorf("dingtalk_sender: failed to parse query error=%v", err)
}
ur := "https://oapi.dingtalk.com/robot/send?access_token=" + u.Path
body := dingtalk{
Msgtype: "markdown",
Markdown: dingtalkMarkdown{
Title: message.Title,
Text: message.Text,
},
}
if v.Get("noat") != "1" {
body.Markdown.Text = message.Text + " " + strings.Join(ats, " ")
body.At = dingtalkAt{
AtMobiles: message.AtMobiles,
IsAtAll: false,
}
}
res, code, err := poster.PostJSON(ur, time.Second*5, body, 3)
if err != nil {
logger.Errorf("dingtalk_sender: result=fail url=%s code=%d error=%v response=%s", ur, code, err, string(res))
} else {
body = dingtalk{
Msgtype: "markdown",
Markdown: dingtalkMarkdown{
Title: ctx.Rule.Name,
Text: message + " " + strings.Join(ats, " "),
},
At: dingtalkAt{
AtMobiles: ats,
IsAtAll: false,
},
}
}
ds.doSend(url, body)
}
}
func (ds *DingtalkSender) SendRaw(users []*models.User, title, message string) {
if len(users) == 0 {
return
}
urls, _ := ds.extract(users)
body := dingtalk{
Msgtype: "markdown",
Markdown: dingtalkMarkdown{
Title: title,
Text: message,
},
}
for _, url := range urls {
ds.doSend(url, body)
}
}
// extract urls and ats from Users
func (ds *DingtalkSender) extract(users []*models.User) ([]string, []string) {
urls := make([]string, 0, len(users))
ats := make([]string, 0, len(users))
for _, user := range users {
if user.Phone != "" {
ats = append(ats, "@"+user.Phone)
}
if token, has := user.ExtractToken(models.Dingtalk); has {
url := token
if !strings.HasPrefix(token, "https://") {
url = "https://oapi.dingtalk.com/robot/send?access_token=" + token
}
urls = append(urls, url)
logger.Infof("dingtalk_sender: result=succ url=%s code=%d response=%s", ur, code, string(res))
}
}
return urls, ats
}
func (ds *DingtalkSender) doSend(url string, body dingtalk) {
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("dingtalk_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("dingtalk_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}
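The new `SendDingtalk` path above treats the configured token as a small URL: the path part is the access token, and option flags like `noat=1` ride in its query string. A standalone sketch of that split (the token value is invented):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// splitToken separates a token like "abc123?noat=1" into the access token
// (the path part) and the noat flag, as the sender does.
func splitToken(token string) (accessToken string, noAt bool, err error) {
	u, err := url.Parse(token)
	if err != nil {
		return "", false, err
	}
	v, err := url.ParseQuery(u.RawQuery)
	if err != nil {
		return "", false, err
	}
	return u.Path, v.Get("noat") == "1", nil
}

func main() {
	tok, noAt, _ := splitToken("abc123?noat=1")
	fmt.Println("https://oapi.dingtalk.com/robot/send?access_token="+tok, noAt)

	// When noat is unset, the sender appends @-mentions to the markdown text.
	ats := []string{"13800000000", "13900000000"}
	fmt.Println("@" + strings.Join(ats, " @"))
}
```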


@@ -2,54 +2,15 @@ package sender
import (
"crypto/tls"
"html/template"
"time"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/toolkits/pkg/logger"
"gopkg.in/gomail.v2"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
)
var mailch chan *gomail.Message
type EmailSender struct {
subjectTpl *template.Template
contentTpl *template.Template
}
func (es *EmailSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
tos := es.extract(ctx.Users)
var subject string
if es.subjectTpl != nil {
subject = BuildTplMessage(es.subjectTpl, ctx.Event)
} else {
subject = ctx.Rule.Name
}
content := BuildTplMessage(es.contentTpl, ctx.Event)
WriteEmail(subject, content, tos)
}
func (es *EmailSender) SendRaw(users []*models.User, title, message string) {
tos := es.extract(users)
WriteEmail(title, message, tos)
}
func (es *EmailSender) extract(users []*models.User) []string {
tos := make([]string, 0, len(users))
for _, u := range users {
if u.Email != "" {
tos = append(tos, u.Email)
}
}
return tos
}
func SendEmail(subject, content string, tos []string) {
conf := config.C.SMTP


@@ -1,16 +1,18 @@
package sender
import (
"html/template"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
"github.com/toolkits/pkg/logger"
)
type FeishuMessage struct {
Text string
AtMobiles []string
Tokens []string
}
type feishuContent struct {
Text string `json:"text"`
}
@@ -26,73 +28,25 @@ type feishu struct {
At feishuAt `json:"at"`
}
type FeishuSender struct {
tpl *template.Template
}
func (fs *FeishuSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
urls, ats := fs.extract(ctx.Users)
message := BuildTplMessage(fs.tpl, ctx.Event)
for _, url := range urls {
func SendFeishu(message FeishuMessage) {
for i := 0; i < len(message.Tokens); i++ {
url := "https://open.feishu.cn/open-apis/bot/v2/hook/" + message.Tokens[i]
body := feishu{
Msgtype: "text",
Content: feishuContent{
Text: message,
Text: message.Text,
},
At: feishuAt{
AtMobiles: message.AtMobiles,
IsAtAll: false,
},
}
if !strings.Contains(url, "noat=1") {
body.At = feishuAt{
AtMobiles: ats,
IsAtAll: false,
}
}
fs.doSend(url, body)
}
}
func (fs *FeishuSender) SendRaw(users []*models.User, title, message string) {
if len(users) == 0 {
return
}
urls, _ := fs.extract(users)
body := feishu{
Msgtype: "text",
Content: feishuContent{
Text: message,
},
}
for _, url := range urls {
fs.doSend(url, body)
}
}
func (fs *FeishuSender) extract(users []*models.User) ([]string, []string) {
urls := make([]string, 0, len(users))
ats := make([]string, 0, len(users))
for _, user := range users {
if user.Phone != "" {
ats = append(ats, user.Phone)
}
if token, has := user.ExtractToken(models.Feishu); has {
url := token
if !strings.HasPrefix(token, "https://") {
url = "https://open.feishu.cn/open-apis/bot/v2/hook/" + token
}
urls = append(urls, url)
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("feishu_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("feishu_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}
return urls, ats
}
func (fs *FeishuSender) doSend(url string, body feishu) {
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("feishu_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("feishu_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}


@@ -1,161 +0,0 @@
package sender
import (
"fmt"
"html/template"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
)
type feishuCardContent struct {
Text string `json:"text"`
}
type Conf struct {
WideScreenMode bool `json:"wide_screen_mode"`
EnableForward bool `json:"enable_forward"`
}
type Te struct {
Content string `json:"content"`
Tag string `json:"tag"`
}
type Element struct {
Tag string `json:"tag"`
Text Te `json:"text"`
Content string `json:"content"`
Elements []Element `json:"elements"`
}
type Titles struct {
Content string `json:"content"`
Tag string `json:"tag"`
}
type Headers struct {
Title Titles `json:"title"`
Template string `json:"template"`
}
type Cards struct {
Config Conf `json:"config"`
Elements []Element `json:"elements"`
Header Headers `json:"header"`
}
type feishuCard struct {
Msgtype string `json:"msg_type"`
Content feishuCardContent `json:"content"`
Card Cards `json:"card"`
}
type FeishuCardSender struct {
tpl *template.Template
}
func (fs *FeishuCardSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
urls, _ := fs.extract(ctx.Users)
message := BuildTplMessage(fs.tpl, ctx.Event)
for _, url := range urls {
var color string
if strings.Count(message, "Recovered") > 0 && strings.Count(message, "Triggered") > 0 {
color = "orange"
} else if strings.Count(message, "Recovered") > 0 {
color = "green"
} else {
color = "red"
}
SendTitle := fmt.Sprintf("🔔 [告警提醒] - %s", ctx.Rule.Name)
body := feishuCard{
Msgtype: "interactive",
Card: Cards{
Config: Conf{
WideScreenMode: true,
EnableForward: true,
},
Header: Headers{
Title: Titles{
Content: SendTitle,
Tag: "plain_text",
},
Template: color,
},
Elements: []Element{
Element{
Tag: "div",
Text: Te{
Content: message,
Tag: "lark_md",
},
},
{
Tag: "hr",
},
{
Tag: "note",
Elements: []Element{
{
Content: SendTitle,
Tag: "lark_md",
},
},
},
},
},
}
fs.doSend(url, body)
}
}
func (fs *FeishuCardSender) SendRaw(users []*models.User, title, message string) {
if len(users) == 0 {
return
}
urls, _ := fs.extract(users)
body := feishuCard{
Msgtype: "text",
Content: feishuCardContent{
Text: message,
},
}
for _, url := range urls {
fs.doSend(url, body)
}
}
func (fs *FeishuCardSender) extract(users []*models.User) ([]string, []string) {
urls := make([]string, 0, len(users))
ats := make([]string, 0, len(users))
for _, user := range users {
if user.Phone != "" {
ats = append(ats, user.Phone)
}
if token, has := user.ExtractToken(models.FeishuCard); has {
url := token
if !strings.HasPrefix(token, "https://") {
url = "https://open.feishu.cn/open-apis/bot/v2/hook/" + token
}
urls = append(urls, url)
}
}
return urls, ats
}
func (fs *FeishuCardSender) doSend(url string, body feishuCard) {
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("feishu_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("feishu_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}
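The token handling in `extract` above accepts either a bare bot token or a full webhook URL. A minimal sketch of that normalization, isolated into a helper (the function name `buildFeishuWebhookURL` is ours, for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// buildFeishuWebhookURL mirrors the normalization in extract: a bare token
// is expanded to the default Feishu bot webhook endpoint, while a full
// https URL passes through unchanged.
func buildFeishuWebhookURL(token string) string {
	if strings.HasPrefix(token, "https://") {
		return token
	}
	return "https://open.feishu.cn/open-apis/bot/v2/hook/" + token
}

func main() {
	fmt.Println(buildFeishuWebhookURL("abc123"))
	fmt.Println(buildFeishuWebhookURL("https://example.com/hook"))
}
```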


@@ -1,117 +0,0 @@
package sender
import (
"html/template"
"net/url"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
)
type MatterMostMessage struct {
Text string
Tokens []string
}
type mm struct {
Channel string `json:"channel"`
Username string `json:"username"`
Text string `json:"text"`
}
type MmSender struct {
tpl *template.Template
}
func (ms *MmSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
urls := ms.extract(ctx.Users)
if len(urls) == 0 {
return
}
message := BuildTplMessage(ms.tpl, ctx.Event)
SendMM(MatterMostMessage{
Text: message,
Tokens: urls,
})
}
func (ms *MmSender) SendRaw(users []*models.User, title, message string) {
urls := ms.extract(users)
if len(urls) == 0 {
return
}
SendMM(MatterMostMessage{
Text: message,
Tokens: urls,
})
}
func (ms *MmSender) extract(users []*models.User) []string {
tokens := make([]string, 0, len(users))
for _, user := range users {
if token, has := user.ExtractToken(models.Mm); has {
tokens = append(tokens, token)
}
}
return tokens
}
func SendMM(message MatterMostMessage) {
for i := 0; i < len(message.Tokens); i++ {
u, err := url.Parse(message.Tokens[i])
if err != nil {
logger.Errorf("mm_sender: failed to parse token=%s error=%v", message.Tokens[i], err)
continue
}
v, err := url.ParseQuery(u.RawQuery)
if err != nil {
logger.Errorf("mm_sender: failed to parse query token=%s error=%v", message.Tokens[i], err)
continue
}
channels := v["channel"] // may be empty if no channel param was given
txt := ""
atuser := v["atuser"]
if len(atuser) != 0 {
txt = strings.Join(MapStrToStr(atuser, func(u string) string {
return "@" + u
}), ",") + "\n"
}
username := v.Get("username")
// strip the query string: keep scheme://host/path as the webhook endpoint
ur := u.Scheme + "://" + u.Host + u.Path
for _, channel := range channels {
body := mm{
Channel: channel,
Username: username,
Text: txt + message.Text,
}
res, code, err := poster.PostJSON(ur, time.Second*5, body, 3)
if err != nil {
logger.Errorf("mm_sender: result=fail url=%s code=%d error=%v response=%s", ur, code, err, string(res))
} else {
logger.Infof("mm_sender: result=succ url=%s code=%d response=%s", ur, code, string(res))
}
}
}
}
func MapStrToStr(arr []string, fn func(s string) string) []string {
newArray := make([]string, 0, len(arr))
for _, it := range arr {
newArray = append(newArray, fn(it))
}
return newArray
}
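`SendMM` packs its routing into the token URL itself: `channel`, `atuser`, and `username` ride in the query string, and the endpoint is rebuilt from scheme, host, and path. A self-contained sketch of that parsing (the helper name `parseMmToken` is ours, not from the codebase):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseMmToken mirrors how SendMM reads a Mattermost token: query
// parameters carry channels, at-users and the bot username, while
// scheme://host/path is the webhook endpoint.
func parseMmToken(token string) (endpoint string, channels []string, prefix, username string, err error) {
	u, err := url.Parse(token)
	if err != nil {
		return "", nil, "", "", err
	}
	v, err := url.ParseQuery(u.RawQuery)
	if err != nil {
		return "", nil, "", "", err
	}
	atuser := v["atuser"]
	if len(atuser) > 0 {
		mentions := make([]string, 0, len(atuser))
		for _, a := range atuser {
			mentions = append(mentions, "@"+a)
		}
		prefix = strings.Join(mentions, ",") + "\n"
	}
	return u.Scheme + "://" + u.Host + u.Path, v["channel"], prefix, v.Get("username"), nil
}

func main() {
	ep, chs, prefix, user, _ := parseMmToken("https://mm.example.com/hooks/xyz?channel=ops&channel=dev&atuser=alice&username=n9e")
	fmt.Println(ep, chs, prefix, user)
}
```

Each channel in the list then gets its own POST, with the at-user prefix prepended to the message text.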


@@ -1,77 +0,0 @@
package sender
import (
"bytes"
"os/exec"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/notifier"
"github.com/didi/nightingale/v5/src/pkg/sys"
"github.com/didi/nightingale/v5/src/server/config"
)
func MayPluginNotify(noticeBytes []byte) {
if len(noticeBytes) == 0 {
return
}
alertingCallPlugin(noticeBytes)
alertingCallScript(noticeBytes)
}
func alertingCallScript(stdinBytes []byte) {
// not enabled or no script path configured? do nothing
if !config.C.Alerting.CallScript.Enable || config.C.Alerting.CallScript.ScriptPath == "" {
return
}
fpath := config.C.Alerting.CallScript.ScriptPath
cmd := exec.Command(fpath)
cmd.Stdin = bytes.NewReader(stdinBytes)
// combine stdout and stderr
var buf bytes.Buffer
cmd.Stdout = &buf
cmd.Stderr = &buf
err := startCmd(cmd)
if err != nil {
logger.Errorf("event_notify: run cmd err: %v", err)
return
}
err, isTimeout := sys.WrapTimeout(cmd, time.Duration(config.C.Alerting.Timeout)*time.Millisecond)
if isTimeout {
if err == nil {
logger.Errorf("event_notify: timeout and killed process %s", fpath)
}
if err != nil {
logger.Errorf("event_notify: kill process %s occur error %v", fpath, err)
}
return
}
if err != nil {
logger.Errorf("event_notify: exec script %s occur error: %v, output: %s", fpath, err, buf.String())
return
}
logger.Infof("event_notify: exec %s output: %s", fpath, buf.String())
}
// call notify.so built via the Go plugin mechanism
// e.g. etc/script/notify/notify.so
func alertingCallPlugin(stdinBytes []byte) {
if !config.C.Alerting.CallPlugin.Enable {
return
}
logger.Debugf("alertingCallPlugin begin")
logger.Debugf("payload: %s", string(stdinBytes))
notifier.Instance.Notify(stdinBytes)
logger.Debugf("alertingCallPlugin done")
}


@@ -1,26 +0,0 @@
package sender
import (
"context"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/storage"
)
func PublishToRedis(clusterName string, bs []byte) {
if len(bs) == 0 {
return
}
if !config.C.Alerting.RedisPub.Enable {
return
}
// pub all alerts to redis
channelKey := config.C.Alerting.RedisPub.ChannelPrefix + clusterName
err := storage.Redis.Publish(context.Background(), channelKey, bs).Err()
if err != nil {
logger.Errorf("event_notify: redis publish %s err: %v", channelKey, err)
}
}


@@ -1,73 +0,0 @@
package sender
import (
"bytes"
"html/template"
"github.com/toolkits/pkg/slice"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
)
type (
// Sender is the interface for sending notification messages
Sender interface {
Send(ctx MessageContext)
// SendRaw sends a raw message; currently used when notifying maintainers (notifyMaintainer)
SendRaw(users []*models.User, title, message string)
}
// MessageContext is the notification context generated from a single event
MessageContext struct {
Users []*models.User
Rule *models.AlertRule
Event *models.AlertCurEvent
}
)
func NewSender(key string, tpls map[string]*template.Template) Sender {
if !slice.ContainsString(config.C.Alerting.NotifyBuiltinChannels, key) {
return nil
}
switch key {
case models.Dingtalk:
return &DingtalkSender{tpl: tpls["dingtalk.tpl"]}
case models.Wecom:
return &WecomSender{tpl: tpls["wecom.tpl"]}
case models.Feishu:
return &FeishuSender{tpl: tpls["feishu.tpl"]}
case models.FeishuCard:
return &FeishuCardSender{tpl: tpls["feishucard.tpl"]}
case models.Email:
return &EmailSender{subjectTpl: tpls["subject.tpl"], contentTpl: tpls["mailbody.tpl"]}
case models.Mm:
return &MmSender{tpl: tpls["mm.tpl"]}
case models.Telegram:
return &TelegramSender{tpl: tpls["telegram.tpl"]}
}
return nil
}
func BuildMessageContext(rule *models.AlertRule, event *models.AlertCurEvent, uids []int64) MessageContext {
users := memsto.UserCache.GetByUserIds(uids)
return MessageContext{
Rule: rule,
Event: event,
Users: users,
}
}
func BuildTplMessage(tpl *template.Template, event *models.AlertCurEvent) string {
if tpl == nil {
return "tpl for current sender not found, please check configuration"
}
var body bytes.Buffer
if err := tpl.Execute(&body, event); err != nil {
return err.Error()
}
return body.String()
}


@@ -1,90 +0,0 @@
package sender
import (
"html/template"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
)
type TelegramMessage struct {
Text string
Tokens []string
}
type telegram struct {
ParseMode string `json:"parse_mode"`
Text string `json:"text"`
}
type TelegramSender struct {
tpl *template.Template
}
func (ts *TelegramSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
tokens := ts.extract(ctx.Users)
message := BuildTplMessage(ts.tpl, ctx.Event)
SendTelegram(TelegramMessage{
Text: message,
Tokens: tokens,
})
}
func (ts *TelegramSender) SendRaw(users []*models.User, title, message string) {
tokens := ts.extract(users)
SendTelegram(TelegramMessage{
Text: message,
Tokens: tokens,
})
}
func (ts *TelegramSender) extract(users []*models.User) []string {
tokens := make([]string, 0, len(users))
for _, user := range users {
if token, has := user.ExtractToken(models.Telegram); has {
tokens = append(tokens, token)
}
}
return tokens
}
func SendTelegram(message TelegramMessage) {
for i := 0; i < len(message.Tokens); i++ {
if !strings.Contains(message.Tokens[i], "/") && !strings.HasPrefix(message.Tokens[i], "https://") {
logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
continue
}
var url string
if strings.HasPrefix(message.Tokens[i], "https://") {
url = message.Tokens[i]
} else {
array := strings.Split(message.Tokens[i], "/")
if len(array) != 2 {
logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
continue
}
botToken := array[0]
chatId := array[1]
url = "https://api.telegram.org/bot" + botToken + "/sendMessage?chat_id=" + chatId
}
body := telegram{
ParseMode: "markdown",
Text: message.Text,
}
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("telegram_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("telegram_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}
}
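`SendTelegram` accepts two token shapes: a full https URL used as-is, or a `botToken/chatId` pair expanded into a Bot API `sendMessage` call. A sketch of just that token handling (the helper name `buildTelegramURL` is ours, for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// buildTelegramURL mirrors SendTelegram's token handling: a full https URL
// passes through, a "botToken/chatId" pair becomes a sendMessage API URL,
// and anything else is rejected.
func buildTelegramURL(token string) (string, bool) {
	if strings.HasPrefix(token, "https://") {
		return token, true
	}
	parts := strings.Split(token, "/")
	if len(parts) != 2 {
		return "", false
	}
	return "https://api.telegram.org/bot" + parts[0] + "/sendMessage?chat_id=" + parts[1], true
}

func main() {
	u, ok := buildTelegramURL("123456:ABC/789")
	fmt.Println(u, ok)
}
```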


@@ -1,66 +0,0 @@
package sender
import (
"bytes"
"encoding/json"
"io/ioutil"
"net/http"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
)
func SendWebhooks(webhooks []config.Webhook, event *models.AlertCurEvent) {
for _, conf := range webhooks {
if conf.Url == "" || !conf.Enable {
continue
}
bs, err := json.Marshal(event)
if err != nil {
continue
}
bf := bytes.NewBuffer(bs)
req, err := http.NewRequest("POST", conf.Url, bf)
if err != nil {
logger.Warning("alertingWebhook failed to create request", err)
continue
}
if conf.BasicAuthUser != "" && conf.BasicAuthPass != "" {
req.SetBasicAuth(conf.BasicAuthUser, conf.BasicAuthPass)
}
if len(conf.Headers) > 0 && len(conf.Headers)%2 == 0 {
for i := 0; i < len(conf.Headers); i += 2 {
if conf.Headers[i] == "host" {
req.Host = conf.Headers[i+1]
continue
}
req.Header.Set(conf.Headers[i], conf.Headers[i+1])
}
}
client := http.Client{
Timeout: conf.TimeoutDuration,
}
var resp *http.Response
resp, err = client.Do(req)
if err != nil {
logger.Warningf("WebhookCallError, ruleId: [%d], eventId: [%d], url: [%s], error: [%s]", event.RuleId, event.Id, conf.Url, err)
continue
}
var body []byte
if resp.Body != nil {
defer resp.Body.Close()
body, _ = ioutil.ReadAll(resp.Body)
}
logger.Debugf("alertingWebhook done, url: %s, response code: %d, body: %s", conf.Url, resp.StatusCode, string(body))
}
}


@@ -1,16 +1,17 @@
package sender
import (
"html/template"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/poster"
"github.com/toolkits/pkg/logger"
)
type WecomMessage struct {
Text string
Tokens []string
}
type wecomMarkdown struct {
Content string `json:"content"`
}
@@ -20,59 +21,21 @@ type wecom struct {
Markdown wecomMarkdown `json:"markdown"`
}
type WecomSender struct {
tpl *template.Template
}
func (ws *WecomSender) Send(ctx MessageContext) {
if len(ctx.Users) == 0 || ctx.Rule == nil || ctx.Event == nil {
return
}
urls := ws.extract(ctx.Users)
message := BuildTplMessage(ws.tpl, ctx.Event)
for _, url := range urls {
func SendWecom(message WecomMessage) {
for i := 0; i < len(message.Tokens); i++ {
url := "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=" + message.Tokens[i]
body := wecom{
Msgtype: "markdown",
Markdown: wecomMarkdown{
Content: message,
Content: message.Text,
},
}
ws.doSend(url, body)
}
}
func (ws *WecomSender) SendRaw(users []*models.User, title, message string) {
urls := ws.extract(users)
for _, url := range urls {
body := wecom{
Msgtype: "markdown",
Markdown: wecomMarkdown{
Content: message,
},
}
ws.doSend(url, body)
}
}
func (ws *WecomSender) extract(users []*models.User) []string {
urls := make([]string, 0, len(users))
for _, user := range users {
if token, has := user.ExtractToken(models.Wecom); has {
url := token
if !strings.HasPrefix(token, "https://") {
url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=" + token
}
urls = append(urls, url)
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("wecom_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("wecom_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}
return urls
}
func (ws *WecomSender) doSend(url string, body wecom) {
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
if err != nil {
logger.Errorf("wecom_sender: result=fail url=%s code=%d error=%v response=%s", url, code, err, string(res))
} else {
logger.Infof("wecom_sender: result=succ url=%s code=%d response=%s", url, code, string(res))
}
}


@@ -2,11 +2,9 @@ package config
import (
"fmt"
"html/template"
"log"
"net"
"os"
"path"
"plugin"
"runtime"
"strings"
@@ -15,17 +13,12 @@ import (
"github.com/gin-gonic/gin"
"github.com/koding/multiconfig"
"github.com/pkg/errors"
"github.com/toolkits/pkg/file"
"github.com/toolkits/pkg/runner"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/notifier"
"github.com/didi/nightingale/v5/src/pkg/httpx"
"github.com/didi/nightingale/v5/src/pkg/logx"
"github.com/didi/nightingale/v5/src/pkg/ormx"
"github.com/didi/nightingale/v5/src/pkg/secu"
"github.com/didi/nightingale/v5/src/pkg/tplx"
"github.com/didi/nightingale/v5/src/storage"
)
@@ -34,68 +27,7 @@ var (
once sync.Once
)
func DealConfigCrypto(key string) {
decryptDsn, err := secu.DealWithDecrypt(C.DB.DSN, key)
if err != nil {
fmt.Println("failed to decrypt the db dsn", err)
os.Exit(1)
}
C.DB.DSN = decryptDsn
decryptRedisPwd, err := secu.DealWithDecrypt(C.Redis.Password, key)
if err != nil {
fmt.Println("failed to decrypt the redis password", err)
os.Exit(1)
}
C.Redis.Password = decryptRedisPwd
decryptSmtpPwd, err := secu.DealWithDecrypt(C.SMTP.Pass, key)
if err != nil {
fmt.Println("failed to decrypt the smtp password", err)
os.Exit(1)
}
C.SMTP.Pass = decryptSmtpPwd
decryptHookPwd, err := secu.DealWithDecrypt(C.Alerting.Webhook.BasicAuthPass, key)
if err != nil {
fmt.Println("failed to decrypt the alert webhook password", err)
os.Exit(1)
}
C.Alerting.Webhook.BasicAuthPass = decryptHookPwd
decryptIbexPwd, err := secu.DealWithDecrypt(C.Ibex.BasicAuthPass, key)
if err != nil {
fmt.Println("failed to decrypt the ibex password", err)
os.Exit(1)
}
C.Ibex.BasicAuthPass = decryptIbexPwd
if len(C.Readers) == 0 {
C.Reader.ClusterName = C.ClusterName
C.Readers = append(C.Readers, C.Reader)
}
for index, v := range C.Readers {
decryptReaderPwd, err := secu.DealWithDecrypt(v.BasicAuthPass, key)
if err != nil {
fmt.Printf("failed to decrypt the reader password: %s , error: %s", v.BasicAuthPass, err.Error())
os.Exit(1)
}
C.Readers[index].BasicAuthPass = decryptReaderPwd
}
for index, v := range C.Writers {
decryptWriterPwd, err := secu.DealWithDecrypt(v.BasicAuthPass, key)
if err != nil {
fmt.Printf("failed to decrypt the writer password: %s , error: %s", v.BasicAuthPass, err.Error())
os.Exit(1)
}
C.Writers[index].BasicAuthPass = decryptWriterPwd
}
}
func MustLoad(key string, fpaths ...string) {
func MustLoad(fpaths ...string) {
once.Do(func() {
loaders := []multiconfig.Loader{
&multiconfig.TagLoader{},
@@ -134,8 +66,6 @@ func MustLoad(key string, fpaths ...string) {
}
m.MustLoad(C)
DealConfigCrypto(key)
if C.EngineDelay == 0 {
C.EngineDelay = 120
}
@@ -144,11 +74,6 @@ func MustLoad(key string, fpaths ...string) {
C.ReaderFrom = "config"
}
if C.ReaderFrom == "config" && C.ClusterName == "" {
fmt.Println("configuration ClusterName is blank")
os.Exit(1)
}
if C.Heartbeat.IP == "" {
// auto detect
// C.Heartbeat.IP = fmt.Sprint(GetOutboundIP())
@@ -174,10 +99,48 @@ func MustLoad(key string, fpaths ...string) {
C.Heartbeat.Endpoint = fmt.Sprintf("%s:%d", C.Heartbeat.IP, C.HTTP.Port)
C.Alerting.check()
if C.Alerting.Webhook.Enable {
if C.Alerting.Webhook.Timeout == "" {
C.Alerting.Webhook.TimeoutDuration = time.Second * 5
} else {
dur, err := time.ParseDuration(C.Alerting.Webhook.Timeout)
if err != nil {
fmt.Println("failed to parse Alerting.Webhook.Timeout")
os.Exit(1)
}
C.Alerting.Webhook.TimeoutDuration = dur
}
}
if C.Alerting.CallPlugin.Enable {
if runtime.GOOS == "windows" {
fmt.Println("notify plugin on unsupported os:", runtime.GOOS)
os.Exit(1)
}
p, err := plugin.Open(C.Alerting.CallPlugin.PluginPath)
if err != nil {
fmt.Println("failed to load plugin:", err)
os.Exit(1)
}
caller, err := p.Lookup(C.Alerting.CallPlugin.Caller)
if err != nil {
fmt.Println("failed to lookup plugin Caller:", err)
os.Exit(1)
}
ins, ok := caller.(notifier.Notifier)
if !ok {
log.Println("notifier interface not implemented")
os.Exit(1)
}
notifier.Instance = ins
}
if C.WriterOpt.QueueMaxSize <= 0 {
C.WriterOpt.QueueMaxSize = 10000000
C.WriterOpt.QueueMaxSize = 100000
}
if C.WriterOpt.QueuePopSize <= 0 {
@@ -185,18 +148,10 @@ func MustLoad(key string, fpaths ...string) {
}
if C.WriterOpt.QueueCount <= 0 {
C.WriterOpt.QueueCount = 1000
C.WriterOpt.QueueCount = 100
}
if C.WriterOpt.ShardingKey == "" {
C.WriterOpt.ShardingKey = "ident"
}
for i, write := range C.Writers {
if C.Writers[i].ClusterName == "" {
C.Writers[i].ClusterName = C.ClusterName
}
for _, write := range C.Writers {
for _, relabel := range write.WriteRelabels {
regex, ok := relabel.Regex.(string)
if !ok {
@@ -230,12 +185,11 @@ func MustLoad(key string, fpaths ...string) {
type Config struct {
RunMode string
ClusterName string // 监控对象上报时,指定的集群名称
ClusterName string
BusiGroupLabelKey string
EngineDelay int64
DisableUsageReport bool
ReaderFrom string
LabelRewrite bool
ForceUseServerTS bool
Log logx.Config
HTTP httpx.Config
@@ -249,12 +203,10 @@ type Config struct {
WriterOpt WriterGlobalOpt
Writers []WriterOptions
Reader PromOption
Readers []PromOption
Ibex Ibex
}
type WriterOptions struct {
ClusterName string
Url string
BasicAuthUser string
BasicAuthPass string
@@ -279,7 +231,6 @@ type WriterGlobalOpt struct {
QueueCount int
QueueMaxSize int
QueuePopSize int
ShardingKey string
}
type HeartbeatConfig struct {
@@ -299,7 +250,6 @@ type SMTPConfig struct {
}
type Alerting struct {
Timeout int64
TemplatesDir string
NotifyConcurrency int
NotifyBuiltinChannels []string
@@ -309,89 +259,6 @@ type Alerting struct {
Webhook Webhook
}
func (a *Alerting) check() {
if a.Webhook.Enable {
if a.Webhook.Timeout == "" {
a.Webhook.TimeoutDuration = time.Second * 5
} else {
dur, err := time.ParseDuration(C.Alerting.Webhook.Timeout)
if err != nil {
fmt.Println("failed to parse Alerting.Webhook.Timeout")
os.Exit(1)
}
a.Webhook.TimeoutDuration = dur
}
}
if a.CallPlugin.Enable {
if runtime.GOOS == "windows" {
fmt.Println("notify plugin on unsupported os:", runtime.GOOS)
os.Exit(1)
}
p, err := plugin.Open(a.CallPlugin.PluginPath)
if err != nil {
fmt.Println("failed to load plugin:", err)
os.Exit(1)
}
caller, err := p.Lookup(a.CallPlugin.Caller)
if err != nil {
fmt.Println("failed to lookup plugin Caller:", err)
os.Exit(1)
}
ins, ok := caller.(notifier.Notifier)
if !ok {
log.Println("notifier interface not implemented")
os.Exit(1)
}
notifier.Instance = ins
}
if a.TemplatesDir == "" {
a.TemplatesDir = path.Join(runner.Cwd, "etc", "template")
}
if a.Timeout == 0 {
a.Timeout = 30000
}
}
func (a *Alerting) ListTpls() (map[string]*template.Template, error) {
filenames, err := file.FilesUnder(a.TemplatesDir)
if err != nil {
return nil, errors.WithMessage(err, "failed to exec FilesUnder")
}
if len(filenames) == 0 {
return nil, errors.New("no tpl files under " + a.TemplatesDir)
}
tplFiles := make([]string, 0, len(filenames))
for i := 0; i < len(filenames); i++ {
if strings.HasSuffix(filenames[i], ".tpl") {
tplFiles = append(tplFiles, filenames[i])
}
}
if len(tplFiles) == 0 {
return nil, errors.New("no tpl files under " + a.TemplatesDir)
}
tpls := make(map[string]*template.Template)
for _, tplFile := range tplFiles {
tplpath := path.Join(a.TemplatesDir, tplFile)
tpl, err := template.New(tplFile).Funcs(tplx.TemplateFuncMap).ParseFiles(tplpath)
if err != nil {
return nil, errors.WithMessage(err, "failed to parse tpl: "+tplpath)
}
tpls[tplFile] = tpl
}
return tpls, nil
}
type CallScript struct {
Enable bool
ScriptPath string


@@ -1,92 +1,59 @@
package config
import (
"strings"
"sync"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/prom"
)
type PromClientMap struct {
type PromClient struct {
prom.API
ClusterName string
sync.RWMutex
Clients map[string]prom.API
}
var ReaderClients = &PromClientMap{Clients: make(map[string]prom.API)}
var ReaderClient *PromClient = &PromClient{}
func (pc *PromClientMap) Set(clusterName string, c prom.API) {
if c == nil {
return
}
func (pc *PromClient) Set(clusterName string, c prom.API) {
pc.Lock()
defer pc.Unlock()
pc.Clients[clusterName] = c
pc.ClusterName = clusterName
pc.API = c
}
func (pc *PromClientMap) GetClusterNames() []string {
func (pc *PromClient) Get() (string, prom.API) {
pc.RLock()
defer pc.RUnlock()
var clusterNames []string
for k := range pc.Clients {
clusterNames = append(clusterNames, k)
}
return clusterNames
return pc.ClusterName, pc.API
}
func (pc *PromClientMap) GetCli(cluster string) prom.API {
func (pc *PromClient) GetClusterName() string {
pc.RLock()
defer pc.RUnlock()
c := pc.Clients[cluster]
return c
return pc.ClusterName
}
func (pc *PromClientMap) IsNil(cluster string) bool {
func (pc *PromClient) GetCli() prom.API {
pc.RLock()
defer pc.RUnlock()
return pc.API
}
c, exists := pc.Clients[cluster]
if !exists {
func (pc *PromClient) IsNil() bool {
if pc == nil {
return true
}
return c == nil
}
// Hit computes the effective cluster list from the currently active clusters and the rule's cluster setting
func (pc *PromClientMap) Hit(cluster string) []string {
pc.RLock()
defer pc.RUnlock()
clusters := make([]string, 0, len(pc.Clients))
if cluster == models.ClusterAll {
for c := range pc.Clients {
clusters = append(clusters, c)
}
return clusters
}
ruleClusters := strings.Fields(cluster)
for c := range pc.Clients {
for _, rc := range ruleClusters {
if rc == c {
clusters = append(clusters, c)
continue
}
}
}
return clusters
return pc.API == nil
}
func (pc *PromClientMap) Reset() {
func (pc *PromClient) Reset() {
pc.Lock()
defer pc.Unlock()
pc.Clients = make(map[string]prom.API)
}
func (pc *PromClientMap) Del(cluster string) {
pc.Lock()
defer pc.Unlock()
delete(pc.Clients, cluster)
pc.ClusterName = ""
pc.API = nil
}
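The diff above replaces the `PromClientMap` (a mutex-guarded map keyed by cluster) with a single-client holder: one cluster name, one `prom.API`, one `RWMutex`. The concurrency pattern can be sketched with a placeholder in place of `prom.API` (the `api` interface and `fakeAPI` type here are ours, for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// api is a placeholder for prom.API, which has a much larger method set.
type api interface{ Name() string }

type fakeAPI struct{ name string }

func (f fakeAPI) Name() string { return f.name }

// promClient sketches the new single-client holder: writes take the
// exclusive lock, reads take the shared lock.
type promClient struct {
	sync.RWMutex
	clusterName string
	cli         api
}

func (pc *promClient) Set(cluster string, c api) {
	pc.Lock()
	defer pc.Unlock()
	pc.clusterName = cluster
	pc.cli = c
}

func (pc *promClient) Get() (string, api) {
	pc.RLock()
	defer pc.RUnlock()
	return pc.clusterName, pc.cli
}

func (pc *promClient) IsNil() bool {
	pc.RLock()
	defer pc.RUnlock()
	return pc.cli == nil
}

func main() {
	pc := &promClient{}
	fmt.Println(pc.IsNil())
	pc.Set("Default", fakeAPI{name: "Default"})
	name, _ := pc.Get()
	fmt.Println(name)
}
```

Embedding the mutex by value in a struct that is only ever used through a pointer (as the real `PromClient` is) keeps the lock from being copied.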


@@ -1,13 +1,8 @@
package config
import (
"sync"
"github.com/didi/nightingale/v5/src/pkg/tls"
)
import "sync"
type PromOption struct {
ClusterName string
Url string
BasicAuthUser string
BasicAuthPass string
@@ -15,9 +10,6 @@ type PromOption struct {
Timeout int64
DialTimeout int64
UseTLS bool
tls.ClientConfig
MaxIdleConnsPerHost int
Headers []string
@@ -72,9 +64,9 @@ func (pos *PromOptionsStruct) Set(clusterName string, po PromOption) {
pos.Unlock()
}
func (pos *PromOptionsStruct) Del(clusterName string) {
func (pos *PromOptionsStruct) Sets(clusterName string, po PromOption) {
pos.Lock()
delete(pos.Data, clusterName)
pos.Data = map[string]PromOption{clusterName: po}
pos.Unlock()
}


@@ -17,19 +17,7 @@ import (
func InitReader() error {
rf := strings.ToLower(strings.TrimSpace(C.ReaderFrom))
if rf == "" || rf == "config" {
if len(C.Readers) == 0 {
C.Reader.ClusterName = C.ClusterName
C.Readers = append(C.Readers, C.Reader)
}
for _, reader := range C.Readers {
err := setClientFromPromOption(reader.ClusterName, reader)
if err != nil {
logger.Errorf("failed to setClientFromPromOption: %v", err)
continue
}
}
return nil
return setClientFromPromOption(C.ClusterName, C.Reader)
}
if rf == "database" {
@@ -50,96 +38,72 @@ func initFromDatabase() error {
}
func loadFromDatabase() {
clusters, err := models.AlertingEngineGetClusters(C.Heartbeat.Endpoint)
cluster, err := models.AlertingEngineGetCluster(C.Heartbeat.Endpoint)
if err != nil {
logger.Errorf("failed to get current cluster, error: %v", err)
return
}
if len(clusters) == 0 {
ReaderClients.Reset()
if cluster == "" {
ReaderClient.Reset()
logger.Warning("no datasource bound to me")
return
}
newCluster := make(map[string]struct{})
for _, cluster := range clusters {
newCluster[cluster] = struct{}{}
ckey := "prom." + cluster + ".option"
cval, err := models.ConfigsGet(ckey)
if err != nil {
logger.Errorf("failed to get ckey: %s, error: %v", ckey, err)
continue
}
if cval == "" {
logger.Debugf("ckey: %s is empty", ckey)
continue
}
var po PromOption
err = json.Unmarshal([]byte(cval), &po)
if err != nil {
logger.Errorf("failed to unmarshal PromOption: %s", err)
continue
}
if ReaderClients.IsNil(cluster) {
// first time
if err = setClientFromPromOption(cluster, po); err != nil {
logger.Errorf("failed to setClientFromPromOption: %v", err)
continue
}
logger.Info("setClientFromPromOption success: ", cluster)
PromOptions.Set(cluster, po)
continue
}
localPo, has := PromOptions.Get(cluster)
if !has || !localPo.Equal(po) {
if err = setClientFromPromOption(cluster, po); err != nil {
logger.Errorf("failed to setClientFromPromOption: %v", err)
continue
}
PromOptions.Set(cluster, po)
}
ckey := "prom." + cluster + ".option"
cval, err := models.ConfigsGet(ckey)
if err != nil {
logger.Errorf("failed to get ckey: %s, error: %v", ckey, err)
return
}
// delete clusters no longer bound to this engine
oldClusters := ReaderClients.GetClusterNames()
for _, oldCluster := range oldClusters {
if _, has := newCluster[oldCluster]; !has {
ReaderClients.Del(oldCluster)
PromOptions.Del(oldCluster)
logger.Info("delete cluster: ", oldCluster)
if cval == "" {
ReaderClient.Reset()
return
}
var po PromOption
err = json.Unmarshal([]byte(cval), &po)
if err != nil {
logger.Errorf("failed to unmarshal PromOption: %s", err)
return
}
if ReaderClient.IsNil() {
// first time
if err = setClientFromPromOption(cluster, po); err != nil {
logger.Errorf("failed to setClientFromPromOption: %v", err)
return
}
PromOptions.Sets(cluster, po)
return
}
localPo, has := PromOptions.Get(cluster)
if !has || !localPo.Equal(po) {
if err = setClientFromPromOption(cluster, po); err != nil {
logger.Errorf("failed to setClientFromPromOption: %v", err)
return
}
PromOptions.Sets(cluster, po)
return
}
}
func newClientFromPromOption(po PromOption) (api.Client, error) {
transport := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(po.DialTimeout) * time.Millisecond,
}).DialContext,
ResponseHeaderTimeout: time.Duration(po.Timeout) * time.Millisecond,
MaxIdleConnsPerHost: po.MaxIdleConnsPerHost,
}
if po.UseTLS {
tlsConfig, err := po.TLSConfig()
if err != nil {
logger.Errorf("new cluster %s fail: %v", po.Url, err)
return nil, err
}
transport.TLSClientConfig = tlsConfig
}
return api.NewClient(api.Config{
Address: po.Url,
RoundTripper: transport,
Address: po.Url,
RoundTripper: &http.Transport{
// TLSClientConfig: tlsConfig,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: time.Duration(po.DialTimeout) * time.Millisecond,
}).DialContext,
ResponseHeaderTimeout: time.Duration(po.Timeout) * time.Millisecond,
MaxIdleConnsPerHost: po.MaxIdleConnsPerHost,
},
})
}
@@ -152,18 +116,12 @@ func setClientFromPromOption(clusterName string, po PromOption) error {
return fmt.Errorf("prometheus url is blank")
}
if strings.HasPrefix(po.Url, "https") {
po.UseTLS = true
po.InsecureSkipVerify = true
}
cli, err := newClientFromPromOption(po)
if err != nil {
return fmt.Errorf("failed to newClientFromPromOption: %v", err)
}
logger.Debugf("setClientFromPromOption: %s, %+v", clusterName, po)
ReaderClients.Set(clusterName, prom.NewAPI(cli, prom.ClientOptions{
ReaderClient.Set(clusterName, prom.NewAPI(cli, prom.ClientOptions{
BasicAuthUser: po.BasicAuthUser,
BasicAuthPass: po.BasicAuthPass,
Headers: po.Headers,


@@ -1,4 +1,4 @@
package sender
package engine
import (
"strconv"
@@ -14,7 +14,8 @@ import (
"github.com/didi/nightingale/v5/src/server/memsto"
)
func SendCallbacks(urls []string, event *models.AlertCurEvent) {
func callback(event *models.AlertCurEvent) {
urls := strings.Fields(event.Callbacks)
for _, url := range urls {
if url == "" {
continue


@@ -1,7 +1,7 @@
//go:build !windows
// +build !windows
package sender
package engine
import (
"os/exec"


@@ -1,4 +1,4 @@
package sender
package engine
import "os/exec"


@@ -45,11 +45,7 @@ func consume(events []interface{}, sema *semaphore.Semaphore) {
func consumeOne(event *models.AlertCurEvent) {
LogEvent(event, "consume")
if err := event.ParseRule("rule_name"); err != nil {
event.RuleName = fmt.Sprintf("failed to parse rule name: %v", err)
}
if err := event.ParseRule("rule_note"); err != nil {
if err := event.ParseRuleNote(); err != nil {
event.RuleNote = fmt.Sprintf("failed to parse rule note: %v", err)
}
@@ -59,7 +55,9 @@ func consumeOne(event *models.AlertCurEvent) {
return
}
HandleEventNotify(event, false)
fillUsers(event)
callback(event)
notify(event)
}
func persist(event *models.AlertCurEvent) {
@@ -74,10 +72,9 @@ func persist(event *models.AlertCurEvent) {
// record in the full event history regardless of whether it is an alert or a recovery
if err := his.Add(); err != nil {
logger.Errorf(
"event_persist_his_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
"event_persist_his_fail: %v rule_id=%d hash=%s tags=%v timestamp=%d value=%s",
err,
event.RuleId,
event.Cluster,
event.Hash,
event.TagsJSON,
event.TriggerTime,
@@ -100,10 +97,9 @@ func persist(event *models.AlertCurEvent) {
if event.Id > 0 {
if err := event.Add(); err != nil {
logger.Errorf(
"event_persist_cur_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
"event_persist_cur_fail: %v rule_id=%d hash=%s tags=%v timestamp=%d value=%s",
err,
event.RuleId,
event.Cluster,
event.Hash,
event.TagsJSON,
event.TriggerTime,
@@ -126,10 +122,9 @@ func persist(event *models.AlertCurEvent) {
if event.Id > 0 {
if err := event.Add(); err != nil {
logger.Errorf(
"event_persist_cur_fail: %v rule_id=%d cluster:%s hash=%s tags=%v timestamp=%d value=%s",
"event_persist_cur_fail: %v rule_id=%d hash=%s tags=%v timestamp=%d value=%s",
err,
event.RuleId,
event.Cluster,
event.Hash,
event.TagsJSON,
event.TriggerTime,
@@ -147,6 +142,7 @@ func fillUsers(e *models.AlertCurEvent) {
if err != nil {
continue
}
gids = append(gids, gid)
}
@@ -170,3 +166,11 @@ func mapKeys(m map[int64]struct{}) []int64 {
}
return lst
}
func StringSetKeys(m map[string]struct{}) []string {
lst := make([]string, 0, len(m))
for k := range m {
lst = append(lst, k)
}
return lst
}


@@ -0,0 +1,33 @@
package engine
import (
"strconv"
"strings"
"time"
"github.com/didi/nightingale/v5/src/models"
)
func isNoneffective(timestamp int64, alertRule *models.AlertRule) bool {
if alertRule.Disabled == 1 {
return true
}
tm := time.Unix(timestamp, 0)
triggerTime := tm.Format("15:04")
triggerWeek := strconv.Itoa(int(tm.Weekday()))
if alertRule.EnableStime <= alertRule.EnableEtime {
if triggerTime < alertRule.EnableStime || triggerTime > alertRule.EnableEtime {
return true
}
} else {
if triggerTime < alertRule.EnableStime && triggerTime > alertRule.EnableEtime {
return true
}
}
alertRule.EnableDaysOfWeek = strings.Replace(alertRule.EnableDaysOfWeek, "7", "0", 1)
return !strings.Contains(alertRule.EnableDaysOfWeek, triggerWeek)
}


@@ -5,15 +5,13 @@ import (
"fmt"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/server/common/sender"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
"github.com/toolkits/pkg/container/list"
"github.com/toolkits/pkg/logger"
)
var EventQueue = list.NewSafeListLimited(10000000)
func Start(ctx context.Context) error {
err := reloadTpls()
if err != nil {
@@ -23,7 +21,8 @@ func Start(ctx context.Context) error {
// start loop consumer
go loopConsume(ctx)
go ruleHolder.LoopSyncRules(ctx)
// filter my rules and start worker
go loopFilterRules(ctx)
go reportQueueSize()
@@ -54,7 +53,10 @@ func Reload() {
func reportQueueSize() {
for {
time.Sleep(time.Second)
promstat.GaugeAlertQueueSize.Set(float64(EventQueue.Len()))
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
continue
}
promstat.GaugeAlertQueueSize.WithLabelValues(clusterName).Set(float64(EventQueue.Len()))
}
}


@@ -1,9 +1,8 @@
package engine
import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/toolkits/pkg/logger"
)
func LogEvent(event *models.AlertCurEvent, location string, err ...error) {
@@ -18,12 +17,11 @@ func LogEvent(event *models.AlertCurEvent, location string, err ...error) {
}
logger.Infof(
"event(%s %s) %s: rule_id=%d cluster:%s %v%s@%d %s",
"event(%s %s) %s: rule_id=%d %v%s@%d %s",
event.Hash,
status,
location,
event.RuleId,
event.Cluster,
event.TagsJSON,
event.TriggerValue,
event.TriggerTime,

src/server/engine/mute.go (new file, 69 lines)

@@ -0,0 +1,69 @@
package engine
import (
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/memsto"
)
// If the optional clock argument is passed, use it as the timestamp; otherwise take TriggerTime from the event
func IsMuted(event *models.AlertCurEvent, clock ...int64) bool {
mutes, has := memsto.AlertMuteCache.Gets(event.GroupId)
if !has || len(mutes) == 0 {
return false
}
for i := 0; i < len(mutes); i++ {
if matchMute(event, mutes[i], clock...) {
return true
}
}
return false
}
func matchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int64) bool {
ts := event.TriggerTime
if len(clock) > 0 {
ts = clock[0]
}
if ts < mute.Btime || ts > mute.Etime {
return false
}
return matchTags(event.TagsMap, mute.ITags)
}
func matchTag(value string, filter models.TagFilter) bool {
switch filter.Func {
case "==":
return filter.Value == value
case "!=":
return filter.Value != value
case "in":
_, has := filter.Vset[value]
return has
case "not in":
_, has := filter.Vset[value]
return !has
case "=~":
return filter.Regexp.MatchString(value)
case "!~":
return !filter.Regexp.MatchString(value)
}
// unexpected func
return false
}
func matchTags(eventTagsMap map[string]string, itags []models.TagFilter) bool {
for _, filter := range itags {
value, has := eventTagsMap[filter.Key]
if !has {
return false
}
if !matchTag(value, filter) {
return false
}
}
return true
}


@@ -1,185 +0,0 @@
package engine
import (
"strconv"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/memsto"
)
type MuteStrategyFunc func(rule *models.AlertRule, event *models.AlertCurEvent) bool
var AlertMuteStrategies = []MuteStrategyFunc{
TimeNonEffectiveMuteStrategy,
IdentNotExistsMuteStrategy,
BgNotMatchMuteStrategy,
EventMuteStrategy,
}
func IsMuted(rule *models.AlertRule, event *models.AlertCurEvent) bool {
for _, strategyFunc := range AlertMuteStrategies {
if strategyFunc(rule, event) {
return true
}
}
return false
}
// TimeNonEffectiveMuteStrategy filters by the rule's effective time windows: if the event falls outside every configured window, the alert is muted
func TimeNonEffectiveMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent) bool {
if rule.Disabled == 1 {
return true
}
tm := time.Unix(event.TriggerTime, 0)
triggerTime := tm.Format("15:04")
triggerWeek := strconv.Itoa(int(tm.Weekday()))
enableStime := strings.Fields(rule.EnableStime)
enableEtime := strings.Fields(rule.EnableEtime)
enableDaysOfWeek := strings.Split(rule.EnableDaysOfWeek, ";")
length := len(enableDaysOfWeek)
// enableStime, enableEtime and enableDaysOfWeek always have the same length, so iterating over one of them is enough
for i := 0; i < length; i++ {
enableDaysOfWeek[i] = strings.Replace(enableDaysOfWeek[i], "7", "0", 1)
if !strings.Contains(enableDaysOfWeek[i], triggerWeek) {
continue
}
if enableStime[i] <= enableEtime[i] {
if triggerTime < enableStime[i] || triggerTime > enableEtime[i] {
continue
}
} else {
if triggerTime < enableStime[i] && triggerTime > enableEtime[i] {
continue
}
}
// reaching here means the trigger time falls inside one of the rule's effective windows, so return false directly
return false
}
return true
}
// IdentNotExistsMuteStrategy filters by ident existence: if the ident no longer exists, target_up alerts are dropped
func IdentNotExistsMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent) bool {
ident, has := event.TagsMap["ident"]
if !has {
return false
}
_, exists := memsto.TargetCache.Get(ident)
// if this is a target_up alert and the ident no longer exists, drop it
// this check is rather crude, but there is no better option for now
if !exists && strings.Contains(rule.PromQl, "target_up") {
logger.Debugf("[%s] mute: rule_eval:%d cluster:%s ident:%s", "IdentNotExistsMuteStrategy", rule.Id, event.Cluster, ident)
return true
}
return false
}
// BgNotMatchMuteStrategy: when the rule alerts only inside its business group (BG), filter out machines from other BGs
func BgNotMatchMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent) bool {
// BG-internal alerting is not enabled, so nothing to filter
if rule.EnableInBG == 0 {
return false
}
ident, has := event.TagsMap["ident"]
if !has {
return false
}
target, exists := memsto.TargetCache.Get(ident)
// for events that carry an ident, check whether the ident's BG matches the rule's BG
// if the rule is scoped to its own BG, machines from other BGs must not alert through it
if exists && target.GroupId != rule.GroupId {
logger.Debugf("[%s] mute: rule_eval:%d cluster:%s", "BgNotMatchMuteStrategy", rule.Id, event.Cluster)
return true
}
return false
}
func EventMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent) bool {
mutes, has := memsto.AlertMuteCache.Gets(event.GroupId)
if !has || len(mutes) == 0 {
return false
}
for i := 0; i < len(mutes); i++ {
if matchMute(event, mutes[i]) {
return true
}
}
return false
}
// matchMute: if the optional clock argument is passed, use it as the timestamp; otherwise take TriggerTime from the event
func matchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int64) bool {
if mute.Disabled == 1 {
return false
}
ts := event.TriggerTime
if len(clock) > 0 {
ts = clock[0]
}
// if the mute is not global, check the cluster
if mute.Cluster != models.ClusterAll {
// mute.Cluster is a string that may hold several clusters, e.g. "cluster1 cluster2"
clusters := strings.Fields(mute.Cluster)
cm := make(map[string]struct{}, len(clusters))
for i := 0; i < len(clusters); i++ {
cm[clusters[i]] = struct{}{}
}
// check whether event.Cluster is contained in cm
if _, has := cm[event.Cluster]; !has {
return false
}
}
if ts < mute.Btime || ts > mute.Etime {
return false
}
return matchTags(event.TagsMap, mute.ITags)
}
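The cluster check above treats `mute.Cluster` as a space-separated list and builds a set for the membership test. For short lists a direct scan over `strings.Fields` is equivalent; a minimal sketch (function name `clusterMatches` is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// clusterMatches mirrors the cluster check in matchMute: muteClusters may
// hold several space-separated cluster names; the "$all" (ClusterAll)
// wildcard is handled by the caller before this check.
func clusterMatches(muteClusters, eventCluster string) bool {
	for _, c := range strings.Fields(muteClusters) {
		if c == eventCluster {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(clusterMatches("cluster1 cluster2", "cluster2")) // true
	fmt.Println(clusterMatches("cluster1 cluster2", "cluster3")) // false
}
```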
func matchTag(value string, filter models.TagFilter) bool {
switch filter.Func {
case "==":
return filter.Value == value
case "!=":
return filter.Value != value
case "in":
_, has := filter.Vset[value]
return has
case "not in":
_, has := filter.Vset[value]
return !has
case "=~":
return filter.Regexp.MatchString(value)
case "!~":
return !filter.Regexp.MatchString(value)
}
// unexpected func
return false
}
func matchTags(eventTagsMap map[string]string, itags []models.TagFilter) bool {
for _, filter := range itags {
value, has := eventTagsMap[filter.Key]
if !has {
return false
}
if !matchTag(value, filter) {
return false
}
}
return true
}


@@ -2,164 +2,88 @@ package engine
import (
"bytes"
"context"
"encoding/json"
"html/template"
"io/ioutil"
"net/http"
"os/exec"
"path"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"github.com/tidwall/gjson"
"github.com/toolkits/pkg/file"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/runner"
"github.com/toolkits/pkg/slice"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/notifier"
"github.com/didi/nightingale/v5/src/pkg/sys"
"github.com/didi/nightingale/v5/src/pkg/tplx"
"github.com/didi/nightingale/v5/src/server/common/sender"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/storage"
)
var (
rwLock sync.RWMutex
tpls map[string]*template.Template
Senders map[string]sender.Sender
// routers map an event to subscriptions; their results are merged with OrMerge
routers = []Router{GroupRouter, GlobalWebhookRouter, EventCallbacksRouter}
// interceptors remove subscriptions and are merged with AndMerge; e.g. setting channel=false stops sending through that channel after the merge
// implementations of Router can be appended to interceptors
interceptors []Router
// extra handling for subscribed events
subscribeRouters = []Router{GroupRouter}
subscribeInterceptors []Router
tpls map[string]*template.Template
rwLock sync.RWMutex
)
func reloadTpls() error {
tmpTpls, err := config.C.Alerting.ListTpls()
if err != nil {
return err
if config.C.Alerting.TemplatesDir == "" {
config.C.Alerting.TemplatesDir = path.Join(runner.Cwd, "etc", "template")
}
senders := map[string]sender.Sender{
models.Email: sender.NewSender(models.Email, tmpTpls),
models.Dingtalk: sender.NewSender(models.Dingtalk, tmpTpls),
models.Wecom: sender.NewSender(models.Wecom, tmpTpls),
models.Feishu: sender.NewSender(models.Feishu, tmpTpls),
models.FeishuCard: sender.NewSender(models.FeishuCard, tmpTpls),
models.Mm: sender.NewSender(models.Mm, tmpTpls),
models.Telegram: sender.NewSender(models.Telegram, tmpTpls),
filenames, err := file.FilesUnder(config.C.Alerting.TemplatesDir)
if err != nil {
return errors.WithMessage(err, "failed to exec FilesUnder")
}
if len(filenames) == 0 {
return errors.New("no tpl files under " + config.C.Alerting.TemplatesDir)
}
tplFiles := make([]string, 0, len(filenames))
for i := 0; i < len(filenames); i++ {
if strings.HasSuffix(filenames[i], ".tpl") {
tplFiles = append(tplFiles, filenames[i])
}
}
if len(tplFiles) == 0 {
return errors.New("no tpl files under " + config.C.Alerting.TemplatesDir)
}
tmpTpls := make(map[string]*template.Template)
for i := 0; i < len(tplFiles); i++ {
tplpath := path.Join(config.C.Alerting.TemplatesDir, tplFiles[i])
tpl, err := template.New(tplFiles[i]).Funcs(tplx.TemplateFuncMap).ParseFiles(tplpath)
if err != nil {
return errors.WithMessage(err, "failed to parse tpl: "+tplpath)
}
tmpTpls[tplFiles[i]] = tpl
}
rwLock.Lock()
tpls = tmpTpls
Senders = senders
rwLock.Unlock()
return nil
}
// HandleEventNotify is the main entry point for event handling
// event: alert/recovery event
// isSubscribe: whether the event was produced by a subscribe rule
func HandleEventNotify(event *models.AlertCurEvent, isSubscribe bool) {
rule := memsto.AlertRuleCache.Get(event.RuleId)
if rule == nil {
return
}
fillUsers(event)
var (
handlers []Router
interceptorHandlers []Router
)
if isSubscribe {
handlers = subscribeRouters
interceptorHandlers = subscribeInterceptors
} else {
handlers = routers
interceptorHandlers = interceptors
}
subscription := NewSubscription()
// merge subscriptions with OrMerge
for _, handler := range handlers {
subscription.OrMerge(handler(rule, event, subscription))
}
// remove subscriptions, e.g. for departed employees or a temporarily silenced channel
for _, handler := range interceptorHandlers {
subscription.AndMerge(handler(rule, event, subscription))
}
// send the event; one goroutine handles all sends for a single event
go Send(rule, event, subscription, isSubscribe)
// if the event did not come from a subscribe rule, process subscribe rules for it
if !isSubscribe {
handleSubs(event)
}
}
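The two merge passes in `HandleEventNotify` behave like set operations over per-user channel maps: routers union their results in (OrMerge), while interceptors subtract channels that were explicitly disabled (AndMerge). A minimal sketch of that semantics with simplified local types (`channels`, `orMerge`, `andMerge` are hypothetical names, not the actual Subscription API):

```go
package main

import "fmt"

// channels maps channel name -> enabled, a simplified stand-in for
// the per-user NotifyChannels in a Subscription.
type channels map[string]bool

// orMerge unions another user->channels map into dst (router semantics).
func orMerge(dst, src map[int64]channels) {
	for uid, chs := range src {
		if _, ok := dst[uid]; !ok {
			dst[uid] = channels{}
		}
		for ch, on := range chs {
			if on {
				dst[uid][ch] = true
			}
		}
	}
}

// andMerge applies an interceptor result: a channel explicitly set to
// false is removed from the merged subscription.
func andMerge(dst, src map[int64]channels) {
	for uid, chs := range src {
		for ch, on := range chs {
			if !on {
				delete(dst[uid], ch)
			}
		}
	}
}

func main() {
	sub := map[int64]channels{}
	orMerge(sub, map[int64]channels{1: {"email": true}})
	orMerge(sub, map[int64]channels{1: {"dingtalk": true}, 2: {"email": true}})
	andMerge(sub, map[int64]channels{1: {"dingtalk": false}}) // interceptor mutes dingtalk for user 1
	fmt.Println(len(sub[1]), len(sub[2]))                     // 1 1
}
```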
func handleSubs(event *models.AlertCurEvent) {
// handle alert subscribes
subscribes := make([]*models.AlertSubscribe, 0)
// rule specific subscribes
if subs, has := memsto.AlertSubscribeCache.Get(event.RuleId); has {
subscribes = append(subscribes, subs...)
}
// global subscribes
if subs, has := memsto.AlertSubscribeCache.Get(0); has {
subscribes = append(subscribes, subs...)
}
for _, sub := range subscribes {
handleSub(sub, *event)
}
}
// handleSub processes an event for a subscribe rule; note the event is passed by value because its state is modified below
func handleSub(sub *models.AlertSubscribe, event models.AlertCurEvent) {
if sub.IsDisabled() || !sub.MatchCluster(event.Cluster) {
return
}
if !matchTags(event.TagsMap, sub.ITags) {
return
}
sub.ModifyEvent(&event)
LogEvent(&event, "subscribe")
HandleEventNotify(&event, true)
}
func Send(rule *models.AlertRule, event *models.AlertCurEvent, subscription *Subscription, isSubscribe bool) {
for channel, uids := range subscription.ToChannelUserMap() {
ctx := sender.BuildMessageContext(rule, event, uids)
rwLock.RLock()
s := Senders[channel]
rwLock.RUnlock()
if s == nil {
logger.Warningf("no sender for channel: %s", channel)
continue
}
s.Send(ctx)
}
// handle event callbacks
sender.SendCallbacks(subscription.ToCallbackList(), event)
// handle global webhooks
sender.SendWebhooks(subscription.ToWebhookList(), event)
noticeBytes := genNoticeBytes(event)
// handle plugin call
go sender.MayPluginNotify(noticeBytes)
if !isSubscribe {
// handle redis pub
sender.PublishToRedis(event.Cluster, noticeBytes)
}
}
type Notice struct {
Event *models.AlertCurEvent `json:"event"`
Tpls map[string]string `json:"tpls"`
}
func genNoticeBytes(event *models.AlertCurEvent) []byte {
func genNotice(event *models.AlertCurEvent) Notice {
// build notice body with templates
ntpls := make(map[string]string)
@@ -174,12 +98,325 @@ func genNoticeBytes(event *models.AlertCurEvent) []byte {
}
}
notice := Notice{Event: event, Tpls: ntpls}
return Notice{Event: event, Tpls: ntpls}
}
func alertingRedisPub(clusterName string, bs []byte) {
channelKey := config.C.Alerting.RedisPub.ChannelPrefix + clusterName
// pub all alerts to redis
if config.C.Alerting.RedisPub.Enable {
err := storage.Redis.Publish(context.Background(), channelKey, bs).Err()
if err != nil {
logger.Errorf("event_notify: redis publish %s err: %v", channelKey, err)
}
}
}
func handleNotice(notice Notice, bs []byte) {
alertingCallScript(bs)
alertingCallPlugin(bs)
if len(config.C.Alerting.NotifyBuiltinChannels) == 0 {
return
}
emailset := make(map[string]struct{})
phoneset := make(map[string]struct{})
wecomset := make(map[string]struct{})
dingtalkset := make(map[string]struct{})
feishuset := make(map[string]struct{})
for _, user := range notice.Event.NotifyUsersObj {
if user.Email != "" {
emailset[user.Email] = struct{}{}
}
if user.Phone != "" {
phoneset[user.Phone] = struct{}{}
}
bs, err := user.Contacts.MarshalJSON()
if err != nil {
logger.Errorf("handle_notice: failed to marshal contacts: %v", err)
continue
}
ret := gjson.GetBytes(bs, "dingtalk_robot_token")
if ret.Exists() {
dingtalkset[ret.String()] = struct{}{}
}
ret = gjson.GetBytes(bs, "wecom_robot_token")
if ret.Exists() {
wecomset[ret.String()] = struct{}{}
}
ret = gjson.GetBytes(bs, "feishu_robot_token")
if ret.Exists() {
feishuset[ret.String()] = struct{}{}
}
}
phones := StringSetKeys(phoneset)
for _, ch := range notice.Event.NotifyChannelsJSON {
switch ch {
case "email":
if len(emailset) == 0 {
continue
}
if !slice.ContainsString(config.C.Alerting.NotifyBuiltinChannels, "email") {
continue
}
subject, has := notice.Tpls["subject.tpl"]
if !has {
subject = "subject.tpl not found"
}
content, has := notice.Tpls["mailbody.tpl"]
if !has {
content = "mailbody.tpl not found"
}
sender.WriteEmail(subject, content, StringSetKeys(emailset))
case "dingtalk":
if len(dingtalkset) == 0 {
continue
}
if !slice.ContainsString(config.C.Alerting.NotifyBuiltinChannels, "dingtalk") {
continue
}
content, has := notice.Tpls["dingtalk.tpl"]
if !has {
content = "dingtalk.tpl not found"
}
sender.SendDingtalk(sender.DingtalkMessage{
Title: notice.Event.RuleName,
Text: content,
AtMobiles: phones,
Tokens: StringSetKeys(dingtalkset),
})
case "wecom":
if len(wecomset) == 0 {
continue
}
if !slice.ContainsString(config.C.Alerting.NotifyBuiltinChannels, "wecom") {
continue
}
content, has := notice.Tpls["wecom.tpl"]
if !has {
content = "wecom.tpl not found"
}
sender.SendWecom(sender.WecomMessage{
Text: content,
Tokens: StringSetKeys(wecomset),
})
case "feishu":
if len(feishuset) == 0 {
continue
}
if !slice.ContainsString(config.C.Alerting.NotifyBuiltinChannels, "feishu") {
continue
}
content, has := notice.Tpls["feishu.tpl"]
if !has {
content = "feishu.tpl not found"
}
sender.SendFeishu(sender.FeishuMessage{
Text: content,
AtMobiles: phones,
Tokens: StringSetKeys(feishuset),
})
}
}
}
func notify(event *models.AlertCurEvent) {
LogEvent(event, "notify")
notice := genNotice(event)
stdinBytes, err := json.Marshal(notice)
if err != nil {
logger.Errorf("event_notify: failed to marshal notice: %v", err)
return nil
return
}
return stdinBytes
alertingRedisPub(event.Cluster, stdinBytes)
alertingWebhook(event)
handleNotice(notice, stdinBytes)
// handle alert subscribes
subs, has := memsto.AlertSubscribeCache.Get(event.RuleId)
if has {
handleSubscribes(*event, subs)
}
subs, has = memsto.AlertSubscribeCache.Get(0)
if has {
handleSubscribes(*event, subs)
}
}
func alertingWebhook(event *models.AlertCurEvent) {
conf := config.C.Alerting.Webhook
if !conf.Enable {
return
}
if conf.Url == "" {
return
}
bs, err := json.Marshal(event)
if err != nil {
return
}
bf := bytes.NewBuffer(bs)
req, err := http.NewRequest("POST", conf.Url, bf)
if err != nil {
logger.Warning("alertingWebhook failed to new request", err)
return
}
if conf.BasicAuthUser != "" && conf.BasicAuthPass != "" {
req.SetBasicAuth(conf.BasicAuthUser, conf.BasicAuthPass)
}
if len(conf.Headers) > 0 && len(conf.Headers)%2 == 0 {
for i := 0; i < len(conf.Headers); i += 2 {
req.Header.Set(conf.Headers[i], conf.Headers[i+1])
}
}
client := http.Client{
Timeout: conf.TimeoutDuration,
}
var resp *http.Response
resp, err = client.Do(req)
if err != nil {
logger.Warning("alertingWebhook failed to call url, error: ", err)
return
}
var body []byte
if resp.Body != nil {
defer resp.Body.Close()
body, _ = ioutil.ReadAll(resp.Body)
}
logger.Debugf("alertingWebhook done, url: %s, response code: %d, body: %s", conf.Url, resp.StatusCode, string(body))
}
func handleSubscribes(event models.AlertCurEvent, subs []*models.AlertSubscribe) {
for i := 0; i < len(subs); i++ {
handleSubscribe(event, subs[i])
}
}
func handleSubscribe(event models.AlertCurEvent, sub *models.AlertSubscribe) {
if !matchTags(event.TagsMap, sub.ITags) {
return
}
if sub.RedefineSeverity == 1 {
event.Severity = sub.NewSeverity
}
if sub.RedefineChannels == 1 {
event.NotifyChannels = sub.NewChannels
event.NotifyChannelsJSON = strings.Fields(sub.NewChannels)
}
event.NotifyGroups = sub.UserGroupIds
event.NotifyGroupsJSON = strings.Fields(sub.UserGroupIds)
if len(event.NotifyGroupsJSON) == 0 {
return
}
LogEvent(&event, "subscribe")
fillUsers(&event)
notice := genNotice(&event)
stdinBytes, err := json.Marshal(notice)
if err != nil {
logger.Errorf("event_notify: failed to marshal notice: %v", err)
return
}
handleNotice(notice, stdinBytes)
}
func alertingCallScript(stdinBytes []byte) {
if !config.C.Alerting.CallScript.Enable {
return
}
// no notify.py? do nothing
if config.C.Alerting.CallScript.ScriptPath == "" {
return
}
fpath := config.C.Alerting.CallScript.ScriptPath
cmd := exec.Command(fpath)
cmd.Stdin = bytes.NewReader(stdinBytes)
// combine stdout and stderr
var buf bytes.Buffer
cmd.Stdout = &buf
cmd.Stderr = &buf
err := startCmd(cmd)
if err != nil {
logger.Errorf("event_notify: run cmd err: %v", err)
return
}
err, isTimeout := sys.WrapTimeout(cmd, time.Duration(30)*time.Second)
if isTimeout {
if err == nil {
logger.Errorf("event_notify: timeout and killed process %s", fpath)
}
if err != nil {
logger.Errorf("event_notify: kill process %s occur error %v", fpath, err)
}
return
}
if err != nil {
logger.Errorf("event_notify: exec script %s occur error: %v, output: %s", fpath, err, buf.String())
return
}
logger.Infof("event_notify: exec %s output: %s", fpath, buf.String())
}
// call notify.so built as a golang plugin
// e.g. etc/script/notify/notify.so
func alertingCallPlugin(stdinBytes []byte) {
if !config.C.Alerting.CallPlugin.Enable {
return
}
logger.Debugf("alertingCallPlugin begin")
logger.Debugf("payload: %s", string(stdinBytes))
notifier.Instance.Notify(stdinBytes)
logger.Debugf("alertingCallPlugin done")
}


@@ -4,12 +4,13 @@ import (
"encoding/json"
"time"
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/notifier"
"github.com/didi/nightingale/v5/src/server/common/sender"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
"github.com/tidwall/gjson"
"github.com/toolkits/pkg/logger"
)
type MaintainMessage struct {
@@ -28,13 +29,12 @@ func notifyToMaintainer(title, msg string) {
}
triggerTime := time.Now().Format("2006/01/02 - 15:04:05")
msg = "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime
notifyMaintainerWithPlugin(title, msg, users)
notifyMaintainerWithBuiltin(title, msg, users)
notifyMaintainerWithPlugin(title, msg, triggerTime, users)
notifyMaintainerWithBuiltin(title, msg, triggerTime, users)
}
func notifyMaintainerWithPlugin(title, msg string, users []*models.User) {
func notifyMaintainerWithPlugin(title, msg, triggerTime string, users []*models.User) {
if !config.C.Alerting.CallPlugin.Enable {
return
}
@@ -42,7 +42,7 @@ func notifyMaintainerWithPlugin(title, msg string, users []*models.User) {
stdinBytes, err := json.Marshal(MaintainMessage{
Tos: users,
Title: title,
Content: msg,
Content: "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime,
})
if err != nil {
@@ -54,17 +54,89 @@ func notifyMaintainerWithPlugin(title, msg string, users []*models.User) {
logger.Debugf("notify maintainer with plugin done")
}
func notifyMaintainerWithBuiltin(title, msg string, users []*models.User) {
subscription := NewSubscriptionFromUsers(users)
for channel, uids := range subscription.ToChannelUserMap() {
currentUsers := memsto.UserCache.GetByUserIds(uids)
rwLock.RLock()
s := Senders[channel]
rwLock.RUnlock()
if s == nil {
logger.Warningf("no sender for channel: %s", channel)
func notifyMaintainerWithBuiltin(title, msg, triggerTime string, users []*models.User) {
if len(config.C.Alerting.NotifyBuiltinChannels) == 0 {
return
}
emailset := make(map[string]struct{})
phoneset := make(map[string]struct{})
wecomset := make(map[string]struct{})
dingtalkset := make(map[string]struct{})
feishuset := make(map[string]struct{})
for _, user := range users {
if user.Email != "" {
emailset[user.Email] = struct{}{}
}
if user.Phone != "" {
phoneset[user.Phone] = struct{}{}
}
bs, err := user.Contacts.MarshalJSON()
if err != nil {
logger.Errorf("handle_notice: failed to marshal contacts: %v", err)
continue
}
go s.SendRaw(currentUsers, title, msg)
ret := gjson.GetBytes(bs, "dingtalk_robot_token")
if ret.Exists() {
dingtalkset[ret.String()] = struct{}{}
}
ret = gjson.GetBytes(bs, "wecom_robot_token")
if ret.Exists() {
wecomset[ret.String()] = struct{}{}
}
ret = gjson.GetBytes(bs, "feishu_robot_token")
if ret.Exists() {
feishuset[ret.String()] = struct{}{}
}
}
phones := StringSetKeys(phoneset)
for _, ch := range config.C.Alerting.NotifyBuiltinChannels {
switch ch {
case "email":
if len(emailset) == 0 {
continue
}
content := "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime
sender.WriteEmail(title, content, StringSetKeys(emailset))
case "dingtalk":
if len(dingtalkset) == 0 {
continue
}
content := "**Title: **" + title + "\n**Content: **" + msg + "\n**Time: **" + triggerTime
sender.SendDingtalk(sender.DingtalkMessage{
Title: title,
Text: content,
AtMobiles: phones,
Tokens: StringSetKeys(dingtalkset),
})
case "wecom":
if len(wecomset) == 0 {
continue
}
content := "**Title: **" + title + "\n**Content: **" + msg + "\n**Time: **" + triggerTime
sender.SendWecom(sender.WecomMessage{
Text: content,
Tokens: StringSetKeys(wecomset),
})
case "feishu":
if len(feishuset) == 0 {
continue
}
content := "Title: " + title + "\nContent: " + msg + "\nTime: " + triggerTime
sender.SendFeishu(sender.FeishuMessage{
Text: content,
AtMobiles: phones,
Tokens: StringSetKeys(feishuset),
})
}
}
}
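The loops above deduplicate robot tokens by extracting one field from each user's contacts JSON into a set. A sketch of that pattern using `encoding/json` as a stdlib stand-in for `gjson` (function name `collectTokens` is hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// collectTokens deduplicates one named token across users' contacts JSON
// blobs, like the dingtalk/wecom/feishu set-building loops (stdlib json
// is used here in place of gjson).
func collectTokens(contacts [][]byte, field string) []string {
	set := make(map[string]struct{})
	for _, raw := range contacts {
		var m map[string]string
		if err := json.Unmarshal(raw, &m); err != nil {
			continue // malformed contacts are skipped, not fatal
		}
		if tok, ok := m[field]; ok && tok != "" {
			set[tok] = struct{}{}
		}
	}
	keys := make([]string, 0, len(set))
	for k := range set {
		keys = append(keys, k)
	}
	return keys
}

func main() {
	contacts := [][]byte{
		[]byte(`{"wecom_robot_token":"t1"}`),
		[]byte(`{"wecom_robot_token":"t1"}`), // duplicate collapses
		[]byte(`{"dingtalk_robot_token":"d1"}`),
	}
	fmt.Println(len(collectTokens(contacts, "wecom_robot_token"))) // 1
}
```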


@@ -0,0 +1,5 @@
package engine
import "github.com/toolkits/pkg/container/list"
var EventQueue = list.NewSafeListLimited(10000000)


@@ -1,55 +0,0 @@
package engine
import (
"strconv"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
)
// Router abstracts the routing policy from an alert event to its subscribers
// rule: the alert rule
// event: the alert event
// prev: the previous routing result; a Router may modify prev directly, or return a new Subscription to be combined via AndMerge/OrMerge
type Router func(rule *models.AlertRule, event *models.AlertCurEvent, prev *Subscription) *Subscription
// GroupRouter handles the rule's notify-group subscriptions
func GroupRouter(rule *models.AlertRule, event *models.AlertCurEvent, prev *Subscription) *Subscription {
groupIds := make([]int64, 0, len(event.NotifyGroupsJSON))
for _, groupId := range event.NotifyGroupsJSON {
gid, err := strconv.ParseInt(groupId, 10, 64)
if err != nil {
continue
}
groupIds = append(groupIds, gid)
}
groups := memsto.UserGroupCache.GetByUserGroupIds(groupIds)
subscription := NewSubscription()
for _, group := range groups {
for _, userId := range group.UserIds {
subscription.userMap[userId] = NewNotifyChannels(event.NotifyChannelsJSON)
}
}
return subscription
}
func GlobalWebhookRouter(rule *models.AlertRule, event *models.AlertCurEvent, prev *Subscription) *Subscription {
conf := config.C.Alerting.Webhook
if !conf.Enable {
return nil
}
subscription := NewSubscription()
subscription.webhooks[conf.Url] = conf
return subscription
}
func EventCallbacksRouter(rule *models.AlertRule, event *models.AlertCurEvent, prev *Subscription) *Subscription {
for _, c := range event.CallbacksJSON {
if c == "" {
continue
}
prev.callbacks[c] = struct{}{}
}
return nil
}


@@ -1,169 +0,0 @@
package engine
import (
"context"
"fmt"
"strings"
"sync"
"time"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
"github.com/didi/nightingale/v5/src/server/naming"
)
// RuleContext is the interface for alert rule and record rule
type RuleContext interface {
Key() string
Hash() string
Prepare()
Start()
Eval()
Stop()
}
var ruleHolder = &RuleHolder{
alertRules: make(map[string]RuleContext),
recordRules: make(map[string]RuleContext),
externalAlertRules: make(map[string]*AlertRuleContext),
}
// RuleHolder is the global rule holder
type RuleHolder struct {
// key: hash
alertRules map[string]RuleContext
recordRules map[string]RuleContext
// key: key of rule
externalLock sync.RWMutex
externalAlertRules map[string]*AlertRuleContext
}
func (rh *RuleHolder) LoopSyncRules(ctx context.Context) {
time.Sleep(time.Duration(config.C.EngineDelay) * time.Second)
duration := 9000 * time.Millisecond
for {
select {
case <-ctx.Done():
return
case <-time.After(duration):
rh.SyncAlertRules()
rh.SyncRecordRules()
}
}
}
func (rh *RuleHolder) SyncAlertRules() {
ids := memsto.AlertRuleCache.GetRuleIds()
alertRules := make(map[string]RuleContext)
externalAllRules := make(map[string]*AlertRuleContext)
for _, id := range ids {
rule := memsto.AlertRuleCache.Get(id)
if rule == nil {
continue
}
ruleClusters := config.ReaderClients.Hit(rule.Cluster)
if !rule.IsPrometheusRule() {
// non-Prometheus rules do not support $all; parse clusters directly from rule.Cluster
ruleClusters = strings.Fields(rule.Cluster)
}
for _, cluster := range ruleClusters {
// hash ring not hit
if !naming.ClusterHashRing.IsHit(cluster, fmt.Sprintf("%d", rule.Id), config.C.Heartbeat.Endpoint) {
continue
}
if rule.IsPrometheusRule() {
// a regular alert rule
alertRule := NewAlertRuleContext(rule, cluster)
alertRules[alertRule.Hash()] = alertRule
} else {
// rules not evaluated through the prometheus engine are created as external rules
externalRule := NewAlertRuleContext(rule, cluster)
externalAllRules[externalRule.Key()] = externalRule
}
}
}
for hash, rule := range alertRules {
if _, has := rh.alertRules[hash]; !has {
rule.Prepare()
rule.Start()
rh.alertRules[hash] = rule
}
}
for hash, rule := range rh.alertRules {
if _, has := alertRules[hash]; !has {
rule.Stop()
delete(rh.alertRules, hash)
}
}
rh.externalLock.Lock()
for key, rule := range externalAllRules {
if curRule, has := rh.externalAlertRules[key]; has {
// the rule exists and the hash is unchanged, so treat it as unmodified; a hash function covering more associated data could be plugged in here if needed
if rule.Hash() == curRule.Hash() {
continue
}
}
// new rules, and existing rules whose hash changed, need to trigger an update
rule.Prepare()
rh.externalAlertRules[key] = rule
}
for key := range rh.externalAlertRules {
if _, has := externalAllRules[key]; !has {
delete(rh.externalAlertRules, key)
}
}
rh.externalLock.Unlock()
}
func (rh *RuleHolder) SyncRecordRules() {
ids := memsto.RecordingRuleCache.GetRuleIds()
recordRules := make(map[string]RuleContext)
for _, id := range ids {
rule := memsto.RecordingRuleCache.Get(id)
if rule == nil {
continue
}
ruleClusters := config.ReaderClients.Hit(rule.Cluster)
for _, cluster := range ruleClusters {
if !naming.ClusterHashRing.IsHit(cluster, fmt.Sprintf("%d", rule.Id), config.C.Heartbeat.Endpoint) {
continue
}
recordRule := NewRecordRuleContext(rule, cluster)
recordRules[recordRule.Hash()] = recordRule
}
}
for hash, rule := range recordRules {
if _, has := rh.recordRules[hash]; !has {
rule.Prepare()
rule.Start()
rh.recordRules[hash] = rule
}
}
for hash, rule := range rh.recordRules {
if _, has := recordRules[hash]; !has {
rule.Stop()
delete(rh.recordRules, hash)
}
}
}
func GetExternalAlertRule(cluster string, id int64) (*AlertRuleContext, bool) {
ruleHolder.externalLock.RLock()
defer ruleHolder.externalLock.RUnlock()
rule, has := ruleHolder.externalAlertRules[ruleKey(cluster, id)]
return rule, has
}
func ruleKey(cluster string, id int64) string {
return fmt.Sprintf("alert-%s-%d", cluster, id)
}
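`ClusterHashRing.IsHit` is how each server claims a disjoint subset of rule IDs, so a cluster of servers evaluates every rule exactly once. The real code uses a consistent hash ring; the sketch below substitutes plain hash-mod assignment to show the ownership property (names `isHit`/`endpoints` are hypothetical, and hash-mod reshuffles far more rules than a ring when membership changes):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// isHit reports whether this endpoint owns the given rule id, using a
// simple hash-mod scheme as a stand-in for the consistent hash ring.
func isHit(endpoints []string, ruleID string, self string) bool {
	if len(endpoints) == 0 {
		return false
	}
	idx := int(crc32.ChecksumIEEE([]byte(ruleID))) % len(endpoints)
	if idx < 0 { // guard for 32-bit platforms where the cast may go negative
		idx += len(endpoints)
	}
	return endpoints[idx] == self
}

func main() {
	eps := []string{"10.0.0.1:19000", "10.0.0.2:19000"}
	owned := 0
	for i := 0; i < 100; i++ {
		if isHit(eps, fmt.Sprintf("%d", i), eps[0]) {
			owned++
		}
	}
	// each node owns a strict, non-empty subset of the rule ids
	fmt.Println(owned > 0 && owned < 100)
}
```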


@@ -1,294 +0,0 @@
package engine
import (
"context"
"fmt"
"strings"
"time"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/str"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/prom"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
type AlertRuleContext struct {
cluster string
quit chan struct{}
rule *models.AlertRule
fires *AlertCurEventMap
pendings *AlertCurEventMap
}
func NewAlertRuleContext(rule *models.AlertRule, cluster string) *AlertRuleContext {
return &AlertRuleContext{
cluster: cluster,
quit: make(chan struct{}),
rule: rule,
}
}
func (arc *AlertRuleContext) RuleFromCache() *models.AlertRule {
return memsto.AlertRuleCache.Get(arc.rule.Id)
}
func (arc *AlertRuleContext) Key() string {
return ruleKey(arc.cluster, arc.rule.Id)
}
func (arc *AlertRuleContext) Hash() string {
return str.MD5(fmt.Sprintf("%d_%d_%s_%s",
arc.rule.Id,
arc.rule.PromEvalInterval,
arc.rule.PromQl,
arc.cluster,
))
}
func (arc *AlertRuleContext) Prepare() {
arc.recoverAlertCurEventFromDb()
}
func (arc *AlertRuleContext) Start() {
logger.Infof("eval:%s started", arc.Key())
interval := arc.rule.PromEvalInterval
if interval <= 0 {
interval = 10
}
go func() {
for {
select {
case <-arc.quit:
return
default:
arc.Eval()
time.Sleep(time.Duration(interval) * time.Second)
}
}
}()
}
func (arc *AlertRuleContext) Eval() {
promql := strings.TrimSpace(arc.rule.PromQl)
if promql == "" {
logger.Errorf("rule_eval:%s promql is blank", arc.Key())
return
}
if config.ReaderClients.IsNil(arc.cluster) {
logger.Errorf("rule_eval:%s error reader client is nil", arc.Key())
return
}
readerClient := config.ReaderClients.GetCli(arc.cluster)
var value model.Value
var err error
cachedRule := arc.RuleFromCache()
if cachedRule == nil {
logger.Errorf("rule_eval:%s rule not found", arc.Key())
return
}
// with a single evaluating goroutine, assigning cachedRule to arc.rule would be perfectly safe
// but external rules also go through HandleVectors/RecoverSingle, so that shortcut breaks; fetch the rule from cache whenever it is needed instead
// arc.rule = cachedRule
// if the cached rule has changed from a prometheus rule to another type, there is no need to query prometheus anymore
if cachedRule.IsPrometheusRule() {
var warnings prom.Warnings
value, warnings, err = readerClient.Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("rule_eval:%s promql:%s, error:%v", arc.Key(), promql, err)
//notifyToMaintainer(err, "failed to query prometheus")
Report(QueryPrometheusError)
return
}
if len(warnings) > 0 {
logger.Errorf("rule_eval:%s promql:%s, warnings:%v", arc.Key(), promql, warnings)
return
}
logger.Debugf("rule_eval:%s promql:%s, value:%v", arc.Key(), promql, value)
}
arc.HandleVectors(conv.ConvertVectors(value), "inner")
}
func (arc *AlertRuleContext) HandleVectors(vectors []conv.Vector, from string) {
// some rule settings (notify receivers, callbacks, ...) may have changed
// such changes do not restart the worker, but they do affect alert handling
// so fetch the rule from memsto.AlertRuleCache and use it to override
cachedRule := arc.RuleFromCache()
if cachedRule == nil {
logger.Errorf("rule_eval:%s rule not found", arc.Key())
return
}
now := time.Now().Unix()
alertingKeys := map[string]struct{}{}
for _, vector := range vectors {
alertVector := NewAlertVector(arc, cachedRule, vector, from)
event := alertVector.BuildEvent(now)
// a muted event is essentially still firing, so always add it to alertingKeys to prevent firing events from auto-recovering
alertingKeys[alertVector.Hash()] = struct{}{}
if IsMuted(cachedRule, event) {
continue
}
arc.handleEvent(event)
}
arc.HandleRecover(alertingKeys, now)
}
func (arc *AlertRuleContext) HandleRecover(alertingKeys map[string]struct{}, now int64) {
for _, hash := range arc.pendings.Keys() {
if _, has := alertingKeys[hash]; has {
continue
}
arc.pendings.Delete(hash)
}
for hash := range arc.fires.GetAll() {
if _, has := alertingKeys[hash]; has {
continue
}
arc.RecoverSingle(hash, now, nil)
}
}
func (arc *AlertRuleContext) RecoverSingle(hash string, now int64, value *string) {
cachedRule := arc.RuleFromCache()
if cachedRule == nil {
return
}
event, has := arc.fires.Get(hash)
if !has {
return
}
// if a recover-observation duration is configured, do not recover immediately
if cachedRule.RecoverDuration > 0 && now-event.LastEvalTime < cachedRule.RecoverDuration {
return
}
if value != nil {
event.TriggerValue = *value
}
// No vector reached the trigger threshold, so assume this vector has recovered.
// We cannot tell whether Prometheus had data that simply stayed below the threshold,
// or whether some points were lost and nothing could be returned. Awkward.
arc.fires.Delete(hash)
arc.pendings.Delete(hash)
// The recovery may be caused by an adjusted promql, so the event should carry the
// latest promql, otherwise users get confused. In fact any rule field may have
// changed, so update them all.
cachedRule.UpdateEvent(event)
event.IsRecovered = true
event.LastEvalTime = now
arc.pushEventToQueue(event)
}
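RecoverSingle's observation gate above is a plain time comparison: with a non-zero RecoverDuration, an event may only recover once it has been silent for at least that long. A minimal standalone sketch of that check, using hypothetical timestamps rather than the project's actual types:

```go
package main

import "fmt"

// canRecover mirrors the gate in RecoverSingle: with a non-zero recover
// duration, the event must have been silent for at least that long.
func canRecover(now, lastEvalTime, recoverDuration int64) bool {
	if recoverDuration > 0 && now-lastEvalTime < recoverDuration {
		return false
	}
	return true
}

func main() {
	// Last trigger seen at t=100, recover duration 60s.
	fmt.Println(canRecover(150, 100, 60)) // still observing: false
	fmt.Println(canRecover(161, 100, 60)) // 61s elapsed: true
	fmt.Println(canRecover(101, 100, 0))  // no observation window: true
}
```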
func (arc *AlertRuleContext) handleEvent(event *models.AlertCurEvent) {
if event == nil {
return
}
if event.PromForDuration == 0 {
arc.fireEvent(event)
return
}
var preTriggerTime int64
preEvent, has := arc.pendings.Get(event.Hash)
if has {
arc.pendings.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
preTriggerTime = preEvent.TriggerTime
} else {
arc.pendings.Set(event.Hash, event)
preTriggerTime = event.TriggerTime
}
if event.LastEvalTime-preTriggerTime+int64(event.PromEvalInterval) >= int64(event.PromForDuration) {
arc.fireEvent(event)
}
}
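handleEvent above promotes a pending event once it has been pending for the "for" duration; the extra PromEvalInterval term accounts for the next evaluation overshooting the deadline anyway. The condition in isolation, as a standalone sketch with hypothetical timestamps:

```go
package main

import "fmt"

// shouldFire mirrors handleEvent's pending check: the event fires once it has
// been pending for the "for" duration, counting one extra eval interval
// because the next evaluation would land past the deadline anyway.
func shouldFire(lastEvalTime, firstTriggerTime int64, evalInterval, forDuration int) bool {
	return lastEvalTime-firstTriggerTime+int64(evalInterval) >= int64(forDuration)
}

func main() {
	// for=300s, eval every 15s, first seen at t=1000.
	fmt.Println(shouldFire(1200, 1000, 15, 300)) // 215 < 300: keep pending
	fmt.Println(shouldFire(1290, 1000, 15, 300)) // 305 >= 300: fire
}
```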
func (arc *AlertRuleContext) fireEvent(event *models.AlertCurEvent) {
// As arc.rule maybe outdated, use rule from cache
cachedRule := arc.RuleFromCache()
if cachedRule == nil {
return
}
if fired, has := arc.fires.Get(event.Hash); has {
arc.fires.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
if cachedRule.NotifyRepeatStep == 0 {
// Repeat notification is disabled, so just return; nothing to do.
return
}
// An alert was already sent; whether to send again depends on whether the repeat interval has elapsed.
if event.LastEvalTime > fired.LastSentTime+int64(cachedRule.NotifyRepeatStep)*60 {
if cachedRule.NotifyMaxNumber == 0 {
// A max send count of 0 means unlimited; keep sending.
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
arc.pushEventToQueue(event)
} else {
// With a max send count configured, check how many times we have sent and whether the limit is reached.
if fired.NotifyCurNumber >= cachedRule.NotifyMaxNumber {
return
} else {
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
arc.pushEventToQueue(event)
}
}
}
} else {
event.NotifyCurNumber = 1
event.FirstTriggerTime = event.TriggerTime
arc.pushEventToQueue(event)
}
}
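fireEvent's resend decision above combines three settings: NotifyRepeatStep (minutes, 0 = never repeat), the time since the last send, and NotifyMaxNumber (0 = unlimited). Condensed into one standalone predicate for illustration:

```go
package main

import "fmt"

// shouldRenotify mirrors fireEvent's repeat logic: repeatStep is in minutes,
// 0 disables repeats; maxNumber 0 means unlimited resends.
func shouldRenotify(lastEvalTime, lastSentTime int64, repeatStep, maxNumber, curNumber int) bool {
	if repeatStep == 0 {
		return false
	}
	if lastEvalTime <= lastSentTime+int64(repeatStep)*60 {
		return false
	}
	return maxNumber == 0 || curNumber < maxNumber
}

func main() {
	fmt.Println(shouldRenotify(1000, 900, 0, 0, 1))  // repeats disabled: false
	fmt.Println(shouldRenotify(1000, 900, 5, 0, 1))  // only 100s since last send: false
	fmt.Println(shouldRenotify(1500, 1000, 5, 0, 3)) // 500s > 300s, unlimited: true
	fmt.Println(shouldRenotify(1500, 1000, 5, 3, 3)) // max count reached: false
}
```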
func (arc *AlertRuleContext) pushEventToQueue(event *models.AlertCurEvent) {
if !event.IsRecovered {
event.LastSentTime = event.LastEvalTime
arc.fires.Set(event.Hash, event)
}
promstat.CounterAlertsTotal.WithLabelValues(event.Cluster).Inc()
LogEvent(event, "push_queue")
if !EventQueue.PushFront(event) {
logger.Warningf("event_push_queue: queue is full, event:%+v", event)
}
}
func (arc *AlertRuleContext) Stop() {
logger.Infof("%s stopped", arc.Key())
close(arc.quit)
}
func (arc *AlertRuleContext) recoverAlertCurEventFromDb() {
arc.pendings = NewAlertCurEventMap(nil)
curEvents, err := models.AlertCurEventGetByRuleIdAndCluster(arc.rule.Id, arc.cluster)
if err != nil {
logger.Errorf("recover event from db for rule:%s failed, err:%s", arc.Key(), err)
arc.fires = NewAlertCurEventMap(nil)
return
}
fireMap := make(map[string]*models.AlertCurEvent)
for _, event := range curEvents {
event.DB2Mem()
fireMap[event.Hash] = event
}
arc.fires = NewAlertCurEventMap(fireMap)
}


@@ -1,189 +0,0 @@
package engine
import (
"fmt"
"sort"
"strings"
"sync"
"github.com/toolkits/pkg/str"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/server/memsto"
)
type AlertCurEventMap struct {
sync.RWMutex
Data map[string]*models.AlertCurEvent
}
func (a *AlertCurEventMap) SetAll(data map[string]*models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data = data
}
func (a *AlertCurEventMap) Set(key string, value *models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data[key] = value
}
func (a *AlertCurEventMap) Get(key string) (*models.AlertCurEvent, bool) {
a.RLock()
defer a.RUnlock()
event, exists := a.Data[key]
return event, exists
}
func (a *AlertCurEventMap) UpdateLastEvalTime(key string, lastEvalTime int64) {
a.Lock()
defer a.Unlock()
event, exists := a.Data[key]
if !exists {
return
}
event.LastEvalTime = lastEvalTime
}
func (a *AlertCurEventMap) Delete(key string) {
a.Lock()
defer a.Unlock()
delete(a.Data, key)
}
func (a *AlertCurEventMap) Keys() []string {
a.RLock()
defer a.RUnlock()
keys := make([]string, 0, len(a.Data))
for k := range a.Data {
keys = append(keys, k)
}
return keys
}
func (a *AlertCurEventMap) GetAll() map[string]*models.AlertCurEvent {
a.RLock()
defer a.RUnlock()
return a.Data
}
func NewAlertCurEventMap(data map[string]*models.AlertCurEvent) *AlertCurEventMap {
if data == nil {
return &AlertCurEventMap{
Data: make(map[string]*models.AlertCurEvent),
}
}
return &AlertCurEventMap{
Data: data,
}
}
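AlertCurEventMap above is a plain RWMutex-guarded map. The same pattern, sketched standalone with string values in place of *models.AlertCurEvent:

```go
package main

import (
	"fmt"
	"sync"
)

// eventMap is a minimal RWMutex-guarded map, the same shape as
// AlertCurEventMap but with string values so it runs standalone.
type eventMap struct {
	sync.RWMutex
	data map[string]string
}

func newEventMap() *eventMap { return &eventMap{data: make(map[string]string)} }

func (m *eventMap) Set(k, v string) {
	m.Lock()
	defer m.Unlock()
	m.data[k] = v
}

func (m *eventMap) Get(k string) (string, bool) {
	m.RLock()
	defer m.RUnlock()
	v, ok := m.data[k]
	return v, ok
}

func main() {
	m := newEventMap()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.Set(fmt.Sprintf("hash-%d", i), "firing")
		}(i)
	}
	wg.Wait()
	v, ok := m.Get("hash-2")
	fmt.Println(v, ok) // firing true
}
```

One design caveat worth noting: the original GetAll hands out the internal map directly rather than a copy, so a caller iterating it while another goroutine writes through Set could race; returning a copy would be safer.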
// AlertVector holds the alerting context of a single alert event.
type AlertVector struct {
Ctx *AlertRuleContext
Rule *models.AlertRule
Vector conv.Vector
From string
tagsMap map[string]string
tagsArr []string
target string
targetNote string
groupName string
}
func NewAlertVector(ctx *AlertRuleContext, rule *models.AlertRule, vector conv.Vector, from string) *AlertVector {
if rule == nil {
rule = ctx.rule
}
av := &AlertVector{
Ctx: ctx,
Rule: rule,
Vector: vector,
From: from,
}
av.fillTags()
av.mayHandleIdent()
av.mayHandleGroup()
return av
}
func (av *AlertVector) Hash() string {
return str.MD5(fmt.Sprintf("%d_%s_%s", av.Rule.Id, av.Vector.Key, av.Ctx.cluster))
}
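Hash() above feeds the rule id, vector key, and cluster into str.MD5. Assuming the toolkit's str.MD5 is a plain lowercase-hex MD5 digest of the formatted string (an assumption about that helper, not confirmed here), the same value can be computed with the standard library:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// eventHash reproduces AlertVector.Hash, assuming toolkits' str.MD5 is a
// plain lowercase-hex MD5 of the formatted string (assumption).
func eventHash(ruleID int64, vectorKey, cluster string) string {
	sum := md5.Sum([]byte(fmt.Sprintf("%d_%s_%s", ruleID, vectorKey, cluster)))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(eventHash(42, `cpu_usage{host="a"}`, "default"))
}
```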
func (av *AlertVector) fillTags() {
// handle series tags
tagsMap := make(map[string]string)
for label, value := range av.Vector.Labels {
tagsMap[string(label)] = string(value)
}
// handle rule tags
for _, tag := range av.Rule.AppendTagsJSON {
arr := strings.SplitN(tag, "=", 2)
if len(arr) != 2 {
continue
}
tagsMap[arr[0]] = arr[1]
}
tagsMap["rulename"] = av.Rule.Name
av.tagsMap = tagsMap
// handle tagsArr
av.tagsArr = labelMapToArr(tagsMap)
}
func (av *AlertVector) mayHandleIdent() {
// handle ident
if ident, has := av.tagsMap["ident"]; has {
if target, exists := memsto.TargetCache.Get(ident); exists {
av.target = target.Ident
av.targetNote = target.Note
}
}
}
func (av *AlertVector) mayHandleGroup() {
// handle bg
bg := memsto.BusiGroupCache.GetByBusiGroupId(av.Rule.GroupId)
if bg != nil {
av.groupName = bg.Name
}
}
func (av *AlertVector) BuildEvent(now int64) *models.AlertCurEvent {
event := av.Rule.GenerateNewEvent()
event.TriggerTime = av.Vector.Timestamp
event.TagsMap = av.tagsMap
event.Cluster = av.Ctx.cluster
event.Hash = av.Hash()
event.TargetIdent = av.target
event.TargetNote = av.targetNote
event.TriggerValue = av.Vector.ReadableValue()
event.TagsJSON = av.tagsArr
event.GroupName = av.groupName
event.Tags = strings.Join(av.tagsArr, ",,")
event.IsRecovered = false
if av.From == "inner" {
event.LastEvalTime = now
} else {
event.LastEvalTime = event.TriggerTime
}
return event
}
func labelMapToArr(m map[string]string) []string {
numLabels := len(m)
labelStrings := make([]string, 0, numLabels)
for label, value := range m {
labelStrings = append(labelStrings, fmt.Sprintf("%s=%s", label, value))
}
if numLabels > 1 {
sort.Strings(labelStrings)
}
return labelStrings
}
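labelMapToArr above flattens the tag map into sorted "k=v" strings; sorting keeps the tag output, and anything derived from it, deterministic. A standalone copy to illustrate the ordering:

```go
package main

import (
	"fmt"
	"sort"
)

// labelMapToArr flattens a label map into sorted "k=v" strings, as in the
// engine package; sorting keeps tag output stable across evaluations.
func labelMapToArr(m map[string]string) []string {
	out := make([]string, 0, len(m))
	for k, v := range m {
		out = append(out, fmt.Sprintf("%s=%s", k, v))
	}
	if len(out) > 1 {
		sort.Strings(out)
	}
	return out
}

func main() {
	arr := labelMapToArr(map[string]string{"ident": "host-1", "env": "prod", "rulename": "cpu"})
	fmt.Println(arr) // [env=prod ident=host-1 rulename=cpu]
}
```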


@@ -1,100 +0,0 @@
package engine
import (
"context"
"fmt"
"strings"
"time"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/str"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/writer"
)
type RecordRuleContext struct {
cluster string
quit chan struct{}
rule *models.RecordingRule
}
func NewRecordRuleContext(rule *models.RecordingRule, cluster string) *RecordRuleContext {
return &RecordRuleContext{
cluster: cluster,
quit: make(chan struct{}),
rule: rule,
}
}
func (rrc *RecordRuleContext) Key() string {
return fmt.Sprintf("record-%s-%d", rrc.cluster, rrc.rule.Id)
}
func (rrc *RecordRuleContext) Hash() string {
return str.MD5(fmt.Sprintf("%d_%d_%s_%s",
rrc.rule.Id,
rrc.rule.PromEvalInterval,
rrc.rule.PromQl,
rrc.cluster,
))
}
func (rrc *RecordRuleContext) Prepare() {}
func (rrc *RecordRuleContext) Start() {
logger.Infof("eval:%s started", rrc.Key())
interval := rrc.rule.PromEvalInterval
if interval <= 0 {
interval = 10
}
go func() {
for {
select {
case <-rrc.quit:
return
default:
rrc.Eval()
time.Sleep(time.Duration(interval) * time.Second)
}
}
}()
}
func (rrc *RecordRuleContext) Eval() {
promql := strings.TrimSpace(rrc.rule.PromQl)
if promql == "" {
logger.Errorf("eval:%s promql is blank", rrc.Key())
return
}
if config.ReaderClients.IsNil(rrc.cluster) {
logger.Errorf("eval:%s reader client is nil", rrc.Key())
return
}
value, warnings, err := config.ReaderClients.GetCli(rrc.cluster).Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("eval:%s promql:%s, error:%v", rrc.Key(), promql, err)
return
}
if len(warnings) > 0 {
logger.Errorf("eval:%s promql:%s, warnings:%v", rrc.Key(), promql, warnings)
return
}
ts := conv.ConvertToTimeSeries(value, rrc.rule)
if len(ts) != 0 {
for _, v := range ts {
writer.Writers.PushSample(rrc.rule.Name, v, rrc.cluster)
}
}
}
func (rrc *RecordRuleContext) Stop() {
logger.Infof("%s stopped", rrc.Key())
close(rrc.quit)
}


@@ -1,139 +0,0 @@
package engine
import (
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
)
// NotifyChannels channelKey -> bool
type NotifyChannels map[string]bool
func NewNotifyChannels(channels []string) NotifyChannels {
nc := make(NotifyChannels)
for _, ch := range channels {
nc[ch] = true
}
return nc
}
func (nc NotifyChannels) OrMerge(other NotifyChannels) {
nc.merge(other, func(a, b bool) bool { return a || b })
}
func (nc NotifyChannels) AndMerge(other NotifyChannels) {
nc.merge(other, func(a, b bool) bool { return a && b })
}
func (nc NotifyChannels) merge(other NotifyChannels, f func(bool, bool) bool) {
if other == nil {
return
}
for k, v := range other {
if curV, has := nc[k]; has {
nc[k] = f(curV, v)
} else {
nc[k] = v
}
}
}
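The merge helpers above combine channel maps key by key. A condensed, standalone replica showing how OrMerge widens the channel set while AndMerge can silence a single channel:

```go
package main

import "fmt"

// NotifyChannels maps channel name -> enabled, as in the engine package.
type NotifyChannels map[string]bool

func (nc NotifyChannels) merge(other NotifyChannels, f func(bool, bool) bool) {
	for k, v := range other {
		if cur, has := nc[k]; has {
			nc[k] = f(cur, v)
		} else {
			nc[k] = v
		}
	}
}

// OrMerge widens the set: a channel enabled on either side stays enabled.
func (nc NotifyChannels) OrMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a || b })
}

// AndMerge narrows the set: a false on the other side silences the channel.
func (nc NotifyChannels) AndMerge(other NotifyChannels) {
	nc.merge(other, func(a, b bool) bool { return a && b })
}

func main() {
	nc := NotifyChannels{"email": true, "sms": false}
	nc.OrMerge(NotifyChannels{"sms": true, "dingtalk": true})
	fmt.Println(nc["email"], nc["sms"], nc["dingtalk"]) // true true true

	// AndMerge can silence a channel under maintenance:
	nc.AndMerge(NotifyChannels{"dingtalk": false})
	fmt.Println(nc["dingtalk"]) // false
}
```

Note that both merges only visit keys present in `other`, so an AndMerge leaves untouched any channel the other map does not mention.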
// Subscription maintains all user-channel/callback/webhook targets that need notifying; the map-based structures deduplicate automatically.
type Subscription struct {
userMap map[int64]NotifyChannels
webhooks map[string]config.Webhook
callbacks map[string]struct{}
}
func NewSubscription() *Subscription {
return &Subscription{
userMap: make(map[int64]NotifyChannels),
webhooks: make(map[string]config.Webhook),
callbacks: make(map[string]struct{}),
}
}
// NewSubscriptionFromUsers builds a Subscription from the users' token configuration, used by notifyMaintainer.
func NewSubscriptionFromUsers(users []*models.User) *Subscription {
s := NewSubscription()
for _, u := range users {
if u == nil {
continue
}
for channel, token := range u.ExtractAllToken() {
if token == "" {
continue
}
if channelMap, has := s.userMap[u.Id]; has {
channelMap[channel] = true
} else {
s.userMap[u.Id] = map[string]bool{
channel: true,
}
}
}
}
return s
}
// OrMerge merges channel maps with OR semantics, which makes it easy to compose strategies such as routing by a specific tag.
func (s *Subscription) OrMerge(other *Subscription) {
s.merge(other, NotifyChannels.OrMerge)
}
// AndMerge merges the bool values in the channel maps with AND semantics, so
// notifications can be removed per user or per channel. Typical use cases:
// 1. A user has left the company and should no longer receive alerts.
// 2. A notification channel is under maintenance and should be silenced temporarily.
// 3. On-call redirection, e.g. additionally sending high-severity alerts to responders.
// Implement your own router as business needs require.
func (s *Subscription) AndMerge(other *Subscription) {
s.merge(other, NotifyChannels.AndMerge)
}
func (s *Subscription) merge(other *Subscription, f func(NotifyChannels, NotifyChannels)) {
if other == nil {
return
}
for k, v := range other.userMap {
if curV, has := s.userMap[k]; has {
f(curV, v)
} else {
s.userMap[k] = v
}
}
for k, v := range other.webhooks {
s.webhooks[k] = v
}
for k, v := range other.callbacks {
s.callbacks[k] = v
}
}
// ToChannelUserMap converts userMap (map[uid][channel]bool) into a map[channel][]uid structure.
func (s *Subscription) ToChannelUserMap() map[string][]int64 {
m := make(map[string][]int64)
for uid, nc := range s.userMap {
for ch, send := range nc {
if send {
m[ch] = append(m[ch], uid)
}
}
}
return m
}
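ToChannelUserMap above inverts the per-user channel map into per-channel uid lists, dropping channels that ended up disabled after the merges. A standalone sketch (the sort at the end is an addition for deterministic output, not present in the original):

```go
package main

import (
	"fmt"
	"sort"
)

// toChannelUserMap inverts map[uid]map[channel]bool into map[channel][]uid,
// keeping only channels still enabled — same shape as Subscription.ToChannelUserMap.
func toChannelUserMap(userMap map[int64]map[string]bool) map[string][]int64 {
	m := make(map[string][]int64)
	for uid, channels := range userMap {
		for ch, send := range channels {
			if send {
				m[ch] = append(m[ch], uid)
			}
		}
	}
	// Sort uids for deterministic output (addition; map iteration is unordered).
	for ch := range m {
		sort.Slice(m[ch], func(i, j int) bool { return m[ch][i] < m[ch][j] })
	}
	return m
}

func main() {
	users := map[int64]map[string]bool{
		1: {"email": true, "sms": false},
		2: {"email": true},
	}
	fmt.Println(toChannelUserMap(users)) // map[email:[1 2]]
}
```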
func (s *Subscription) ToCallbackList() []string {
callbacks := make([]string, 0, len(s.callbacks))
for cb := range s.callbacks {
callbacks = append(callbacks, cb)
}
return callbacks
}
func (s *Subscription) ToWebhookList() []config.Webhook {
webhooks := make([]config.Webhook, 0, len(s.webhooks))
for _, wh := range s.webhooks {
webhooks = append(webhooks, wh)
}
return webhooks
}

src/server/engine/worker.go (new file, 754 lines)

@@ -0,0 +1,754 @@
package engine
import (
"context"
"fmt"
"log"
"sort"
"strings"
"sync"
"time"
"github.com/didi/nightingale/v5/src/server/writer"
"github.com/prometheus/common/model"
"github.com/toolkits/pkg/logger"
"github.com/toolkits/pkg/str"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/pkg/prom"
"github.com/didi/nightingale/v5/src/server/common/conv"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/memsto"
"github.com/didi/nightingale/v5/src/server/naming"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
func loopFilterRules(ctx context.Context) {
// wait for samples
time.Sleep(time.Duration(config.C.EngineDelay) * time.Second)
duration := time.Duration(9000) * time.Millisecond
for {
select {
case <-ctx.Done():
return
case <-time.After(duration):
filterRules()
filterRecordingRules()
}
}
}
func filterRules() {
ids := memsto.AlertRuleCache.GetRuleIds()
logger.Debugf("AlertRuleCache.GetRuleIds success, ids.len: %d", len(ids))
count := len(ids)
mines := make([]int64, 0, count)
for i := 0; i < count; i++ {
node, err := naming.HashRing.GetNode(fmt.Sprint(ids[i]))
if err != nil {
logger.Warning("failed to get node from hashring:", err)
continue
}
if node == config.C.Heartbeat.Endpoint {
mines = append(mines, ids[i])
}
}
Workers.Build(mines)
RuleEvalForExternal.Build()
}
type RuleEval struct {
rule *models.AlertRule
fires *AlertCurEventMap
pendings *AlertCurEventMap
quit chan struct{}
}
type AlertCurEventMap struct {
sync.RWMutex
Data map[string]*models.AlertCurEvent
}
func (a *AlertCurEventMap) SetAll(data map[string]*models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data = data
}
func (a *AlertCurEventMap) Set(key string, value *models.AlertCurEvent) {
a.Lock()
defer a.Unlock()
a.Data[key] = value
}
func (a *AlertCurEventMap) Get(key string) (*models.AlertCurEvent, bool) {
a.RLock()
defer a.RUnlock()
event, exists := a.Data[key]
return event, exists
}
func (a *AlertCurEventMap) UpdateLastEvalTime(key string, lastEvalTime int64) {
a.Lock()
defer a.Unlock()
event, exists := a.Data[key]
if !exists {
return
}
event.LastEvalTime = lastEvalTime
}
func (a *AlertCurEventMap) Delete(key string) {
a.Lock()
defer a.Unlock()
delete(a.Data, key)
}
func (a *AlertCurEventMap) Keys() []string {
a.RLock()
defer a.RUnlock()
keys := make([]string, 0, len(a.Data))
for k := range a.Data {
keys = append(keys, k)
}
return keys
}
func (a *AlertCurEventMap) GetAll() map[string]*models.AlertCurEvent {
a.RLock()
defer a.RUnlock()
return a.Data
}
func NewAlertCurEventMap() *AlertCurEventMap {
return &AlertCurEventMap{
Data: make(map[string]*models.AlertCurEvent),
}
}
func (r *RuleEval) Stop() {
logger.Infof("rule_eval:%d stopping", r.RuleID())
close(r.quit)
}
func (r *RuleEval) RuleID() int64 {
return r.rule.Id
}
func (r *RuleEval) Start() {
logger.Infof("rule_eval:%d started", r.RuleID())
for {
select {
case <-r.quit:
// logger.Infof("rule_eval:%d stopped", r.RuleID())
return
default:
r.Work()
logger.Debugf("rule executed, rule_eval:%d", r.RuleID())
interval := r.rule.PromEvalInterval
if interval <= 0 {
interval = 10
}
time.Sleep(time.Duration(interval) * time.Second)
}
}
}
func (r *RuleEval) Work() {
promql := strings.TrimSpace(r.rule.PromQl)
if promql == "" {
logger.Errorf("rule_eval:%d promql is blank", r.RuleID())
return
}
if config.ReaderClient.IsNil() {
logger.Error("reader client is nil")
return
}
clusterName, readerClient := config.ReaderClient.Get()
var value model.Value
var err error
if r.rule.Algorithm == "" && (r.rule.Cate == "" || r.rule.Cate == "prometheus") {
var warnings prom.Warnings
value, warnings, err = readerClient.Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("rule_eval:%d promql:%s, error:%v", r.RuleID(), promql, err)
//notifyToMaintainer(err, "failed to query prometheus")
Report(QueryPrometheusError)
return
}
if len(warnings) > 0 {
logger.Errorf("rule_eval:%d promql:%s, warnings:%v", r.RuleID(), promql, warnings)
return
}
logger.Debugf("rule_eval:%d promql:%s, value:%v", r.RuleID(), promql, value)
}
r.Judge(clusterName, conv.ConvertVectors(value))
}
type WorkersType struct {
rules map[string]*RuleEval
recordRules map[string]RecordingRuleEval
}
var Workers = &WorkersType{rules: make(map[string]*RuleEval), recordRules: make(map[string]RecordingRuleEval)}
func (ws *WorkersType) Build(rids []int64) {
rules := make(map[string]*models.AlertRule)
for i := 0; i < len(rids); i++ {
rule := memsto.AlertRuleCache.Get(rids[i])
if rule == nil {
continue
}
hash := str.MD5(fmt.Sprintf("%d_%d_%s",
rule.Id,
rule.PromEvalInterval,
rule.PromQl,
))
rules[hash] = rule
}
// stop old
for hash := range Workers.rules {
if _, has := rules[hash]; !has {
Workers.rules[hash].Stop()
delete(Workers.rules, hash)
}
}
// start new
for hash := range rules {
if _, has := Workers.rules[hash]; has {
// already exists
continue
}
elst, err := models.AlertCurEventGetByRule(rules[hash].Id)
if err != nil {
logger.Errorf("worker_build: AlertCurEventGetByRule failed: %v", err)
continue
}
firemap := make(map[string]*models.AlertCurEvent)
for i := 0; i < len(elst); i++ {
elst[i].DB2Mem()
firemap[elst[i].Hash] = elst[i]
}
fires := NewAlertCurEventMap()
fires.SetAll(firemap)
re := &RuleEval{
rule: rules[hash],
quit: make(chan struct{}),
fires: fires,
pendings: NewAlertCurEventMap(),
}
go re.Start()
Workers.rules[hash] = re
}
}
func (ws *WorkersType) BuildRe(rids []int64) {
rules := make(map[string]*models.RecordingRule)
for i := 0; i < len(rids); i++ {
rule := memsto.RecordingRuleCache.Get(rids[i])
if rule == nil {
continue
}
if rule.Disabled == 1 {
continue
}
hash := str.MD5(fmt.Sprintf("%d_%d_%s_%s",
rule.Id,
rule.PromEvalInterval,
rule.PromQl,
rule.AppendTags,
))
rules[hash] = rule
}
// stop old
for hash := range Workers.recordRules {
if _, has := rules[hash]; !has {
Workers.recordRules[hash].Stop()
delete(Workers.recordRules, hash)
}
}
// start new
for hash := range rules {
if _, has := Workers.recordRules[hash]; has {
// already exists
continue
}
re := RecordingRuleEval{
rule: rules[hash],
quit: make(chan struct{}),
}
go re.Start()
Workers.recordRules[hash] = re
}
}
func (r *RuleEval) Judge(clusterName string, vectors []conv.Vector) {
now := time.Now().Unix()
alertingKeys, ruleExists := r.MakeNewEvent("inner", now, clusterName, vectors)
if !ruleExists {
return
}
// handle recovered events
r.recoverRule(alertingKeys, now)
}
func (r *RuleEval) MakeNewEvent(from string, now int64, clusterName string, vectors []conv.Vector) (map[string]struct{}, bool) {
// Some rule settings (e.g. alert receivers, callbacks) may have changed.
// Such changes do not trigger a worker restart, but they do affect alert handling,
// so fetch the latest rule from memsto.AlertRuleCache and use it instead.
curRule := memsto.AlertRuleCache.Get(r.rule.Id)
if curRule == nil {
return map[string]struct{}{}, false
}
r.rule = curRule
count := len(vectors)
alertingKeys := make(map[string]struct{})
for i := 0; i < count; i++ {
// compute hash
hash := str.MD5(fmt.Sprintf("%d_%s", r.rule.Id, vectors[i].Key))
alertingKeys[hash] = struct{}{}
// rule disabled in this time span?
if isNoneffective(vectors[i].Timestamp, r.rule) {
logger.Debugf("event_disabled: rule_eval:%d rule:%v timestamp:%d", r.rule.Id, r.rule, vectors[i].Timestamp)
continue
}
// handle series tags
tagsMap := make(map[string]string)
for label, value := range vectors[i].Labels {
tagsMap[string(label)] = string(value)
}
// handle rule tags
for _, tag := range r.rule.AppendTagsJSON {
arr := strings.SplitN(tag, "=", 2)
if len(arr) != 2 {
continue
}
tagsMap[arr[0]] = arr[1]
}
tagsMap["rulename"] = r.rule.Name
// handle target note
targetIdent, has := vectors[i].Labels["ident"]
targetNote := ""
if has {
target, exists := memsto.TargetCache.Get(string(targetIdent))
if exists {
targetNote = target.Note
// For events that carry an ident, check whether the ident's business group matches the rule's group.
// If the rule is scoped to only its own business group, machines in other groups must not trigger it.
if r.rule.EnableInBG == 1 && target.GroupId != r.rule.GroupId {
logger.Debugf("event_enable_in_bg: rule_eval:%d", r.rule.Id)
continue
}
}
}
event := &models.AlertCurEvent{
TriggerTime: vectors[i].Timestamp,
TagsMap: tagsMap,
GroupId: r.rule.GroupId,
RuleName: r.rule.Name,
}
bg := memsto.BusiGroupCache.GetByBusiGroupId(r.rule.GroupId)
if bg != nil {
event.GroupName = bg.Name
}
// IsMuted only needs TriggerTime, RuleName and TagsMap
if IsMuted(event) {
logger.Infof("event_muted: rule_id=%d %s", r.rule.Id, vectors[i].Key)
continue
}
tagsArr := labelMapToArr(tagsMap)
sort.Strings(tagsArr)
event.Cluster = clusterName
event.Cate = r.rule.Cate
event.Hash = hash
event.RuleId = r.rule.Id
event.RuleName = r.rule.Name
event.RuleNote = r.rule.Note
event.RuleProd = r.rule.Prod
event.RuleAlgo = r.rule.Algorithm
event.Severity = r.rule.Severity
event.PromForDuration = r.rule.PromForDuration
event.PromQl = r.rule.PromQl
event.PromEvalInterval = r.rule.PromEvalInterval
event.Callbacks = r.rule.Callbacks
event.CallbacksJSON = r.rule.CallbacksJSON
event.RunbookUrl = r.rule.RunbookUrl
event.NotifyRecovered = r.rule.NotifyRecovered
event.NotifyChannels = r.rule.NotifyChannels
event.NotifyChannelsJSON = r.rule.NotifyChannelsJSON
event.NotifyGroups = r.rule.NotifyGroups
event.NotifyGroupsJSON = r.rule.NotifyGroupsJSON
event.TargetIdent = string(targetIdent)
event.TargetNote = targetNote
event.TriggerValue = readableValue(vectors[i].Value)
event.TagsJSON = tagsArr
event.Tags = strings.Join(tagsArr, ",,")
event.IsRecovered = false
event.LastEvalTime = now
if from != "inner" {
event.LastEvalTime = event.TriggerTime
}
r.handleNewEvent(event)
}
return alertingKeys, true
}
func readableValue(value float64) string {
ret := fmt.Sprintf("%.5f", value)
ret = strings.TrimRight(ret, "0")
return strings.TrimRight(ret, ".")
}
func labelMapToArr(m map[string]string) []string {
numLabels := len(m)
labelStrings := make([]string, 0, numLabels)
for label, value := range m {
labelStrings = append(labelStrings, fmt.Sprintf("%s=%s", label, value))
}
if numLabels > 1 {
sort.Strings(labelStrings)
}
return labelStrings
}
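readableValue above trims a `%.5f` rendering down to a compact string for TriggerValue. A standalone copy showing the trimming behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// readableValue formats a float with 5 decimals, then strips trailing zeros
// and a dangling decimal point — the trimming used for TriggerValue.
func readableValue(value float64) string {
	ret := fmt.Sprintf("%.5f", value)
	ret = strings.TrimRight(ret, "0")
	return strings.TrimRight(ret, ".")
}

func main() {
	fmt.Println(readableValue(1.50000)) // 1.5
	fmt.Println(readableValue(3.0))     // 3
	fmt.Println(readableValue(0.12345)) // 0.12345
}
```

The second TrimRight only fires when all decimals were zeros; the dot itself protects integer-part zeros (e.g. 100.0 becomes "100", not "1").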
func (r *RuleEval) handleNewEvent(event *models.AlertCurEvent) {
if event.PromForDuration == 0 {
r.fireEvent(event)
return
}
var preTriggerTime int64
preEvent, has := r.pendings.Get(event.Hash)
if has {
r.pendings.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
preTriggerTime = preEvent.TriggerTime
} else {
r.pendings.Set(event.Hash, event)
preTriggerTime = event.TriggerTime
}
if event.LastEvalTime-preTriggerTime+int64(event.PromEvalInterval) >= int64(event.PromForDuration) {
r.fireEvent(event)
}
}
func (r *RuleEval) fireEvent(event *models.AlertCurEvent) {
if fired, has := r.fires.Get(event.Hash); has {
r.fires.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
if r.rule.NotifyRepeatStep == 0 {
// Repeat notification is disabled, so just return; nothing to do.
return
}
// An alert was already sent; whether to send again depends on whether the repeat interval has elapsed.
if event.LastEvalTime > fired.LastSentTime+int64(r.rule.NotifyRepeatStep)*60 {
if r.rule.NotifyMaxNumber == 0 {
// A max send count of 0 means unlimited; keep sending.
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
r.pushEventToQueue(event)
} else {
// With a max send count configured, check how many times we have sent and whether the limit is reached.
if fired.NotifyCurNumber >= r.rule.NotifyMaxNumber {
return
} else {
event.NotifyCurNumber = fired.NotifyCurNumber + 1
event.FirstTriggerTime = fired.FirstTriggerTime
r.pushEventToQueue(event)
}
}
}
} else {
event.NotifyCurNumber = 1
event.FirstTriggerTime = event.TriggerTime
r.pushEventToQueue(event)
}
}
func (r *RuleEval) recoverRule(alertingKeys map[string]struct{}, now int64) {
for _, hash := range r.pendings.Keys() {
if _, has := alertingKeys[hash]; has {
continue
}
r.pendings.Delete(hash)
}
for hash, event := range r.fires.GetAll() {
if _, has := alertingKeys[hash]; has {
continue
}
r.recoverEvent(hash, event, now)
}
}
func (r *RuleEval) RecoverEvent(hash string, now int64, value float64) {
curRule := memsto.AlertRuleCache.Get(r.rule.Id)
if curRule == nil {
return
}
r.rule = curRule
r.pendings.Delete(hash)
event, has := r.fires.Get(hash)
if !has {
return
}
event.TriggerValue = fmt.Sprintf("%.5f", value)
r.recoverEvent(hash, event, now)
}
func (r *RuleEval) recoverEvent(hash string, event *models.AlertCurEvent, now int64) {
// If a recovery observation duration is configured, do not recover immediately.
if r.rule.RecoverDuration > 0 && now-event.LastEvalTime < r.rule.RecoverDuration {
return
}
// No vector reached the trigger threshold, so assume this vector has recovered.
// We cannot tell whether Prometheus had data that simply stayed below the threshold,
// or whether some points were lost and nothing could be returned. Awkward.
r.fires.Delete(hash)
r.pendings.Delete(hash)
event.IsRecovered = true
event.LastEvalTime = now
// The recovery may be caused by an adjusted promql, so the event should carry the
// latest promql, otherwise users get confused. In fact any rule field may have
// changed, so update them all.
event.RuleName = r.rule.Name
event.RuleNote = r.rule.Note
event.RuleProd = r.rule.Prod
event.RuleAlgo = r.rule.Algorithm
event.Severity = r.rule.Severity
event.PromForDuration = r.rule.PromForDuration
event.PromQl = r.rule.PromQl
event.PromEvalInterval = r.rule.PromEvalInterval
event.Callbacks = r.rule.Callbacks
event.CallbacksJSON = r.rule.CallbacksJSON
event.RunbookUrl = r.rule.RunbookUrl
event.NotifyRecovered = r.rule.NotifyRecovered
event.NotifyChannels = r.rule.NotifyChannels
event.NotifyChannelsJSON = r.rule.NotifyChannelsJSON
event.NotifyGroups = r.rule.NotifyGroups
event.NotifyGroupsJSON = r.rule.NotifyGroupsJSON
r.pushEventToQueue(event)
}
func (r *RuleEval) pushEventToQueue(event *models.AlertCurEvent) {
if !event.IsRecovered {
event.LastSentTime = event.LastEvalTime
r.fires.Set(event.Hash, event)
}
promstat.CounterAlertsTotal.WithLabelValues(event.Cluster).Inc()
LogEvent(event, "push_queue")
if !EventQueue.PushFront(event) {
logger.Warningf("event_push_queue: queue is full")
}
}
func filterRecordingRules() {
ids := memsto.RecordingRuleCache.GetRuleIds()
count := len(ids)
mines := make([]int64, 0, count)
for i := 0; i < count; i++ {
node, err := naming.HashRing.GetNode(fmt.Sprint(ids[i]))
if err != nil {
logger.Warning("failed to get node from hashring:", err)
continue
}
if node == config.C.Heartbeat.Endpoint {
mines = append(mines, ids[i])
}
}
Workers.BuildRe(mines)
}
type RecordingRuleEval struct {
rule *models.RecordingRule
quit chan struct{}
}
func (r RecordingRuleEval) Stop() {
logger.Infof("recording_rule_eval:%d stopping", r.RuleID())
close(r.quit)
}
func (r RecordingRuleEval) RuleID() int64 {
return r.rule.Id
}
func (r RecordingRuleEval) Start() {
logger.Infof("recording_rule_eval:%d started", r.RuleID())
for {
select {
case <-r.quit:
// logger.Infof("rule_eval:%d stopped", r.RuleID())
return
default:
r.Work()
interval := r.rule.PromEvalInterval
if interval <= 0 {
interval = 10
}
time.Sleep(time.Duration(interval) * time.Second)
}
}
}
func (r RecordingRuleEval) Work() {
promql := strings.TrimSpace(r.rule.PromQl)
if promql == "" {
logger.Errorf("recording_rule_eval:%d promql is blank", r.RuleID())
return
}
if config.ReaderClient.IsNil() {
log.Println("reader client is nil")
return
}
value, warnings, err := config.ReaderClient.GetCli().Query(context.Background(), promql, time.Now())
if err != nil {
logger.Errorf("recording_rule_eval:%d promql:%s, error:%v", r.RuleID(), promql, err)
return
}
if len(warnings) > 0 {
logger.Errorf("recording_rule_eval:%d promql:%s, warnings:%v", r.RuleID(), promql, warnings)
return
}
ts := conv.ConvertToTimeSeries(value, r.rule)
if len(ts) != 0 {
for _, v := range ts {
writer.Writers.PushSample(r.rule.Name, v)
}
}
}
type RuleEvalForExternalType struct {
sync.RWMutex
rules map[int64]RuleEval
}
var RuleEvalForExternal = RuleEvalForExternalType{rules: make(map[int64]RuleEval)}
func (re *RuleEvalForExternalType) Build() {
rids := memsto.AlertRuleCache.GetRuleIds()
rules := make(map[int64]*models.AlertRule)
for i := 0; i < len(rids); i++ {
rule := memsto.AlertRuleCache.Get(rids[i])
if rule == nil {
continue
}
re.Lock()
rules[rule.Id] = rule
re.Unlock()
}
// stop old
for rid := range re.rules {
if _, has := rules[rid]; !has {
re.Lock()
delete(re.rules, rid)
re.Unlock()
}
}
// start new
re.Lock()
defer re.Unlock()
for rid := range rules {
if _, has := re.rules[rid]; has {
// already exists
continue
}
elst, err := models.AlertCurEventGetByRule(rules[rid].Id)
if err != nil {
logger.Errorf("worker_build: AlertCurEventGetByRule failed: %v", err)
continue
}
firemap := make(map[string]*models.AlertCurEvent)
for i := 0; i < len(elst); i++ {
elst[i].DB2Mem()
firemap[elst[i].Hash] = elst[i]
}
fires := NewAlertCurEventMap()
fires.SetAll(firemap)
newRe := RuleEval{
rule: rules[rid],
quit: make(chan struct{}),
fires: fires,
pendings: NewAlertCurEventMap(),
}
re.rules[rid] = newRe
}
}
func (re *RuleEvalForExternalType) Get(rid int64) (RuleEval, bool) {
rule := memsto.AlertRuleCache.Get(rid)
if rule == nil {
return RuleEval{}, false
}
re.RLock()
defer re.RUnlock()
if ret, has := re.rules[rid]; has {
// already exists
return ret, has
}
return RuleEval{}, false
}


@@ -41,7 +41,7 @@ func toRedis() {
return
}
if config.ReaderClients.IsNil(config.C.ClusterName) {
if config.ReaderClient.IsNil() {
return
}
@@ -49,11 +49,11 @@ func toRedis() {
// clean old idents
for key, at := range items {
if at.(int64) < now-config.C.NoData.Interval {
if at.(int64) < now-10 {
Idents.Remove(key)
} else {
// use now as timestamp to redis
err := storage.Redis.HSet(context.Background(), redisKey(config.C.ClusterName), key, now).Err()
err := storage.Redis.HSet(context.Background(), redisKey(config.ReaderClient.GetClusterName()), key, now).Err()
if err != nil {
logger.Errorf("redis hset idents failed: %v", err)
}
@@ -96,8 +96,7 @@ func loopPushMetrics(ctx context.Context) {
}
func pushMetrics() {
clusterName := config.C.ClusterName
isLeader, err := naming.IamLeader(clusterName)
isLeader, err := naming.IamLeader()
if err != nil {
logger.Errorf("handle_idents: %v", err)
return
@@ -108,6 +107,12 @@ func pushMetrics() {
return
}
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
logger.Warning("cluster name is blank")
return
}
// get all the target heartbeat timestamp
ret, err := storage.Redis.HGetAll(context.Background(), redisKey(clusterName)).Result()
if err != nil {
@@ -179,9 +184,6 @@ func pushMetrics() {
// Pass actives to TargetCache to find any targets that are not active; for those, report target_up = 0.
deads := memsto.TargetCache.GetDeads(actives)
for ident, dead := range deads {
if ident == "" {
continue
}
// build metrics
pt := &prompb.TimeSeries{}
pt.Samples = append(pt.Samples, prompb.Sample{


@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -98,19 +99,26 @@ func loopSyncAlertMutes() {
func syncAlertMutes() error {
start := time.Now()
stat, err := models.AlertMuteStatistics("")
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
AlertMuteCache.Reset()
logger.Warning("cluster name is blank")
return nil
}
stat, err := models.AlertMuteStatistics(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertMuteStatistics")
}
if !AlertMuteCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_alert_mutes").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_mutes").Set(0)
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_mutes").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_mutes").Set(0)
logger.Debug("alert mutes not changed")
return nil
}
lst, err := models.AlertMuteGetsByCluster("")
lst, err := models.AlertMuteGetsByCluster(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertMuteGetsByCluster")
}
@@ -130,8 +138,8 @@ func syncAlertMutes() error {
AlertMuteCache.Set(oks, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_alert_mutes").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_mutes").Set(float64(len(lst)))
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_mutes").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_mutes").Set(float64(len(lst)))
logger.Infof("timer: sync mutes done, cost: %dms, number: %d", ms, len(lst))
return nil

View File

@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -94,19 +95,27 @@ func loopSyncAlertRules() {
func syncAlertRules() error {
start := time.Now()
stat, err := models.AlertRuleStatistics("")
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
AlertRuleCache.Reset()
logger.Warning("cluster name is blank")
return nil
}
stat, err := models.AlertRuleStatistics(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertRuleStatistics")
}
if !AlertRuleCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_alert_rules").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_rules").Set(0)
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_rules").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_rules").Set(0)
logger.Debug("alert rules not changed")
return nil
}
lst, err := models.AlertRuleGetsByCluster("")
lst, err := models.AlertRuleGetsByCluster(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertRuleGetsByCluster")
}
@@ -119,8 +128,8 @@ func syncAlertRules() error {
AlertRuleCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_alert_rules").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_rules").Set(float64(len(m)))
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_rules").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_rules").Set(float64(len(m)))
logger.Infof("timer: sync rules done, cost: %dms, number: %d", ms, len(m))
return nil

View File

@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -100,19 +101,27 @@ func loopSyncAlertSubscribes() {
func syncAlertSubscribes() error {
start := time.Now()
stat, err := models.AlertSubscribeStatistics("")
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
AlertSubscribeCache.Reset()
logger.Warning("cluster name is blank")
return nil
}
stat, err := models.AlertSubscribeStatistics(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertSubscribeStatistics")
}
if !AlertSubscribeCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_alert_subscribes").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_subscribes").Set(0)
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_subscribes").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_subscribes").Set(0)
logger.Debug("alert subscribes not changed")
return nil
}
lst, err := models.AlertSubscribeGetsByCluster("")
lst, err := models.AlertSubscribeGetsByCluster(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec AlertSubscribeGetsByCluster")
}
@@ -132,8 +141,8 @@ func syncAlertSubscribes() error {
AlertSubscribeCache.Set(subs, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_alert_subscribes").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_alert_subscribes").Set(float64(len(lst)))
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_alert_subscribes").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_alert_subscribes").Set(float64(len(lst)))
logger.Infof("timer: sync subscribes done, cost: %dms, number: %d", ms, len(lst))
return nil

View File

@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -78,9 +79,13 @@ func syncBusiGroups() error {
return errors.WithMessage(err, "failed to exec BusiGroupStatistics")
}
clusterName := config.ReaderClient.GetClusterName()
if !BusiGroupCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_busi_groups").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_busi_groups").Set(0)
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_busi_groups").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_busi_groups").Set(0)
}
logger.Debug("busi_group not changed")
return nil
@@ -94,8 +99,10 @@ func syncBusiGroups() error {
BusiGroupCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_busi_groups").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_busi_groups").Set(float64(len(m)))
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_busi_groups").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_busi_groups").Set(float64(len(m)))
}
logger.Infof("timer: sync busi groups done, cost: %dms, number: %d", ms, len(m))

View File

@@ -1,44 +0,0 @@
package memsto
import (
"sync"
)
type LogSampleCacheType struct {
sync.RWMutex
m map[string]map[string]struct{} // map[labelName]map[labelValue]struct{}
}
var LogSampleCache = LogSampleCacheType{
m: make(map[string]map[string]struct{}),
}
func (l *LogSampleCacheType) Set(m map[string][]string) {
l.Lock()
for k, v := range m {
l.m[k] = make(map[string]struct{})
for _, vv := range v {
l.m[k][vv] = struct{}{}
}
}
l.Unlock()
}
func (l *LogSampleCacheType) Get() map[string]map[string]struct{} {
l.RLock()
defer l.RUnlock()
return l.m
}
func (l *LogSampleCacheType) Clean() {
l.Lock()
l.m = make(map[string]map[string]struct{})
l.Unlock()
}
func (l *LogSampleCacheType) Len() int {
l.RLock()
defer l.RUnlock()
return len(l.m)
}

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
"github.com/pkg/errors"
"github.com/toolkits/pkg/logger"
@@ -94,19 +95,26 @@ func loopSyncRecordingRules() {
func syncRecordingRules() error {
start := time.Now()
stat, err := models.RecordingRuleStatistics("")
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
RecordingRuleCache.Reset()
logger.Warning("cluster name is blank")
return nil
}
stat, err := models.RecordingRuleStatistics(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec RecordingRuleStatistics")
}
if !RecordingRuleCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_recording_rules").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_recording_rules").Set(0)
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_recording_rules").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_recording_rules").Set(0)
logger.Debug("recording rules not changed")
return nil
}
lst, err := models.RecordingRuleGetsByCluster("")
lst, err := models.RecordingRuleGetsByCluster(clusterName)
if err != nil {
return errors.WithMessage(err, "failed to exec RecordingRuleGetsByCluster")
}
@@ -119,8 +127,8 @@ func syncRecordingRules() error {
RecordingRuleCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_recording_rules").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_recording_rules").Set(float64(len(m)))
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_recording_rules").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_recording_rules").Set(float64(len(m)))
logger.Infof("timer: sync recording rules done, cost: %dms, number: %d", ms, len(m))
return nil

View File

@@ -103,7 +103,7 @@ func loopSyncTargets() {
func syncTargets() error {
start := time.Now()
clusterName := config.C.ClusterName
clusterName := config.ReaderClient.GetClusterName()
if clusterName == "" {
TargetCache.Reset()
logger.Warning("cluster name is blank")
@@ -116,8 +116,8 @@ func syncTargets() error {
}
if !TargetCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_targets").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_targets").Set(0)
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_targets").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_targets").Set(0)
logger.Debug("targets not changed")
return nil
}
@@ -145,8 +145,8 @@ func syncTargets() error {
TargetCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_targets").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_targets").Set(float64(len(lst)))
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_targets").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_targets").Set(float64(len(lst)))
logger.Infof("timer: sync targets done, cost: %dms, number: %d", ms, len(lst))
return nil

View File

@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -123,9 +124,13 @@ func syncUsers() error {
return errors.WithMessage(err, "failed to exec UserStatistics")
}
clusterName := config.ReaderClient.GetClusterName()
if !UserCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_users").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_users").Set(0)
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_users").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_users").Set(0)
}
logger.Debug("users not changed")
return nil
@@ -144,8 +149,10 @@ func syncUsers() error {
UserCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_users").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_users").Set(float64(len(m)))
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_users").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_users").Set(float64(len(m)))
}
logger.Infof("timer: sync users done, cost: %dms, number: %d", ms, len(m))

View File

@@ -9,6 +9,7 @@ import (
"github.com/toolkits/pkg/logger"
"github.com/didi/nightingale/v5/src/models"
"github.com/didi/nightingale/v5/src/server/config"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -105,9 +106,13 @@ func syncUserGroups() error {
return errors.WithMessage(err, "failed to exec UserGroupStatistics")
}
clusterName := config.ReaderClient.GetClusterName()
if !UserGroupCache.StatChanged(stat.Total, stat.LastUpdated) {
promstat.GaugeCronDuration.WithLabelValues("sync_user_groups").Set(0)
promstat.GaugeSyncNumber.WithLabelValues("sync_user_groups").Set(0)
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_user_groups").Set(0)
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_user_groups").Set(0)
}
logger.Debug("user_group not changed")
return nil
@@ -145,8 +150,10 @@ func syncUserGroups() error {
UserGroupCache.Set(m, stat.Total, stat.LastUpdated)
ms := time.Since(start).Milliseconds()
promstat.GaugeCronDuration.WithLabelValues("sync_user_groups").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues("sync_user_groups").Set(float64(len(m)))
if clusterName != "" {
promstat.GaugeCronDuration.WithLabelValues(clusterName, "sync_user_groups").Set(float64(ms))
promstat.GaugeSyncNumber.WithLabelValues(clusterName, "sync_user_groups").Set(float64(len(m)))
}
logger.Infof("timer: sync user groups done, cost: %dms, number: %d", ms, len(m))

View File

@@ -9,56 +9,51 @@ import (
const NodeReplicas = 500
type ClusterHashRingType struct {
type ConsistentHashRing struct {
sync.RWMutex
Rings map[string]*consistent.Consistent
ring *consistent.Consistent
}
// for alert_rule sharding
var ClusterHashRing = ClusterHashRingType{Rings: make(map[string]*consistent.Consistent)}
var HashRing = NewConsistentHashRing(int32(NodeReplicas), []string{})
func NewConsistentHashRing(replicas int32, nodes []string) *consistent.Consistent {
ret := consistent.New()
ret.NumberOfReplicas = int(replicas)
func (chr *ConsistentHashRing) GetNode(pk string) (string, error) {
chr.RLock()
defer chr.RUnlock()
return chr.ring.Get(pk)
}
func (chr *ConsistentHashRing) Set(r *consistent.Consistent) {
chr.Lock()
defer chr.Unlock()
chr.ring = r
}
func (chr *ConsistentHashRing) GetRing() *consistent.Consistent {
chr.RLock()
defer chr.RUnlock()
return chr.ring
}
func NewConsistentHashRing(replicas int32, nodes []string) *ConsistentHashRing {
ret := &ConsistentHashRing{ring: consistent.New()}
ret.ring.NumberOfReplicas = int(replicas)
for i := 0; i < len(nodes); i++ {
ret.Add(nodes[i])
ret.ring.Add(nodes[i])
}
return ret
}
func RebuildConsistentHashRing(cluster string, nodes []string) {
func RebuildConsistentHashRing(nodes []string) {
r := consistent.New()
r.NumberOfReplicas = NodeReplicas
for i := 0; i < len(nodes); i++ {
r.Add(nodes[i])
}
ClusterHashRing.Set(cluster, r)
logger.Infof("hash ring %s rebuild %+v", cluster, r.Members())
}
HashRing.Set(r)
func (chr *ClusterHashRingType) GetNode(cluster, pk string) (string, error) {
chr.RLock()
defer chr.RUnlock()
_, exists := chr.Rings[cluster]
if !exists {
chr.Rings[cluster] = NewConsistentHashRing(int32(NodeReplicas), []string{})
}
return chr.Rings[cluster].Get(pk)
}
func (chr *ClusterHashRingType) IsHit(cluster string, pk string, currentNode string) bool {
node, err := chr.GetNode(cluster, pk)
if err != nil {
logger.Debugf("cluster:%s pk:%s failed to get node from hashring:%v", cluster, pk, err)
return false
}
return node == currentNode
}
func (chr *ClusterHashRingType) Set(cluster string, r *consistent.Consistent) {
chr.RLock()
defer chr.RUnlock()
chr.Rings[cluster] = r
logger.Infof("hash ring rebuild %+v", r.Members())
}

View File

@@ -14,10 +14,9 @@ import (
)
// local servers
var localss map[string]string
var localss string
func Heartbeat(ctx context.Context) error {
localss = make(map[string]string)
if err := heartbeat(); err != nil {
fmt.Println("failed to heartbeat:", err)
return err
@@ -38,64 +37,30 @@ func loopHeartbeat() {
}
func heartbeat() error {
var clusters []string
var err error
if config.C.ReaderFrom == "config" {
// the instance-to-cluster mapping is maintained in the config file
for i := 0; i < len(config.C.Readers); i++ {
clusters = append(clusters, config.C.Readers[i].ClusterName)
err := models.AlertingEngineHeartbeatWithCluster(config.C.Heartbeat.Endpoint, config.C.Readers[i].ClusterName)
if err != nil {
logger.Warningf("heartbeat with cluster %s err:%v", config.C.Readers[i].ClusterName, err)
continue
}
}
} else {
// the instance-to-cluster mapping is maintained on the web UI
clusters, err = models.AlertingEngineGetClusters(config.C.Heartbeat.Endpoint)
if err != nil {
return err
}
if len(clusters) == 0 {
// the alerting-engine page has no cluster configured yet; report an empty cluster so this n9e-server instance shows up on the page
err := models.AlertingEngineHeartbeatWithCluster(config.C.Heartbeat.Endpoint, "")
if err != nil {
logger.Warningf("heartbeat with cluster %s err:%v", "", err)
}
logger.Warningf("heartbeat %s no cluster", config.C.Heartbeat.Endpoint)
}
err := models.AlertingEngineHeartbeat(config.C.Heartbeat.Endpoint)
if err != nil {
return err
}
err := models.AlertingEngineHeartbeat(config.C.Heartbeat.Endpoint)
if err != nil {
return err
}
for i := 0; i < len(clusters); i++ {
servers, err := ActiveServers(clusters[i])
if err != nil {
logger.Warningf("heartbeat %s get active server err: %v", clusters[i], err)
continue
}
servers, err := ActiveServers()
if err != nil {
return err
}
sort.Strings(servers)
newss := strings.Join(servers, " ")
oldss, exists := localss[clusters[i]]
if exists && oldss == newss {
continue
}
RebuildConsistentHashRing(clusters[i], servers)
localss[clusters[i]] = newss
sort.Strings(servers)
newss := strings.Join(servers, " ")
if newss != localss {
RebuildConsistentHashRing(servers)
localss = newss
}
return nil
}
func ActiveServers(cluster string) ([]string, error) {
if cluster == "" {
return nil, fmt.Errorf("cluster is empty")
func ActiveServers() ([]string, error) {
cluster, err := models.AlertingEngineGetCluster(config.C.Heartbeat.Endpoint)
if err != nil {
return nil, err
}
// a server is considered alive if it has heartbeated within the last 30 seconds

View File

@@ -7,8 +7,8 @@ import (
"github.com/toolkits/pkg/logger"
)
func IamLeader(cluster string) (bool, error) {
servers, err := ActiveServers(cluster)
func IamLeader() (bool, error) {
servers, err := ActiveServers()
if err != nil {
logger.Errorf("failed to get active servers: %v", err)
return false, err

View File

@@ -14,6 +14,7 @@ import (
"github.com/didi/nightingale/v5/src/pkg/aop"
"github.com/didi/nightingale/v5/src/server/config"
"github.com/didi/nightingale/v5/src/server/naming"
promstat "github.com/didi/nightingale/v5/src/server/stat"
)
@@ -68,7 +69,7 @@ func configRoute(r *gin.Engine, version string, reloadFunc func()) {
})
r.GET("/servers/active", func(c *gin.Context) {
lst, err := naming.ActiveServers(ginx.QueryStr(c, "cluster"))
lst, err := naming.ActiveServers()
ginx.NewRender(c).Data(lst, err)
})
@@ -100,10 +101,6 @@ func configRoute(r *gin.Engine, version string, reloadFunc func()) {
r.GET("/metrics", gin.WrapH(promhttp.Handler()))
r.GET("/log-sample-filter", logSampleFilterGet)
r.POST("/log-sample-filter", logSampleFilterAdd)
r.DELETE("/log-sample-filter", logSampleFilterDel)
service := r.Group("/v1/n9e")
service.POST("/event", pushEventToQueue)
service.POST("/make-event", makeEvent)

View File

@@ -3,6 +3,7 @@ package router
import (
"compress/gzip"
"compress/zlib"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
@@ -16,17 +17,14 @@ import (
promstat "github.com/didi/nightingale/v5/src/server/stat"
"github.com/didi/nightingale/v5/src/server/writer"
"github.com/gin-gonic/gin"
"github.com/mailru/easyjson"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/prompb"
)
//easyjson:json
type TimeSeries struct {
Series []*DatadogMetric `json:"series"`
}
//easyjson:json
type DatadogMetric struct {
Metric string `json:"metric"`
Points []DatadogPoint `json:"points"`
@@ -34,7 +32,6 @@ type DatadogMetric struct {
Tags []string `json:"tags,omitempty"`
}
//easyjson:json
type DatadogPoint [2]float64
func (m *DatadogMetric) Clean() error {
@@ -217,7 +214,7 @@ func datadogSeries(c *gin.Context) {
}
var series TimeSeries
err = easyjson.Unmarshal(bs, &series)
err = json.Unmarshal(bs, &series)
if err != nil {
c.String(400, err.Error())
return
@@ -266,22 +263,13 @@ func datadogSeries(c *gin.Context) {
}
}
LogSample(c.Request.RemoteAddr, pt)
if config.C.WriterOpt.ShardingKey == "ident" {
if ident == "" {
writer.Writers.PushSample("-", pt)
} else {
writer.Writers.PushSample(ident, pt)
}
} else {
writer.Writers.PushSample(item.Metric, pt)
}
writer.Writers.PushSample(item.Metric, pt)
succ++
}
if succ > 0 {
cn := config.C.ClusterName
cn := config.ReaderClient.GetClusterName()
if cn != "" {
promstat.CounterSampleTotal.WithLabelValues(cn, "datadog").Add(float64(succ))
}

View File

@@ -1,334 +0,0 @@
// Code generated by easyjson for marshaling/unmarshaling. DO NOT EDIT.
package router
import (
json "encoding/json"
easyjson "github.com/mailru/easyjson"
jlexer "github.com/mailru/easyjson/jlexer"
jwriter "github.com/mailru/easyjson/jwriter"
)
// suppress unused package warning
var (
_ *json.RawMessage
_ *jlexer.Lexer
_ *jwriter.Writer
_ easyjson.Marshaler
)
func easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter(in *jlexer.Lexer, out *TimeSeries) {
isTopLevel := in.IsStart()
if in.IsNull() {
if isTopLevel {
in.Consumed()
}
in.Skip()
return
}
in.Delim('{')
for !in.IsDelim('}') {
key := in.UnsafeFieldName(false)
in.WantColon()
if in.IsNull() {
in.Skip()
in.WantComma()
continue
}
switch key {
case "series":
if in.IsNull() {
in.Skip()
out.Series = nil
} else {
in.Delim('[')
if out.Series == nil {
if !in.IsDelim(']') {
out.Series = make([]*DatadogMetric, 0, 8)
} else {
out.Series = []*DatadogMetric{}
}
} else {
out.Series = (out.Series)[:0]
}
for !in.IsDelim(']') {
var v1 *DatadogMetric
if in.IsNull() {
in.Skip()
v1 = nil
} else {
if v1 == nil {
v1 = new(DatadogMetric)
}
(*v1).UnmarshalEasyJSON(in)
}
out.Series = append(out.Series, v1)
in.WantComma()
}
in.Delim(']')
}
default:
in.SkipRecursive()
}
in.WantComma()
}
in.Delim('}')
if isTopLevel {
in.Consumed()
}
}
func easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter(out *jwriter.Writer, in TimeSeries) {
out.RawByte('{')
first := true
_ = first
{
const prefix string = ",\"series\":"
out.RawString(prefix[1:])
if in.Series == nil && (out.Flags&jwriter.NilSliceAsEmpty) == 0 {
out.RawString("null")
} else {
out.RawByte('[')
for v2, v3 := range in.Series {
if v2 > 0 {
out.RawByte(',')
}
if v3 == nil {
out.RawString("null")
} else {
(*v3).MarshalEasyJSON(out)
}
}
out.RawByte(']')
}
}
out.RawByte('}')
}
// MarshalJSON supports json.Marshaler interface
func (v TimeSeries) MarshalJSON() ([]byte, error) {
w := jwriter.Writer{}
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter(&w, v)
return w.Buffer.BuildBytes(), w.Error
}
// MarshalEasyJSON supports easyjson.Marshaler interface
func (v TimeSeries) MarshalEasyJSON(w *jwriter.Writer) {
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter(w, v)
}
// UnmarshalJSON supports json.Unmarshaler interface
func (v *TimeSeries) UnmarshalJSON(data []byte) error {
r := jlexer.Lexer{Data: data}
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter(&r, v)
return r.Error()
}
// UnmarshalEasyJSON supports easyjson.Unmarshaler interface
func (v *TimeSeries) UnmarshalEasyJSON(l *jlexer.Lexer) {
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter(l, v)
}
func easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter1(in *jlexer.Lexer, out *DatadogPoint) {
isTopLevel := in.IsStart()
if in.IsNull() {
in.Skip()
} else {
in.Delim('[')
v4 := 0
for !in.IsDelim(']') {
if v4 < 2 {
(*out)[v4] = float64(in.Float64())
v4++
} else {
in.SkipRecursive()
}
in.WantComma()
}
in.Delim(']')
}
if isTopLevel {
in.Consumed()
}
}
func easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter1(out *jwriter.Writer, in DatadogPoint) {
out.RawByte('[')
for v5 := range in {
if v5 > 0 {
out.RawByte(',')
}
out.Float64(float64((in)[v5]))
}
out.RawByte(']')
}
// MarshalJSON supports json.Marshaler interface
func (v DatadogPoint) MarshalJSON() ([]byte, error) {
w := jwriter.Writer{}
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter1(&w, v)
return w.Buffer.BuildBytes(), w.Error
}
// MarshalEasyJSON supports easyjson.Marshaler interface
func (v DatadogPoint) MarshalEasyJSON(w *jwriter.Writer) {
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter1(w, v)
}
// UnmarshalJSON supports json.Unmarshaler interface
func (v *DatadogPoint) UnmarshalJSON(data []byte) error {
r := jlexer.Lexer{Data: data}
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter1(&r, v)
return r.Error()
}
// UnmarshalEasyJSON supports easyjson.Unmarshaler interface
func (v *DatadogPoint) UnmarshalEasyJSON(l *jlexer.Lexer) {
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter1(l, v)
}
func easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter2(in *jlexer.Lexer, out *DatadogMetric) {
isTopLevel := in.IsStart()
if in.IsNull() {
if isTopLevel {
in.Consumed()
}
in.Skip()
return
}
in.Delim('{')
for !in.IsDelim('}') {
key := in.UnsafeFieldName(false)
in.WantColon()
if in.IsNull() {
in.Skip()
in.WantComma()
continue
}
switch key {
case "metric":
out.Metric = string(in.String())
case "points":
if in.IsNull() {
in.Skip()
out.Points = nil
} else {
in.Delim('[')
if out.Points == nil {
if !in.IsDelim(']') {
out.Points = make([]DatadogPoint, 0, 4)
} else {
out.Points = []DatadogPoint{}
}
} else {
out.Points = (out.Points)[:0]
}
for !in.IsDelim(']') {
var v6 DatadogPoint
(v6).UnmarshalEasyJSON(in)
out.Points = append(out.Points, v6)
in.WantComma()
}
in.Delim(']')
}
case "host":
out.Host = string(in.String())
case "tags":
if in.IsNull() {
in.Skip()
out.Tags = nil
} else {
in.Delim('[')
if out.Tags == nil {
if !in.IsDelim(']') {
out.Tags = make([]string, 0, 4)
} else {
out.Tags = []string{}
}
} else {
out.Tags = (out.Tags)[:0]
}
for !in.IsDelim(']') {
var v7 string
v7 = string(in.String())
out.Tags = append(out.Tags, v7)
in.WantComma()
}
in.Delim(']')
}
default:
in.SkipRecursive()
}
in.WantComma()
}
in.Delim('}')
if isTopLevel {
in.Consumed()
}
}
func easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter2(out *jwriter.Writer, in DatadogMetric) {
out.RawByte('{')
first := true
_ = first
{
const prefix string = ",\"metric\":"
out.RawString(prefix[1:])
out.String(string(in.Metric))
}
{
const prefix string = ",\"points\":"
out.RawString(prefix)
if in.Points == nil && (out.Flags&jwriter.NilSliceAsEmpty) == 0 {
out.RawString("null")
} else {
out.RawByte('[')
for v8, v9 := range in.Points {
if v8 > 0 {
out.RawByte(',')
}
(v9).MarshalEasyJSON(out)
}
out.RawByte(']')
}
}
{
const prefix string = ",\"host\":"
out.RawString(prefix)
out.String(string(in.Host))
}
if len(in.Tags) != 0 {
const prefix string = ",\"tags\":"
out.RawString(prefix)
{
out.RawByte('[')
for v10, v11 := range in.Tags {
if v10 > 0 {
out.RawByte(',')
}
out.String(string(v11))
}
out.RawByte(']')
}
}
out.RawByte('}')
}
// MarshalJSON supports json.Marshaler interface
func (v DatadogMetric) MarshalJSON() ([]byte, error) {
w := jwriter.Writer{}
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter2(&w, v)
return w.Buffer.BuildBytes(), w.Error
}
// MarshalEasyJSON supports easyjson.Marshaler interface
func (v DatadogMetric) MarshalEasyJSON(w *jwriter.Writer) {
easyjsonF301f710EncodeGithubComDidiNightingaleV5SrcServerRouter2(w, v)
}
// UnmarshalJSON supports json.Unmarshaler interface
func (v *DatadogMetric) UnmarshalJSON(data []byte) error {
r := jlexer.Lexer{Data: data}
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter2(&r, v)
return r.Error()
}
// UnmarshalEasyJSON supports easyjson.Unmarshaler interface
func (v *DatadogMetric) UnmarshalEasyJSON(l *jlexer.Lexer) {
easyjsonF301f710DecodeGithubComDidiNightingaleV5SrcServerRouter2(l, v)
}

Some files were not shown because too many files have changed in this diff.