Mirror of https://github.com/ccfos/nightingale.git (synced 2026-03-04 15:08:52 +00:00)

Compare commits: 7 commits (optimize-o... → webhook-ba...)
| Author | SHA1 | Date |
|---|---|---|
| | b057940134 | |
| | 4500c4aba8 | |
| | 726e994d58 | |
| | 71c4c24f00 | |
| | d14a834149 | |
| | 5f895552a9 | |
| | 1e3f62b92f | |
README.md (21 changes)
@@ -29,15 +29,15 @@

## What is Nightingale

Nightingale (夜莺) is an open-source, cloud-native observability and analysis tool built with an all-in-one design: it combines data collection, visualization, monitoring and alerting, and data analysis, integrates tightly with the cloud-native ecosystem, and provides out-of-the-box, enterprise-grade monitoring, analysis, and alerting capabilities. Nightingale published its v1 release on GitHub on March 20, 2020, and has iterated through more than 100 versions since.

Nightingale was originally developed and open-sourced by Didi. On May 11, 2022 it was donated to the China Computer Federation Open Source Development Committee (CCF ODC), becoming the first open-source project the CCF ODC accepted after its founding. Nightingale's core developers are the original core developers of the Open-Falcon project; counting from 2014, when Open-Falcon was open-sourced, that is a decade spent focused on doing monitoring well.
## Quick Start

- 👉 [Documentation](https://flashcat.cloud/docs/) | [Download](https://flashcat.cloud/download/nightingale/)
- ❤️ [Report a Bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=&projects=&template=question.yml)
- ℹ️ For faster access, the documentation and download sites above are hosted on [FlashcatCloud](https://flashcat.cloud)

## Features
@@ -50,11 +50,6 @@

## Screenshots

You can switch the language and theme in the top-right corner of the page; English, Simplified Chinese, and Traditional Chinese are currently supported.

Instant query is similar to the query and analysis page built into Prometheus, for ad-hoc queries. Nightingale adds some UI refinements and ships a set of built-in PromQL queries, so that users who do not know PromQL well can still query quickly.
@@ -88,17 +83,15 @@

- To report bugs, please prefer filing a [Nightingale GitHub issue](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&projects=&template=bug_report.yml)
- We recommend browsing the [Nightingale documentation site](https://flashcat.cloud/docs/content/flashcat-monitor/nightingale-v7/introduction/) in full for more information
- Follow the WeChat official account `夜莺监控Nightingale` to get community news first
- Day-to-day discussion:
  - QQ group: 730841964
  - [Join the WeChat group](https://download.flashcat.cloud/ulric/20240926101321.png); if the QR code has expired, contact me (WeChat: `picobyte`) with the note `夜莺互助群` and I will add you
## Star History

[](https://star-history.com/#ccfos/nightingale&Date)

## Community Co-Building

- ❇️ Please read the [Nightingale open-source project and community governance draft](./doc/community-governance.md). We sincerely welcome every user, developer, company, and organization to use Nightingale, actively report bugs, submit feature requests, and share best practices, and to help build a professional, active Nightingale open-source community.
- ❤️ Nightingale contributors

<a href="https://github.com/ccfos/nightingale/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>
README_en.md (129 changes)
@@ -1,113 +1,104 @@

<p align="center">
  <a href="https://github.com/ccfos/nightingale">
    <img src="doc/img/Nightingale_L_V.png" alt="nightingale - cloud native monitoring" width="100" /></a>
</p>
<p align="center">
  <b>Open-source Alert Management Expert, an Integrated Observability Platform</b>
</p>

<p align="center">
  <a href="https://flashcat.cloud/docs/">
    <img alt="Docs" src="https://img.shields.io/badge/docs-get%20started-brightgreen"/></a>
  <a href="https://hub.docker.com/u/flashcatcloud">
    <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
  <a href="https://github.com/ccfos/nightingale/graphs/contributors">
    <img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
  <img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
  <br/><img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
  <img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
  <img alt="GitHub latest release" src="https://img.shields.io/github/v/release/ccfos/nightingale"/>
  <a href="https://n9e-talk.slack.com/">
    <img alt="Join Slack" src="https://img.shields.io/badge/join%20slack-%23n9e-brightgreen.svg"/></a>
  <img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
</p>
<p align="center">
  An open-source cloud-native monitoring system that is <b>all-in-one</b>.<br/>
  <b>Out-of-the-box</b>: it integrates data collection, visualization, and monitoring alerting.<br/>
  We recommend upgrading your <b>Prometheus + AlertManager + Grafana</b> combination to Nightingale!
</p>

[English](./README_en.md) | [中文](./README.md)
## What is Nightingale

Nightingale aims to combine the advantages of Prometheus and Grafana: it manages alert rules and visualizes metrics, logs, and traces in a polished web UI.

Originally developed and open-sourced by Didi, Nightingale was donated to the China Computer Federation Open Source Development Committee (CCF ODC) on May 11, 2022, becoming the first open-source project accepted by the CCF ODC after its establishment.

## Highlighted Features

- **Out-of-the-box**
  - Supports multiple deployment methods such as **Docker, Helm Chart, and cloud services**, integrates data collection, monitoring, and alerting into one system, and comes with various monitoring dashboards, quick views, and alert rule templates. **It greatly reduces the construction, learning, and usage costs of cloud-native monitoring systems.**
- **Professional Alerting**
  - Provides visual alert configuration and management, supports various alert rules, offers silence and subscription rules, supports multiple alert delivery channels, and includes alert self-healing and event management.
- **Cloud-Native**
  - Quickly builds an enterprise-grade cloud-native monitoring system in a turnkey way, supports collectors such as [Categraf](https://github.com/flashcatcloud/categraf), Telegraf, and Grafana-agent, supports data sources such as Prometheus, VictoriaMetrics, M3DB, ElasticSearch, and Jaeger, and can import Grafana dashboards. **It integrates seamlessly with the cloud-native ecosystem.**
- **High Performance and High Availability**
  - Thanks to Nightingale's multi-data-source management engine, its architecture, and the use of a high-performance time-series database, it can handle collection, storage, and alert analysis for billions of time series, saving substantial cost.
  - Nightingale components scale horizontally with no single point of failure. It has been deployed in thousands of enterprises and hardened in demanding production environments; leading Internet companies run Nightingale on clusters of hundreds of nodes, processing billions of time series.
- **Flexible Extension and Centralized Management**
  - Nightingale can run on a 1-core, 1 GB cloud host, across a cluster of hundreds of machines, or in Kubernetes. Time-series databases, alert engines, and other components can also be pushed out to individual data centers and regions, balancing edge deployment with centralized management. **It solves data fragmentation and the lack of unified views.**
## Quick Start

- 👉 [Documentation](https://flashcat.cloud/docs/) | [Download](https://flashcat.cloud/download/nightingale/)
- ❤️ [Report a Bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=&projects=&template=question.yml)
- ℹ️ For faster access, the documentation and download sites above are hosted on [FlashcatCloud](https://flashcat.cloud).

#### If you are using Prometheus and have one or more of the following needs, we recommend upgrading to Nightingale:

- Fragmented systems such as Prometheus, Alertmanager, and Grafana that lack a unified view and are not usable out of the box;
- Managing Prometheus and Alertmanager by editing configuration files, which has a steep learning curve and makes collaboration difficult;
- Too much data to scale up a single Prometheus instance;
- Multiple Prometheus clusters in production, with high management and usage costs;
## Features

- **Integration with Multiple Time-Series Databases:** Supports integration with time-series databases such as Prometheus, VictoriaMetrics, Thanos, Mimir, M3DB, and TDengine, enabling unified alert management.
- **Advanced Alerting Capabilities:** Built-in support for multiple alerting rules, extensible to common notification channels, plus alert suppression, silencing, subscription, self-healing, and alert event management.
- **High-Performance Visualization Engine:** Offers various chart styles, numerous built-in dashboard templates, and the ability to import Grafana templates. Ready to use with a business-friendly open-source license.
- **Support for Common Collectors:** Compatible with [Categraf](https://flashcat.cloud/product/categraf), Telegraf, Grafana-agent, Datadog-agent, and various exporters as collectors; there is no data that cannot be monitored.
- **Seamless Integration with [Flashduty](https://flashcat.cloud/product/flashcat-duty/):** Enables alert aggregation, acknowledgment, escalation, scheduling, and IM integration, ensuring no alerts are missed, reducing unnecessary interruptions, and supporting efficient collaboration.

#### If you are using Zabbix and have the following scenarios, we recommend upgrading to Nightingale:

- Too much monitoring data and a desire for a more scalable solution;
- A steep learning curve and a desire for more efficient collaborative use across multiple people and teams;
- Microservice and cloud-native architectures whose variable data lifecycles and high-cardinality dimensions do not map easily onto the Zabbix data model;
#### If you are using [open-falcon](https://github.com/open-falcon/falcon-plus), we recommend upgrading to Nightingale:

- For more on open-falcon and Nightingale, see [Ten features and trends of cloud-native monitoring](https://mp.weixin.qq.com/s?__biz=MzkzNjI5OTM5Nw==&mid=2247483738&idx=1&sn=e8bdbb974a2cd003c1abcc2b5405dd18&chksm=c2a19fb0f5d616a63185cd79277a79a6b80118ef2185890d0683d2bb20451bd9303c78d083c5#rd).

## Getting Started

[https://n9e.github.io/](https://n9e.github.io/)
## Screenshots

You can switch languages and themes in the top-right corner. English, Simplified Chinese, and Traditional Chinese are currently supported.

### Instant Query

Similar to the built-in query analysis page in Prometheus, Nightingale offers ad-hoc queries with UI enhancements. It also ships built-in PromQL queries, so users unfamiliar with PromQL can still query quickly.

### Metric View

Alternatively, you can access data through the Metric View. With it, Instant Query becomes less necessary, since that page caters more to advanced users; regular users can query easily using the Metric View.

### Built-in Dashboards

Nightingale includes commonly used dashboards that can be imported and used directly. You can also import Grafana dashboards, although compatibility is limited to basic Grafana charts. If you are accustomed to Grafana, it is recommended to keep using it for visualization, with Nightingale serving as the alerting engine.

### Built-in Alert Rules

In addition to the built-in dashboards, Nightingale ships numerous alert rules that are ready to use out of the box.

https://user-images.githubusercontent.com/792850/216888712-2565fcea-9df5-47bd-a49e-d60af9bd76e8.mp4
## Architecture

In most community deployments, Nightingale is used primarily as an alert engine: it integrates with multiple time-series databases to unify alert rule management, while Grafana remains the preferred visualization tool. As an alert engine, Nightingale's product architecture looks like this:

<img src="doc/img/arch-product.png" width="600">

Nightingale can receive monitoring data reported by various collectors (such as [Categraf](https://github.com/flashcatcloud/categraf), Telegraf, Grafana-agent, and Prometheus) and write it to popular time-series databases (such as Prometheus, M3DB, VictoriaMetrics, Thanos, and TDengine). It provides configuration of alert rules, silence rules, and subscription rules, as well as monitoring data views. It also offers alert self-healing mechanisms (for example, automatically calling a webhook or executing a script after an alert fires), and stores, manages, and groups historical alert events.

For edge data centers with poor network connectivity to the central Nightingale server, we offer a distributed deployment mode for the alert engine; in this mode, alerting continues to work even if the network is disconnected.

If a standalone time-series database (such as Prometheus) hits performance bottlenecks or lacks disaster recovery, we recommend [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics): its architecture is relatively simple, its performance excellent, and it is easy to deploy and maintain, as shown in the diagram above. For more detailed documentation, see the VictoriaMetrics [official website](https://victoriametrics.com/).
**We welcome you to take part in the Nightingale open-source project and community in many ways, including but not limited to:**

- Adding to and improving the documentation => [n9e.github.io](https://n9e.github.io/)
- Sharing your best practices and experience with Nightingale => [Article sharing](https://n9e.github.io/docs/prologue/share/)
- Submitting product suggestions => [GitHub issue](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md)
- Submitting code to make Nightingale faster, more stable, and easier to use => [GitHub pull request](https://github.com/didi/nightingale/pulls)
## Communication Channels

**Respecting, recognizing, and recording the work of every contributor** is the first guiding principle of the Nightingale open-source community. We advocate asking questions effectively: it respects developers' time and adds to the whole community's body of knowledge.

- Before asking a question, please check the [FAQ](https://www.gitlink.org.cn/ccfos/nightingale/wiki/faq)
- We use [GitHub Discussions](https://github.com/ccfos/nightingale/discussions) as the communication forum; you can search and ask questions there
- We also recommend joining our [Slack channel](https://n9e-talk.slack.com/) to exchange experiences with other Nightingale users
- **Report Bugs:** Please submit issues via the [Nightingale GitHub issue tracker](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&projects=&template=bug_report.yml)
- **Documentation:** For more information, we recommend browsing the [Nightingale documentation site](https://flashcat.cloud/docs/content/flashcat-monitor/nightingale-v7/introduction/) thoroughly
## Who is using Nightingale

You can register your usage and share your experience by posting on **[Who is Using Nightingale](https://github.com/ccfos/nightingale/issues/897)**.

## Stargazers over time

[](https://star-history.com/#ccfos/nightingale&Date)
## Community Co-Building

- ❇️ Please read the [Nightingale Open Source Project and Community Governance Draft](./doc/community-governance.md). We sincerely welcome every user, developer, company, and organization to use Nightingale, actively report bugs, submit feature requests, share best practices, and help build a professional and active open-source community.
- ❤️ Nightingale contributors

## Contributors

<a href="https://github.com/ccfos/nightingale/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>

## License

[Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
@@ -112,5 +112,5 @@ func Start(alertc aconf.Alert, pushgwc pconf.Pushgw, syncStats *memsto.Stats, al
	go consumer.LoopConsume()

	go queue.ReportQueueSize(alertStats)
	go sender.InitEmailSender(ctx, notifyConfigCache)
}

@@ -2,7 +2,6 @@ package common

import (
	"fmt"
	"strings"

	"github.com/ccfos/nightingale/v6/models"
)
@@ -35,9 +34,9 @@ func MatchGroupsName(groupName string, groupFilter []models.TagFilter) bool {

func matchTag(value string, filter models.TagFilter) bool {
	switch filter.Func {
	case "==":
		// Trim surrounding whitespace so " foo " still matches "foo".
		return strings.TrimSpace(filter.Value) == strings.TrimSpace(value)
	case "!=":
		return strings.TrimSpace(filter.Value) != strings.TrimSpace(value)
	case "in":
		_, has := filter.Vset[value]
		return has
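The whitespace-trimming comparison introduced above can be shown in isolation. A minimal sketch (the `tagFilter` type below is a simplified stand-in, not the real `models.TagFilter`):

```go
package main

import (
	"fmt"
	"strings"
)

// tagFilter is a simplified stand-in for a tag filter: an operator, a value,
// and a set for "in" membership tests.
type tagFilter struct {
	Fn    string
	Value string
	Vset  map[string]struct{}
}

// matchTag compares a tag value against a filter, trimming whitespace for the
// equality operators so that " web " still matches "web".
func matchTag(value string, f tagFilter) bool {
	switch f.Fn {
	case "==":
		return strings.TrimSpace(f.Value) == strings.TrimSpace(value)
	case "!=":
		return strings.TrimSpace(f.Value) != strings.TrimSpace(value)
	case "in":
		_, has := f.Vset[value]
		return has
	}
	return false
}

func main() {
	// Matches despite the stray spaces around the incoming value.
	fmt.Println(matchTag(" web ", tagFilter{Fn: "==", Value: "web"}))
}
```

Note that the "in" case intentionally does not trim, matching the original code, where only the equality operators were changed.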
@@ -167,7 +167,7 @@ func (e *Dispatch) HandleEventNotify(event *models.AlertCurEvent, isSubscribe bo
	}

	// Dispatch the event: one goroutine handles all sends for a single event.
	go e.Send(rule, event, notifyTarget, isSubscribe)

	// If this event did not come from a subscription rule, process the
	// subscription-rule events derived from it.
	if !isSubscribe {
@@ -238,12 +238,11 @@ func (e *Dispatch) handleSub(sub *models.AlertSubscribe, event models.AlertCurEv
	e.HandleEventNotify(&event, true)
}

func (e *Dispatch) Send(rule *models.AlertRule, event *models.AlertCurEvent, notifyTarget *NotifyTarget, isSubscribe bool) {
	needSend := e.BeforeSenderHook(event)
	if needSend {
		for channel, uids := range notifyTarget.ToChannelUserMap() {
			msgCtx := sender.BuildMessageContext(e.ctx, rule, []*models.AlertCurEvent{event},
				uids, e.userCache, e.Astats)
			e.RwLock.RLock()
			s := e.Senders[channel]
			e.RwLock.RUnlock()
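The read-lock around the `Senders` map lookup above is a standard Go pattern: many dispatcher goroutines may read the sender registry concurrently while another goroutine replaces entries during a configuration reload. A self-contained sketch of the pattern (types are illustrative, not Nightingale's):

```go
package main

import (
	"fmt"
	"sync"
)

// registry guards a channel-name -> sender map so concurrent dispatchers can
// look up senders while configuration reloads replace them.
type registry struct {
	mu      sync.RWMutex
	senders map[string]func(msg string) string
}

// get takes only a read lock: many readers may hold it simultaneously.
func (r *registry) get(channel string) func(string) string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.senders[channel]
}

// set takes the write lock: writers are exclusive.
func (r *registry) set(channel string, s func(string) string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.senders[channel] = s
}

func main() {
	r := &registry{senders: map[string]func(string) string{}}
	r.set("email", func(m string) string { return "email:" + m })
	if s := r.get("email"); s != nil {
		fmt.Println(s("hello"))
	}
}
```

The lookup copies the function value out under the lock and calls it after unlocking, exactly as the original code reads `e.Senders[channel]` into `s` before using it.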
@@ -267,26 +266,19 @@ func (e *Dispatch) Send(rule *models.AlertRule, event *models.AlertCurEvent, not

		// handle global webhooks
		if e.alerting.WebhookBatchSend {
			sender.BatchSendWebhooks(e.ctx, notifyTarget.ToWebhookList(), event, e.Astats)
		} else {
			sender.SingleSendWebhooks(e.ctx, notifyTarget.ToWebhookList(), event, e.Astats)
		}

		// handle plugin call
		go sender.MayPluginNotify(e.ctx, e.genNoticeBytes(event), e.notifyConfigCache.
			GetNotifyScript(), e.Astats, event)

		if !isSubscribe {
			// handle ibex callbacks
			e.HandleIbex(rule, event)
		}
}
func (e *Dispatch) SendCallbacks(rule *models.AlertRule, notifyTarget *NotifyTarget, event *models.AlertCurEvent) {
	uids := notifyTarget.ToUidList()
	urls := notifyTarget.ToCallbackList()
	whMap := notifyTarget.ToWebhookMap()
	for _, urlStr := range urls {
		if len(urlStr) == 0 {
			continue

@@ -294,11 +286,6 @@ func (e *Dispatch) SendCallbacks(rule *models.AlertRule, notifyTarget *NotifyTar

		cbCtx := sender.BuildCallBackContext(e.ctx, urlStr, rule, []*models.AlertCurEvent{event}, uids, e.userCache, e.alerting.WebhookBatchSend, e.Astats)

		// Skip callbacks that are already configured as enabled global webhooks.
		if wh, ok := whMap[cbCtx.CallBackURL]; ok && wh.Enable {
			logger.Debugf("SendCallbacks: webhook[%s] is in global conf.", cbCtx.CallBackURL)
			continue
		}

		if strings.HasPrefix(urlStr, "${ibex}") {
			e.CallBacks[models.IbexDomain].CallBack(cbCtx)
			continue

@@ -335,30 +322,6 @@ func (e *Dispatch) SendCallbacks(rule *models.AlertRule, notifyTarget *NotifyTar
	}
}
func (e *Dispatch) HandleIbex(rule *models.AlertRule, event *models.AlertCurEvent) {
	// Parse the task templates out of the RuleConfig field.
	var ruleConfig struct {
		TaskTpls []*models.Tpl `json:"task_tpls"`
	}
	json.Unmarshal([]byte(rule.RuleConfig), &ruleConfig)

	for _, t := range ruleConfig.TaskTpls {
		if t.TplId == 0 {
			continue
		}

		if len(t.Host) == 0 {
			sender.CallIbex(e.ctx, t.TplId, event.TargetIdent,
				e.taskTplsCache, e.targetCache, e.userCache, event)
			continue
		}
		for _, host := range t.Host {
			sender.CallIbex(e.ctx, t.TplId, host,
				e.taskTplsCache, e.targetCache, e.userCache, event)
		}
	}
}
type Notice struct {
	Event *models.AlertCurEvent `json:"event"`
	Tpls  map[string]string     `json:"tpls"`
@@ -100,32 +100,8 @@ func (s *NotifyTarget) ToWebhookList() []*models.Webhook {
	return webhooks
}

func (s *NotifyTarget) ToWebhookMap() map[string]*models.Webhook {
	webhookMap := make(map[string]*models.Webhook, len(s.webhooks))
	for _, wh := range s.webhooks {
		if wh.Batch == 0 {
			wh.Batch = 1000
		}

		if wh.Timeout == 0 {
			wh.Timeout = 10
		}

		if wh.RetryCount == 0 {
			wh.RetryCount = 10
		}

		if wh.RetryInterval == 0 {
			wh.RetryInterval = 10
		}

		webhookMap[wh.Url] = wh
	}
	return webhookMap
}
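`ToWebhookMap` backfills zero-valued webhook fields with defaults before indexing the webhooks by URL, relying on Go's zero values to mean "not configured". The same pattern in self-contained form (the `webhook` struct is a simplified stand-in for `models.Webhook`):

```go
package main

import "fmt"

// webhook is a simplified stand-in for a webhook config whose zero-valued
// fields mean "use the default".
type webhook struct {
	Url           string
	Batch         int
	Timeout       int
	RetryCount    int
	RetryInterval int
}

// toWebhookMap indexes webhooks by URL, filling defaults for unset fields.
// Entries sharing a URL overwrite earlier ones, as with any map write.
func toWebhookMap(whs []webhook) map[string]webhook {
	m := make(map[string]webhook, len(whs))
	for _, wh := range whs {
		if wh.Batch == 0 {
			wh.Batch = 1000
		}
		if wh.Timeout == 0 {
			wh.Timeout = 10
		}
		if wh.RetryCount == 0 {
			wh.RetryCount = 10
		}
		if wh.RetryInterval == 0 {
			wh.RetryInterval = 10
		}
		m[wh.Url] = wh
	}
	return m
}

func main() {
	m := toWebhookMap([]webhook{{Url: "http://example.com/hook", Timeout: 30}})
	fmt.Println(m["http://example.com/hook"].Batch, m["http://example.com/hook"].Timeout)
}
```

One limitation of the zero-value convention: an explicitly configured `Timeout: 0` is indistinguishable from "unset" and is replaced by the default.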

func (s *NotifyTarget) ToUidList() []int64 {
	// Capacity-only make: make([]int64, len(...)) would leave len zeros at the
	// front before the appends below.
	uids := make([]int64, 0, len(s.userMap))
	for uid := range s.userMap {
		uids = append(uids, uid)
	}
@@ -46,14 +46,6 @@ const (
	QUERY_DATA = "query_data"
)

// JoinType describes how series sets from different queries are combined.
type JoinType string

const (
	Left  JoinType = "left"
	Right JoinType = "right"
	Inner JoinType = "inner"
)

func NewAlertRuleWorker(rule *models.AlertRule, datasourceId int64, processor *process.Processor, promClients *prom.PromClientMap, tdengineClients *tdengine.TdengineClientMap, ctx *ctx.Context) *AlertRuleWorker {
	arw := &AlertRuleWorker{
		datasourceId: datasourceId,
@@ -266,12 +258,9 @@ func (arw *AlertRuleWorker) GetTdengineAnomalyPoint(rule *models.AlertRule, dsId
	arw.inhibit = ruleQuery.Inhibit
	if len(ruleQuery.Queries) > 0 {
		seriesStore := make(map[uint64]models.DataResp)
		// Keep the hash-index groups of each query separate.
		seriesTagIndexes := make([]map[uint64][]uint64, 0)

		for _, query := range ruleQuery.Queries {
			seriesTagIndex := make(map[uint64][]uint64)

			arw.processor.Stats.CounterQueryDataTotal.WithLabelValues(fmt.Sprintf("%d", arw.datasourceId)).Inc()
			cli := arw.tdengineClients.GetCli(dsId)
			if cli == nil {

@@ -289,13 +278,13 @@ func (arw *AlertRuleWorker) GetTdengineAnomalyPoint(rule *models.AlertRule, dsId
				arw.processor.Stats.CounterRuleEvalErrorTotal.WithLabelValues(fmt.Sprintf("%v", arw.processor.DatasourceId()), QUERY_DATA).Inc()
				continue
			}

			// This log line matters: it captures the values present at alert evaluation time.
			logger.Debugf("rule_eval rid:%d req:%+v resp:%+v", rule.Id, query, series)
			MakeSeriesMap(series, seriesTagIndex, seriesStore)
			seriesTagIndexes = append(seriesTagIndexes, seriesTagIndex)
		}

		points, recoverPoints = GetAnomalyPoint(rule.Id, ruleQuery, seriesTagIndexes, seriesStore)
	}

	return points, recoverPoints
@@ -369,6 +358,11 @@ func (arw *AlertRuleWorker) GetHostAnomalyPoint(ruleConfig string) []common.Anom
			}
			m["ident"] = target.Ident

			// Attach the business-group label when the group has labeling enabled.
			bg := arw.processor.BusiGroupCache.GetByBusiGroupId(target.GroupId)
			if bg != nil && bg.LabelEnable == 1 {
				m["busigroup"] = bg.LabelValue
			}

			lst = append(lst, common.NewAnomalyPoint(trigger.Type, m, now, float64(now-target.UpdateAt), trigger.Severity))
		}
	case "offset":

@@ -417,6 +411,11 @@ func (arw *AlertRuleWorker) GetHostAnomalyPoint(ruleConfig string) []common.Anom
			}
			m["ident"] = host

			bg := arw.processor.BusiGroupCache.GetByBusiGroupId(target.GroupId)
			if bg != nil && bg.LabelEnable == 1 {
				m["busigroup"] = bg.LabelValue
			}

			lst = append(lst, common.NewAnomalyPoint(trigger.Type, m, now, float64(offset), trigger.Severity))
		}
	case "pct_target_miss":
@@ -445,80 +444,17 @@ func (arw *AlertRuleWorker) GetHostAnomalyPoint(ruleConfig string) []common.Anom
	return lst
}

func GetAnomalyPoint(ruleId int64, ruleQuery models.RuleQuery, seriesTagIndexes []map[uint64][]uint64, seriesStore map[uint64]models.DataResp) ([]common.AnomalyPoint, []common.AnomalyPoint) {
	points := []common.AnomalyPoint{}
	recoverPoints := []common.AnomalyPoint{}

	if len(ruleQuery.Triggers) == 0 {
		return points, recoverPoints
	}

	if len(seriesTagIndexes) == 0 {
		return points, recoverPoints
	}

	for _, trigger := range ruleQuery.Triggers {
		// The keys of seriesTagIndex are only used for grouping; each value
		// holds the hashes of the series in that group.
		seriesTagIndex := make(map[uint64][]uint64)

		if len(trigger.Joins) == 0 {
			// No join conditions: keep the original merging logic.
			last := seriesTagIndexes[0]
			for i := 1; i < len(seriesTagIndexes); i++ {
				last = originalJoin(last, seriesTagIndexes[i])
			}
			seriesTagIndex = last
		} else {
			// Join conditions present: merge the query results one condition at a time.
			if len(seriesTagIndexes) != len(trigger.Joins)+1 {
				logger.Errorf("rule_eval rid:%d queries' count: %d not match join condition's count: %d", ruleId, len(seriesTagIndexes), len(trigger.Joins))
				continue
			}

			last := seriesTagIndexes[0]
			lastRehashed := rehashSet(last, seriesStore, trigger.Joins[0].On)
			for i := range trigger.Joins {
				cur := seriesTagIndexes[i+1]
				switch trigger.Joins[i].JoinType {
				case "original":
					last = originalJoin(last, cur)
				case "none":
					last = noneJoin(last, cur)
				case "cartesian":
					last = cartesianJoin(last, cur)
				case "inner_join":
					curRehashed := rehashSet(cur, seriesStore, trigger.Joins[i].On)
					lastRehashed = onJoin(lastRehashed, curRehashed, Inner)
					last = flatten(lastRehashed)
				case "left_join":
					curRehashed := rehashSet(cur, seriesStore, trigger.Joins[i].On)
					lastRehashed = onJoin(lastRehashed, curRehashed, Left)
					last = flatten(lastRehashed)
				case "right_join":
					curRehashed := rehashSet(cur, seriesStore, trigger.Joins[i].On)
					lastRehashed = onJoin(curRehashed, lastRehashed, Right)
					last = flatten(lastRehashed)
				case "left_exclude":
					curRehashed := rehashSet(cur, seriesStore, trigger.Joins[i].On)
					lastRehashed = exclude(lastRehashed, curRehashed)
					last = flatten(lastRehashed)
				case "right_exclude":
					curRehashed := rehashSet(cur, seriesStore, trigger.Joins[i].On)
					lastRehashed = exclude(curRehashed, lastRehashed)
					last = flatten(lastRehashed)
				default:
					logger.Warningf("rule_eval rid:%d join type:%s not support", ruleId, trigger.Joins[i].JoinType)
				}
			}
			seriesTagIndex = last
		}

		for _, seriesHash := range seriesTagIndex {
			sort.Slice(seriesHash, func(i, j int) bool {
				return seriesHash[i] < seriesHash[j]
			})

			m := make(map[string]interface{})
			var ts int64
			var sample models.DataResp
			var value float64

@@ -583,143 +519,6 @@ func GetAnomalyPoint(ruleId int64, ruleQuery models.RuleQuery, seriesTagIndexes
	return points, recoverPoints
}
// flatten collapses a rehashed two-level grouping back into a flat
// group-id -> series-hashes map, assigning fresh sequential keys.
func flatten(rehashed map[uint64][][]uint64) map[uint64][]uint64 {
	seriesTagIndex := make(map[uint64][]uint64)
	var i uint64
	for _, hashTagIndex := range rehashed {
		for u := range hashTagIndex {
			seriesTagIndex[i] = hashTagIndex[u]
			i++
		}
	}
	return seriesTagIndex
}

// onJoin combines two rehashed sets.
// For example, query A, regrouped by "on data_base":
//   [[A1{data_base=1, table=alert}, A2{data_base=1, table=alert}], [A5{data_base=1, table=board}]]
//   [[A3{data_base=2, table=board}], [A4{data_base=2, table=alert}]]
// and query B, regrouped by "on data_base":
//   [[B1{data_base=1, table=alert}]]
//   [[B2{data_base=2, table=alert}]]
// inner-joined, yield:
//   [[A1{data_base=1, table=alert}, A2{data_base=1, table=alert}, B1{data_base=1, table=alert}], [A5{data_base=1, table=board}, B1{data_base=1, table=alert}]]
//   [[A3{data_base=2, table=board}, B2{data_base=2, table=alert}], [A4{data_base=2, table=alert}, B2{data_base=2, table=alert}]]
func onJoin(reHashTagIndex1 map[uint64][][]uint64, reHashTagIndex2 map[uint64][][]uint64, joinType JoinType) map[uint64][][]uint64 {
	reHashTagIndex := make(map[uint64][][]uint64)
	for rehash := range reHashTagIndex1 {
		if _, ok := reHashTagIndex2[rehash]; ok {
			// Records sharing a rehash value are merged pairwise.
			for i1 := range reHashTagIndex1[rehash] {
				for i2 := range reHashTagIndex2[rehash] {
					reHashTagIndex[rehash] = append(reHashTagIndex[rehash], mergeNewArray(reHashTagIndex1[rehash][i1], reHashTagIndex2[rehash][i2]))
				}
			}
		} else {
			// For non-inner joins, keep the unmatched records from reHashTagIndex1.
			if joinType != Inner {
				reHashTagIndex[rehash] = reHashTagIndex1[rehash]
			}
		}
	}
	return reHashTagIndex
}
// rehashSet regroups series under a new hash.
// For example, the current query A has five records:
// A1{data_base=1, table=alert}
// A2{data_base=1, table=alert}
// A3{data_base=2, table=board}
// A4{data_base=2, table=alert}
// A5{data_base=1, table=board}
// After preprocessing (grouping by series, done before entering GetAnomalyPoint), they fall into 4 groups:
// [A1{data_base=1, table=alert},A2{data_base=1, table=alert}]
// [A3{data_base=2, table=board}]
// [A4{data_base=2, table=alert}]
// [A5{data_base=1, table=board}]
// If rehashSet regroups on data_base, the result is a two-dimensional array keyed by the rehash value;
// records sharing a rehash value are not fully merged:
// [[A1{data_base=1, table=alert},A2{data_base=1, table=alert}],[A5{data_base=1, table=board}]]
// [[A3{data_base=2, table=board}],[A4{data_base=2, table=alert}]]
func rehashSet(seriesTagIndex1 map[uint64][]uint64, seriesStore map[uint64]models.DataResp, on []string) map[uint64][][]uint64 {
	reHashTagIndex := make(map[uint64][][]uint64)
	for _, seriesHashes := range seriesTagIndex1 {
		if len(seriesHashes) == 0 {
			continue
		}
		series, exists := seriesStore[seriesHashes[0]]
		if !exists {
			continue
		}
		rehash := hash.GetTargetTagHash(series.Metric, on)
		if _, ok := reHashTagIndex[rehash]; !ok {
			reHashTagIndex[rehash] = make([][]uint64, 0)
		}
		reHashTagIndex[rehash] = append(reHashTagIndex[rehash], seriesHashes)
	}
	return reHashTagIndex
}

// cartesianJoin: Cartesian product, merging the query results pairwise
func cartesianJoin(seriesTagIndex1 map[uint64][]uint64, seriesTagIndex2 map[uint64][]uint64) map[uint64][]uint64 {
	var index uint64
	seriesTagIndex := make(map[uint64][]uint64)
	for _, seriesHashes1 := range seriesTagIndex1 {
		for _, seriesHashes2 := range seriesTagIndex2 {
			seriesTagIndex[index] = mergeNewArray(seriesHashes1, seriesHashes2)
			index++
		}
	}
	return seriesTagIndex
}

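The Cartesian pairing above yields exactly len(a)·len(b) merged groups. A minimal self-contained sketch (the name `cartesianSketch` is hypothetical; the patch's `mergeNewArray` helper is inlined here as an append-copy):

```go
package main

import "fmt"

// cartesianSketch merges every group from the first index with every group
// from the second, under fresh sequential keys, so the result holds
// len(a)*len(b) groups.
func cartesianSketch(a, b map[uint64][]uint64) map[uint64][]uint64 {
	out := make(map[uint64][]uint64)
	var idx uint64
	for _, g1 := range a {
		for _, g2 := range b {
			// copy g1 before appending so the input slices are not mutated
			merged := append(append([]uint64{}, g1...), g2...)
			out[idx] = merged
			idx++
		}
	}
	return out
}

func main() {
	a := map[uint64][]uint64{1: {1, 2}, 2: {3}}
	b := map[uint64][]uint64{7: {9}}
	fmt.Println(len(cartesianSketch(a, b))) // 2 * 1 = 2 groups: prints 2
}
```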
// noneJoin concatenates the two sets directly
func noneJoin(seriesTagIndex1 map[uint64][]uint64, seriesTagIndex2 map[uint64][]uint64) map[uint64][]uint64 {
	seriesTagIndex := make(map[uint64][]uint64)
	var index uint64
	for _, seriesHashes := range seriesTagIndex1 {
		seriesTagIndex[index] = seriesHashes
		index++
	}
	for _, seriesHashes := range seriesTagIndex2 {
		seriesTagIndex[index] = seriesHashes
		index++
	}
	return seriesTagIndex
}

// originalJoin is the original grouping scheme: entries with the same key
// (i.e. all labels identical) fall into one group
func originalJoin(seriesTagIndex1 map[uint64][]uint64, seriesTagIndex2 map[uint64][]uint64) map[uint64][]uint64 {
	seriesTagIndex := make(map[uint64][]uint64)
	for tagHash, seriesHashes := range seriesTagIndex1 {
		if _, ok := seriesTagIndex[tagHash]; !ok {
			seriesTagIndex[tagHash] = mergeNewArray(seriesHashes)
		} else {
			seriesTagIndex[tagHash] = append(seriesTagIndex[tagHash], seriesHashes...)
		}
	}

	for tagHash, seriesHashes := range seriesTagIndex2 {
		if _, ok := seriesTagIndex[tagHash]; !ok {
			seriesTagIndex[tagHash] = mergeNewArray(seriesHashes)
		} else {
			seriesTagIndex[tagHash] = append(seriesTagIndex[tagHash], seriesHashes...)
		}
	}

	return seriesTagIndex
}

// exclude (left exclusion) keeps the records that are in reHashTagIndex1
// but not in reHashTagIndex2
func exclude(reHashTagIndex1 map[uint64][][]uint64, reHashTagIndex2 map[uint64][][]uint64) map[uint64][][]uint64 {
	reHashTagIndex := make(map[uint64][][]uint64)
	for rehash, _ := range reHashTagIndex1 {
		if _, ok := reHashTagIndex2[rehash]; !ok {
			reHashTagIndex[rehash] = reHashTagIndex1[rehash]
		}
	}
	return reHashTagIndex
}

func MakeSeriesMap(series []models.DataResp, seriesTagIndex map[uint64][]uint64, seriesStore map[uint64]models.DataResp) {
	for i := 0; i < len(series); i++ {
		serieHash := hash.GetHash(series[i].Metric, series[i].Ref)
@@ -733,11 +532,3 @@ func MakeSeriesMap(series []models.DataResp, seriesTagIndex map[uint64][]uint64,
		seriesTagIndex[tagHash] = append(seriesTagIndex[tagHash], serieHash)
	}
}

func mergeNewArray(arg ...[]uint64) []uint64 {
	res := make([]uint64, 0)
	for _, a := range arg {
		res = append(res, a...)
	}
	return res
}

@@ -1,271 +0,0 @@
package eval

import (
	"reflect"
	"testing"

	"golang.org/x/exp/slices"
)

var (
	reHashTagIndex1 = map[uint64][][]uint64{
		1: {
			{1, 2}, {3, 4},
		},
		2: {
			{5, 6}, {7, 8},
		},
	}
	reHashTagIndex2 = map[uint64][][]uint64{
		1: {
			{9, 10}, {11, 12},
		},
		3: {
			{13, 14}, {15, 16},
		},
	}
	seriesTagIndex1 = map[uint64][]uint64{
		1: {1, 2, 3, 4},
		2: {5, 6, 7, 8},
	}
	seriesTagIndex2 = map[uint64][]uint64{
		1: {9, 10, 11, 12},
		3: {13, 14, 15, 16},
	}
)

func Test_originalJoin(t *testing.T) {
	type args struct {
		seriesTagIndex1 map[uint64][]uint64
		seriesTagIndex2 map[uint64][]uint64
	}
	tests := []struct {
		name string
		args args
		want map[uint64][]uint64
	}{
		{
			name: "original join",
			args: args{
				seriesTagIndex1: map[uint64][]uint64{
					1: {1, 2, 3, 4},
					2: {5, 6, 7, 8},
				},
				seriesTagIndex2: map[uint64][]uint64{
					1: {9, 10, 11, 12},
					3: {13, 14, 15, 16},
				},
			},
			want: map[uint64][]uint64{
				1: {1, 2, 3, 4, 9, 10, 11, 12},
				2: {5, 6, 7, 8},
				3: {13, 14, 15, 16},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := originalJoin(tt.args.seriesTagIndex1, tt.args.seriesTagIndex2); !reflect.DeepEqual(got, tt.want) {
				t.Errorf("originalJoin() = %v, want %v", got, tt.want)
			}
		})
	}
}

func Test_exclude(t *testing.T) {
	type args struct {
		reHashTagIndex1 map[uint64][][]uint64
		reHashTagIndex2 map[uint64][][]uint64
	}
	tests := []struct {
		name string
		args args
		want map[uint64][]uint64
	}{
		{
			name: "left exclude",
			args: args{
				reHashTagIndex1: reHashTagIndex1,
				reHashTagIndex2: reHashTagIndex2,
			},
			want: map[uint64][]uint64{
				0: {5, 6},
				1: {7, 8},
			},
		},
		{
			name: "right exclude",
			args: args{
				reHashTagIndex1: reHashTagIndex2,
				reHashTagIndex2: reHashTagIndex1,
			},
			want: map[uint64][]uint64{
				3: {13, 14},
				4: {15, 16},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := exclude(tt.args.reHashTagIndex1, tt.args.reHashTagIndex2); !allValueDeepEqual(flatten(got), tt.want) {
				t.Errorf("exclude() = %v, want %v", got, tt.want)
			}
		})
	}
}

func Test_noneJoin(t *testing.T) {
	type args struct {
		seriesTagIndex1 map[uint64][]uint64
		seriesTagIndex2 map[uint64][]uint64
	}
	tests := []struct {
		name string
		args args
		want map[uint64][]uint64
	}{
		{
			name: "none join, direct splicing",
			args: args{
				seriesTagIndex1: seriesTagIndex1,
				seriesTagIndex2: seriesTagIndex2,
			},
			want: map[uint64][]uint64{
				0: {1, 2, 3, 4},
				1: {5, 6, 7, 8},
				2: {9, 10, 11, 12},
				3: {13, 14, 15, 16},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := noneJoin(tt.args.seriesTagIndex1, tt.args.seriesTagIndex2); !allValueDeepEqual(got, tt.want) {
				t.Errorf("noneJoin() = %v, want %v", got, tt.want)
			}
		})
	}
}

func Test_cartesianJoin(t *testing.T) {
	type args struct {
		seriesTagIndex1 map[uint64][]uint64
		seriesTagIndex2 map[uint64][]uint64
	}
	tests := []struct {
		name string
		args args
		want map[uint64][]uint64
	}{
		{
			name: "cartesian join",
			args: args{
				seriesTagIndex1: seriesTagIndex1,
				seriesTagIndex2: seriesTagIndex2,
			},
			want: map[uint64][]uint64{
				0: {1, 2, 3, 4, 9, 10, 11, 12},
				1: {5, 6, 7, 8, 9, 10, 11, 12},
				2: {5, 6, 7, 8, 13, 14, 15, 16},
				3: {1, 2, 3, 4, 13, 14, 15, 16},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := cartesianJoin(tt.args.seriesTagIndex1, tt.args.seriesTagIndex2); !allValueDeepEqual(got, tt.want) {
				t.Errorf("cartesianJoin() = %v, want %v", got, tt.want)
			}
		})
	}
}

func Test_onJoin(t *testing.T) {
	type args struct {
		reHashTagIndex1 map[uint64][][]uint64
		reHashTagIndex2 map[uint64][][]uint64
		joinType        JoinType
	}
	tests := []struct {
		name string
		args args
		want map[uint64][]uint64
	}{
		{
			name: "left join",
			args: args{
				reHashTagIndex1: reHashTagIndex1,
				reHashTagIndex2: reHashTagIndex2,
				joinType:        Left,
			},
			want: map[uint64][]uint64{
				1: {1, 2, 9, 10},
				2: {3, 4, 9, 10},
				3: {1, 2, 11, 12},
				4: {3, 4, 11, 12},
				5: {5, 6},
				6: {7, 8},
			},
		},
		{
			name: "right join",
			args: args{
				reHashTagIndex1: reHashTagIndex2,
				reHashTagIndex2: reHashTagIndex1,
				joinType:        Right,
			},
			want: map[uint64][]uint64{
				1: {1, 2, 9, 10},
				2: {3, 4, 9, 10},
				3: {1, 2, 11, 12},
				4: {3, 4, 11, 12},
				5: {13, 14},
				6: {15, 16},
			},
		},

		{
			name: "inner join",
			args: args{
				reHashTagIndex1: reHashTagIndex1,
				reHashTagIndex2: reHashTagIndex2,
				joinType:        Inner,
			},
			want: map[uint64][]uint64{
				1: {1, 2, 9, 10},
				2: {3, 4, 9, 10},
				3: {1, 2, 11, 12},
				4: {3, 4, 11, 12},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := onJoin(tt.args.reHashTagIndex1, tt.args.reHashTagIndex2, tt.args.joinType); !allValueDeepEqual(flatten(got), tt.want) {
				t.Errorf("onJoin() = %v, want %v", got, tt.want)
			}
		})
	}
}

// allValueDeepEqual reports whether two maps hold the same values, ignoring keys
func allValueDeepEqual(got, want map[uint64][]uint64) bool {
	if len(got) != len(want) {
		return false
	}
	for _, v1 := range got {
		curEqual := false
		slices.Sort(v1)
		for _, v2 := range want {
			slices.Sort(v2)
			if reflect.DeepEqual(v1, v2) {
				curEqual = true
				break
			}
		}
		if !curEqual {
			return false
		}
	}
	return true
}
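The key-insensitive comparison above matters because flatten assigns sequential keys in random map-iteration order. A self-contained sketch of the same idea (the name `sameValues` is hypothetical; stdlib `sort` is used in place of `golang.org/x/exp/slices`):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// sameValues treats two maps as equal when every value slice in one has a
// sorted match somewhere in the other, regardless of key.
func sameValues(got, want map[uint64][]uint64) bool {
	if len(got) != len(want) {
		return false
	}
	for _, v1 := range got {
		sort.Slice(v1, func(i, j int) bool { return v1[i] < v1[j] })
		matched := false
		for _, v2 := range want {
			sort.Slice(v2, func(i, j int) bool { return v2[i] < v2[j] })
			if reflect.DeepEqual(v1, v2) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	a := map[uint64][]uint64{0: {2, 1}, 1: {3}}
	b := map[uint64][]uint64{5: {3}, 9: {1, 2}}
	fmt.Println(sameValues(a, b)) // same value sets under different keys: prints true
}
```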
@@ -114,7 +114,7 @@ func BgNotMatchMuteStrategy(rule *models.AlertRule, event *models.AlertCurEvent,
	target, exists := targetCache.Get(ident)
	// for alert events that carry an ident, check whether the ident's bg matches the rule's bg
	// if the rule is scoped to its own BG only, machines in other BGs must not fire alerts from it
	if exists && !target.MatchGroupId(rule.GroupId) {
	if exists && target.GroupId != rule.GroupId {
		logger.Debugf("[%s] mute: rule_eval:%d cluster:%s", "BgNotMatchMuteStrategy", rule.Id, event.Cluster)
		return true
	}

@@ -53,11 +53,10 @@ type Processor struct {
	datasourceId int64
	EngineName   string

	rule                 *models.AlertRule
	fires                *AlertCurEventMap
	pendings             *AlertCurEventMap
	pendingsUseByRecover *AlertCurEventMap
	inhibit              bool
	rule     *models.AlertRule
	fires    *AlertCurEventMap
	pendings *AlertCurEventMap
	inhibit  bool

	tagsMap map[string]string
	tagsArr []string
@@ -212,14 +211,6 @@ func (p *Processor) BuildEvent(anomalyPoint common.AnomalyPoint, from string, no
	event.ExtraConfig = p.rule.ExtraConfigJSON
	event.PromQl = anomalyPoint.Query

	if p.target != "" {
		if pt, exist := p.TargetCache.Get(p.target); exist {
			event.Target = pt
		} else {
			logger.Infof("Target[ident: %s] doesn't exist in cache.", p.target)
		}
	}

	if event.TriggerValues != "" && strings.Count(event.TriggerValues, "$") > 1 {
		// TriggerValues holds multiple variables; put them all into TriggerValue
		event.TriggerValue = event.TriggerValues
|
||||
return
|
||||
}
|
||||
|
||||
if len(rule.EventRelabelConfig) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
// need to keep the original label
|
||||
event.OriginalTags = event.Tags
|
||||
event.OriginalTagsJSON = make([]string, len(event.TagsJSON))
|
||||
|
||||
labels := make([]prompb.Label, len(event.TagsJSON))
|
||||
for i, tag := range event.TagsJSON {
|
||||
label := strings.SplitN(tag, "=", 2)
|
||||
label := strings.Split(tag, "=")
|
||||
if len(label) != 2 {
|
||||
logger.Errorf("event%+v relabel: the label length is not 2:%v", event, label)
|
||||
continue
|
||||
}
|
||||
event.OriginalTagsJSON[i] = tag
|
||||
labels[i] = prompb.Label{Name: label[0], Value: label[1]}
|
||||
}
|
||||
@@ -351,22 +342,11 @@ func (p *Processor) RecoverSingle(hash string, now int64, value *string, values
|
||||
if !has {
|
||||
return
|
||||
}
|
||||
|
||||
// 如果配置了留观时长,就不能立马恢复了
|
||||
if cachedRule.RecoverDuration > 0 {
|
||||
lastPendingEvent, has := p.pendingsUseByRecover.Get(hash)
|
||||
if !has {
|
||||
// 说明没有产生过异常点,就不需要恢复了
|
||||
logger.Debugf("rule_eval:%s event:%v do not has pending event, not recover", p.Key(), event)
|
||||
return
|
||||
}
|
||||
|
||||
if now-lastPendingEvent.LastEvalTime < cachedRule.RecoverDuration {
|
||||
logger.Debugf("rule_eval:%s event:%v not recover", p.Key(), event)
|
||||
return
|
||||
}
|
||||
if cachedRule.RecoverDuration > 0 && now-event.LastEvalTime < cachedRule.RecoverDuration {
|
||||
logger.Debugf("rule_eval:%s event:%v not recover", p.Key(), event)
|
||||
return
|
||||
}
|
||||
|
||||
if value != nil {
|
||||
event.TriggerValue = *value
|
||||
if len(values) > 0 {
|
||||
@@ -378,7 +358,6 @@ func (p *Processor) RecoverSingle(hash string, now int64, value *string, values
	// I really cannot tell whether prom has data that simply did not cross the threshold,
	// or prom actually lost some points and has nothing to return; awkward
	p.fires.Delete(hash)
	p.pendings.Delete(hash)
	p.pendingsUseByRecover.Delete(hash)

	// the recovery may have happened because the promql was adjusted, so the event should
	// carry the latest promql, otherwise users get confused
	// of course, any field of the rule may have changed, so refresh them all
@@ -398,13 +377,6 @@ func (p *Processor) handleEvent(events []*models.AlertCurEvent) {
		if event == nil {
			continue
		}

		if _, has := p.pendingsUseByRecover.Get(event.Hash); has {
			p.pendingsUseByRecover.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
		} else {
			p.pendingsUseByRecover.Set(event.Hash, event)
		}

		if p.rule.PromForDuration == 0 {
			fireEvents = append(fireEvents, event)
			if severity > event.Severity {
@@ -413,7 +385,7 @@ func (p *Processor) handleEvent(events []*models.AlertCurEvent) {
			continue
		}

		var preTriggerTime int64 // trigger time of the first pending event
		var preTriggerTime int64
		preEvent, has := p.pendings.Get(event.Hash)
		if has {
			p.pendings.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
@@ -505,7 +477,6 @@ func (p *Processor) pushEventToQueue(e *models.AlertCurEvent) {

func (p *Processor) RecoverAlertCurEventFromDb() {
	p.pendings = NewAlertCurEventMap(nil)
	p.pendingsUseByRecover = NewAlertCurEventMap(nil)

	curEvents, err := models.AlertCurEventGetByRuleIdAndDsId(p.ctx, p.rule.Id, p.datasourceId)
	if err != nil {
@@ -516,7 +487,6 @@ func (p *Processor) RecoverAlertCurEventFromDb() {
	}

	fireMap := make(map[string]*models.AlertCurEvent)
	pendingsUseByRecoverMap := make(map[string]*models.AlertCurEvent)
	for _, event := range curEvents {
		if event.Cate == models.HOST {
			target, exists := p.TargetCache.Get(event.TargetIdent)
@@ -527,20 +497,10 @@ func (p *Processor) RecoverAlertCurEventFromDb() {
		}

		event.DB2Mem()
		target, exists := p.TargetCache.Get(event.TargetIdent)
		if exists {
			event.Target = target
		}

		fireMap[event.Hash] = event
		e := *event
		pendingsUseByRecoverMap[event.Hash] = &e
	}

	p.fires = NewAlertCurEventMap(fireMap)

	// after the alert rule is modified, or after a process restart, pendingsUseByRecover must be reloaded
	p.pendingsUseByRecover = NewAlertCurEventMap(pendingsUseByRecoverMap)
}

func (p *Processor) fillTags(anomalyPoint common.AnomalyPoint) {
@@ -616,7 +576,6 @@ func (p *Processor) mayHandleGroup() {
func (p *Processor) DeleteProcessEvent(hash string) {
	p.fires.Delete(hash)
	p.pendings.Delete(hash)
	p.pendingsUseByRecover.Delete(hash)
}

func labelMapToArr(m map[string]string) []string {

@@ -56,12 +56,11 @@ func (rrc *RecordRuleContext) Key() string {
}

func (rrc *RecordRuleContext) Hash() string {
	return str.MD5(fmt.Sprintf("%d_%s_%s_%d_%s",
	return str.MD5(fmt.Sprintf("%d_%s_%s_%d",
		rrc.rule.Id,
		rrc.rule.CronPattern,
		rrc.rule.PromQl,
		rrc.datasourceId,
		rrc.rule.AppendTags,
	))
}

@@ -34,7 +34,7 @@ func (rt *Router) pushEventToQueue(c *gin.Context) {
			continue
		}

		arr := strings.SplitN(pair, "=", 2)
		arr := strings.Split(pair, "=")
		if len(arr) != 2 {
			continue
		}

@@ -125,7 +125,7 @@ func (c *DefaultCallBacker) CallBack(ctx CallBackContext) {
			Batch: 1000,
		}

		PushCallbackEvent(ctx.Ctx, webhookConf, event, ctx.Stats)
		PushCallbackEvent(webhookConf, event, ctx.Stats)
		return
	}

@@ -141,46 +141,16 @@ func (c *DefaultCallBacker) CallBack(ctx CallBackContext) {
	}
}

func doSendAndRecord(ctx *ctx.Context, url, token string, body interface{}, channel string,
	stats *astats.Stats, event *models.AlertCurEvent) {
	res, err := doSend(url, body, channel, stats)
	NotifyRecord(ctx, event, channel, token, res, err)
}

func NotifyRecord(ctx *ctx.Context, evt *models.AlertCurEvent, channel, target, res string, err error) {
	noti := models.NewNotificationRecord(evt, channel, target)
	if err != nil {
		noti.SetStatus(models.NotiStatusFailure)
		noti.SetDetails(err.Error())
	} else if res != "" {
		noti.SetDetails(string(res))
	}

	if !ctx.IsCenter {
		_, err := poster.PostByUrlsWithResp[int64](ctx, "/v1/n9e/notify-record", noti)
		if err != nil {
			logger.Errorf("add noti:%v failed, err: %v", noti, err)
		}
		return
	}

	if err := noti.Add(ctx); err != nil {
		logger.Errorf("add noti:%v failed, err: %v", noti, err)
	}
}

func doSend(url string, body interface{}, channel string, stats *astats.Stats) (string, error) {
func doSend(url string, body interface{}, channel string, stats *astats.Stats) {
	stats.AlertNotifyTotal.WithLabelValues(channel).Inc()

	res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
	if err != nil {
		logger.Errorf("%s_sender: result=fail url=%s code=%d error=%v req:%v response=%s", channel, url, code, err, body, string(res))
		stats.AlertNotifyErrorTotal.WithLabelValues(channel).Inc()
		return "", err
	} else {
		logger.Infof("%s_sender: result=succ url=%s code=%d req:%v response=%s", channel, url, code, body, string(res))
	}

	logger.Infof("%s_sender: result=succ url=%s code=%d req:%v response=%s", channel, url, code, body, string(res))
	return string(res), nil
}

type TaskCreateReply struct {
@@ -188,7 +158,7 @@ type TaskCreateReply struct {
	Dat int64 `json:"dat"` // task.id
}

func PushCallbackEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
func PushCallbackEvent(webhook *models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
	CallbackEventQueueLock.RLock()
	queue := CallbackEventQueue[webhook.Url]
	CallbackEventQueueLock.RUnlock()
@@ -203,7 +173,7 @@ func PushCallbackEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.
		CallbackEventQueue[webhook.Url] = queue
		CallbackEventQueueLock.Unlock()

		StartConsumer(ctx, queue, webhook.Batch, webhook, stats)
		StartConsumer(queue, webhook.Batch, webhook, stats)
	}

	succ := queue.list.PushFront(event)

@@ -1,10 +1,9 @@
package sender

import (
	"github.com/ccfos/nightingale/v6/models"
	"html/template"
	"strings"

	"github.com/ccfos/nightingale/v6/models"
)

type dingtalkMarkdown struct {
@@ -36,13 +35,13 @@ func (ds *DingtalkSender) Send(ctx MessageContext) {
		return
	}

	urls, ats, tokens := ds.extract(ctx.Users)
	urls, ats := ds.extract(ctx.Users)
	if len(urls) == 0 {
		return
	}
	message := BuildTplMessage(models.Dingtalk, ds.tpl, ctx.Events)

	for i, url := range urls {
	for _, url := range urls {
		var body dingtalk
		// NoAt in url
		if strings.Contains(url, "noat=1") {
@@ -67,7 +66,7 @@ func (ds *DingtalkSender) Send(ctx MessageContext) {
			}
		}

		doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Dingtalk, ctx.Stats, ctx.Events[0])
		doSend(url, body, models.Dingtalk, ctx.Stats)
	}
}

@@ -97,17 +96,15 @@ func (ds *DingtalkSender) CallBack(ctx CallBackContext) {
		body.Markdown.Text = message
	}

	doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body,
		"callback", ctx.Stats, ctx.Events[0])
	doSend(ctx.CallBackURL, body, models.Dingtalk, ctx.Stats)

	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}

// extract urls and ats from Users
func (ds *DingtalkSender) extract(users []*models.User) ([]string, []string, []string) {
func (ds *DingtalkSender) extract(users []*models.User) ([]string, []string) {
	urls := make([]string, 0, len(users))
	ats := make([]string, 0, len(users))
	tokens := make([]string, 0, len(users))

	for _, user := range users {
		if user.Phone != "" {
@@ -119,8 +116,7 @@ func (ds *DingtalkSender) extract(users []*models.User) ([]string, []string, []s
				url = "https://oapi.dingtalk.com/robot/send?access_token=" + token
			}
			urls = append(urls, url)
			tokens = append(tokens, token)
		}
	}
	return urls, ats, tokens
	return urls, ats
}

@@ -9,14 +9,13 @@ import (
	"github.com/ccfos/nightingale/v6/alert/aconf"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"

	"gopkg.in/gomail.v2"
)

var mailch chan *EmailContext
var mailch chan *gomail.Message

type EmailSender struct {
	subjectTpl *template.Template
@@ -24,11 +23,6 @@ type EmailSender struct {
	smtp aconf.SMTPConfig
}

type EmailContext struct {
	event *models.AlertCurEvent
	mail  *gomail.Message
}

func (es *EmailSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
		return
@@ -42,7 +36,7 @@ func (es *EmailSender) Send(ctx MessageContext) {
		subject = ctx.Events[0].RuleName
	}
	content := BuildTplMessage(models.Email, es.contentTpl, ctx.Events)
	es.WriteEmail(subject, content, tos, ctx.Events[0])
	es.WriteEmail(subject, content, tos)

	ctx.Stats.AlertNotifyTotal.WithLabelValues(models.Email).Add(float64(len(tos)))
}
@@ -79,8 +73,7 @@ func SendEmail(subject, content string, tos []string, stmp aconf.SMTPConfig) err
	return nil
}

func (es *EmailSender) WriteEmail(subject, content string, tos []string,
	event *models.AlertCurEvent) {
func (es *EmailSender) WriteEmail(subject, content string, tos []string) {
	m := gomail.NewMessage()

	m.SetHeader("From", es.smtp.From)
@@ -88,7 +81,7 @@ func (es *EmailSender) WriteEmail(subject, content string, tos []string,
	m.SetHeader("Subject", subject)
	m.SetBody("text/html", content)

	mailch <- &EmailContext{event, m}
	mailch <- m
}

func dialSmtp(d *gomail.Dialer) gomail.SendCloser {
@@ -111,22 +104,22 @@ func dialSmtp(d *gomail.Dialer) gomail.SendCloser {

var mailQuit = make(chan struct{})

func RestartEmailSender(ctx *ctx.Context, smtp aconf.SMTPConfig) {
func RestartEmailSender(smtp aconf.SMTPConfig) {
	// Notify internal start exit
	mailQuit <- struct{}{}
	startEmailSender(ctx, smtp)
	startEmailSender(smtp)
}

var smtpConfig aconf.SMTPConfig

func InitEmailSender(ctx *ctx.Context, ncc *memsto.NotifyConfigCacheType) {
	mailch = make(chan *EmailContext, 100000)
	go updateSmtp(ctx, ncc)
func InitEmailSender(ncc *memsto.NotifyConfigCacheType) {
	mailch = make(chan *gomail.Message, 100000)
	go updateSmtp(ncc)
	smtpConfig = ncc.GetSMTP()
	startEmailSender(ctx, smtpConfig)
	startEmailSender(smtpConfig)
}

func updateSmtp(ctx *ctx.Context, ncc *memsto.NotifyConfigCacheType) {
func updateSmtp(ncc *memsto.NotifyConfigCacheType) {
	for {
		time.Sleep(1 * time.Minute)
		smtp := ncc.GetSMTP()
@@ -134,12 +127,12 @@ func updateSmtp(ctx *ctx.Context, ncc *memsto.NotifyConfigCacheType) {
			smtpConfig.Pass != smtp.Pass || smtpConfig.User != smtp.User || smtpConfig.Port != smtp.Port ||
			smtpConfig.InsecureSkipVerify != smtp.InsecureSkipVerify { //diff
			smtpConfig = smtp
			RestartEmailSender(ctx, smtp)
			RestartEmailSender(smtp)
		}
	}
}

func startEmailSender(ctx *ctx.Context, smtp aconf.SMTPConfig) {
func startEmailSender(smtp aconf.SMTPConfig) {
	conf := smtp
	if conf.Host == "" || conf.Port == 0 {
		logger.Warning("SMTP configurations invalid")
@@ -174,8 +167,7 @@ func startEmailSender(ctx *ctx.Context, smtp aconf.SMTPConfig) {
				}
				open = true
			}
			var err error
			if err = gomail.Send(s, m.mail); err != nil {
			if err := gomail.Send(s, m); err != nil {
				logger.Errorf("email_sender: failed to send: %s", err)

				// close and retry
@@ -192,16 +184,11 @@ func startEmailSender(ctx *ctx.Context, smtp aconf.SMTPConfig) {
				}
				open = true

				if err = gomail.Send(s, m.mail); err != nil {
				if err := gomail.Send(s, m); err != nil {
					logger.Errorf("email_sender: failed to retry send: %s", err)
				}
			} else {
				logger.Infof("email_sender: result=succ subject=%v to=%v",
					m.mail.GetHeader("Subject"), m.mail.GetHeader("To"))
			}

			for _, to := range m.mail.GetHeader("To") {
				NotifyRecord(ctx, m.event, models.Email, to, "", err)
				logger.Infof("email_sender: result=succ subject=%v to=%v", m.GetHeader("Subject"), m.GetHeader("To"))
			}

			size++

@@ -54,8 +54,7 @@ func (fs *FeishuSender) CallBack(ctx CallBackContext) {
		},
	}

	doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
		ctx.Stats, ctx.Events[0])
	doSend(ctx.CallBackURL, body, models.Feishu, ctx.Stats)
	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}

@@ -63,9 +62,9 @@ func (fs *FeishuSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
		return
	}
	urls, ats, tokens := fs.extract(ctx.Users)
	urls, ats := fs.extract(ctx.Users)
	message := BuildTplMessage(models.Feishu, fs.tpl, ctx.Events)
	for i, url := range urls {
	for _, url := range urls {
		body := feishu{
			Msgtype: "text",
			Content: feishuContent{
@@ -78,14 +77,13 @@ func (fs *FeishuSender) Send(ctx MessageContext) {
				IsAtAll: false,
			}
		}
		doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Feishu, ctx.Stats, ctx.Events[0])
		doSend(url, body, models.Feishu, ctx.Stats)
	}
}

func (fs *FeishuSender) extract(users []*models.User) ([]string, []string, []string) {
func (fs *FeishuSender) extract(users []*models.User) ([]string, []string) {
	urls := make([]string, 0, len(users))
	ats := make([]string, 0, len(users))
	tokens := make([]string, 0, len(users))

	for _, user := range users {
		if user.Phone != "" {
@@ -97,8 +95,7 @@ func (fs *FeishuSender) extract(users []*models.User) ([]string, []string, []str
				url = "https://open.feishu.cn/open-apis/bot/v2/hook/" + token
			}
			urls = append(urls, url)
			tokens = append(tokens, token)
		}
	}
	return urls, ats, tokens
	return urls, ats
}

@@ -56,8 +56,8 @@ const (
|
||||
Triggered = "triggered"
|
||||
)
|
||||
|
||||
func createFeishuCardBody() feishuCard {
|
||||
return feishuCard{
|
||||
var (
|
||||
body = feishuCard{
|
||||
feishu: feishu{Msgtype: "interactive"},
|
||||
Card: Cards{
|
||||
Config: Conf{
|
||||
@@ -90,7 +90,7 @@ func createFeishuCardBody() feishuCard {
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
func (fs *FeishuCardSender) CallBack(ctx CallBackContext) {
|
||||
if len(ctx.Events) == 0 || len(ctx.CallBackURL) == 0 {
|
||||
@@ -121,7 +121,6 @@ func (fs *FeishuCardSender) CallBack(ctx CallBackContext) {
|
||||
}
|
||||
|
||||
SendTitle := fmt.Sprintf("🔔 %s", ctx.Events[0].RuleName)
|
||||
body := createFeishuCardBody()
|
||||
body.Card.Header.Title.Content = SendTitle
|
||||
body.Card.Header.Template = color
|
||||
body.Card.Elements[0].Text.Content = message
|
||||
@@ -135,15 +134,14 @@ func (fs *FeishuCardSender) CallBack(ctx CallBackContext) {
	}
	parsedURL.RawQuery = ""

-	doSendAndRecord(ctx.Ctx, parsedURL.String(), parsedURL.String(), body, "callback",
-		ctx.Stats, ctx.Events[0])
+	doSend(parsedURL.String(), body, models.FeishuCard, ctx.Stats)
}

func (fs *FeishuCardSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
		return
	}
-	urls, tokens := fs.extract(ctx.Users)
+	urls, _ := fs.extract(ctx.Users)
	message := BuildTplMessage(models.FeishuCard, fs.tpl, ctx.Events)
	color := "red"
	lowerUnicode := strings.ToLower(message)
@@ -154,20 +152,18 @@ func (fs *FeishuCardSender) Send(ctx MessageContext) {
	}

	SendTitle := fmt.Sprintf("🔔 %s", ctx.Events[0].RuleName)
-	body := createFeishuCardBody()
	body.Card.Header.Title.Content = SendTitle
	body.Card.Header.Template = color
	body.Card.Elements[0].Text.Content = message
	body.Card.Elements[2].Elements[0].Content = SendTitle
-	for i, url := range urls {
-		doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.FeishuCard,
-			ctx.Stats, ctx.Events[0])
+	for _, url := range urls {
+		doSend(url, body, models.FeishuCard, ctx.Stats)
	}
}

func (fs *FeishuCardSender) extract(users []*models.User) ([]string, []string) {
	urls := make([]string, 0, len(users))
-	tokens := make([]string, 0, len(users))
	ats := make([]string, 0)
	for i := range users {
		if token, has := users[i].ExtractToken(models.FeishuCard); has {
			url := token
@@ -175,8 +171,7 @@ func (fs *FeishuCardSender) extract(users []*models.User) ([]string, []string) {
				url = "https://open.feishu.cn/open-apis/bot/v2/hook/" + strings.TrimSpace(token)
			}
			urls = append(urls, url)
-			tokens = append(tokens, token)
		}
	}
-	return urls, tokens
+	return urls, ats
}
@@ -77,20 +77,15 @@ func (c *IbexCallBacker) handleIbex(ctx *ctx.Context, url string, event *models.
		return
	}

-	CallIbex(ctx, id, host, c.taskTplCache, c.targetCache, c.userCache, event)
-}
-
-func CallIbex(ctx *ctx.Context, id int64, host string,
-	taskTplCache *memsto.TaskTplCache, targetCache *memsto.TargetCacheType,
-	userCache *memsto.UserCacheType, event *models.AlertCurEvent) {
-	tpl := taskTplCache.Get(id)
+	tpl := c.taskTplCache.Get(id)
	if tpl == nil {
		logger.Errorf("event_callback_ibex: no such tpl(%d)", id)
		return
	}

	// check perm
	// verify permission on the (tpl.GroupId, host, account) triple
-	can, err := canDoIbex(tpl.UpdateBy, tpl, host, targetCache, userCache)
+	can, err := canDoIbex(tpl.UpdateBy, tpl, host, c.targetCache, c.userCache)
	if err != nil {
		logger.Errorf("event_callback_ibex: check perm fail: %v", err)
		return
@@ -108,7 +103,7 @@ func CallIbex(ctx *ctx.Context, id int64, host string,
			continue
		}

-		arr := strings.SplitN(pair, "=", 2)
+		arr := strings.Split(pair, "=")
		if len(arr) != 2 {
			continue
		}
@@ -181,7 +176,7 @@ func canDoIbex(username string, tpl *models.TaskTpl, host string, targetCache *m
		return false, nil
	}

-	return target.MatchGroupId(tpl.GroupId), nil
+	return target.GroupId == tpl.GroupId, nil
}

func TaskAdd(f models.TaskForm, authUser string, isCenter bool) (int64, error) {

@@ -27,8 +27,7 @@ func (lk *LarkSender) CallBack(ctx CallBackContext) {
		},
	}

-	doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
-		ctx.Stats, ctx.Events[0])
+	doSend(ctx.CallBackURL, body, models.Lark, ctx.Stats)
	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}

@@ -42,7 +42,6 @@ func (fs *LarkCardSender) CallBack(ctx CallBackContext) {
	}

	SendTitle := fmt.Sprintf("🔔 %s", ctx.Events[0].RuleName)
-	body := createFeishuCardBody()
	body.Card.Header.Title.Content = SendTitle
	body.Card.Header.Template = color
	body.Card.Elements[0].Text.Content = message
@@ -56,8 +55,7 @@ func (fs *LarkCardSender) CallBack(ctx CallBackContext) {
	}
	parsedURL.RawQuery = ""

-	doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
-		ctx.Stats, ctx.Events[0])
+	doSend(parsedURL.String(), body, models.LarkCard, ctx.Stats)
}

func (fs *LarkCardSender) Send(ctx MessageContext) {
@@ -75,7 +73,6 @@ func (fs *LarkCardSender) Send(ctx MessageContext) {
	}

	SendTitle := fmt.Sprintf("🔔 %s", ctx.Events[0].RuleName)
-	body := createFeishuCardBody()
	body.Card.Header.Title.Content = SendTitle
	body.Card.Header.Template = color
	body.Card.Elements[0].Text.Content = message
@@ -7,7 +7,6 @@ import (

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/models"
-	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"
)
@@ -39,11 +38,11 @@ func (ms *MmSender) Send(ctx MessageContext) {
	}
	message := BuildTplMessage(models.Mm, ms.tpl, ctx.Events)

-	SendMM(ctx.Ctx, MatterMostMessage{
+	SendMM(MatterMostMessage{
		Text:   message,
		Tokens: urls,
		Stats:  ctx.Stats,
-	}, ctx.Events[0])
+	})
}

func (ms *MmSender) CallBack(ctx CallBackContext) {
@@ -52,11 +51,11 @@ func (ms *MmSender) CallBack(ctx CallBackContext) {
	}
	message := BuildTplMessage(models.Mm, ms.tpl, ctx.Events)

-	SendMM(ctx.Ctx, MatterMostMessage{
+	SendMM(MatterMostMessage{
		Text:   message,
		Tokens: []string{ctx.CallBackURL},
		Stats:  ctx.Stats,
-	}, ctx.Events[0])
+	})

	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}
@@ -71,7 +70,7 @@ func (ms *MmSender) extract(users []*models.User) []string {
	return tokens
}

-func SendMM(ctx *ctx.Context, message MatterMostMessage, event *models.AlertCurEvent) {
+func SendMM(message MatterMostMessage) {
	for i := 0; i < len(message.Tokens); i++ {
		u, err := url.Parse(message.Tokens[i])
		if err != nil {
@@ -104,7 +103,7 @@ func SendMM(ctx *ctx.Context, message MatterMostMessage, event *models.AlertCurE
				Username: username,
				Text:     txt + message.Text,
			}
-			doSendAndRecord(ctx, ur, message.Tokens[i], body, models.Mm, message.Stats, event)
+			doSend(ur, body, models.Mm, message.Stats)
		}
	}
}
@@ -2,30 +2,26 @@ package sender

import (
	"bytes"
-	"fmt"
	"os"
	"os/exec"
	"time"

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/models"
-	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/file"
	"github.com/toolkits/pkg/logger"
	"github.com/toolkits/pkg/sys"
)

-func MayPluginNotify(ctx *ctx.Context, noticeBytes []byte, notifyScript models.NotifyScript,
-	stats *astats.Stats, event *models.AlertCurEvent) {
+func MayPluginNotify(noticeBytes []byte, notifyScript models.NotifyScript, stats *astats.Stats) {
	if len(noticeBytes) == 0 {
		return
	}
-	alertingCallScript(ctx, noticeBytes, notifyScript, stats, event)
+	alertingCallScript(noticeBytes, notifyScript, stats)
}

-func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models.NotifyScript,
-	stats *astats.Stats, event *models.AlertCurEvent) {
+func alertingCallScript(stdinBytes []byte, notifyScript models.NotifyScript, stats *astats.Stats) {
	// not enable or no notify.py? do nothing
	config := notifyScript
	if !config.Enable || config.Content == "" {
@@ -85,7 +81,6 @@ func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models
	}

	err, isTimeout := sys.WrapTimeout(cmd, time.Duration(config.Timeout)*time.Second)
-	NotifyRecord(ctx, event, channel, cmd.String(), "", buildErr(err, isTimeout))

	if isTimeout {
		if err == nil {
@@ -107,11 +102,3 @@ func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models

	logger.Infof("event_script_notify_ok: exec %s output: %s", fpath, buf.String())
}
-
-func buildErr(err error, isTimeout bool) error {
-	if err == nil && !isTimeout {
-		return nil
-	} else {
-		return fmt.Errorf("is_timeout: %v, err: %v", isTimeout, err)
-	}
-}
@@ -8,7 +8,6 @@ import (
	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
-	"github.com/ccfos/nightingale/v6/pkg/ctx"
)

type (
@@ -23,7 +22,6 @@ type (
		Rule   *models.AlertRule
		Events []*models.AlertCurEvent
		Stats  *astats.Stats
-		Ctx    *ctx.Context
	}
)

@@ -51,15 +49,13 @@ func NewSender(key string, tpls map[string]*template.Template, smtp ...aconf.SMT
	return nil
}

-func BuildMessageContext(ctx *ctx.Context, rule *models.AlertRule, events []*models.AlertCurEvent,
-	uids []int64, userCache *memsto.UserCacheType, stats *astats.Stats) MessageContext {
+func BuildMessageContext(rule *models.AlertRule, events []*models.AlertCurEvent, uids []int64, userCache *memsto.UserCacheType, stats *astats.Stats) MessageContext {
	users := userCache.GetByUserIds(uids)
	return MessageContext{
		Rule:   rule,
		Events: events,
		Users:  users,
		Stats:  stats,
-		Ctx:    ctx,
	}
}
@@ -6,7 +6,6 @@ import (

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/models"
-	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"
)
@@ -36,11 +35,11 @@ func (ts *TelegramSender) CallBack(ctx CallBackContext) {
	}

	message := BuildTplMessage(models.Telegram, ts.tpl, ctx.Events)
-	SendTelegram(ctx.Ctx, TelegramMessage{
+	SendTelegram(TelegramMessage{
		Text:   message,
		Tokens: []string{ctx.CallBackURL},
		Stats:  ctx.Stats,
-	}, ctx.Events[0])
+	})

	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}
@@ -52,11 +51,11 @@ func (ts *TelegramSender) Send(ctx MessageContext) {
	tokens := ts.extract(ctx.Users)
	message := BuildTplMessage(models.Telegram, ts.tpl, ctx.Events)

-	SendTelegram(ctx.Ctx, TelegramMessage{
+	SendTelegram(TelegramMessage{
		Text:   message,
		Tokens: tokens,
		Stats:  ctx.Stats,
-	}, ctx.Events[0])
+	})
}

func (ts *TelegramSender) extract(users []*models.User) []string {
@@ -69,7 +68,7 @@ func (ts *TelegramSender) extract(users []*models.User) []string {
	return tokens
}

-func SendTelegram(ctx *ctx.Context, message TelegramMessage, event *models.AlertCurEvent) {
+func SendTelegram(message TelegramMessage) {
	for i := 0; i < len(message.Tokens); i++ {
		if !strings.Contains(message.Tokens[i], "/") && !strings.HasPrefix(message.Tokens[i], "https://") {
			logger.Errorf("telegram_sender: result=fail invalid token=%s", message.Tokens[i])
@@ -93,6 +92,6 @@ func SendTelegram(ctx *ctx.Context, message TelegramMessage, event *models.Alert
			Text: message.Text,
		}

-		doSendAndRecord(ctx, url, message.Tokens[i], body, models.Telegram, message.Stats, event)
+		doSend(url, body, models.Telegram, message.Stats)
	}
}
@@ -4,7 +4,6 @@ import (
	"bytes"
	"crypto/tls"
	"encoding/json"
-	"fmt"
	"io"
	"net/http"
	"sync"
@@ -12,12 +11,11 @@ import (

	"github.com/ccfos/nightingale/v6/alert/astats"
	"github.com/ccfos/nightingale/v6/models"
-	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/toolkits/pkg/logger"
)

-func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats) (bool, string, error) {
+func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats) bool {
	channel := "webhook"
	if webhook.Type == models.RuleCallback {
		channel = "callback"
@@ -25,12 +23,12 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats

	conf := webhook
	if conf.Url == "" || !conf.Enable {
-		return false, "", nil
+		return false
	}
	bs, err := json.Marshal(event)
	if err != nil {
		logger.Errorf("%s alertingWebhook failed to marshal event:%+v err:%v", channel, event, err)
-		return false, "", err
+		return false
	}

	bf := bytes.NewBuffer(bs)
@@ -38,7 +36,7 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats
	req, err := http.NewRequest("POST", conf.Url, bf)
	if err != nil {
		logger.Warningf("%s alertingWebhook failed to new reques event:%s err:%v", channel, string(bs), err)
-		return true, "", err
+		return true
	}

	req.Header.Set("Content-Type", "application/json")
@@ -68,15 +66,14 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats

	stats.AlertNotifyTotal.WithLabelValues(channel).Inc()
-	var resp *http.Response
-	var body []byte
-	resp, err = client.Do(req)
+	resp, err := client.Do(req)
	if err != nil {
		stats.AlertNotifyErrorTotal.WithLabelValues(channel).Inc()
		logger.Errorf("event_%s_fail, event:%s, url: [%s], error: [%s]", channel, string(bs), conf.Url, err)
-		return true, "", err
+		return true
	}

+	var body []byte
	if resp.Body != nil {
		defer resp.Body.Close()
		body, _ = io.ReadAll(resp.Body)
@@ -84,19 +81,18 @@ func sendWebhook(webhook *models.Webhook, event interface{}, stats *astats.Stats

	if resp.StatusCode == 429 {
		logger.Errorf("event_%s_fail, url: %s, response code: %d, body: %s event:%s", channel, conf.Url, resp.StatusCode, string(body), string(bs))
-		return true, string(body), fmt.Errorf("status code is 429")
+		return true
	}

	logger.Debugf("event_%s_succ, url: %s, response code: %d, body: %s event:%s", channel, conf.Url, resp.StatusCode, string(body), string(bs))
-	return false, string(body), nil
+	return false
}

-func SingleSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
+func SingleSendWebhooks(webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
	for _, conf := range webhooks {
		retryCount := 0
		for retryCount < 3 {
-			needRetry, res, err := sendWebhook(conf, event, stats)
-			NotifyRecord(ctx, event, "webhook", conf.Url, res, err)
+			needRetry := sendWebhook(conf, event, stats)
			if !needRetry {
				break
			}
@@ -106,10 +102,10 @@ func SingleSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *mod
	}
}

-func BatchSendWebhooks(ctx *ctx.Context, webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
+func BatchSendWebhooks(webhooks []*models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
	for _, conf := range webhooks {
		logger.Infof("push event:%+v to queue:%v", event, conf)
-		PushEvent(ctx, conf, event, stats)
+		PushEvent(conf, event, stats)
	}
}
@@ -125,7 +121,7 @@ type WebhookQueue struct {
	closeCh chan struct{}
}

-func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
+func PushEvent(webhook *models.Webhook, event *models.AlertCurEvent, stats *astats.Stats) {
	EventQueueLock.RLock()
	queue := EventQueue[webhook.Url]
	EventQueueLock.RUnlock()
@@ -140,7 +136,7 @@ func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCur
		EventQueue[webhook.Url] = queue
		EventQueueLock.Unlock()

-		StartConsumer(ctx, queue, webhook.Batch, webhook, stats)
+		StartConsumer(queue, webhook.Batch, webhook, stats)
	}

	succ := queue.list.PushFront(event)
@@ -150,7 +146,7 @@ func PushEvent(ctx *ctx.Context, webhook *models.Webhook, event *models.AlertCur
	}
}

-func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *models.Webhook, stats *astats.Stats) {
+func StartConsumer(queue *WebhookQueue, popSize int, webhook *models.Webhook, stats *astats.Stats) {
	for {
		select {
		case <-queue.closeCh:
@@ -165,8 +161,7 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *

			retryCount := 0
			for retryCount < webhook.RetryCount {
-				needRetry, res, err := sendWebhook(webhook, events, stats)
-				go RecordEvents(ctx, webhook, events, stats, res, err)
+				needRetry := sendWebhook(webhook, events, stats)
				if !needRetry {
					break
				}
@@ -176,10 +171,3 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *
		}
	}
}
-
-func RecordEvents(ctx *ctx.Context, webhook *models.Webhook, events []*models.AlertCurEvent, stats *astats.Stats, res string, err error) {
-	for _, event := range events {
-		time.Sleep(time.Millisecond * 10)
-		NotifyRecord(ctx, event, "webhook", webhook.Url, res, err)
-	}
-}
@@ -37,8 +37,7 @@ func (ws *WecomSender) CallBack(ctx CallBackContext) {
		},
	}

-	doSendAndRecord(ctx.Ctx, ctx.CallBackURL, ctx.CallBackURL, body, "callback",
-		ctx.Stats, ctx.Events[0])
+	doSend(ctx.CallBackURL, body, models.Wecom, ctx.Stats)
	ctx.Stats.AlertNotifyTotal.WithLabelValues("rule_callback").Inc()
}

@@ -46,22 +45,21 @@ func (ws *WecomSender) Send(ctx MessageContext) {
	if len(ctx.Users) == 0 || len(ctx.Events) == 0 {
		return
	}
-	urls, tokens := ws.extract(ctx.Users)
+	urls := ws.extract(ctx.Users)
	message := BuildTplMessage(models.Wecom, ws.tpl, ctx.Events)
-	for i, url := range urls {
+	for _, url := range urls {
		body := wecom{
			Msgtype: "markdown",
			Markdown: wecomMarkdown{
				Content: message,
			},
		}
-		doSendAndRecord(ctx.Ctx, url, tokens[i], body, models.Wecom, ctx.Stats, ctx.Events[0])
+		doSend(url, body, models.Wecom, ctx.Stats)
	}
}

-func (ws *WecomSender) extract(users []*models.User) ([]string, []string) {
+func (ws *WecomSender) extract(users []*models.User) []string {
	urls := make([]string, 0, len(users))
-	tokens := make([]string, 0, len(users))
	for _, user := range users {
		if token, has := user.ExtractToken(models.Wecom); has {
			url := token
@@ -69,8 +67,7 @@ func (ws *WecomSender) extract(users []*models.User) ([]string, []string) {
				url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=" + token
			}
			urls = append(urls, url)
-			tokens = append(tokens, token)
		}
	}
-	return urls, tokens
+	return urls
}
@@ -13,8 +13,6 @@ type Center struct {
	UseFileAssets         bool
	FlashDuty             FlashDuty
	EventHistoryGroupView bool
-	CleanNotifyRecordDay  int
-	MigrateBusiGroupLabel bool
}

type Plugin struct {
@@ -16,7 +16,6 @@ import (
	centerrt "github.com/ccfos/nightingale/v6/center/router"
	"github.com/ccfos/nightingale/v6/center/sso"
	"github.com/ccfos/nightingale/v6/conf"
-	"github.com/ccfos/nightingale/v6/cron"
	"github.com/ccfos/nightingale/v6/dumper"
	"github.com/ccfos/nightingale/v6/memsto"
	"github.com/ccfos/nightingale/v6/models"
@@ -107,19 +106,11 @@ func Initialize(configDir string, cryptoKey string) (func(), error) {

	go version.GetGithubVersion()

-	go cron.CleanNotifyRecord(ctx, config.Center.CleanNotifyRecordDay)
-
	alertrtRouter := alertrt.New(config.HTTP, config.Alert, alertMuteCache, targetCache, busiGroupCache, alertStats, ctx, externalProcessors)
-	centerRouter := centerrt.New(config.HTTP, config.Center, config.Alert, config.Ibex,
-		cconf.Operations, dsCache, notifyConfigCache, promClients, tdengineClients,
+	centerRouter := centerrt.New(config.HTTP, config.Center, config.Alert, config.Ibex, cconf.Operations, dsCache, notifyConfigCache, promClients, tdengineClients,
		redis, sso, ctx, metas, idents, targetCache, userCache, userGroupCache)
	pushgwRouter := pushgwrt.New(config.HTTP, config.Pushgw, config.Alert, targetCache, busiGroupCache, idents, metas, writers, ctx)

-	if config.Center.MigrateBusiGroupLabel {
-		models.DoMigrateBg(ctx, config.Pushgw.BusiGroupLabelKey)
-	}
+	models.MigrateBg(ctx, pushgwRouter.Pushgw.BusiGroupLabelKey)

	r := httpx.GinEngine(config.Global.RunMode, config.HTTP)

	centerRouter.Config(r)
@@ -16,12 +16,6 @@ import (
const SYSTEM = "system"

func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
-	err := models.InitBuiltinPayloads(ctx)
-	if err != nil {
-		logger.Warning("init old builtinPayloads fail ", err)
-		return
-	}
-
	fp := builtinIntegrationsDir
	if fp == "" {
		fp = path.Join(runner.Cwd, "integrations")
@@ -98,7 +92,6 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
				logger.Warning("update builtin component fail ", old, err)
			}
		}
-		component.ID = old.ID
	}

	// delete uuid is emtpy
@@ -148,13 +141,13 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {

			cate := strings.Replace(f, ".json", "", -1)
			builtinAlert := models.BuiltinPayload{
-				ComponentID: component.ID,
-				Type:        "alert",
-				Cate:        cate,
-				Name:        alert.Name,
-				Tags:        alert.AppendTags,
-				Content:     string(content),
-				UUID:        alert.UUID,
+				Component: component.Ident,
+				Type:      "alert",
+				Cate:      cate,
+				Name:      alert.Name,
+				Tags:      alert.AppendTags,
+				Content:   string(content),
+				UUID:      alert.UUID,
			}

			old, err := models.BuiltinPayloadGet(ctx, "uuid = ?", alert.UUID)
@@ -172,7 +165,6 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
			}

			if old.UpdatedBy == SYSTEM {
-				old.ComponentID = component.ID
				old.Content = string(content)
				old.Name = alert.Name
				old.Tags = alert.AppendTags
@@ -239,13 +231,13 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
		}

		builtinDashboard := models.BuiltinPayload{
-			ComponentID: component.ID,
-			Type:        "dashboard",
-			Cate:        "",
-			Name:        dashboard.Name,
-			Tags:        dashboard.Tags,
-			Content:     string(content),
-			UUID:        dashboard.UUID,
+			Component: component.Ident,
+			Type:      "dashboard",
+			Cate:      "",
+			Name:      dashboard.Name,
+			Tags:      dashboard.Tags,
+			Content:   string(content),
+			UUID:      dashboard.UUID,
		}

		old, err := models.BuiltinPayloadGet(ctx, "uuid = ?", dashboard.UUID)
@@ -263,7 +255,6 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
		}

		if old.UpdatedBy == SYSTEM {
-			old.ComponentID = component.ID
			old.Content = string(content)
			old.Name = dashboard.Name
			old.Tags = dashboard.Tags
@@ -53,11 +53,7 @@ type Router struct {
	HeartbeatHook HeartbeatHookFunc
}

-func New(httpConfig httpx.Config, center cconf.Center, alert aconf.Alert, ibex conf.Ibex,
-	operations cconf.Operation, ds *memsto.DatasourceCacheType, ncc *memsto.NotifyConfigCacheType,
-	pc *prom.PromClientMap, tdendgineClients *tdengine.TdengineClientMap, redis storage.Redis,
-	sso *sso.SsoClient, ctx *ctx.Context, metaSet *metas.Set, idents *idents.Set,
-	tc *memsto.TargetCacheType, uc *memsto.UserCacheType, ugc *memsto.UserGroupCacheType) *Router {
+func New(httpConfig httpx.Config, center cconf.Center, alert aconf.Alert, ibex conf.Ibex, operations cconf.Operation, ds *memsto.DatasourceCacheType, ncc *memsto.NotifyConfigCacheType, pc *prom.PromClientMap, tdendgineClients *tdengine.TdengineClientMap, redis storage.Redis, sso *sso.SsoClient, ctx *ctx.Context, metaSet *metas.Set, idents *idents.Set, tc *memsto.TargetCacheType, uc *memsto.UserCacheType, ugc *memsto.UserGroupCacheType) *Router {
	return &Router{
		HTTP:   httpConfig,
		Center: center,
@@ -280,7 +276,7 @@ func (rt *Router) Config(r *gin.Engine) {
		pages.POST("/targets/tags", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetBindTagsByFE)
		pages.DELETE("/targets/tags", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUnbindTagsByFE)
		pages.PUT("/targets/note", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUpdateNote)
-		pages.PUT("/targets/bgids", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetBindBgids)
+		pages.PUT("/targets/bgid", rt.auth(), rt.user(), rt.perm("/targets/put"), rt.targetUpdateBgid)

		pages.POST("/builtin-cate-favorite", rt.auth(), rt.user(), rt.builtinCateFavoriteAdd)
		pages.DELETE("/builtin-cate-favorite/:name", rt.auth(), rt.user(), rt.builtinCateFavoriteDel)
@@ -301,7 +297,6 @@ func (rt *Router) Config(r *gin.Engine) {
		pages.POST("/busi-group/:id/board/:bid/clone", rt.auth(), rt.user(), rt.perm("/dashboards/add"), rt.bgrw(), rt.boardClone)
-		pages.POST("/busi-groups/boards/clones", rt.auth(), rt.user(), rt.perm("/dashboards/add"), rt.boardBatchClone)

		pages.GET("/boards", rt.auth(), rt.user(), rt.boardGetsByBids)
		pages.GET("/board/:bid", rt.boardGet)
		pages.GET("/board/:bid/pure", rt.boardPureGet)
		pages.PUT("/board/:bid", rt.auth(), rt.user(), rt.perm("/dashboards/put"), rt.boardPut)
@@ -320,8 +315,6 @@ func (rt *Router) Config(r *gin.Engine) {
		pages.GET("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules"), rt.alertRuleGets)
		pages.POST("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.alertRuleAddByFE)
		pages.POST("/busi-group/:id/alert-rules/import", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.alertRuleAddByImport)
-		pages.POST("/busi-group/:id/alert-rules/import-prom-rule", rt.auth(),
-			rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.alertRuleAddByImportPromRule)
		pages.DELETE("/busi-group/:id/alert-rules", rt.auth(), rt.user(), rt.perm("/alert-rules/del"), rt.bgrw(), rt.alertRuleDel)
		pages.PUT("/busi-group/:id/alert-rules/fields", rt.auth(), rt.user(), rt.perm("/alert-rules/put"), rt.bgrw(), rt.alertRulePutFields)
		pages.PUT("/busi-group/:id/alert-rule/:arid", rt.auth(), rt.user(), rt.perm("/alert-rules/put"), rt.alertRulePutByFE)
@@ -329,7 +322,6 @@ func (rt *Router) Config(r *gin.Engine) {
		pages.GET("/alert-rule/:arid/pure", rt.auth(), rt.user(), rt.perm("/alert-rules"), rt.alertRulePureGet)
		pages.PUT("/busi-group/alert-rule/validate", rt.auth(), rt.user(), rt.perm("/alert-rules/put"), rt.alertRuleValidation)
-		pages.POST("/relabel-test", rt.auth(), rt.user(), rt.relabelTest)
		pages.POST("/busi-group/:id/alert-rules/clone", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.cloneToMachine)

		pages.GET("/busi-groups/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGetsByGids)
		pages.GET("/busi-group/:id/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGets)
@@ -358,11 +350,9 @@ func (rt *Router) Config(r *gin.Engine) {
		if rt.Center.AnonymousAccess.AlertDetail {
			pages.GET("/alert-cur-event/:eid", rt.alertCurEventGet)
			pages.GET("/alert-his-event/:eid", rt.alertHisEventGet)
-			pages.GET("/event-notify-records/:eid", rt.notificationRecordList)
		} else {
			pages.GET("/alert-cur-event/:eid", rt.auth(), rt.user(), rt.alertCurEventGet)
			pages.GET("/alert-his-event/:eid", rt.auth(), rt.user(), rt.alertHisEventGet)
-			pages.GET("/event-notify-records/:eid", rt.auth(), rt.user(), rt.notificationRecordList)
		}

		// card logic
@@ -561,11 +551,6 @@ func (rt *Router) Config(r *gin.Engine) {

		service.GET("/targets-of-alert-rule", rt.targetsOfAlertRule)

-		service.POST("/notify-record", rt.notificationRecordAdd)
-
-		service.GET("/alert-cur-events-del-by-hash", rt.alertCurEventDelByHash)
-
		service.POST("/center/heartbeat", rt.heartbeat)
	}
}
@@ -65,8 +65,7 @@ func (rt *Router) alertCurEventsCard(c *gin.Context) {
	ginx.Dangerous(err)

	// fetch at most 50000 events; more than that is not meaningful
-	list, err := models.AlertCurEventsGet(rt.Ctx, prods, bgids, stime, etime, severity, dsIds,
-		cates, 0, query, 50000, 0)
+	list, err := models.AlertCurEventGets(rt.Ctx, prods, bgids, stime, etime, severity, dsIds, cates, query, 50000, 0)
	ginx.Dangerous(err)

	cardmap := make(map[string]*AlertCard)
@@ -163,17 +162,13 @@ func (rt *Router) alertCurEventsList(c *gin.Context) {
		cates = strings.Split(cate, ",")
	}

-	ruleId := ginx.QueryInt64(c, "rid", 0)
-
	bgids, err := GetBusinessGroupIds(c, rt.Ctx, rt.Center.EventHistoryGroupView)
	ginx.Dangerous(err)

-	total, err := models.AlertCurEventTotal(rt.Ctx, prods, bgids, stime, etime, severity, dsIds,
-		cates, ruleId, query)
+	total, err := models.AlertCurEventTotal(rt.Ctx, prods, bgids, stime, etime, severity, dsIds, cates, query)
	ginx.Dangerous(err)

-	list, err := models.AlertCurEventsGet(rt.Ctx, prods, bgids, stime, etime, severity, dsIds,
-		cates, ruleId, query, limit, ginx.Offset(c, limit))
+	list, err := models.AlertCurEventGets(rt.Ctx, prods, bgids, stime, etime, severity, dsIds, cates, query, limit, ginx.Offset(c, limit))
	ginx.Dangerous(err)

	cache := make(map[int64]*models.UserGroup)
@@ -227,12 +222,6 @@ func (rt *Router) alertCurEventGet(c *gin.Context) {
		rt.bgroCheck(c, event.GroupId)
	}

-	ruleConfig, needReset := models.FillRuleConfigTplName(rt.Ctx, event.RuleConfig)
-	if needReset {
-		event.RuleConfigJson = ruleConfig
-	}
-
-	event.LastEvalTime = event.TriggerTime
	ginx.NewRender(c).Data(event, nil)
}

@@ -240,8 +229,3 @@ func (rt *Router) alertCurEventsStatistics(c *gin.Context) {

	ginx.NewRender(c).Data(models.AlertCurEventStatistics(rt.Ctx, time.Now()), nil)
}
-
-func (rt *Router) alertCurEventDelByHash(c *gin.Context) {
-	hash := ginx.QueryStr(c, "hash")
-	ginx.NewRender(c).Message(models.AlertCurEventDelByHash(rt.Ctx, hash))
-}
@@ -54,17 +54,13 @@ func (rt *Router) alertHisEventsList(c *gin.Context) {
|
||||
cates = strings.Split(cate, ",")
|
||||
}
|
||||
|
||||
ruleId := ginx.QueryInt64(c, "rid", 0)
|
||||
|
||||
bgids, err := GetBusinessGroupIds(c, rt.Ctx, rt.Center.EventHistoryGroupView)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
total, err := models.AlertHisEventTotal(rt.Ctx, prods, bgids, stime, etime, severity,
|
||||
recovered, dsIds, cates, ruleId, query)
|
||||
total, err := models.AlertHisEventTotal(rt.Ctx, prods, bgids, stime, etime, severity, recovered, dsIds, cates, query)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
list, err := models.AlertHisEventGets(rt.Ctx, prods, bgids, stime, etime, severity, recovered,
|
||||
dsIds, cates, ruleId, query, limit, ginx.Offset(c, limit))
|
||||
list, err := models.AlertHisEventGets(rt.Ctx, prods, bgids, stime, etime, severity, recovered, dsIds, cates, query, limit, ginx.Offset(c, limit))
|
||||
ginx.Dangerous(err)
|
||||
|
||||
cache := make(map[int64]*models.UserGroup)
|
||||
@@ -91,11 +87,6 @@ func (rt *Router) alertHisEventGet(c *gin.Context) {
|
||||
rt.bgroCheck(c, event.GroupId)
|
||||
}
|
||||
|
||||
ruleConfig, needReset := models.FillRuleConfigTplName(rt.Ctx, event.RuleConfig)
|
||||
if needReset {
|
||||
event.RuleConfigJson = ruleConfig
|
||||
}
|
||||
|
||||
ginx.NewRender(c).Data(event, err)
|
||||
}
|
||||
|
||||
|
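The event-list handlers above filter by an stime/etime pair; in this comparison the `getAlertCueEventTimeRange` helper defaults the window when the range is missing or inverted. A minimal standalone sketch of that defaulting, assuming Unix-second timestamps (the function name here mirrors the helper, the rest is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// defaultEventTimeRange mirrors the defaulting in getAlertCueEventTimeRange:
// a zero etime becomes "now", and a missing or inverted stime falls back
// to 30 days before etime.
func defaultEventTimeRange(stime, etime int64) (int64, int64) {
	if etime == 0 {
		etime = time.Now().Unix()
	}
	if stime == 0 || stime >= etime {
		stime = etime - 30*24*int64(time.Hour.Seconds())
	}
	return stime, etime
}

func main() {
	s, e := defaultEventTimeRange(0, 1_700_000_000)
	fmt.Println(e-s == 30*24*3600) // true: window defaults to exactly 30 days
}
```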
@@ -1,22 +1,17 @@
package router

import (
    "encoding/json"
    "fmt"
    "net/http"
    "regexp"
    "strconv"
    "strings"
    "time"

    "gopkg.in/yaml.v2"

    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pushgw/pconf"
    "github.com/ccfos/nightingale/v6/pushgw/writer"

    "github.com/gin-gonic/gin"
    "github.com/jinzhu/copier"
    "github.com/prometheus/prometheus/prompb"
    "github.com/toolkits/pkg/ginx"
    "github.com/toolkits/pkg/i18n"
@@ -37,18 +32,6 @@ func (rt *Router) alertRuleGets(c *gin.Context) {
    ginx.NewRender(c).Data(ars, err)
}

func getAlertCueEventTimeRange(c *gin.Context) (stime, etime int64) {
    stime = ginx.QueryInt64(c, "stime", 0)
    etime = ginx.QueryInt64(c, "etime", 0)
    if etime == 0 {
        etime = time.Now().Unix()
    }
    if stime == 0 || stime >= etime {
        stime = etime - 30*24*int64(time.Hour.Seconds())
    }
    return
}

func (rt *Router) alertRuleGetsByGids(c *gin.Context) {
    gids := str.IdsInt64(ginx.QueryStr(c, "gids", ""), ",")
    if len(gids) > 0 {
@@ -72,19 +55,9 @@ func (rt *Router) alertRuleGetsByGids(c *gin.Context) {
    ars, err := models.AlertRuleGetsByBGIds(rt.Ctx, gids)
    if err == nil {
        cache := make(map[int64]*models.UserGroup)
        rids := make([]int64, 0, len(ars))
        for i := 0; i < len(ars); i++ {
            ars[i].FillNotifyGroups(rt.Ctx, cache)
            ars[i].FillSeverities()
            rids = append(rids, ars[i].Id)
        }

        stime, etime := getAlertCueEventTimeRange(c)
        cnt := models.AlertCurEventCountByRuleId(rt.Ctx, rids, stime, etime)
        if cnt != nil {
            for i := 0; i < len(ars); i++ {
                ars[i].CurEventCount = cnt[ars[i].Id]
            }
        }
    }
    ginx.NewRender(c).Data(ars, err)
@@ -152,34 +125,6 @@ func (rt *Router) alertRuleAddByImport(c *gin.Context) {
    ginx.NewRender(c).Data(reterr, nil)
}

type promRuleForm struct {
    Payload       string  `json:"payload" binding:"required"`
    DatasourceIds []int64 `json:"datasource_ids" binding:"required"`
    Disabled      int     `json:"disabled" binding:"gte=0,lte=1"`
}

func (rt *Router) alertRuleAddByImportPromRule(c *gin.Context) {
    var f promRuleForm
    ginx.Dangerous(c.BindJSON(&f))

    var pr struct {
        Groups []models.PromRuleGroup `yaml:"groups"`
    }
    err := yaml.Unmarshal([]byte(f.Payload), &pr)
    if err != nil {
        ginx.Bomb(http.StatusBadRequest, "invalid yaml format, please use the example format. err: %v", err)
    }

    if len(pr.Groups) == 0 {
        ginx.Bomb(http.StatusBadRequest, "input yaml is empty")
    }

    lst := models.DealPromGroup(pr.Groups, f.DatasourceIds, f.Disabled)
    username := c.MustGet("username").(string)
    bgid := ginx.UrlParamInt64(c, "id")
    ginx.NewRender(c).Data(rt.alertRuleAdd(lst, username, bgid, c.GetHeader("X-Language")), nil)
}

func (rt *Router) alertRuleAddByService(c *gin.Context) {
    var lst []models.AlertRule
    ginx.BindJSON(c, &lst)
@@ -330,43 +275,6 @@ func (rt *Router) alertRulePutFields(c *gin.Context) {
            continue
        }

        if f.Action == "update_triggers" {
            if triggers, has := f.Fields["triggers"]; has {
                originRule := ar.RuleConfigJson.(map[string]interface{})
                originRule["triggers"] = triggers
                b, err := json.Marshal(originRule)
                ginx.Dangerous(err)
                ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"rule_config": string(b)}))
                continue
            }
        }

        if f.Action == "annotations_add" {
            if annotations, has := f.Fields["annotations"]; has {
                annotationsMap := annotations.(map[string]interface{})
                for k, v := range annotationsMap {
                    ar.AnnotationsJSON[k] = v.(string)
                }
                b, err := json.Marshal(ar.AnnotationsJSON)
                ginx.Dangerous(err)
                ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"annotations": string(b)}))
                continue
            }
        }

        if f.Action == "annotations_del" {
            if annotations, has := f.Fields["annotations"]; has {
                annotationsKeys := annotations.(map[string]interface{})
                for key := range annotationsKeys {
                    delete(ar.AnnotationsJSON, key)
                }
                b, err := json.Marshal(ar.AnnotationsJSON)
                ginx.Dangerous(err)
                ginx.Dangerous(ar.UpdateFieldsMap(rt.Ctx, map[string]interface{}{"annotations": string(b)}))
                continue
            }
        }

        if f.Action == "callback_add" {
            // add one callback address
            if callbacks, has := f.Fields["callbacks"]; has {
@@ -514,7 +422,7 @@ func (rt *Router) relabelTest(c *gin.Context) {

    labels := make([]prompb.Label, len(f.Tags))
    for i, tag := range f.Tags {
        label := strings.SplitN(tag, "=", 2)
        label := strings.Split(tag, "=")
        if len(label) != 2 {
            ginx.Bomb(http.StatusBadRequest, "tag:%s format error", tag)
        }
@@ -545,91 +453,3 @@ func (rt *Router) relabelTest(c *gin.Context) {

    ginx.NewRender(c).Data(tags, nil)
}

type identListForm struct {
    Ids       []int64  `json:"ids"`
    IdentList []string `json:"ident_list"`
}

func containsIdentOperator(s string) bool {
    pattern := `ident\s*(!=|!~|=~)`
    matched, err := regexp.MatchString(pattern, s)
    if err != nil {
        return false
    }
    return matched
}

func (rt *Router) cloneToMachine(c *gin.Context) {
    var f identListForm
    ginx.BindJSON(c, &f)

    if len(f.IdentList) == 0 {
        ginx.Bomb(http.StatusBadRequest, "ident_list is empty")
    }

    alertRules, err := models.AlertRuleGetsByIds(rt.Ctx, f.Ids)
    ginx.Dangerous(err)

    re := regexp.MustCompile(`ident\s*=\s*\\".*?\\"`)

    user := c.MustGet("username").(string)
    now := time.Now().Unix()

    newRules := make([]*models.AlertRule, 0)

    reterr := make(map[string]map[string]string)

    for i := range alertRules {
        errMsg := make(map[string]string)

        if alertRules[i].Cate != "prometheus" {
            errMsg["all"] = "Only Prometheus rule can be cloned to machines"
            reterr[alertRules[i].Name] = errMsg
            continue
        }

        if containsIdentOperator(alertRules[i].RuleConfig) {
            errMsg["all"] = "promql is missing ident"
            reterr[alertRules[i].Name] = errMsg
            continue
        }

        for j := range f.IdentList {
            alertRules[i].RuleConfig = re.ReplaceAllString(alertRules[i].RuleConfig, fmt.Sprintf(`ident=\"%s\"`, f.IdentList[j]))

            newRule := &models.AlertRule{}
            if err := copier.Copy(newRule, alertRules[i]); err != nil {
                errMsg[f.IdentList[j]] = fmt.Sprintf("fail to clone rule, err: %s", err)
                continue
            }

            newRule.Id = 0
            newRule.Name = alertRules[i].Name + "_" + f.IdentList[j]
            newRule.CreateBy = user
            newRule.UpdateBy = user
            newRule.UpdateAt = now
            newRule.CreateAt = now
            newRule.RuleConfig = alertRules[i].RuleConfig

            exist, err := models.AlertRuleExists(rt.Ctx, 0, newRule.GroupId, newRule.DatasourceIdsJson, newRule.Name)
            if err != nil {
                errMsg[f.IdentList[j]] = err.Error()
                continue
            }

            if exist {
                errMsg[f.IdentList[j]] = fmt.Sprintf("rule already exists, ruleName: %s", newRule.Name)
                continue
            }

            newRules = append(newRules, newRule)
        }

        if len(errMsg) > 0 {
            reterr[alertRules[i].Name] = errMsg
        }
    }

    ginx.NewRender(c).Data(reterr, models.InsertAlertRule(rt.Ctx, newRules))
}

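cloneToMachine above refuses rules whose promql matches `ident\s*(!=|!~|=~)`: only exact `ident="..."` matchers can be rewritten per machine. A standalone check of how that pattern classifies promql (same regexp as in the diff; the inputs are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// containsIdentOperator reports whether the promql uses a non-equality
// matcher on ident (!=, !~ or =~), as in the router's clone check.
func containsIdentOperator(s string) bool {
	matched, err := regexp.MatchString(`ident\s*(!=|!~|=~)`, s)
	if err != nil {
		return false
	}
	return matched
}

func main() {
	fmt.Println(containsIdentOperator(`cpu_usage_active{ident=~".*"} > 90`))   // true: regex matcher, cannot be cloned
	fmt.Println(containsIdentOperator(`cpu_usage_active{ident="host1"} > 90`)) // false: exact matcher, clonable
}
```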
@@ -94,14 +94,6 @@ func (rt *Router) boardGet(c *gin.Context) {
    ginx.NewRender(c).Data(board, nil)
}

// get multiple boards by the bids parameter
func (rt *Router) boardGetsByBids(c *gin.Context) {
    bids := str.IdsInt64(ginx.QueryStr(c, "bids", ""), ",")
    boards, err := models.BoardGetsByBids(rt.Ctx, bids)
    ginx.Dangerous(err)
    ginx.NewRender(c).Data(boards, err)
}

func (rt *Router) boardPureGet(c *gin.Context) {
    board, err := models.BoardGetByID(rt.Ctx, ginx.UrlParamInt64(c, "bid"))
    ginx.Dangerous(err)

@@ -4,15 +4,10 @@ import (
    "net/http"

    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/ctx"

    "github.com/gin-gonic/gin"
    "github.com/toolkits/pkg/ginx"
    "gorm.io/gorm"
)

const SYSTEM = "system"

func (rt *Router) builtinComponentsAdd(c *gin.Context) {
    var lst []models.BuiltinComponent
    ginx.BindJSON(c, &lst)
@@ -55,31 +50,10 @@ func (rt *Router) builtinComponentsPut(c *gin.Context) {
        return
    }

    if bc.CreatedBy == SYSTEM {
        req.Ident = bc.Ident
    }

    username := Username(c)
    req.UpdatedBy = username

    err = models.DB(rt.Ctx).Transaction(func(tx *gorm.DB) error {
        tCtx := &ctx.Context{
            DB: tx,
        }

        txErr := models.BuiltinMetricBatchUpdateColumn(tCtx, "typ", bc.Ident, req.Ident, req.UpdatedBy)
        if txErr != nil {
            return txErr
        }

        txErr = bc.Update(tCtx, req)
        if txErr != nil {
            return txErr
        }
        return nil
    })

    ginx.NewRender(c).Message(err)
    ginx.NewRender(c).Message(bc.Update(rt.Ctx, req))
}

func (rt *Router) builtinComponentsDel(c *gin.Context) {

@@ -6,7 +6,6 @@ import (
    "strings"
    "time"

    "github.com/BurntSushi/toml"
    "github.com/ccfos/nightingale/v6/models"
    "github.com/gin-gonic/gin"
    "github.com/toolkits/pkg/ginx"
@@ -53,15 +52,15 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
            }

            bp := models.BuiltinPayload{
                Type:        lst[i].Type,
                ComponentID: lst[i].ComponentID,
                Cate:        lst[i].Cate,
                Name:        rule.Name,
                Tags:        rule.AppendTags,
                UUID:        rule.UUID,
                Content:     string(contentBytes),
                CreatedBy:   username,
                UpdatedBy:   username,
                Type:      lst[i].Type,
                Component: lst[i].Component,
                Cate:      lst[i].Cate,
                Name:      rule.Name,
                Tags:      rule.AppendTags,
                UUID:      rule.UUID,
                Content:   string(contentBytes),
                CreatedBy: username,
                UpdatedBy: username,
            }

            if err := bp.Add(rt.Ctx, username); err != nil {
@@ -82,15 +81,15 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
            }

            bp := models.BuiltinPayload{
                Type:        lst[i].Type,
                ComponentID: lst[i].ComponentID,
                Cate:        lst[i].Cate,
                Name:        alertRule.Name,
                Tags:        alertRule.AppendTags,
                UUID:        alertRule.UUID,
                Content:     lst[i].Content,
                CreatedBy:   username,
                UpdatedBy:   username,
                Type:      lst[i].Type,
                Component: lst[i].Component,
                Cate:      lst[i].Cate,
                Name:      alertRule.Name,
                Tags:      alertRule.AppendTags,
                UUID:      alertRule.UUID,
                Content:   lst[i].Content,
                CreatedBy: username,
                UpdatedBy: username,
            }

            if err := bp.Add(rt.Ctx, username); err != nil {
@@ -116,15 +115,15 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
            }

            bp := models.BuiltinPayload{
                Type:        lst[i].Type,
                ComponentID: lst[i].ComponentID,
                Cate:        lst[i].Cate,
                Name:        dashboard.Name,
                Tags:        dashboard.Tags,
                UUID:        dashboard.UUID,
                Content:     string(contentBytes),
                CreatedBy:   username,
                UpdatedBy:   username,
                Type:      lst[i].Type,
                Component: lst[i].Component,
                Cate:      lst[i].Cate,
                Name:      dashboard.Name,
                Tags:      dashboard.Tags,
                UUID:      dashboard.UUID,
                Content:   string(contentBytes),
                CreatedBy: username,
                UpdatedBy: username,
            }

            if err := bp.Add(rt.Ctx, username); err != nil {
@@ -145,29 +144,21 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {
            }

            bp := models.BuiltinPayload{
                Type:        lst[i].Type,
                ComponentID: lst[i].ComponentID,
                Cate:        lst[i].Cate,
                Name:        dashboard.Name,
                Tags:        dashboard.Tags,
                UUID:        dashboard.UUID,
                Content:     lst[i].Content,
                CreatedBy:   username,
                UpdatedBy:   username,
                Type:      lst[i].Type,
                Component: lst[i].Component,
                Cate:      lst[i].Cate,
                Name:      dashboard.Name,
                Tags:      dashboard.Tags,
                UUID:      dashboard.UUID,
                Content:   lst[i].Content,
                CreatedBy: username,
                UpdatedBy: username,
            }

            if err := bp.Add(rt.Ctx, username); err != nil {
                reterr[bp.Name] = i18n.Sprintf(c.GetHeader("X-Language"), err.Error())
            }
        } else {
            if lst[i].Type == "collect" {
                c := make(map[string]interface{})
                if _, err := toml.Decode(lst[i].Content, &c); err != nil {
                    reterr[lst[i].Name] = err.Error()
                    continue
                }
            }

            if err := lst[i].Add(rt.Ctx, username); err != nil {
                reterr[lst[i].Name] = i18n.Sprintf(c.GetHeader("X-Language"), err.Error())
            }
@@ -180,20 +171,19 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {

func (rt *Router) builtinPayloadsGets(c *gin.Context) {
    typ := ginx.QueryStr(c, "type", "")
    ComponentID := ginx.QueryInt64(c, "component_id", 0)

    component := ginx.QueryStr(c, "component", "")
    cate := ginx.QueryStr(c, "cate", "")
    query := ginx.QueryStr(c, "query", "")

    lst, err := models.BuiltinPayloadGets(rt.Ctx, uint64(ComponentID), typ, cate, query)
    lst, err := models.BuiltinPayloadGets(rt.Ctx, typ, component, cate, query)
    ginx.NewRender(c).Data(lst, err)
}

func (rt *Router) builtinPayloadcatesGet(c *gin.Context) {
    typ := ginx.QueryStr(c, "type", "")
    ComponentID := ginx.QueryInt64(c, "component_id", 0)
    component := ginx.QueryStr(c, "component", "")

    cates, err := models.BuiltinPayloadCates(rt.Ctx, typ, uint64(ComponentID))
    cates, err := models.BuiltinPayloadCates(rt.Ctx, typ, component)
    ginx.NewRender(c).Data(cates, err)
}

@@ -239,11 +229,6 @@ func (rt *Router) builtinPayloadsPut(c *gin.Context) {

        req.Name = dashboard.Name
        req.Tags = dashboard.Tags
    } else if req.Type == "collect" {
        c := make(map[string]interface{})
        if _, err := toml.Decode(req.Content, &c); err != nil {
            ginx.Bomb(http.StatusBadRequest, err.Error())
        }
    }

    username := Username(c)

@@ -65,23 +65,6 @@ func queryDatasourceIds(c *gin.Context) []int64 {
    return ids
}

func queryStrListField(c *gin.Context, fieldName string, sep ...string) []string {
    str := ginx.QueryStr(c, fieldName, "")
    if str == "" {
        return nil
    }

    lst := []string{str}
    for _, s := range sep {
        var newLst []string
        for _, str := range lst {
            newLst = append(newLst, strings.Split(str, s)...)
        }
        lst = newLst
    }
    return lst
}

type idsForm struct {
    Ids               []int64 `json:"ids"`
    IsSyncToFlashDuty bool    `json:"is_sync_to_flashduty"`

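queryStrListField above splits one query parameter on several separators in sequence, so a field like `hosts` can be comma-, space-, or newline-delimited. The same cascade in isolation (a sketch of the splitting logic only; `splitByAll` is an illustrative name, not the router's API):

```go
package main

import (
	"fmt"
	"strings"
)

// splitByAll applies each separator in turn, re-splitting every piece,
// mirroring the loop in queryStrListField.
func splitByAll(s string, seps ...string) []string {
	lst := []string{s}
	for _, sep := range seps {
		var next []string
		for _, piece := range lst {
			next = append(next, strings.Split(piece, sep)...)
		}
		lst = next
	}
	return lst
}

func main() {
	fmt.Println(splitByAll("a,b c\nd", ",", " ", "\n")) // [a b c d]
}
```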
@@ -3,10 +3,8 @@ package router
import (
    "compress/gzip"
    "encoding/json"
    "errors"
    "fmt"
    "io/ioutil"
    "sort"
    "strconv"
    "strings"
    "time"

@@ -59,7 +57,7 @@ func HandleHeartbeat(c *gin.Context, ctx *ctx.Context, engineName string, metaSe
    }

    if req.Hostname == "" {
        return req, errors.New("hostname is required")
        return req, fmt.Errorf("hostname is required", 400)
    }

    // maybe from pushgw
@@ -81,122 +79,55 @@ func HandleHeartbeat(c *gin.Context, ctx *ctx.Context, engineName string, metaSe
    identSet.MSet(items)

    if target, has := targetCache.Get(req.Hostname); has && target != nil {
        gidsStr := ginx.QueryStr(c, "gid", "")
        overwriteGids := ginx.QueryBool(c, "overwrite_gids", false)
        gid := ginx.QueryInt64(c, "gid", 0)
        hostIp := strings.TrimSpace(req.HostIp)
        gids := strings.Split(gidsStr, ",")

        if overwriteGids {
            groupIds := make([]int64, 0)
            for i := range gids {
                if gids[i] == "" {
                    continue
                }
                groupId, err := strconv.ParseInt(gids[i], 10, 64)
                if err != nil {
                    logger.Warningf("update target:%s group ids failed, err: %v", req.Hostname, err)
                    continue
                }
                groupIds = append(groupIds, groupId)
            }

            err := models.TargetOverrideBgids(ctx, []string{target.Ident}, groupIds)
            if err != nil {
                logger.Warningf("update target:%s group ids failed, err: %v", target.Ident, err)
            }
        } else if gidsStr != "" {
            for i := range gids {
                groupId, err := strconv.ParseInt(gids[i], 10, 64)
                if err != nil {
                    logger.Warningf("update target:%s group ids failed, err: %v", req.Hostname, err)
                    continue
                }

                if !target.MatchGroupId(groupId) {
                    err := models.TargetBindBgids(ctx, []string{target.Ident}, []int64{groupId})
                    if err != nil {
                        logger.Warningf("update target:%s group ids failed, err: %v", target.Ident, err)
                    }
                }
            }
        }
        field := make(map[string]interface{})
        if gid != 0 && gid != target.GroupId {
            field["group_id"] = gid
        }

        newTarget := models.Target{}
        targetNeedUpdate := false
        if hostIp != "" && hostIp != target.HostIp {
            newTarget.HostIp = hostIp
            targetNeedUpdate = true
            field["host_ip"] = hostIp
        }

        hostTagsMap := target.GetHostTagsMap()
        hostTagNeedUpdate := false
        if len(hostTagsMap) != len(req.GlobalLabels) {
            hostTagNeedUpdate = true
        } else {
            for k, v := range req.GlobalLabels {
                if v == "" {
                    continue
                }

                if tagv, ok := hostTagsMap[k]; !ok || tagv != v {
                    hostTagNeedUpdate = true
                    break
                }
            }
        }

        if hostTagNeedUpdate {
            lst := []string{}
            for k, v := range req.GlobalLabels {
                lst = append(lst, k+"="+v)
            }
            sort.Strings(lst)
            newTarget.HostTags = lst
            targetNeedUpdate = true
        }

        userTagsMap := target.GetTagsMap()
        userTagNeedUpdate := false
        userTags := []string{}
        for k, v := range userTagsMap {
        tagsMap := target.GetTagsMap()
        tagNeedUpdate := false
        for k, v := range req.GlobalLabels {
            if v == "" {
                continue
            }

            if _, ok := req.GlobalLabels[k]; !ok {
                userTags = append(userTags, k+"="+v)
            } else { // this key already exists in hostTags
                userTagNeedUpdate = true
            if tagv, ok := tagsMap[k]; !ok || tagv != v {
                tagNeedUpdate = true
                tagsMap[k] = v
            }
        }

        if userTagNeedUpdate {
            newTarget.Tags = strings.Join(userTags, " ") + " "
            targetNeedUpdate = true
        if tagNeedUpdate {
            lst := []string{}
            for k, v := range tagsMap {
                lst = append(lst, k+"="+v)
            }
            labels := strings.Join(lst, " ") + " "
            field["tags"] = labels
        }

        if req.EngineName != "" && req.EngineName != target.EngineName {
            newTarget.EngineName = req.EngineName
            targetNeedUpdate = true
            field["engine_name"] = req.EngineName
        }

        if req.AgentVersion != "" && req.AgentVersion != target.AgentVersion {
            newTarget.AgentVersion = req.AgentVersion
            targetNeedUpdate = true
            field["agent_version"] = req.AgentVersion
        }

        if req.OS != "" && req.OS != target.OS {
            newTarget.OS = req.OS
            targetNeedUpdate = true
        }

        if targetNeedUpdate {
            err := models.DB(ctx).Model(&target).Updates(newTarget).Error
        if len(field) > 0 {
            err := target.UpdateFieldsMap(ctx, field)
            if err != nil {
                logger.Errorf("update target fields failed, err: %v", err)
            }
        }
        logger.Debugf("heartbeat field:%+v target: %v", newTarget, *target)
        logger.Debugf("heartbeat field:%+v target: %v", field, *target)
    }

    return req, nil

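One side of this comparison has the heartbeat collect only changed columns into a field map and issue a single UpdateFieldsMap call instead of a whole-row update. A reduced sketch of that compare-and-collect pattern (the struct and function names here are simplified illustrations, not the real Target model):

```go
package main

import "fmt"

type target struct {
	HostIp     string
	EngineName string
}

// changedFields collects only the columns whose incoming value is non-empty
// and differs from the stored one, in the spirit of the heartbeat handler's
// field map before it calls UpdateFieldsMap.
func changedFields(cur target, hostIp, engineName string) map[string]interface{} {
	field := make(map[string]interface{})
	if hostIp != "" && hostIp != cur.HostIp {
		field["host_ip"] = hostIp
	}
	if engineName != "" && engineName != cur.EngineName {
		field["engine_name"] = engineName
	}
	return field
}

func main() {
	cur := target{HostIp: "10.0.0.1", EngineName: "default"}
	// only host_ip changed, so only it would be written back
	fmt.Println(changedFields(cur, "10.0.0.2", "default"))
}
```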
@@ -1,205 +0,0 @@
package router

import (
    "strings"

    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/ctx"

    "github.com/gin-gonic/gin"
    "github.com/toolkits/pkg/ginx"
    "github.com/toolkits/pkg/logger"
)

type NotificationResponse struct {
    SubRules []SubRule           `json:"sub_rules"`
    Notifies map[string][]Record `json:"notifies"`
}

type SubRule struct {
    SubID    int64               `json:"sub_id"`
    Notifies map[string][]Record `json:"notifies"`
}

type Notify struct {
    Channel string   `json:"channel"`
    Records []Record `json:"records"`
}

type Record struct {
    Target   string `json:"target"`
    Username string `json:"username"`
    Status   int    `json:"status"`
    Detail   string `json:"detail"`
}

// notificationRecordAdd
func (rt *Router) notificationRecordAdd(c *gin.Context) {
    var req models.NotificaitonRecord
    ginx.BindJSON(c, &req)
    err := req.Add(rt.Ctx)

    ginx.NewRender(c).Data(req.Id, err)
}

func (rt *Router) notificationRecordList(c *gin.Context) {
    eid := ginx.UrlParamInt64(c, "eid")
    lst, err := models.NotificaitonRecordsGetByEventId(rt.Ctx, eid)
    ginx.Dangerous(err)

    response := buildNotificationResponse(rt.Ctx, lst)
    ginx.NewRender(c).Data(response, nil)
}

func buildNotificationResponse(ctx *ctx.Context, nl []*models.NotificaitonRecord) NotificationResponse {
    response := NotificationResponse{
        SubRules: []SubRule{},
        Notifies: make(map[string][]Record),
    }

    subRuleMap := make(map[int64]*SubRule)

    // Collect all group IDs
    groupIdSet := make(map[int64]struct{})

    // map[SubId]map[Channel]map[Target]index
    filter := make(map[int64]map[string]map[string]int)

    for i, n := range nl {
        // merge records that share the same channel-target pair
        for _, gid := range n.GetGroupIds(ctx) {
            groupIdSet[gid] = struct{}{}
        }

        if _, exists := filter[n.SubId]; !exists {
            filter[n.SubId] = make(map[string]map[string]int)
        }

        if _, exists := filter[n.SubId][n.Channel]; !exists {
            filter[n.SubId][n.Channel] = make(map[string]int)
        }

        idx, exists := filter[n.SubId][n.Channel][n.Target]
        if !exists {
            filter[n.SubId][n.Channel][n.Target] = i
        } else {
            if nl[idx].Status < n.Status {
                nl[idx].Status = n.Status
            }
            nl[idx].Details = nl[idx].Details + ", " + n.Details
            nl[i] = nil
        }

    }

    // Fill usernames only once
    usernameByTarget := fillUserNames(ctx, groupIdSet)

    for _, n := range nl {
        if n == nil {
            continue
        }

        m := usernameByTarget[n.Target]
        usernames := make([]string, 0, len(m))
        for k := range m {
            usernames = append(usernames, k)
        }

        if !checkChannel(n.Channel) {
            // Hide sensitive information
            n.Target = replaceLastEightChars(n.Target)
        }
        record := Record{
            Target: n.Target,
            Status: n.Status,
            Detail: n.Details,
        }

        record.Username = strings.Join(usernames, ",")

        if n.SubId > 0 {
            // Handle SubRules
            subRule, ok := subRuleMap[n.SubId]
            if !ok {
                newSubRule := &SubRule{
                    SubID: n.SubId,
                }
                newSubRule.Notifies = make(map[string][]Record)
                newSubRule.Notifies[n.Channel] = []Record{record}

                subRuleMap[n.SubId] = newSubRule
            } else {
                if _, exists := subRule.Notifies[n.Channel]; !exists {

                    subRule.Notifies[n.Channel] = []Record{record}
                } else {
                    subRule.Notifies[n.Channel] = append(subRule.Notifies[n.Channel], record)
                }
            }
            continue
        }

        if response.Notifies == nil {
            response.Notifies = make(map[string][]Record)
        }

        if _, exists := response.Notifies[n.Channel]; !exists {
            response.Notifies[n.Channel] = []Record{record}
        } else {
            response.Notifies[n.Channel] = append(response.Notifies[n.Channel], record)
        }
    }

    for _, subRule := range subRuleMap {
        response.SubRules = append(response.SubRules, *subRule)
    }

    return response
}

// check channel is one of the following: tx-sms, tx-voice, ali-sms, ali-voice, email, script
func checkChannel(channel string) bool {
    switch channel {
    case "tx-sms", "tx-voice", "ali-sms", "ali-voice", "email", "script":
        return true
    }
    return false
}

func replaceLastEightChars(s string) string {
    if len(s) <= 8 {
        return strings.Repeat("*", len(s))
    }
    return s[:len(s)-8] + strings.Repeat("*", 8)
}

func fillUserNames(ctx *ctx.Context, groupIdSet map[int64]struct{}) map[string]map[string]struct{} {
    userNameByTarget := make(map[string]map[string]struct{})

    gids := make([]int64, 0, len(groupIdSet))
    for gid := range groupIdSet {
        gids = append(gids, gid)
    }

    users, err := models.UsersGetByGroupIds(ctx, gids)
    if err != nil {
        logger.Errorf("UsersGetByGroupIds failed, err: %v", err)
        return userNameByTarget
    }

    for _, user := range users {
        logger.Warningf("user: %s", user.Username)
        for _, ch := range models.DefaultChannels {
            target, exist := user.ExtractToken(ch)
            if exist {
                if _, ok := userNameByTarget[target]; !ok {
                    userNameByTarget[target] = make(map[string]struct{})
                }
                userNameByTarget[target][user.Username] = struct{}{}
            }
        }
    }

    return userNameByTarget
}
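The notification_record router above masks targets on unrecognized channels by overwriting the last eight characters with `*`. The masking behaves like this in isolation (same shape as replaceLastEightChars, with an illustrative name):

```go
package main

import (
	"fmt"
	"strings"
)

// maskLastEight hides the last eight characters of a token; strings of
// eight characters or fewer are masked entirely, matching
// replaceLastEightChars above.
func maskLastEight(s string) string {
	if len(s) <= 8 {
		return strings.Repeat("*", len(s))
	}
	return s[:len(s)-8] + strings.Repeat("*", 8)
}

func main() {
	fmt.Println(maskLastEight("13812345678")) // 138********
	fmt.Println(maskLastEight("short"))       // *****
}
```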
@@ -182,7 +182,7 @@ func (rt *Router) notifyConfigPut(c *gin.Context) {

        smtp, errSmtp := SmtpValidate(text)
        ginx.Dangerous(errSmtp)
        go sender.RestartEmailSender(rt.Ctx, smtp)
        go sender.RestartEmailSender(smtp)
    }

    ginx.NewRender(c).Message(nil)

@@ -68,7 +68,7 @@ func (rt *Router) notifyTplUpdate(c *gin.Context) {
    }

    // get the count of the same channel and name but different id
    count, err := models.Count(models.DB(rt.Ctx).Model(&models.NotifyTpl{}).Where("(channel = ? or name = ?) and id <> ?", f.Channel, f.Name, f.Id))
    count, err := models.Count(models.DB(rt.Ctx).Model(&models.NotifyTpl{}).Where("channel = ? or name = ? and id <> ?", f.Channel, f.Name, f.Id))
    ginx.Dangerous(err)
    if count != 0 {
        ginx.Bomb(200, "Refuse to create duplicate channel or name")
@@ -138,7 +138,7 @@ func (rt *Router) notifyTplPreview(c *gin.Context) {
            continue
        }

        arr := strings.SplitN(pair, "=", 2)
        arr := strings.Split(pair, "=")
        if len(arr) != 2 {
            continue
        }

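Both relabelTest and notifyTplPreview differ between the branches in how they parse `key=value` pairs: `strings.SplitN(pair, "=", 2)` tolerates an `=` inside the value, while plain `strings.Split` makes the `len(arr) != 2` check reject such a pair. A quick standalone comparison (helper names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// splitPair splits on the first '=' only, so values may contain '='.
func splitPair(pair string) []string { return strings.SplitN(pair, "=", 2) }

// splitNaive splits on every '=', so a value containing '=' yields
// more than two pieces and fails a len()==2 check.
func splitNaive(pair string) []string { return strings.Split(pair, "=") }

func main() {
	pair := "env=region=us-east" // the value itself contains '='
	a := splitPair(pair)
	fmt.Println(len(a), a[1]) // 2 region=us-east
	fmt.Println(len(splitNaive(pair))) // 3
}
```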
||||
@@ -52,8 +52,6 @@ func (rt *Router) targetGets(c *gin.Context) {
|
||||
order := ginx.QueryStr(c, "order", "ident")
|
||||
desc := ginx.QueryBool(c, "desc", false)
|
||||
|
||||
hosts := queryStrListField(c, "hosts", ",", " ", "\n")
|
||||
|
||||
var err error
|
||||
if len(bgids) == 0 {
|
||||
user := c.MustGet("user").(*models.User)
|
||||
@@ -67,13 +65,11 @@ func (rt *Router) targetGets(c *gin.Context) {
|
||||
bgids = append(bgids, 0)
|
||||
}
|
||||
}
|
||||
|
||||
options := []models.BuildTargetWhereOption{
|
||||
models.BuildTargetWhereWithBgids(bgids),
|
||||
models.BuildTargetWhereWithDsIds(dsIds),
|
||||
models.BuildTargetWhereWithQuery(query),
|
||||
models.BuildTargetWhereWithDowntime(downtime),
|
||||
models.BuildTargetWhereWithHosts(hosts),
|
||||
}
|
||||
total, err := models.TargetTotal(rt.Ctx, options...)
|
||||
ginx.Dangerous(err)
|
||||
@@ -82,13 +78,6 @@ func (rt *Router) targetGets(c *gin.Context) {
|
||||
ginx.Offset(c, limit), order, desc, options...)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
tgs, err := models.TargetBusiGroupsGetAll(rt.Ctx)
|
||||
ginx.Dangerous(err)
|
||||
|
||||
for _, t := range list {
|
||||
t.GroupIds = tgs[t.Ident]
|
||||
}
|
||||
|
||||
if err == nil {
|
||||
now := time.Now()
|
||||
cache := make(map[int64]*models.BusiGroup)
|
||||
@@ -168,8 +157,7 @@ func (rt *Router) targetGetsByService(c *gin.Context) {
|
||||
func (rt *Router) targetGetTags(c *gin.Context) {
|
||||
idents := ginx.QueryStr(c, "idents", "")
|
||||
idents = strings.ReplaceAll(idents, ",", " ")
|
||||
ignoreHostTag := ginx.QueryBool(c, "ignore_host_tag", false)
|
||||
lst, err := models.TargetGetTags(rt.Ctx, strings.Fields(idents), ignoreHostTag)
|
||||
lst, err := models.TargetGetTags(rt.Ctx, strings.Fields(idents))
|
||||
ginx.NewRender(c).Data(lst, err)
|
||||
}
|
||||
|
||||
@@ -272,11 +260,9 @@ func (rt *Router) validateTags(tags []string) error {
|
||||
}
|
||||
|
||||
func (rt *Router) addTagsToTarget(target *models.Target, tags []string) error {
|
||||
hostTagsMap := target.GetHostTagsMap()
|
||||
for _, tag := range tags {
|
||||
tagKey := strings.Split(tag, "=")[0]
|
||||
if _, ok := hostTagsMap[tagKey]; ok ||
|
||||
strings.Contains(target.Tags, tagKey+"=") {
|
||||
if strings.Contains(target.Tags, tagKey+"=") {
|
||||
return fmt.Errorf("duplicate tagkey(%s)", tagKey)
|
||||
}
|
||||
}
|
||||
@@ -393,15 +379,8 @@ type targetBgidForm struct {
Bgid int64 `json:"bgid"`
}

type targetBgidsForm struct {
Idents []string `json:"idents" binding:"required_without=HostIps"`
HostIps []string `json:"host_ips" binding:"required_without=Idents"`
Bgids []int64 `json:"bgids"`
Action string `json:"action"` // add del reset
}

func (rt *Router) targetBindBgids(c *gin.Context) {
var f targetBgidsForm
func (rt *Router) targetUpdateBgid(c *gin.Context) {
var f targetBgidForm
var err error
var failedResults = make(map[string]string)
ginx.BindJSON(c, &f)
@@ -417,24 +396,35 @@ func (rt *Router) targetBindBgids(c *gin.Context) {
}

user := c.MustGet("user").(*models.User)
if !user.IsAdmin() {
// Regular user: check that the user has permission on every requested business group
existing, _, err := models.SeparateTargetIdents(rt.Ctx, f.Idents)
if user.IsAdmin() {
ginx.NewRender(c).Data(failedResults, models.TargetUpdateBgid(rt.Ctx, f.Idents, f.Bgid, false))
return
}

if f.Bgid > 0 {
// Split the machines into two groups: those with bgid 0, which require admin assignment,
// and those with bgid > 0, which means an adjustment within a business group.
// e.g. machines assigned to didiyun whose admin wants to move some of them under didiyun-ceph.
// For such adjustments, the current user needs permission on these machines and on the target BG.
orphans, err := models.IdentsFilter(rt.Ctx, f.Idents, "group_id = ?", 0)
ginx.Dangerous(err)
rt.checkTargetPerm(c, existing)

var groupIds []int64
if f.Action == "reset" {
// For a reset (overwrite), also check that the user has permission on the machines' previous business groups
bgids, err := models.TargetGroupIdsGetByIdents(rt.Ctx, f.Idents)
// If any machine is unassigned, the logged-in user must be admin
if len(orphans) > 0 && !user.IsAdmin() {
can, err := user.CheckPerm(rt.Ctx, "/targets/bind")
ginx.Dangerous(err)

groupIds = append(groupIds, bgids...)
if !can {
ginx.Bomb(http.StatusForbidden, "No permission. Only admin can assign BG")
}
}
groupIds = append(groupIds, f.Bgids...)

for _, bgid := range groupIds {
bg := BusiGroup(rt.Ctx, bgid)
reBelongs, err := models.IdentsFilter(rt.Ctx, f.Idents, "group_id > ?", 0)
ginx.Dangerous(err)

if len(reBelongs) > 0 {
// For machines being reassigned, the operator needs permission on the machines themselves and on the target bgid
rt.checkTargetPerm(c, f.Idents)

bg := BusiGroup(rt.Ctx, f.Bgid)
can, err := user.CanDoBusiGroup(rt.Ctx, bg, "rw")
ginx.Dangerous(err)

@@ -442,24 +432,14 @@ func (rt *Router) targetBindBgids(c *gin.Context) {
ginx.Bomb(http.StatusForbidden, "No permission. You are not admin of BG(%s)", bg.Name)
}
}

can, err := user.CheckPerm(rt.Ctx, "/targets/bind")
ginx.Dangerous(err)
if !can {
ginx.Bomb(http.StatusForbidden, "No permission. Only admin can assign BG")
}
} else if f.Bgid == 0 {
// Return the machines to the unassigned pool
rt.checkTargetPerm(c, f.Idents)
} else {
ginx.Bomb(http.StatusBadRequest, "invalid bgid")
}

switch f.Action {
case "add":
ginx.NewRender(c).Data(failedResults, models.TargetBindBgids(rt.Ctx, f.Idents, f.Bgids))
case "del":
ginx.NewRender(c).Data(failedResults, models.TargetUnbindBgids(rt.Ctx, f.Idents, f.Bgids))
case "reset":
ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, f.Bgids))
default:
ginx.Bomb(http.StatusBadRequest, "invalid action")
}
ginx.NewRender(c).Data(failedResults, models.TargetUpdateBgid(rt.Ctx, f.Idents, f.Bgid, false))
}

func (rt *Router) targetUpdateBgidByService(c *gin.Context) {
@@ -478,7 +458,7 @@ func (rt *Router) targetUpdateBgidByService(c *gin.Context) {
ginx.Bomb(http.StatusBadRequest, err.Error())
}

ginx.NewRender(c).Data(failedResults, models.TargetOverrideBgids(rt.Ctx, f.Idents, []int64{f.Bgid}))
ginx.NewRender(c).Data(failedResults, models.TargetUpdateBgid(rt.Ctx, f.Idents, f.Bgid, false))
}

type identsForm struct {
@@ -196,13 +196,6 @@ func (rt *Router) taskTplDel(c *gin.Context) {
return
}

ids, err := models.GetAlertRuleIdsByTaskId(rt.Ctx, tid)
ginx.Dangerous(err)
if len(ids) > 0 {
ginx.NewRender(c).Message("can't del this task tpl, used by alert rule ids(%v) ", ids)
return
}

ginx.NewRender(c).Message(tpl.Del(rt.Ctx))
}

@@ -48,7 +48,7 @@ func (rt *Router) userGets(c *gin.Context) {
order := ginx.QueryStr(c, "order", "username")
desc := ginx.QueryBool(c, "desc", false)

go rt.UserCache.UpdateUsersLastActiveTime()
rt.UserCache.UpdateUsersLastActiveTime()
total, err := models.UserTotal(rt.Ctx, query, stime, etime)
ginx.Dangerous(err)

@@ -1,7 +1,6 @@
package sso

import (
"fmt"
"log"
"time"

@@ -146,7 +145,7 @@ func Init(center cconf.Center, ctx *ctx.Context, configCache *memsto.ConfigCache
}
}
if configCache == nil {
log.Fatalln(fmt.Errorf("configCache is nil, sso initialization failed"))
logger.Error("configCache is nil, sso initialization failed")
}
ssoClient.configCache = configCache
userVariableMap := configCache.Get()

@@ -1,41 +0,0 @@
package cron

import (
"time"

"github.com/ccfos/nightingale/v6/models"
"github.com/ccfos/nightingale/v6/pkg/ctx"

"github.com/robfig/cron/v3"
"github.com/toolkits/pkg/logger"
)

func cleanNotifyRecord(ctx *ctx.Context, day int) {
lastWeek := time.Now().Unix() - 86400*int64(day)
err := models.DB(ctx).Model(&models.NotificaitonRecord{}).Where("created_at < ?", lastWeek).Delete(&models.NotificaitonRecord{}).Error
if err != nil {
logger.Errorf("Failed to clean notify record: %v", err)
}

}

// Run the cleanup task at 1:00 AM every day
func CleanNotifyRecord(ctx *ctx.Context, day int) {
c := cron.New()
if day < 1 {
day = 7
}

// Use a cron expression to schedule the run at 1:00 AM daily
_, err := c.AddFunc("0 1 * * *", func() {
cleanNotifyRecord(ctx, day)
})

if err != nil {
logger.Errorf("Failed to add clean notify record cron job: %v", err)
return
}

// Start the cron scheduler
c.Start()
}
@@ -561,7 +561,7 @@ CREATE TABLE alert_cur_event (
target_note varchar(191) not null default '' ,
first_trigger_time bigint,
trigger_time bigint not null,
trigger_value varchar(2048) not null,
trigger_value varchar(255) not null,
annotations text not null ,
rule_config text not null ,
tags varchar(1024) not null default '' ,
@@ -621,7 +621,7 @@ CREATE TABLE alert_his_event (
target_note varchar(191) not null default '' ,
first_trigger_time bigint,
trigger_time bigint not null,
trigger_value varchar(2048) not null,
trigger_value varchar(255) not null,
recover_time bigint not null default 0,
last_eval_time bigint not null default 0 ,
tags varchar(1024) not null default '' ,

@@ -1,6 +1,6 @@
set names utf8mb4;

-- drop database if exists n9e_v6;
drop database if exists n9e_v6;
create database n9e_v6;
use n9e_v6;

@@ -366,7 +366,6 @@ CREATE TABLE `target` (
`host_ip` varchar(15) default '' COMMENT 'IPv4 string',
`agent_version` varchar(255) default '' COMMENT 'agent version',
`engine_name` varchar(255) default '' COMMENT 'engine_name',
`os` VARCHAR(31) DEFAULT '' COMMENT 'os type',
`update_at` bigint not null default 0,
PRIMARY KEY (`id`),
UNIQUE KEY (`ident`),
@@ -453,7 +452,7 @@ CREATE TABLE `alert_cur_event` (
`target_note` varchar(191) not null default '' comment 'target note',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` text not null,
`trigger_value` varchar(255) not null,
`annotations` text not null comment 'annotations',
`rule_config` text not null comment 'annotations',
`tags` varchar(1024) not null default '' comment 'merge data_tags rule_tags, split by ,,',
@@ -493,7 +492,7 @@ CREATE TABLE `alert_his_event` (
`target_note` varchar(191) not null default '' comment 'target note',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` text not null,
`trigger_value` varchar(255) not null,
`recover_time` bigint not null default 0,
`last_eval_time` bigint not null default 0 comment 'for time filter',
`tags` varchar(1024) not null default '' comment 'merge data_tags rule_tags, split by ,,',
@@ -528,7 +527,6 @@ CREATE TABLE `builtin_components` (

CREATE TABLE `builtin_payloads` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '''unique identifier''',
`component_id` bigint(20) NOT NULL DEFAULT 0 COMMENT 'component_id',
`uuid` bigint(20) NOT NULL COMMENT '''uuid of payload''',
`type` varchar(191) NOT NULL COMMENT '''type of payload''',
`component` varchar(191) NOT NULL COMMENT '''component of payload''',
@@ -548,18 +546,6 @@ CREATE TABLE `builtin_payloads` (
KEY `idx_type` (`type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE notification_record (
`id` BIGINT PRIMARY KEY AUTO_INCREMENT,
`event_id` BIGINT NOT NULL,
`sub_id` BIGINT NOT NULL,
`channel` VARCHAR(255) NOT NULL,
`status` TINYINT NOT NULL DEFAULT 0,
`target` VARCHAR(1024) NOT NULL,
`details` VARCHAR(2048),
`created_at` BIGINT NOT NULL,
INDEX idx_evt (event_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `task_tpl`
(
`id` int unsigned NOT NULL AUTO_INCREMENT,

@@ -389,7 +389,7 @@ CREATE TABLE `alert_cur_event` (
`target_note` varchar(191) not null default '',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` varchar(2048) not null,
`trigger_value` varchar(255) not null,
`annotations` text not null,
`rule_config` text not null,
`tags` varchar(1024) not null default ''
@@ -427,7 +427,7 @@ CREATE TABLE `alert_his_event` (
`target_note` varchar(191) not null default '',
`first_trigger_time` bigint,
`trigger_time` bigint not null,
`trigger_value` varchar(2048) not null,
`trigger_value` varchar(255) not null,
`recover_time` bigint not null default 0,
`last_eval_time` bigint not null default 0,
`tags` varchar(1024) not null default '',
@@ -83,37 +83,4 @@ ALTER TABLE recording_rule ADD COLUMN cron_pattern VARCHAR(255) DEFAULT '' COMME

/* v7.0.0-beta.14 */
ALTER TABLE alert_cur_event ADD COLUMN original_tags TEXT COMMENT 'labels key=val,,k2=v2';
ALTER TABLE alert_his_event ADD COLUMN original_tags TEXT COMMENT 'labels key=val,,k2=v2';

/* v7.1.0 */
ALTER TABLE target ADD COLUMN os VARCHAR(31) DEFAULT '' COMMENT 'os type';

/* v7.2.0 */
CREATE TABLE notification_record (
`id` BIGINT PRIMARY KEY AUTO_INCREMENT,
`event_id` BIGINT NOT NULL,
`sub_id` BIGINT NOT NULL,
`channel` VARCHAR(255) NOT NULL,
`status` TINYINT NOT NULL DEFAULT 0,
`target` VARCHAR(1024) NOT NULL,
`details` VARCHAR(2048),
`created_at` BIGINT NOT NULL,
INDEX idx_evt (event_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

/* v7.3.0 2024-08-26 */
ALTER TABLE `target` ADD COLUMN `host_tags` TEXT COMMENT 'global labels set in conf file';

/* v7.3.4 2024-08-28 */
ALTER TABLE `builtin_payloads` ADD COLUMN `component_id` bigint(20) NOT NULL DEFAULT 0 COMMENT 'component_id';

/* v7.4.0 2024-09-20 */
CREATE TABLE `target_busi_group` (
`id` bigint NOT NULL AUTO_INCREMENT,
`target_ident` varchar(191) NOT NULL,
`group_id` bigint NOT NULL,
`update_at` bigint NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx_target_group` (`target_ident`,`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE alert_his_event ADD COLUMN original_tags TEXT COMMENT 'labels key=val,,k2=v2';
File diff suppressed because one or more lines are too long
1
go.mod
@@ -18,7 +18,6 @@ require (
github.com/golang/snappy v0.0.4
github.com/google/uuid v1.3.0
github.com/hashicorp/go-version v1.6.0
github.com/jinzhu/copier v0.4.0
github.com/json-iterator/go v1.1.12
github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7
github.com/mailru/easyjson v0.7.7

2
go.sum
@@ -160,8 +160,6 @@ github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0f
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.4/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=

@@ -192,7 +192,7 @@
"prom_ql": "",
"queries": [
{
"prom_ql": "elasticsearch_filesystem_data_available_bytes / elasticsearch_filesystem_data_size_in_bytes * 100 \u003c 10",
"prom_ql": "elasticsearch_filesystem_data_available_bytes / elasticsearch_filesystem_data_size_bytes * 100 \u003c 10",
"severity": 1
}
],
@@ -275,7 +275,7 @@
"prom_ql": "",
"queries": [
{
"prom_ql": "elasticsearch_filesystem_data_available_bytes / elasticsearch_filesystem_data_size_in_bytes * 100 \u003c 20",
"prom_ql": "elasticsearch_filesystem_data_available_bytes / elasticsearch_filesystem_data_size_bytes * 100 \u003c 20",
"severity": 2
}
],
@@ -1078,4 +1078,4 @@
"update_by": "",
"uuid": 1717556327360313000
}
]
]
@@ -4,6 +4,7 @@ ElasticSearch exposes its own monitoring metrics over HTTP JSON; by

For a small cluster, set `local=false` and scrape any one node in the cluster to obtain monitoring data for every node. For a large cluster, set `local=true` and deploy a scraper on each node so that each one collects metrics only from its local elasticsearch process.

For a detailed walkthrough of ElasticSearch monitoring, see this [article](https://time.geekbang.org/column/article/628847).

## Configuration example
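The small-vs-large-cluster advice above maps to a single field in the scraper's instance config. A minimal sketch of a Categraf `input.elasticsearch` instance; the exact file path and field names here are assumptions (they follow the Telegraf-style plugin config), so check the `elasticsearch.toml` shipped with your Categraf release:

```toml
# conf/input.elasticsearch/elasticsearch.toml (sketch, field names assumed)
[[instances]]
# scrape one node; with local = false this returns cluster-wide stats
servers = ["http://127.0.0.1:9200"]
# small cluster: local = false, one scraper sees all nodes
# large cluster: local = true, deploy a scraper per node
local = false
```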
@@ -1,6 +1,6 @@
# kafka plugin

Kafka's core metrics are all exposed via JMX. For JMX-exposed metrics, just collect them with jolokia or the jmx_exporter jar; this plugin is not needed for that.
Kafka's core metrics are all exposed via JMX; see this [article](https://time.geekbang.org/column/article/628498). For JMX-exposed metrics, just collect them with jolokia or the jmx_exporter jar; this plugin is not needed for that.

This plugin mainly collects consumer lag data, which cannot be obtained from the Kafka broker's JMX.

@@ -1,6 +1,6 @@
# Kubernetes

This plugin is deprecated. For the Kubernetes monitoring series, see these [articles](https://flashcat.cloud/categories/kubernetes%E7%9B%91%E6%8E%A7%E4%B8%93%E6%A0%8F/).
This plugin is deprecated. For the Kubernetes monitoring series, see these [articles](https://flashcat.cloud/categories/kubernetes%E7%9B%91%E6%8E%A7%E4%B8%93%E6%A0%8F/), or this [column](https://time.geekbang.org/column/article/630306).

However, the built-in alert rules and built-in dashboards under the Kubernetes category are still usable.
@@ -1425,7 +1425,7 @@
"rule_config": {
"queries": [
{
"prom_ql": "increase(kernel_vmstat_oom_kill[2m]) > 0",
"prom_ql": "kernel_vmstat_oom_kill != 0",
"severity": 2
}
]
@@ -2139,4 +2139,4 @@
"update_by": "",
"uuid": 1717556327737117000
}
]
]
@@ -2051,7 +2051,7 @@
"cate": "prometheus",
"value": "${prom}"
},
"definition": "label_values(node_uname_info, instance)",
"definition": "label_values(node_cpu_seconds_total, instance)",
"name": "node",
"selected": "$node",
"type": "query"

@@ -252,7 +252,7 @@
"cate": "prometheus",
"value": "${prom}"
},
"definition": "label_values(node_uname_info, instance)",
"definition": "label_values(node_cpu_seconds_total, instance)",
"name": "node",
"selected": "$node",
"type": "query"

@@ -259,11 +259,11 @@
"uuid": 1717556327796195000,
"collector": "Categraf",
"typ": "Linux",
"name": "1分钟内 OOM 次数统计",
"name": "OOM 次数统计",
"unit": "none",
"note": "取自 `/proc/vmstat`,需要较高版本的内核,没记错的话应该是 4.13 以上版本",
"lang": "zh_CN",
"expression": "increase(kernel_vmstat_oom_kill[1m])",
"expression": "kernel_vmstat_oom_kill",
"created_at": 0,
"created_by": "",
"updated_at": 0,
@@ -1334,4 +1334,4 @@
"updated_at": 0,
"updated_by": ""
}
]
]
251
integrations/Process/alerts/process_by_exporter.json
Normal file
@@ -0,0 +1,251 @@
[
{
"id": 0,
"group_id": 0,
"cate": "prometheus",
"datasource_ids": [
0
],
"cluster": "",
"name": "Process X high number of open files - exporter",
"note": "",
"prod": "metric",
"algorithm": "",
"algo_params": null,
"delay": 0,
"severity": 2,
"severities": [
2
],
"disabled": 1,
"prom_for_duration": 60,
"prom_ql": "",
"rule_config": {
"algo_params": null,
"inhibit": false,
"prom_ql": "",
"queries": [
{
"prom_ql": "avg by (instance) (namedprocess_namegroup_worst_fd_ratio{groupname=\"X\"}) * 100 \u003e 80",
"severity": 2
}
],
"severity": 0
},
"prom_eval_interval": 15,
"enable_stime": "00:00",
"enable_stimes": [
"00:00"
],
"enable_etime": "23:59",
"enable_etimes": [
"23:59"
],
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_days_of_weeks": [
[
"1",
"2",
"3",
"4",
"5",
"6",
"0"
]
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [],
"notify_groups_obj": null,
"notify_groups": null,
"notify_repeat_step": 60,
"notify_max_number": 0,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": [
"alertname=ProcessHighOpenFiles"
],
"annotations": null,
"extra_config": null,
"create_at": 0,
"create_by": "",
"update_at": 0,
"update_by": "",
"uuid": 1717556328248638000
},
{
"id": 0,
"group_id": 0,
"cate": "prometheus",
"datasource_ids": [
0
],
"cluster": "",
"name": "Process X is down - exporter",
"note": "",
"prod": "metric",
"algorithm": "",
"algo_params": null,
"delay": 0,
"severity": 1,
"severities": [
1
],
"disabled": 1,
"prom_for_duration": 0,
"prom_ql": "",
"rule_config": {
"algo_params": null,
"inhibit": false,
"prom_ql": "",
"queries": [
{
"prom_ql": "sum by (instance) (namedprocess_namegroup_num_procs{groupname=\"X\"}) == 0",
"severity": 1
}
],
"severity": 0
},
"prom_eval_interval": 15,
"enable_stime": "00:00",
"enable_stimes": [
"00:00"
],
"enable_etime": "23:59",
"enable_etimes": [
"23:59"
],
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_days_of_weeks": [
[
"1",
"2",
"3",
"4",
"5",
"6",
"0"
]
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [],
"notify_groups_obj": null,
"notify_groups": null,
"notify_repeat_step": 60,
"notify_max_number": 0,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": [
"alertname=ProcessNotRunning"
],
"annotations": null,
"extra_config": null,
"create_at": 0,
"create_by": "",
"update_at": 0,
"update_by": "",
"uuid": 1717556328249307000
},
{
"id": 0,
"group_id": 0,
"cate": "prometheus",
"datasource_ids": [
0
],
"cluster": "",
"name": "Process X is restarted - exporter",
"note": "",
"prod": "metric",
"algorithm": "",
"algo_params": null,
"delay": 0,
"severity": 3,
"severities": [
3
],
"disabled": 1,
"prom_for_duration": 0,
"prom_ql": "",
"rule_config": {
"algo_params": null,
"inhibit": false,
"prom_ql": "",
"queries": [
{
"prom_ql": "namedprocess_namegroup_oldest_start_time_seconds{groupname=\"X\"} \u003e time() - 60 ",
"severity": 3
}
],
"severity": 0
},
"prom_eval_interval": 15,
"enable_stime": "00:00",
"enable_stimes": [
"00:00"
],
"enable_etime": "23:59",
"enable_etimes": [
"23:59"
],
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_days_of_weeks": [
[
"1",
"2",
"3",
"4",
"5",
"6",
"0"
]
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [],
"notify_groups_obj": null,
"notify_groups": null,
"notify_repeat_step": 60,
"notify_max_number": 0,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": [
"alertname=ProcessRestarted"
],
"annotations": null,
"extra_config": null,
"create_at": 0,
"create_by": "",
"update_at": 0,
"update_by": "",
"uuid": 1717556328249744000
}
]
172
integrations/Process/alerts/procstat_by_categraf.json
Normal file
@@ -0,0 +1,172 @@
[
{
"id": 0,
"group_id": 0,
"cate": "prometheus",
"datasource_ids": [
0
],
"cluster": "",
"name": "process handle limit is too low",
"note": "",
"prod": "metric",
"algorithm": "",
"algo_params": null,
"delay": 0,
"severity": 3,
"severities": [
3
],
"disabled": 1,
"prom_for_duration": 60,
"prom_ql": "",
"rule_config": {
"algo_params": null,
"inhibit": false,
"prom_ql": "",
"queries": [
{
"prom_ql": "procstat_rlimit_num_fds_soft \u003c 2048",
"severity": 3
}
],
"severity": 0
},
"prom_eval_interval": 15,
"enable_stime": "00:00",
"enable_stimes": [
"00:00"
],
"enable_etime": "23:59",
"enable_etimes": [
"23:59"
],
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_days_of_weeks": [
[
"1",
"2",
"3",
"4",
"5",
"6",
"0"
]
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [
"email",
"dingtalk",
"wecom"
],
"notify_groups_obj": null,
"notify_groups": null,
"notify_repeat_step": 60,
"notify_max_number": 0,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": [],
"annotations": null,
"extra_config": null,
"create_at": 0,
"create_by": "",
"update_at": 0,
"update_by": "",
"uuid": 1717556328251224000
},
{
"id": 0,
"group_id": 0,
"cate": "prometheus",
"datasource_ids": [
0
],
"cluster": "",
"name": "there is a process count of 0, indicating that a certain process may have crashed",
"note": "",
"prod": "metric",
"algorithm": "",
"algo_params": null,
"delay": 0,
"severity": 1,
"severities": [
1
],
"disabled": 1,
"prom_for_duration": 60,
"prom_ql": "",
"rule_config": {
"algo_params": null,
"inhibit": false,
"prom_ql": "",
"queries": [
{
"prom_ql": "procstat_lookup_count == 0",
"severity": 1
}
],
"severity": 0
},
"prom_eval_interval": 15,
"enable_stime": "00:00",
"enable_stimes": [
"00:00"
],
"enable_etime": "23:59",
"enable_etimes": [
"23:59"
],
"enable_days_of_week": [
"1",
"2",
"3",
"4",
"5",
"6",
"0"
],
"enable_days_of_weeks": [
[
"1",
"2",
"3",
"4",
"5",
"6",
"0"
]
],
"enable_in_bg": 0,
"notify_recovered": 1,
"notify_channels": [
"email",
"dingtalk",
"wecom"
],
"notify_groups_obj": null,
"notify_groups": null,
"notify_repeat_step": 60,
"notify_max_number": 0,
"recover_duration": 0,
"callbacks": [],
"runbook_url": "",
"append_tags": [],
"annotations": null,
"extra_config": null,
"create_at": 0,
"create_by": "",
"update_at": 0,
"update_by": "",
"uuid": 1717556328254260000
}
]
1158
integrations/Process/dashboards/process_by_exporter.json
Normal file
File diff suppressed because it is too large
BIN
integrations/Process/icon/process.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 2.2 KiB |
File diff suppressed because it is too large
@@ -2085,6 +2085,7 @@
],
"var": [
{
"defaultValue": 2,
"definition": "prometheus",
"label": "datasource",
"name": "datasource",

@@ -160,9 +160,8 @@ func (tc *TargetCacheType) syncTargets() error {
}

m := make(map[string]*models.Target)

metaMap := tc.GetHostMetas(lst)
if len(metaMap) > 0 {
if tc.ctx.IsCenter {
metaMap := tc.GetHostMetas(lst)
for i := 0; i < len(lst); i++ {
if meta, ok := metaMap[lst[i].Ident]; ok {
lst[i].FillMeta(meta)

@@ -67,7 +67,6 @@ type AlertCurEvent struct {
Claimant string `json:"claimant" gorm:"-"`
SubRuleId int64 `json:"sub_rule_id" gorm:"-"`
ExtraInfo []string `json:"extra_info" gorm:"-"`
Target *Target `json:"target" gorm:"-"`
}

func (e *AlertCurEvent) TableName() string {
@@ -306,12 +305,8 @@ func (e *AlertCurEvent) DB2FE() error {
e.CallbacksJSON = strings.Fields(e.Callbacks)
e.TagsJSON = strings.Split(e.Tags, ",,")
e.OriginalTagsJSON = strings.Split(e.OriginalTags, ",,")
if err := json.Unmarshal([]byte(e.Annotations), &e.AnnotationsJSON); err != nil {
return err
}
if err := json.Unmarshal([]byte(e.RuleConfig), &e.RuleConfigJson); err != nil {
return err
}
json.Unmarshal([]byte(e.Annotations), &e.AnnotationsJSON)
json.Unmarshal([]byte(e.RuleConfig), &e.RuleConfigJson)
return nil
}

@@ -342,46 +337,13 @@ func (e *AlertCurEvent) DB2Mem() {
continue
}

arr := strings.SplitN(pair, "=", 2)
arr := strings.Split(pair, "=")
if len(arr) != 2 {
continue
}

e.TagsMap[arr[0]] = arr[1]
}

// Handle legacy database rows where FirstTriggerTime is 0
if e.FirstTriggerTime == 0 {
e.FirstTriggerTime = e.TriggerTime
}
}

func FillRuleConfigTplName(ctx *ctx.Context, ruleConfig string) (interface{}, bool) {
var config RuleConfig
err := json.Unmarshal([]byte(ruleConfig), &config)
if err != nil {
logger.Warningf("failed to unmarshal rule config: %v", err)
return nil, false
}

if len(config.TaskTpls) == 0 {
return nil, false
}

for i := 0; i < len(config.TaskTpls); i++ {
tpl, err := TaskTplGetById(ctx, config.TaskTpls[i].TplId)
if err != nil {
logger.Warningf("failed to get task tpl by id:%d, %v", config.TaskTpls[i].TplId, err)
return nil, false
}

if tpl == nil {
logger.Warningf("task tpl not found by id:%d", config.TaskTpls[i].TplId)
return nil, false
}
config.TaskTpls[i].TplName = tpl.Title
}
return config, true
}

// for webui
@@ -419,8 +381,7 @@ func (e *AlertCurEvent) FillNotifyGroups(ctx *ctx.Context, cache map[int64]*User
|
||||
return nil
|
```diff
}

func AlertCurEventTotal(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64,
    severity int, dsIds []int64, cates []string, ruleId int64, query string) (int64, error) {
func AlertCurEventTotal(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64, severity int, dsIds []int64, cates []string, query string) (int64, error) {
    session := DB(ctx).Model(&AlertCurEvent{})
    if stime != 0 && etime != 0 {
        session = session.Where("trigger_time between ? and ?", stime, etime)
@@ -445,10 +406,6 @@ func AlertCurEventTotal(ctx *ctx.Context, prods []string, bgids []int64, stime,
        session = session.Where("cate in ?", cates)
    }

    if ruleId > 0 {
        session = session.Where("rule_id = ?", ruleId)
    }

    if query != "" {
        arr := strings.Fields(query)
        for i := 0; i < len(arr); i++ {
@@ -460,9 +417,7 @@ func AlertCurEventTotal(ctx *ctx.Context, prods []string, bgids []int64, stime,
    return Count(session)
}

func AlertCurEventsGet(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64,
    severity int, dsIds []int64, cates []string, ruleId int64, query string, limit, offset int) (
    []AlertCurEvent, error) {
func AlertCurEventGets(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64, severity int, dsIds []int64, cates []string, query string, limit, offset int) ([]AlertCurEvent, error) {
    session := DB(ctx).Model(&AlertCurEvent{})
    if stime != 0 && etime != 0 {
        session = session.Where("trigger_time between ? and ?", stime, etime)
@@ -487,10 +442,6 @@ func AlertCurEventsGet(ctx *ctx.Context, prods []string, bgids []int64, stime, e
        session = session.Where("cate in ?", cates)
    }

    if ruleId > 0 {
        session = session.Where("rule_id = ?", ruleId)
    }

    if query != "" {
        arr := strings.Fields(query)
        for i := 0; i < len(arr); i++ {
@@ -511,26 +462,6 @@ func AlertCurEventsGet(ctx *ctx.Context, prods []string, bgids []int64, stime, e
    return lst, err
}

func AlertCurEventCountByRuleId(ctx *ctx.Context, rids []int64, stime, etime int64) map[int64]int64 {
    type Row struct {
        RuleId int64
        Cnt    int64
    }
    var rows []Row
    err := DB(ctx).Model(&AlertCurEvent{}).Select("rule_id, count(*) as cnt").
        Where("trigger_time between ? and ?", stime, etime).Group("rule_id").Find(&rows).Error
    if err != nil {
        logger.Errorf("Failed to count group by rule_id: %v", err)
        return nil
    }

    curEventTotalByRid := make(map[int64]int64, len(rids))
    for _, r := range rows {
        curEventTotalByRid[r.RuleId] = r.Cnt
    }
    return curEventTotalByRid
}
```
```diff
func AlertCurEventDel(ctx *ctx.Context, ids []int64) error {
    if len(ids) == 0 {
        return nil
@@ -540,11 +471,6 @@ func AlertCurEventDel(ctx *ctx.Context, ids []int64) error {
}

func AlertCurEventDelByHash(ctx *ctx.Context, hash string) error {
    if !ctx.IsCenter {
        _, err := poster.GetByUrls[string](ctx, "/v1/n9e/alert-cur-events-del-by-hash?hash="+hash)
        return err
    }

    return DB(ctx).Where("hash = ?", hash).Delete(&AlertCurEvent{}).Error
}

@@ -663,8 +589,8 @@ func AlertCurEventGetMap(ctx *ctx.Context, cluster string) (map[int64]map[string
    return ret, nil
}

func (e *AlertCurEvent) UpdateFieldsMap(ctx *ctx.Context, fields map[string]interface{}) error {
    return DB(ctx).Model(e).Updates(fields).Error
func (m *AlertCurEvent) UpdateFieldsMap(ctx *ctx.Context, fields map[string]interface{}) error {
    return DB(ctx).Model(m).Updates(fields).Error
}

func AlertCurEventUpgradeToV6(ctx *ctx.Context, dsm map[string]Datasource) error {
```
```diff
@@ -117,13 +117,7 @@ func (e *AlertHisEvent) FillNotifyGroups(ctx *ctx.Context, cache map[int64]*User
    return nil
}

// func (e *AlertHisEvent) FillTaskTplName(ctx *ctx.Context, cache map[int64]*UserGroup) error {

// }

func AlertHisEventTotal(
    ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64, severity int,
    recovered int, dsIds []int64, cates []string, ruleId int64, query string) (int64, error) {
func AlertHisEventTotal(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64, severity int, recovered int, dsIds []int64, cates []string, query string) (int64, error) {
    session := DB(ctx).Model(&AlertHisEvent{}).Where("last_eval_time between ? and ?", stime, etime)

    if len(prods) > 0 {
@@ -150,10 +144,6 @@ func AlertHisEventTotal(
        session = session.Where("cate in ?", cates)
    }

    if ruleId > 0 {
        session = session.Where("rule_id = ?", ruleId)
    }

    if query != "" {
        arr := strings.Fields(query)
        for i := 0; i < len(arr); i++ {
@@ -165,9 +155,7 @@ func AlertHisEventTotal(
    return Count(session)
}

func AlertHisEventGets(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64,
    severity int, recovered int, dsIds []int64, cates []string, ruleId int64, query string,
    limit, offset int) ([]AlertHisEvent, error) {
func AlertHisEventGets(ctx *ctx.Context, prods []string, bgids []int64, stime, etime int64, severity int, recovered int, dsIds []int64, cates []string, query string, limit, offset int) ([]AlertHisEvent, error) {
    session := DB(ctx).Where("last_eval_time between ? and ?", stime, etime)

    if len(prods) != 0 {
@@ -194,10 +182,6 @@ func AlertHisEventGets(ctx *ctx.Context, prods []string, bgids []int64, stime, e
        session = session.Where("cate in ?", cates)
    }

    if ruleId > 0 {
        session = session.Where("rule_id = ?", ruleId)
    }

    if query != "" {
        arr := strings.Fields(query)
        for i := 0; i < len(arr); i++ {
```
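Both the current-event and history-event listing functions above tokenize the free-text `query` with `strings.Fields` and turn each token into a fuzzy filter condition. A minimal standalone sketch of that tokenize-then-wildcard pattern (the `buildLikeArgs` helper name is an illustration, not the repo's exact code, and the full per-token logic is elided by the hunk):

```go
package main

import (
	"fmt"
	"strings"
)

// buildLikeArgs is a hypothetical helper mirroring the strings.Fields loop
// in AlertCurEventTotal: split a free-text query into whitespace-separated
// tokens and wrap each one in SQL LIKE wildcards.
func buildLikeArgs(query string) []string {
	args := []string{}
	for _, token := range strings.Fields(query) {
		args = append(args, "%"+token+"%")
	}
	return args
}

func main() {
	// extra whitespace between tokens is ignored by strings.Fields
	fmt.Println(buildLikeArgs("cpu   host01"))
}
```

Each resulting pattern would then be bound to a `... like ?` condition on the GORM session, one per token.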
```diff
@@ -11,7 +11,6 @@ import (
    "github.com/ccfos/nightingale/v6/pkg/poster"
    "github.com/ccfos/nightingale/v6/pushgw/pconf"

    "github.com/jinzhu/copier"
    "github.com/pkg/errors"
    "github.com/toolkits/pkg/logger"
    "github.com/toolkits/pkg/str"
@@ -27,21 +26,6 @@ const (
    TDENGINE = "tdengine"
)

const (
    AlertRuleEnabled  = 0
    AlertRuleDisabled = 1

    AlertRuleEnableInGlobalBG = 0
    AlertRuleEnableInOneBG    = 1

    AlertRuleNotNotifyRecovered = 0
    AlertRuleNotifyRecovered    = 1

    AlertRuleNotifyRepeatStep60Min = 60

    AlertRuleRecoverDuration0Sec = 0
)

type AlertRule struct {
    Id      int64 `json:"id" gorm:"primaryKey"`
    GroupId int64 `json:"group_id"` // busi group id
@@ -98,25 +82,6 @@ type AlertRule struct {
    UpdateAt      int64  `json:"update_at"`
    UpdateBy      string `json:"update_by"`
    UUID          int64  `json:"uuid" gorm:"-"` // tpl identifier
    CurEventCount int64  `json:"cur_event_count" gorm:"-"`
}

type Tpl struct {
    TplId   int64    `json:"tpl_id"`
    TplName string   `json:"tpl_name"`
    Host    []string `json:"host"`
}

type RuleConfig struct {
    Version            string                 `json:"version,omitempty"`
    EventRelabelConfig []*pconf.RelabelConfig `json:"event_relabel_config,omitempty"`
    TaskTpls           []*Tpl                 `json:"task_tpls,omitempty"`
    Queries            interface{}            `json:"queries,omitempty"`
    Triggers           []Trigger              `json:"triggers,omitempty"`
    Inhibit            bool                   `json:"inhibit,omitempty"`
    PromQl             string                 `json:"prom_ql,omitempty"`
    Severity           int                    `json:"severity,omitempty"`
    AlgoParams         interface{}            `json:"algo_params,omitempty"`
}

type PromRuleConfig struct {
@@ -157,16 +122,6 @@ type Trigger struct {
    Mode     int    `json:"mode"`
    Exp      string `json:"exp"`
    Severity int    `json:"severity"`

    Type     string `json:"type,omitempty"`
    Duration int    `json:"duration,omitempty"`
    Percent  int    `json:"percent,omitempty"`
    Joins    []Join `json:"joins"`
}

type Join struct {
    JoinType string   `json:"join_type"`
    On       []string `json:"on"`
}

func GetHostsQuery(queries []HostQuery) []map[string]interface{} {
@@ -177,10 +132,9 @@ func GetHostsQuery(queries []HostQuery) []map[string]interface{} {
    case "group_ids":
        ids := ParseInt64(q.Values)
        if q.Op == "==" {
            m["target_busi_group.group_id in (?)"] = ids
            m["group_id in (?)"] = ids
        } else {
            m["target.ident not in (select target_ident "+
                "from target_busi_group where group_id in (?))"] = ids
            m["group_id not in (?)"] = ids
        }
    case "tags":
        lst := []string{}
@@ -194,14 +148,12 @@ func GetHostsQuery(queries []HostQuery) []map[string]interface{} {
        blank := " "
        for _, tag := range lst {
            m["tags like ?"+blank] = "%" + tag + "%"
            m["host_tags like ?"+blank] = "%" + tag + "%"
            blank += " "
        }
    } else {
        blank := " "
        for _, tag := range lst {
            m["tags not like ?"+blank] = "%" + tag + "%"
            m["host_tags not like ?"+blank] = "%" + tag + "%"
            blank += " "
        }
    }
```
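A detail worth noting in the `GetHostsQuery` tags branch above: several conditions with the same SQL text (`host_tags like ?`) are stored in a single `map[string]interface{}`, so identical keys would collide. The code makes each key unique by appending a growing run of trailing spaces, which Go maps distinguish but SQL ignores. The trick in isolation:

```go
package main

import "fmt"

// likeConditions demonstrates the padding trick used in GetHostsQuery:
// identical SQL fragments become distinct map keys by appending a
// growing run of trailing spaces to each one.
func likeConditions(tags []string) map[string]interface{} {
	m := make(map[string]interface{})
	blank := " "
	for _, tag := range tags {
		m["host_tags like ?"+blank] = "%" + tag + "%"
		blank += " " // next key gets one more trailing space
	}
	return m
}

func main() {
	m := likeConditions([]string{"env=prod", "region=bj"})
	fmt.Println(len(m)) // one entry per tag; keys differ only by padding
}
```

Without the padding, later tags would silently overwrite earlier ones and only a single `like` condition would reach the query builder.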
```diff
@@ -470,19 +422,6 @@ func (ar *AlertRule) UpdateColumn(ctx *ctx.Context, column string, value interfa
        return DB(ctx).Model(ar).UpdateColumn("annotations", string(b)).Error
    }

    if column == "annotations" {
        newAnnotations := value.(map[string]interface{})
        ar.AnnotationsJSON = make(map[string]string)
        for k, v := range newAnnotations {
            ar.AnnotationsJSON[k] = v.(string)
        }
        b, err := json.Marshal(ar.AnnotationsJSON)
        if err != nil {
            return err
        }
        return DB(ctx).Model(ar).UpdateColumn("annotations", string(b)).Error
    }

    return DB(ctx).Model(ar).UpdateColumn(column, value).Error
}

@@ -740,25 +679,6 @@ func AlertRuleExists(ctx *ctx.Context, id, groupId int64, datasourceIds []int64,
    return false, nil
}

func GetAlertRuleIdsByTaskId(ctx *ctx.Context, taskId int64) ([]int64, error) {
    tpl := "%\"tpl_id\":" + fmt.Sprint(taskId) + "}%"
    cb := "{ibex}/" + fmt.Sprint(taskId) + "%"
    session := DB(ctx).Where("rule_config like ? or callbacks like ?", tpl, cb)

    var lst []AlertRule
    var ids []int64
    err := session.Find(&lst).Error
    if err != nil || len(lst) == 0 {
        return ids, err
    }

    for i := 0; i < len(lst); i++ {
        ids = append(ids, lst[i].Id)
    }

    return ids, nil
}

func AlertRuleGets(ctx *ctx.Context, groupId int64) ([]AlertRule, error) {
    session := DB(ctx).Where("group_id=?", groupId).Order("name")

@@ -1091,19 +1011,3 @@ func GetTargetsOfHostAlertRule(ctx *ctx.Context, engineName string) (map[string]

    return m, nil
}

func (ar *AlertRule) Copy(ctx *ctx.Context) (*AlertRule, error) {
    newAr := &AlertRule{}
    err := copier.Copy(newAr, ar)
    if err != nil {
        logger.Errorf("copy alert rule failed, %v", err)
    }
    return newAr, err
}

func InsertAlertRule(ctx *ctx.Context, ars []*AlertRule) error {
    if len(ars) == 0 {
        return nil
    }
    return DB(ctx).Create(ars).Error
}

@@ -286,53 +286,3 @@ func BoardSetHide(ctx *ctx.Context, ids []int64) error {
        return nil
    })
}
```
```diff
func BoardGetsByBids(ctx *ctx.Context, bids []int64) ([]map[string]interface{}, error) {
    var boards []Board
    err := DB(ctx).Where("id IN ?", bids).Find(&boards).Error
    if err != nil {
        return nil, err
    }

    // collect all unique group_ids
    groupIDs := make([]int64, 0)
    groupIDSet := make(map[int64]struct{})
    for _, board := range boards {
        if _, exists := groupIDSet[board.GroupId]; !exists {
            groupIDs = append(groupIDs, board.GroupId)
            groupIDSet[board.GroupId] = struct{}{}
        }
    }

    // fetch all needed BusiGroups in a single query
    var busiGroups []BusiGroup
    err = DB(ctx).Where("id IN ?", groupIDs).Find(&busiGroups).Error
    if err != nil {
        return nil, err
    }

    // build a group_id -> BusiGroup map
    groupMap := make(map[int64]BusiGroup)
    for _, bg := range busiGroups {
        groupMap[bg.Id] = bg
    }

    result := make([]map[string]interface{}, 0, len(boards))
    for _, board := range boards {
        busiGroup, exists := groupMap[board.GroupId]
        if !exists {
            // skip boards whose BusiGroup cannot be found
            continue
        }

        item := map[string]interface{}{
            "busi_group_name": busiGroup.Name,
            "busi_group_id":   busiGroup.Id,
            "board_id":        board.Id,
            "board_name":      board.Name,
        }
        result = append(result, item)
    }

    return result, nil
}
```
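`BoardGetsByBids` deliberately deduplicates the group ids before querying, so one `id IN (?)` lookup replaces a query per board. The dedup step on its own:

```go
package main

import "fmt"

// uniqueGroupIDs mirrors the dedup step in BoardGetsByBids: a set tracks
// ids already seen while a slice preserves first-seen order, so the
// result can feed a single `id IN (?)` batch query.
func uniqueGroupIDs(boardGroupIDs []int64) []int64 {
	seen := make(map[int64]struct{})
	uniq := make([]int64, 0, len(boardGroupIDs))
	for _, id := range boardGroupIDs {
		if _, ok := seen[id]; ok {
			continue
		}
		seen[id] = struct{}{}
		uniq = append(uniq, id)
	}
	return uniq
}

func main() {
	fmt.Println(uniqueGroupIDs([]int64{3, 1, 3, 2, 1}))
}
```

Keeping both the map and the slice is what preserves ordering; ranging over a map alone would return the ids in random order.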
```diff
@@ -11,7 +11,7 @@ import (
// BuiltinComponent represents a builtin component along with its metadata.
type BuiltinComponent struct {
    ID        uint64 `json:"id" gorm:"primaryKey;type:bigint;autoIncrement;comment:'unique identifier'"`
    Ident     string `json:"ident" gorm:"type:varchar(191);not null;uniqueIndex:idx_ident,sort:asc;comment:'identifier of component'"`
    Ident     string `json:"ident" gorm:"type:varchar(191);not null;index:idx_ident,sort:asc;comment:'identifier of component'"`
    Logo      string `json:"logo" gorm:"type:varchar(191);not null;comment:'logo of component'"`
    Readme    string `json:"readme" gorm:"type:text;not null;comment:'readme of component'"`
    CreatedAt int64  `json:"created_at" gorm:"type:bigint;not null;default:0;comment:'create time'"`

@@ -2,7 +2,6 @@ package models

import (
    "errors"
    "fmt"
    "strings"
    "time"

@@ -218,10 +217,3 @@ func BuiltinMetricCollectors(ctx *ctx.Context, lang, typ, query string) ([]strin
    err := session.Select("distinct(collector)").Pluck("collector", &collectors).Error
    return collectors, err
}

func BuiltinMetricBatchUpdateColumn(ctx *ctx.Context, col, old, new, updatedBy string) error {
    if old == new {
        return nil
    }
    return DB(ctx).Model(&BuiltinMetric{}).Where(fmt.Sprintf("%s = ?", col), old).Updates(map[string]interface{}{col: new, "updated_by": updatedBy}).Error
}
```
```diff
@@ -9,19 +9,18 @@ import (
)

type BuiltinPayload struct {
    ID          int64  `json:"id" gorm:"primaryKey;type:bigint;autoIncrement;comment:'unique identifier'"`
    Type        string `json:"type" gorm:"type:varchar(191);not null;index:idx_type,sort:asc;comment:'type of payload'"` // Alert Dashboard Collet
    Component   string `json:"component" gorm:"type:varchar(191);not null;index:idx_component,sort:asc;comment:'component of payload'"` //
    ComponentID uint64 `json:"component_id" gorm:"type:bigint;index:idx_component,sort:asc;comment:'component_id of payload'"` // ComponentID which the payload belongs to
    Cate        string `json:"cate" gorm:"type:varchar(191);not null;comment:'category of payload'"` // categraf_v1 telegraf_v1
    Name        string `json:"name" gorm:"type:varchar(191);not null;index:idx_buildinpayload_name,sort:asc;comment:'name of payload'"` //
    Tags        string `json:"tags" gorm:"type:varchar(191);not null;default:'';comment:'tags of payload'"` // {"host":"
    Content     string `json:"content" gorm:"type:longtext;not null;comment:'content of payload'"`
    UUID        int64  `json:"uuid" gorm:"type:bigint;not null;index:idx_uuid;comment:'uuid of payload'"`
    CreatedAt   int64  `json:"created_at" gorm:"type:bigint;not null;default:0;comment:'create time'"`
    CreatedBy   string `json:"created_by" gorm:"type:varchar(191);not null;default:'';comment:'creator'"`
    UpdatedAt   int64  `json:"updated_at" gorm:"type:bigint;not null;default:0;comment:'update time'"`
    UpdatedBy   string `json:"updated_by" gorm:"type:varchar(191);not null;default:'';comment:'updater'"`
    ID        int64  `json:"id" gorm:"primaryKey;type:bigint;autoIncrement;comment:'unique identifier'"`
    Type      string `json:"type" gorm:"type:varchar(191);not null;index:idx_type,sort:asc;comment:'type of payload'"` // Alert Dashboard Collet
    Component string `json:"component" gorm:"type:varchar(191);not null;index:idx_component,sort:asc;comment:'component of payload'"` // Host MySQL Redis
    Cate      string `json:"cate" gorm:"type:varchar(191);not null;comment:'category of payload'"` // categraf_v1 telegraf_v1
    Name      string `json:"name" gorm:"type:varchar(191);not null;index:idx_buildinpayload_name,sort:asc;comment:'name of payload'"` //
    Tags      string `json:"tags" gorm:"type:varchar(191);not null;default:'';comment:'tags of payload'"` // {"host":"
    Content   string `json:"content" gorm:"type:longtext;not null;comment:'content of payload'"`
    UUID      int64  `json:"uuid" gorm:"type:bigint;not null;index:idx_uuid;comment:'uuid of payload'"`
    CreatedAt int64  `json:"created_at" gorm:"type:bigint;not null;default:0;comment:'create time'"`
    CreatedBy string `json:"created_by" gorm:"type:varchar(191);not null;default:'';comment:'creator'"`
    UpdatedAt int64  `json:"updated_at" gorm:"type:bigint;not null;default:0;comment:'update time'"`
    UpdatedBy string `json:"updated_by" gorm:"type:varchar(191);not null;default:'';comment:'updater'"`
}

func (bp *BuiltinPayload) TableName() string {
@@ -34,8 +33,9 @@ func (bp *BuiltinPayload) Verify() error {
        return errors.New("type is blank")
    }

    if bp.ComponentID == 0 {
        return errors.New("component_id is blank")
    bp.Component = strings.TrimSpace(bp.Component)
    if bp.Component == "" {
        return errors.New("component is blank")
    }

    if bp.Name == "" {
@@ -47,7 +47,7 @@ func (bp *BuiltinPayload) Verify() error {

func BuiltinPayloadExists(ctx *ctx.Context, bp *BuiltinPayload) (bool, error) {
    var count int64
    err := DB(ctx).Model(bp).Where("type = ? AND component_id = ? AND name = ? AND cate = ?", bp.Type, bp.ComponentID, bp.Name, bp.Cate).Count(&count).Error
    err := DB(ctx).Model(bp).Where("type = ? AND component = ? AND name = ? AND cate = ?", bp.Type, bp.Component, bp.Name, bp.Cate).Count(&count).Error
    if err != nil {
        return false, err
    }
@@ -78,7 +78,7 @@ func (bp *BuiltinPayload) Update(ctx *ctx.Context, req BuiltinPayload) error {
        return err
    }

    if bp.Type != req.Type || bp.ComponentID != req.ComponentID || bp.Name != req.Name {
    if bp.Type != req.Type || bp.Component != req.Component || bp.Name != req.Name {
        exists, err := BuiltinPayloadExists(ctx, &req)
        if err != nil {
            return err
@@ -117,13 +117,13 @@ func BuiltinPayloadGet(ctx *ctx.Context, where string, args ...interface{}) (*Bu
    return &bp, nil
}

func BuiltinPayloadGets(ctx *ctx.Context, componentId uint64, typ, cate, query string) ([]*BuiltinPayload, error) {
func BuiltinPayloadGets(ctx *ctx.Context, typ, component, cate, query string) ([]*BuiltinPayload, error) {
    session := DB(ctx)
    if typ != "" {
        session = session.Where("type = ?", typ)
    }
    if componentId != 0 {
        session = session.Where("component_id = ?", componentId)
    if component != "" {
        session = session.Where("component = ?", component)
    }

    if cate != "" {
@@ -144,9 +144,9 @@ func BuiltinPayloadGets(ctx *ctx.Context, componentId uint64, typ, cate, query s
}

// get cates of BuiltinPayload by type and component, return []string
func BuiltinPayloadCates(ctx *ctx.Context, typ string, componentID uint64) ([]string, error) {
func BuiltinPayloadCates(ctx *ctx.Context, typ, component string) ([]string, error) {
    var cates []string
    err := DB(ctx).Model(new(BuiltinPayload)).Where("type = ? and component_id = ?", typ, componentID).Distinct("cate").Pluck("cate", &cates).Error
    err := DB(ctx).Model(new(BuiltinPayload)).Where("type = ? and component = ?", typ, component).Distinct("cate").Pluck("cate", &cates).Error
    return cates, err
}

@@ -163,37 +163,3 @@ func BuiltinPayloadComponents(ctx *ctx.Context, typ, cate string) (string, error
    }
    return components[0], nil
}

// InitBuiltinPayloads keeps the old and new BuiltinPayload formats compatible
func InitBuiltinPayloads(ctx *ctx.Context) error {
    var lst []*BuiltinPayload

    components, err := BuiltinComponentGets(ctx, "")
    if err != nil {
        return err
    }

    identToId := make(map[string]uint64)
    for _, component := range components {
        identToId[component.Ident] = component.ID
    }

    err = DB(ctx).Where("component_id = 0 or component_id is NULL").Find(&lst).Error
    if err != nil {
        return err
    }

    for _, bp := range lst {
        componentId, ok := identToId[bp.Component]
        if !ok {
            continue
        }
        bp.ComponentID = componentId
    }

    if len(lst) == 0 {
        return nil
    }

    return DB(ctx).Save(&lst).Error
}
```
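The `InitBuiltinPayloads` migration above backfills the new `ComponentID` foreign key on legacy rows by first indexing components by ident, then joining in memory. The same index-then-backfill pattern in miniature (the `component`/`payload` structs here are trimmed stand-ins for the real models):

```go
package main

import "fmt"

// Trimmed stand-ins for BuiltinComponent and BuiltinPayload.
type component struct {
	ID    uint64
	Ident string
}

type payload struct {
	Component   string
	ComponentID uint64
}

// backfill mirrors InitBuiltinPayloads: build an ident -> id index once,
// then fill in the missing foreign key on each legacy payload; payloads
// referencing an unknown component are left untouched.
func backfill(components []component, payloads []payload) []payload {
	identToID := make(map[string]uint64, len(components))
	for _, c := range components {
		identToID[c.Ident] = c.ID
	}
	for i := range payloads {
		if id, ok := identToID[payloads[i].Component]; ok {
			payloads[i].ComponentID = id
		}
	}
	return payloads
}

func main() {
	got := backfill(
		[]component{{ID: 7, Ident: "mysql"}},
		[]payload{{Component: "mysql"}, {Component: "unknown"}},
	)
	fmt.Println(got[0].ComponentID, got[1].ComponentID)
}
```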
```diff
@@ -134,7 +134,7 @@ func (bg *BusiGroup) Del(ctx *ctx.Context) error {
        return errors.New("Some alert subscribes still in the BusiGroup")
    }

    has, err = Exists(DB(ctx).Model(&TargetBusiGroup{}).Where("group_id=?", bg.Id))
    has, err = Exists(DB(ctx).Model(&Target{}).Where("group_id=?", bg.Id))
    if err != nil {
        return err
    }

@@ -17,7 +17,6 @@ type HostMeta struct {
    EngineName   string                 `json:"engine_name"`
    GlobalLabels map[string]string      `json:"global_labels"`
    ExtendInfo   map[string]interface{} `json:"extend_info"`
    Config       interface{}            `json:"config"`
}

type HostUpdteTime struct {

@@ -58,8 +58,7 @@ func MigrateTables(db *gorm.DB) error {
    dts := []interface{}{&RecordingRule{}, &AlertRule{}, &AlertSubscribe{}, &AlertMute{},
        &TaskRecord{}, &ChartShare{}, &Target{}, &Configs{}, &Datasource{}, &NotifyTpl{},
        &Board{}, &BoardBusigroup{}, &Users{}, &SsoConfig{}, &models.BuiltinMetric{},
        &models.MetricFilter{}, &models.BuiltinComponent{}, &models.NotificaitonRecord{},
        &models.TargetBusiGroup{}}
        &models.MetricFilter{}, &models.BuiltinComponent{}}

    if !columnHasIndex(db, &AlertHisEvent{}, "original_tags") ||
        !columnHasIndex(db, &AlertCurEvent{}, "original_tags") {
@@ -89,7 +88,7 @@ func MigrateTables(db *gorm.DB) error {
    for _, dt := range dts {
        err := db.AutoMigrate(dt)
        if err != nil {
            logger.Errorf("failed to migrate table:%v %v", dt, err)
            logger.Errorf("failed to migrate table: %v", err)
        }
    }

@@ -228,15 +227,13 @@ type AlertCurEvent struct {
}

type Target struct {
    HostIp       string   `gorm:"column:host_ip;type:varchar(15);default:'';comment:IPv4 string;index:idx_host_ip"`
    AgentVersion string   `gorm:"column:agent_version;type:varchar(255);default:'';comment:agent version;index:idx_agent_version"`
    EngineName   string   `gorm:"column:engine_name;type:varchar(255);default:'';comment:engine name;index:idx_engine_name"`
    OS           string   `gorm:"column:os;type:varchar(31);default:'';comment:os type;index:idx_os"`
    HostTags     []string `gorm:"column:host_tags;type:text;comment:global labels set in conf file;serializer:json"`
    HostIp       string `gorm:"column:host_ip;varchar(15);default:'';comment:IPv4 string;index:idx_host_ip"`
    AgentVersion string `gorm:"column:agent_version;varchar(255);default:'';comment:agent version;index:idx_agent_version"`
    EngineName   string `gorm:"column:engine_name;varchar(255);default:'';comment:engine name;index:idx_engine_name"`
}

type Datasource struct {
    IsDefault bool `gorm:"column:is_default;type:boolean;comment:is default datasource"`
    IsDefault bool `gorm:"column:is_default;type:boolean;not null;comment:is default datasource"`
}

type Configs struct {
@@ -277,6 +274,5 @@ type SsoConfig struct {
}

type BuiltinPayloads struct {
    UUID        int64 `json:"uuid" gorm:"type:bigint;not null;index:idx_uuid;comment:'uuid of payload'"`
    ComponentID int64 `json:"component_id" gorm:"type:bigint;index:idx_component,sort:asc;not null;default:0;comment:'component_id of payload'"`
    UUID        int64 `json:"uuid" gorm:"type:bigint;not null;index:idx_uuid;comment:'uuid of payload'"`
}
```
```diff
@@ -1,91 +0,0 @@
package models

import (
    "github.com/ccfos/nightingale/v6/pkg/ctx"
    "github.com/toolkits/pkg/logger"
    "github.com/toolkits/pkg/str"
)

const (
    NotiStatusSuccess = iota + 1
    NotiStatusFailure
)

type NotificaitonRecord struct {
    Id        int64  `json:"id" gorm:"primaryKey;type:bigint;autoIncrement"`
    EventId   int64  `json:"event_id" gorm:"type:bigint;not null;index:idx_evt,priority:1;comment:event history id"`
    SubId     int64  `json:"sub_id" gorm:"type:bigint;comment:subscribed rule id"`
    Channel   string `json:"channel" gorm:"type:varchar(255);not null;comment:notification channel name"`
    Status    int    `json:"status" gorm:"type:int;comment:notification status"` // 1 = success, 2 = failure
    Target    string `json:"target" gorm:"type:varchar(1024);not null;comment:notification target"`
    Details   string `json:"details" gorm:"type:varchar(2048);default:'';comment:notification other info"`
    CreatedAt int64  `json:"created_at" gorm:"type:bigint;not null;comment:create time"`
}

func NewNotificationRecord(event *AlertCurEvent, channel, target string) *NotificaitonRecord {
    return &NotificaitonRecord{
        EventId: event.Id,
        SubId:   event.SubRuleId,
        Channel: channel,
        Status:  NotiStatusSuccess,
        Target:  target,
    }
}

func (n *NotificaitonRecord) SetStatus(status int) {
    if n == nil {
        return
    }
    n.Status = status
}

func (n *NotificaitonRecord) SetDetails(details string) {
    if n == nil {
        return
    }
    n.Details = details
}

func (n *NotificaitonRecord) TableName() string {
    return "notification_record"
}

func (n *NotificaitonRecord) Add(ctx *ctx.Context) error {
    return Insert(ctx, n)
}

func (n *NotificaitonRecord) GetGroupIds(ctx *ctx.Context) (groupIds []int64) {
    if n == nil {
        return
    }

    if n.SubId > 0 {
        if sub, err := AlertSubscribeGet(ctx, "id=?", n.SubId); err != nil {
            logger.Errorf("AlertSubscribeGet failed, err: %v", err)
        } else {
            groupIds = str.IdsInt64(sub.UserGroupIds, " ")
        }
        return
    }

    if event, err := AlertHisEventGetById(ctx, n.EventId); err != nil {
        logger.Errorf("AlertHisEventGetById failed, err: %v", err)
    } else {
        groupIds = str.IdsInt64(event.NotifyGroups, " ")
    }
    return
}

func NotificaitonRecordsGetByEventId(ctx *ctx.Context, eid int64) ([]*NotificaitonRecord, error) {
    return NotificaitonRecordsGet(ctx, "event_id=?", eid)
}

func NotificaitonRecordsGet(ctx *ctx.Context, where string, args ...interface{}) ([]*NotificaitonRecord, error) {
    var lst []*NotificaitonRecord
    err := DB(ctx).Where(where, args...).Find(&lst).Error
    if err != nil {
        return nil, err
    }

    return lst, nil
}
```
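`GetGroupIds` above relies on toolkits' `str.IdsInt64` to parse a space-joined id string (`sub.UserGroupIds`, `event.NotifyGroups`). A stdlib-only sketch of what that call is doing, inferred from its call sites here rather than from the library's actual source:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// idsInt64 approximates toolkits' str.IdsInt64 as used in GetGroupIds:
// split a separator-joined id string and keep the tokens that parse as
// int64, skipping empty or malformed ones. (Behavior inferred from call
// sites; the real helper may differ in edge cases.)
func idsInt64(s, sep string) []int64 {
	var ids []int64
	for _, tok := range strings.Split(s, sep) {
		tok = strings.TrimSpace(tok)
		if tok == "" {
			continue
		}
		if id, err := strconv.ParseInt(tok, 10, 64); err == nil {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(idsInt64("1 2 30", " "))
}
```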
```diff
@@ -1,95 +0,0 @@
package models

import (
    "fmt"
    "time"

    "github.com/toolkits/pkg/logger"
)

type PromRule struct {
    Alert       string            `yaml:"alert,omitempty" json:"alert,omitempty"`             // alerting rule name
    Record      string            `yaml:"record,omitempty" json:"record,omitempty"`           // recording rule name
    Expr        string            `yaml:"expr,omitempty" json:"expr,omitempty"`               // PromQL expression
    For         string            `yaml:"for,omitempty" json:"for,omitempty"`                 // pending duration before the alert fires
    Annotations map[string]string `yaml:"annotations,omitempty" json:"annotations,omitempty"` // rule annotations
    Labels      map[string]string `yaml:"labels,omitempty" json:"labels,omitempty"`           // rule labels
}

type PromRuleGroup struct {
    Name     string     `yaml:"name"`
    Rules    []PromRule `yaml:"rules"`
    Interval string     `yaml:"interval,omitempty"`
}

func convertInterval(interval string) int {
    duration, err := time.ParseDuration(interval)
    if err != nil {
        logger.Errorf("Error parsing interval `%s`, err: %v", interval, err)
        return 0
    }
    return int(duration.Seconds())
}

func ConvertAlert(rule PromRule, interval string, datasouceIds []int64, disabled int) AlertRule {
    annotations := rule.Annotations
    appendTags := []string{}
    severity := 2

    if len(rule.Labels) > 0 {
        for k, v := range rule.Labels {
            if k != "severity" {
                appendTags = append(appendTags, fmt.Sprintf("%s=%s", k, v))
            } else {
                switch v {
                case "critical":
                    severity = 1
                case "warning":
                    severity = 2
                case "info":
                    severity = 3
                }
            }
        }
    }

    return AlertRule{
        Name:              rule.Alert,
        Severity:          severity,
        DatasourceIdsJson: datasouceIds,
        Disabled:          disabled,
        PromForDuration:   convertInterval(rule.For),
        PromQl:            rule.Expr,
        PromEvalInterval:  convertInterval(interval),
        EnableStimeJSON:   "00:00",
        EnableEtimeJSON:   "23:59",
        EnableDaysOfWeekJSON: []string{
            "1", "2", "3", "4", "5", "6", "0",
        },
        EnableInBG:       AlertRuleEnableInGlobalBG,
        NotifyRecovered:  AlertRuleNotifyRecovered,
        NotifyRepeatStep: AlertRuleNotifyRepeatStep60Min,
        RecoverDuration:  AlertRuleRecoverDuration0Sec,
        AnnotationsJSON:  annotations,
        AppendTagsJSON:   appendTags,
    }
}

func DealPromGroup(promRule []PromRuleGroup, dataSourceIds []int64, disabled int) []AlertRule {
    var alertRules []AlertRule

    for _, group := range promRule {
        interval := group.Interval
        if interval == "" {
            interval = "15s"
        }
        for _, rule := range group.Rules {
            if rule.Alert != "" {
                alertRules = append(alertRules,
                    ConvertAlert(rule, interval, dataSourceIds, disabled))
            }
        }
    }

    return alertRules
}
```
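The two pure helpers in the file above are easy to exercise in isolation: interval strings are parsed with `time.ParseDuration` and truncated to whole seconds, and a Prometheus `severity` label is mapped onto Nightingale's 1/2/3 levels, defaulting to 2 as `ConvertAlert` does. A standalone restatement:

```go
package main

import (
	"fmt"
	"time"
)

// intervalSeconds restates convertInterval: parse a Prometheus-style
// duration ("30s", "1.5m", "1h") and return whole seconds, falling back
// to 0 on parse errors (the original logs the error as well).
func intervalSeconds(interval string) int {
	d, err := time.ParseDuration(interval)
	if err != nil {
		return 0
	}
	return int(d.Seconds())
}

// severityLevel restates ConvertAlert's severity switch: critical -> 1,
// info -> 3, and everything else (including "warning") keeps the
// default level 2.
func severityLevel(label string) int {
	switch label {
	case "critical":
		return 1
	case "info":
		return 3
	default:
		return 2
	}
}

func main() {
	fmt.Println(intervalSeconds("1.5m"), severityLevel("critical"))
}
```

These correspond to the expectations encoded in `TestConvertAlert` below in the diff: `for: 1.5m` becomes 90 seconds and `severity: critical` becomes level 1.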
@@ -1,84 +0,0 @@
|
||||
package models_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ccfos/nightingale/v6/models"
|
||||
"gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
func TestConvertAlert(t *testing.T) {
|
||||
jobMissing := []models.PromRule{}
|
||||
err := yaml.Unmarshal([]byte(` - alert: PrometheusJobMissing
|
||||
expr: absent(up{job="prometheus"})
|
||||
for: 1m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: Prometheus job missing (instance {{ $labels.instance }})
|
||||
description: "A Prometheus job has disappeared\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"`), &jobMissing)
|
||||
	if err != nil {
		t.Errorf("Failed to Unmarshal, err: %s", err)
	}
	t.Logf("jobMissing: %+v", jobMissing[0])
	convJobMissing := models.ConvertAlert(jobMissing[0], "30s", []int64{1}, 0)
	if convJobMissing.PromEvalInterval != 30 {
		t.Errorf("PromEvalInterval is expected to be 30, but got %d",
			convJobMissing.PromEvalInterval)
	}
	if convJobMissing.PromForDuration != 60 {
		t.Errorf("PromForDuration is expected to be 60, but got %d",
			convJobMissing.PromForDuration)
	}
	if convJobMissing.Severity != 2 {
		t.Errorf("Severity is expected to be 2, but got %d", convJobMissing.Severity)
	}

	ruleEvaluationSlow := []models.PromRule{}
	yaml.Unmarshal([]byte(` - alert: PrometheusRuleEvaluationSlow
    expr: prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds
    for: 180s
    labels:
      severity: info
    annotations:
      summary: Prometheus rule evaluation slow (instance {{ $labels.instance }})
      description: "Prometheus rule evaluation took more time than the scheduled interval. It indicates a slower storage backend access or too complex query.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
`), &ruleEvaluationSlow)
	t.Logf("ruleEvaluationSlow: %+v", ruleEvaluationSlow[0])
	convRuleEvaluationSlow := models.ConvertAlert(ruleEvaluationSlow[0], "1m", []int64{1}, 0)
	if convRuleEvaluationSlow.PromEvalInterval != 60 {
		t.Errorf("PromEvalInterval is expected to be 60, but got %d",
			convRuleEvaluationSlow.PromEvalInterval)
	}
	if convRuleEvaluationSlow.PromForDuration != 180 {
		t.Errorf("PromForDuration is expected to be 180, but got %d",
			convRuleEvaluationSlow.PromForDuration)
	}
	if convRuleEvaluationSlow.Severity != 3 {
		t.Errorf("Severity is expected to be 3, but got %d", convRuleEvaluationSlow.Severity)
	}

	targetMissing := []models.PromRule{}
	yaml.Unmarshal([]byte(` - alert: PrometheusTargetMissing
    expr: up == 0
    for: 1.5m
    labels:
      severity: critical
    annotations:
      summary: Prometheus target missing (instance {{ $labels.instance }})
      description: "A Prometheus target has disappeared. An exporter might be crashed.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
`), &targetMissing)
	t.Logf("targetMissing: %+v", targetMissing[0])
	convTargetMissing := models.ConvertAlert(targetMissing[0], "1h", []int64{1}, 0)
	if convTargetMissing.PromEvalInterval != 3600 {
		t.Errorf("PromEvalInterval is expected to be 3600, but got %d",
			convTargetMissing.PromEvalInterval)
	}
	if convTargetMissing.PromForDuration != 90 {
		t.Errorf("PromForDuration is expected to be 90, but got %d",
			convTargetMissing.PromForDuration)
	}
	if convTargetMissing.Severity != 1 {
		t.Errorf("Severity is expected to be 1, but got %d", convTargetMissing.Severity)
	}
}
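The expectations in the test above all come down to normalizing Prometheus-style duration strings ("30s", "1.5m", "1h") to whole seconds. A minimal sketch of that conversion, using the standard library's `time.ParseDuration` as a stand-in for whatever parser `ConvertAlert` actually uses (an assumption — Prometheus also accepts units like "1d" that `time.ParseDuration` rejects):

```go
package main

import (
	"fmt"
	"time"
)

// toSeconds converts a duration string such as "30s", "1.5m" or "1h"
// into whole seconds, matching the values the test asserts on.
func toSeconds(s string) (int64, error) {
	d, err := time.ParseDuration(s)
	if err != nil {
		return 0, err
	}
	return int64(d.Seconds()), nil
}

func main() {
	for _, s := range []string{"30s", "1.5m", "1h"} {
		sec, err := toSeconds(s)
		if err != nil {
			panic(err)
		}
		fmt.Println(s, "=", sec) // 30s = 30, 1.5m = 90, 1h = 3600
	}
}
```

This is why `for: 1.5m` is expected to yield `PromForDuration == 90` and an eval interval of "1h" yields 3600.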
214 models/target.go
@@ -1,36 +1,31 @@
package models

import (
	"fmt"
	"sort"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/poster"
	"golang.org/x/exp/slices"

	"github.com/pkg/errors"
	"github.com/toolkits/pkg/container/set"

	"gorm.io/gorm"
)

type Target struct {
	Id           int64             `json:"id" gorm:"primaryKey"`
	GroupId      int64             `json:"group_id"`
	GroupObjs    []*BusiGroup      `json:"group_objs" gorm:"-"`
	GroupObj     *BusiGroup        `json:"group_obj" gorm:"-"`
	Ident        string            `json:"ident"`
	Note         string            `json:"note"`
	Tags         string            `json:"-"` // user tags
	Tags         string            `json:"-"`
	TagsJSON     []string          `json:"tags" gorm:"-"`
	TagsMap      map[string]string `json:"tags_maps" gorm:"-"` // internal use, append tags to series
	UpdateAt     int64             `json:"update_at"`
	HostIp       string            `json:"host_ip"` // ipv4, does not need range select
	AgentVersion string            `json:"agent_version"`
	EngineName   string            `json:"engine_name"`
	OS           string            `json:"os" gorm:"column:os"`
	HostTags     []string          `json:"host_tags" gorm:"serializer:json"`

	UnixTime int64 `json:"unixtime" gorm:"-"`
	Offset   int64 `json:"offset" gorm:"-"`
@@ -38,9 +33,9 @@ type Target struct {
	MemUtil    float64 `json:"mem_util" gorm:"-"`
	CpuNum     int     `json:"cpu_num" gorm:"-"`
	CpuUtil    float64 `json:"cpu_util" gorm:"-"`
	OS         string  `json:"os" gorm:"-"`
	Arch       string  `json:"arch" gorm:"-"`
	RemoteAddr string  `json:"remote_addr" gorm:"-"`
	GroupIds   []int64 `json:"group_ids" gorm:"-"`
}

func (t *Target) TableName() string {
@@ -48,60 +43,26 @@ func (t *Target) TableName() string {
}

func (t *Target) FillGroup(ctx *ctx.Context, cache map[int64]*BusiGroup) error {
	var err error
	if len(t.GroupIds) == 0 {
		t.GroupIds, err = TargetGroupIdsGetByIdent(ctx, t.Ident)
		if err != nil {
			return errors.WithMessage(err, "failed to get target gids")
		}
	t.GroupObjs = make([]*BusiGroup, 0, len(t.GroupIds))
	if t.GroupId <= 0 {
		return nil
	}

	for _, gid := range t.GroupIds {
		bg, has := cache[gid]
		if has && bg != nil {
			t.GroupObjs = append(t.GroupObjs, bg)
			continue
		}

		bg, err := BusiGroupGetById(ctx, gid)
		if err != nil {
			return errors.WithMessage(err, "failed to get busi group")
		}

		if bg == nil {
			continue
		}

		t.GroupObjs = append(t.GroupObjs, bg)
		cache[gid] = bg
	bg, has := cache[t.GroupId]
	if has {
		t.GroupObj = bg
		return nil
	}

	bg, err := BusiGroupGetById(ctx, t.GroupId)
	if err != nil {
		return errors.WithMessage(err, "failed to get busi group")
	}

	t.GroupObj = bg
	cache[t.GroupId] = bg
	return nil
}

func (t *Target) MatchGroupId(gid ...int64) bool {
	for _, tgId := range t.GroupIds {
		for _, id := range gid {
			if tgId == id {
				return true
			}
		}
	}
	return false
}

func (t *Target) AfterFind(tx *gorm.DB) (err error) {
	delta := time.Now().Unix() - t.UpdateAt
	if delta < 60 {
		t.TargetUp = 2
	} else if delta < 180 {
		t.TargetUp = 1
	}
	t.FillTagsMap()
	return
}

func TargetStatistics(ctx *ctx.Context) (*Statistics, error) {
	if !ctx.IsCenter {
		s, err := poster.GetByUrls[*Statistics](ctx, "/v1/n9e/statistic?name=target")
@@ -128,17 +89,8 @@ type BuildTargetWhereOption func(session *gorm.DB) *gorm.DB

func BuildTargetWhereWithBgids(bgids []int64) BuildTargetWhereOption {
	return func(session *gorm.DB) *gorm.DB {
		if len(bgids) == 1 && bgids[0] == 0 {
			session = session.Joins("left join target_busi_group on target.ident = " +
				"target_busi_group.target_ident").Where("target_busi_group.target_ident is null")
		} else if len(bgids) > 0 {
			if slices.Contains(bgids, 0) {
				session = session.Joins("left join target_busi_group on target.ident = target_busi_group.target_ident").
					Where("target_busi_group.target_ident is null OR target_busi_group.group_id in (?)", bgids)
			} else {
				session = session.Joins("join target_busi_group on target.ident = "+
					"target_busi_group.target_ident").Where("target_busi_group.group_id in (?)", bgids)
			}
		if len(bgids) > 0 {
			session = session.Where("group_id in (?)", bgids)
		}
		return session
	}
@@ -153,22 +105,13 @@ func BuildTargetWhereWithDsIds(dsIds []int64) BuildTargetWhereOption {
	}
}

func BuildTargetWhereWithHosts(hosts []string) BuildTargetWhereOption {
	return func(session *gorm.DB) *gorm.DB {
		if len(hosts) > 0 {
			session = session.Where("ident in (?) or host_ip in (?)", hosts, hosts)
		}
		return session
	}
}

func BuildTargetWhereWithQuery(query string) BuildTargetWhereOption {
	return func(session *gorm.DB) *gorm.DB {
		if query != "" {
			arr := strings.Fields(query)
			for i := 0; i < len(arr); i++ {
				q := "%" + arr[i] + "%"
				session = session.Where("ident like ? or host_ip like ? or note like ? or tags like ? or host_tags like ? or os like ?", q, q, q, q, q, q)
				session = session.Where("ident like ? or note like ? or tags like ?", q, q, q)
			}
		}
		return session
@@ -178,18 +121,18 @@ func BuildTargetWhereWithQuery(query string) BuildTargetWhereOption {
func BuildTargetWhereWithDowntime(downtime int64) BuildTargetWhereOption {
	return func(session *gorm.DB) *gorm.DB {
		if downtime > 0 {
			session = session.Where("target.update_at < ?", time.Now().Unix()-downtime)
			session = session.Where("update_at < ?", time.Now().Unix()-downtime)
		}
		return session
	}
}

func buildTargetWhere(ctx *ctx.Context, options ...BuildTargetWhereOption) *gorm.DB {
	sub := DB(ctx).Model(&Target{}).Distinct("target.ident")
	session := DB(ctx).Model(&Target{})
	for _, opt := range options {
		sub = opt(sub)
		session = opt(session)
	}
	return DB(ctx).Model(&Target{}).Where("ident in (?)", sub)
	return session
}

func TargetTotal(ctx *ctx.Context, options ...BuildTargetWhereOption) (int64, error) {
@@ -247,18 +190,15 @@ func MissTargetCountByFilter(ctx *ctx.Context, query []map[string]interface{}, t
}

func TargetFilterQueryBuild(ctx *ctx.Context, query []map[string]interface{}, limit, offset int) *gorm.DB {
	sub := DB(ctx).Model(&Target{}).Distinct("target.ident").Joins("left join " +
		"target_busi_group on target.ident = target_busi_group.target_ident")
	session := DB(ctx).Model(&Target{})
	for _, q := range query {
		tx := DB(ctx).Model(&Target{})
		for k, v := range q {
			tx = tx.Or(k, v)
		}
		sub = sub.Where(tx)
		session = session.Where(tx)
	}

	session := DB(ctx).Model(&Target{}).Where("ident in (?)", sub)

	if limit > 0 {
		session = session.Limit(limit).Offset(offset)
	}
@@ -274,20 +214,9 @@ func TargetGetsAll(ctx *ctx.Context) ([]*Target, error) {

	var lst []*Target
	err := DB(ctx).Model(&Target{}).Find(&lst).Error
	if err != nil {
		return lst, err
	}

	tgs, err := TargetBusiGroupsGetAll(ctx)
	if err != nil {
		return lst, err
	}

	for i := 0; i < len(lst); i++ {
		lst[i].FillTagsMap()
		lst[i].GroupIds = tgs[lst[i].Ident]
	}

	return lst, err
}

@@ -391,15 +320,15 @@ func TargetsGetIdentsByIdentsAndHostIps(ctx *ctx.Context, idents, hostIps []stri
	return inexistence, identSet.ToSlice(), nil
}

func TargetGetTags(ctx *ctx.Context, idents []string, ignoreHostTag bool) ([]string, error) {
func TargetGetTags(ctx *ctx.Context, idents []string) ([]string, error) {
	session := DB(ctx).Model(new(Target))

	var arr []*Target
	var arr []string
	if len(idents) > 0 {
		session = session.Where("ident in ?", idents)
	}

	err := session.Select("tags", "host_tags").Find(&arr).Error
	err := session.Select("distinct(tags) as tags").Pluck("tags", &arr).Error
	if err != nil {
		return nil, err
	}
@@ -411,16 +340,10 @@ func TargetGetTags(ctx *ctx.Context, idents []string, ignoreHostTag bool) ([]str

	set := make(map[string]struct{})
	for i := 0; i < cnt; i++ {
		tags := strings.Fields(arr[i].Tags)
		tags := strings.Fields(arr[i])
		for j := 0; j < len(tags); j++ {
			set[tags[j]] = struct{}{}
		}

		if !ignoreHostTag {
			for _, ht := range arr[i].HostTags {
				set[ht] = struct{}{}
			}
		}
	}

	cnt = len(set)
@@ -465,8 +388,7 @@ func (t *Target) FillTagsMap() {
	t.TagsJSON = strings.Fields(t.Tags)
	t.TagsMap = make(map[string]string)
	m := make(map[string]string)
	allTags := append(t.TagsJSON, t.HostTags...)
	for _, item := range allTags {
	for _, item := range t.TagsJSON {
		arr := strings.Split(item, "=")
		if len(arr) != 2 {
			continue
@@ -481,16 +403,6 @@ func (t *Target) GetTagsMap() map[string]string {
	tagsJSON := strings.Fields(t.Tags)
	m := make(map[string]string)
	for _, item := range tagsJSON {
		if arr := strings.Split(item, "="); len(arr) == 2 {
			m[arr[0]] = arr[1]
		}
	}
	return m
}

func (t *Target) GetHostTagsMap() map[string]string {
	m := make(map[string]string)
	for _, item := range t.HostTags {
		arr := strings.Split(item, "=")
		if len(arr) != 2 {
			continue
@@ -506,6 +418,7 @@ func (t *Target) FillMeta(meta *HostMeta) {
	t.CpuNum = meta.CpuNum
	t.UnixTime = meta.UnixTime
	t.Offset = meta.Offset
	t.OS = meta.OS
	t.Arch = meta.Arch
	t.RemoteAddr = meta.RemoteAddr
}
@@ -545,68 +458,3 @@ func IdentsFilter(ctx *ctx.Context, idents []string, where string, args ...inter
func (m *Target) UpdateFieldsMap(ctx *ctx.Context, fields map[string]interface{}) error {
	return DB(ctx).Model(m).Updates(fields).Error
}

func MigrateBg(ctx *ctx.Context, bgLabelKey string) {
	// 1. Check whether the migration has already been completed
	var cnt int64
	if err := DB(ctx).Model(&TargetBusiGroup{}).Count(&cnt).Error; err != nil {
		fmt.Println("Failed to count target_busi_group, err:", err)
		return
	}
	if cnt > 0 {
		fmt.Println("Migration has been completed.")
		return
	}

	DoMigrateBg(ctx, bgLabelKey)
}

func DoMigrateBg(ctx *ctx.Context, bgLabelKey string) error {
	// 2. Fetch all targets
	targets, err := TargetGetsAll(ctx)
	if err != nil {
		fmt.Println("Failed to get target, err:", err)
		return err
	}

	// 3. Fetch all busi groups
	bgs, err := BusiGroupGetAll(ctx)
	if err != nil {
		fmt.Println("Failed to get bg, err:", err)
		return err
	}

	bgById := make(map[int64]*BusiGroup, len(bgs))
	for _, bg := range bgs {
		bgById[bg.Id] = bg
	}

	// 4. If a busi group carries a label, store it in the tags of its targets
	for _, t := range targets {
		if t.GroupId == 0 {
			continue
		}
		err := DB(ctx).Transaction(func(tx *gorm.DB) error {
			// 4.1 Migrate group_id into the relation table
			if err := TargetBindBgids(ctx, []string{t.Ident}, []int64{t.GroupId}); err != nil {
				return err
			}
			if err := TargetUpdateBgid(ctx, []string{t.Ident}, 0, false); err != nil {
				return err
			}

			// 4.2 Check whether this host needs a new tag
			if bg, ok := bgById[t.GroupId]; !ok || bg.LabelEnable == 0 ||
				strings.Contains(t.Tags, bgLabelKey+"=") {
				return nil
			} else {
				return t.AddTags(ctx, []string{bgLabelKey + "=" + bg.LabelValue})
			}
		})
		if err != nil {
			fmt.Println("Failed to migrate bg, err:", err)
			continue
		}
	}
	return nil
}
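The tag handling in `FillTagsMap`/`GetTagsMap` above follows one simple convention: tags are stored as a space-separated string of `key=value` items, and only well-formed pairs make it into the map. A self-contained sketch of that parsing (the helper name `tagsToMap` is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// tagsToMap splits a space-separated "key=value" tag string into a map,
// silently skipping malformed items, mirroring the logic in GetTagsMap.
func tagsToMap(tags string) map[string]string {
	m := make(map[string]string)
	for _, item := range strings.Fields(tags) {
		arr := strings.Split(item, "=")
		if len(arr) != 2 {
			continue // skip items like "novalue" or "a=b=c"
		}
		m[arr[0]] = arr[1]
	}
	return m
}

func main() {
	fmt.Println(tagsToMap("env=prod region=bj novalue"))
}
```

Note that an item with more than one `=` is dropped entirely rather than split on the first `=`, which matches the `len(arr) != 2` check in the source.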
@@ -1,158 +0,0 @@
package models

import (
	"time"

	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

type TargetBusiGroup struct {
	Id          int64  `json:"id" gorm:"primaryKey;type:bigint;autoIncrement"`
	TargetIdent string `json:"target_ident" gorm:"type:varchar(191);not null;index:idx_target_group,unique,priority:1"`
	GroupId     int64  `json:"group_id" gorm:"type:bigint;not null;index:idx_target_group,unique,priority:2"`
	UpdateAt    int64  `json:"update_at" gorm:"type:bigint;not null"`
}

func (t *TargetBusiGroup) TableName() string {
	return "target_busi_group"
}

func TargetBusiGroupsGetAll(ctx *ctx.Context) (map[string][]int64, error) {
	var lst []*TargetBusiGroup
	err := DB(ctx).Find(&lst).Error
	if err != nil {
		return nil, err
	}
	tgs := make(map[string][]int64)
	for _, tg := range lst {
		tgs[tg.TargetIdent] = append(tgs[tg.TargetIdent], tg.GroupId)
	}
	return tgs, nil
}

func TargetGroupIdsGetByIdent(ctx *ctx.Context, ident string) ([]int64, error) {
	var lst []*TargetBusiGroup
	err := DB(ctx).Where("target_ident = ?", ident).Find(&lst).Error
	if err != nil {
		return nil, err
	}
	groupIds := make([]int64, 0, len(lst))
	for _, tg := range lst {
		groupIds = append(groupIds, tg.GroupId)
	}
	return groupIds, nil
}

func TargetGroupIdsGetByIdents(ctx *ctx.Context, idents []string) ([]int64, error) {
	var groupIds []int64
	err := DB(ctx).Model(&TargetBusiGroup{}).
		Where("target_ident IN ?", idents).
		Distinct().
		Pluck("group_id", &groupIds).
		Error
	if err != nil {
		return nil, err
	}

	return groupIds, nil
}

func TargetBindBgids(ctx *ctx.Context, idents []string, bgids []int64) error {
	lst := make([]TargetBusiGroup, 0, len(bgids)*len(idents))
	updateAt := time.Now().Unix()
	for _, bgid := range bgids {
		for _, ident := range idents {
			cur := TargetBusiGroup{
				TargetIdent: ident,
				GroupId:     bgid,
				UpdateAt:    updateAt,
			}
			lst = append(lst, cur)
		}
	}

	var cl clause.Expression = clause.Insert{Modifier: "ignore"}
	switch DB(ctx).Dialector.Name() {
	case "sqlite":
		cl = clause.Insert{Modifier: "or ignore"}
	case "postgres":
		cl = clause.OnConflict{DoNothing: true}
	}
	return DB(ctx).Clauses(cl).CreateInBatches(&lst, 10).Error
}
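The clause switch in `TargetBindBgids` exists because each SQL dialect spells "insert, ignoring duplicate keys" differently: MySQL uses `INSERT IGNORE`, SQLite `INSERT OR IGNORE`, and PostgreSQL `ON CONFLICT DO NOTHING`. A stdlib-only sketch of the idea, with illustrative SQL strings rather than anything GORM itself generates:

```go
package main

import "fmt"

// insertIgnoreSQL returns a dialect-appropriate "insert, ignore
// duplicates" statement for a three-column table. The exact strings
// are illustrative; GORM builds them via clause.Insert/clause.OnConflict.
func insertIgnoreSQL(dialect, table, cols string) string {
	switch dialect {
	case "sqlite":
		return fmt.Sprintf("INSERT OR IGNORE INTO %s (%s) VALUES (?, ?, ?)", table, cols)
	case "postgres":
		return fmt.Sprintf("INSERT INTO %s (%s) VALUES (?, ?, ?) ON CONFLICT DO NOTHING", table, cols)
	default: // mysql and compatible dialects
		return fmt.Sprintf("INSERT IGNORE INTO %s (%s) VALUES (?, ?, ?)", table, cols)
	}
}

func main() {
	for _, d := range []string{"mysql", "sqlite", "postgres"} {
		fmt.Println(d+":", insertIgnoreSQL(d, "target_busi_group", "target_ident, group_id, update_at"))
	}
}
```

Combined with the unique index on `(target_ident, group_id)`, this makes binding idempotent: re-binding an existing pair is a no-op instead of an error.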
func TargetUnbindBgids(ctx *ctx.Context, idents []string, bgids []int64) error {
	return DB(ctx).Where("target_ident in ? and group_id in ?",
		idents, bgids).Delete(&TargetBusiGroup{}).Error
}

func TargetDeleteBgids(ctx *ctx.Context, idents []string) error {
	return DB(ctx).Where("target_ident in ?", idents).Delete(&TargetBusiGroup{}).Error
}

func TargetOverrideBgids(ctx *ctx.Context, idents []string, bgids []int64) error {
	return DB(ctx).Transaction(func(tx *gorm.DB) error {
		// First delete the old associations
		if err := tx.Where("target_ident IN ?", idents).Delete(&TargetBusiGroup{}).Error; err != nil {
			return err
		}

		// Prepare the new association rows
		lst := make([]TargetBusiGroup, 0, len(bgids)*len(idents))
		updateAt := time.Now().Unix()
		for _, ident := range idents {
			for _, bgid := range bgids {
				cur := TargetBusiGroup{
					TargetIdent: ident,
					GroupId:     bgid,
					UpdateAt:    updateAt,
				}
				lst = append(lst, cur)
			}
		}

		if len(lst) == 0 {
			return nil
		}

		// Insert the new associations
		var cl clause.Expression = clause.Insert{Modifier: "ignore"}
		switch tx.Dialector.Name() {
		case "sqlite":
			cl = clause.Insert{Modifier: "or ignore"}
		case "postgres":
			cl = clause.OnConflict{DoNothing: true}
		}
		return tx.Clauses(cl).CreateInBatches(&lst, 10).Error
	})
}

func SeparateTargetIdents(ctx *ctx.Context, idents []string) (existing, nonExisting []string, err error) {
	existingMap := make(map[string]bool)

	// Query the idents that already exist and fill the slice directly
	err = DB(ctx).Model(&TargetBusiGroup{}).
		Where("target_ident IN ?", idents).
		Distinct().
		Pluck("target_ident", &existing).
		Error
	if err != nil {
		return nil, nil, err
	}

	for _, ident := range existing {
		existingMap[ident] = true
	}

	// Separate out the idents that do not exist
	for _, ident := range idents {
		if !existingMap[ident] {
			nonExisting = append(nonExisting, ident)
		}
	}

	return
}
@@ -304,22 +304,6 @@ func UserGetById(ctx *ctx.Context, id int64) (*User, error) {
	return UserGet(ctx, "id=?", id)
}

func UsersGetByGroupIds(ctx *ctx.Context, groupIds []int64) ([]User, error) {
	if len(groupIds) == 0 {
		return nil, nil
	}

	userIds, err := GroupsMemberIds(ctx, groupIds)
	if err != nil {
		return nil, err
	}
	users, err := UserGetsByIds(ctx, userIds)
	if err != nil {
		return nil, err
	}
	return users, nil
}

func InitRoot(ctx *ctx.Context) {
	user, err := UserGetByUsername(ctx, "root")
	if err != nil {
@@ -704,10 +688,7 @@ func (u *User) NopriIdents(ctx *ctx.Context, idents []string) ([]string, error)
	}

	var allowedIdents []string
	sub := DB(ctx).Model(&Target{}).Distinct("target.ident").
		Joins("join target_busi_group on target.ident = target_busi_group.target_ident").
		Where("target_busi_group.group_id in (?)", bgids)
	err = DB(ctx).Model(&Target{}).Where("ident in (?)", sub).Pluck("ident", &allowedIdents).Error
	err = DB(ctx).Model(&Target{}).Where("group_id in ?", bgids).Pluck("ident", &allowedIdents).Error
	if err != nil {
		return []string{}, err
	}
@@ -739,11 +720,7 @@ func (u *User) BusiGroups(ctx *ctx.Context, limit int, query string, all ...bool
		return lst, nil
	}

	t.GroupIds, err = TargetGroupIdsGetByIdent(ctx, t.Ident)
	if err != nil {
		return nil, err
	}
	err = DB(ctx).Order("name").Limit(limit).Where("id in ?", t.GroupIds).Find(&lst).Error
	err = DB(ctx).Order("name").Limit(limit).Where("id=?", t.GroupId).Find(&lst).Error
	}

	return lst, err
@@ -775,12 +752,8 @@ func (u *User) BusiGroups(ctx *ctx.Context, limit int, query string, all ...bool
		return lst, err
	}

	t.GroupIds, err = TargetGroupIdsGetByIdent(ctx, t.Ident)
	if err != nil {
		return nil, err
	}
	if t != nil && t.MatchGroupId(busiGroupIds...) {
		err = DB(ctx).Order("name").Limit(limit).Where("id in ?", t.GroupIds).Find(&lst).Error
	if t != nil && slice.ContainsInt64(busiGroupIds, t.GroupId) {
		err = DB(ctx).Order("name").Limit(limit).Where("id=?", t.GroupId).Find(&lst).Error
	}
}
@@ -38,12 +38,6 @@ func MemberIds(ctx *ctx.Context, groupId int64) ([]int64, error) {
	return ids, err
}

func GroupsMemberIds(ctx *ctx.Context, groupIds []int64) ([]int64, error) {
	var ids []int64
	err := DB(ctx).Model(&UserGroupMember{}).Where("group_id in ?", groupIds).Pluck("user_id", &ids).Error
	return ids, err
}

func UserGroupMemberCount(ctx *ctx.Context, where string, args ...interface{}) (int64, error) {
	return Count(DB(ctx).Model(&UserGroupMember{}).Where(where, args...))
}
@@ -2,7 +2,6 @@ package hash

import (
	"sort"
	"strings"

	prommodel "github.com/prometheus/common/model"
	"github.com/spaolacci/murmur3"
@@ -54,14 +53,3 @@ func GetTagHash(m prommodel.Metric) uint64 {

	return murmur3.Sum64([]byte(str))
}

func GetTargetTagHash(m prommodel.Metric, target []string) uint64 {
	builder := strings.Builder{}
	for _, k := range target {
		builder.WriteString("/")
		builder.WriteString(k)
		builder.WriteString("/")
		builder.WriteString(string(m[prommodel.LabelName(k)]))
	}
	return murmur3.Sum64([]byte(builder.String()))
}
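`GetTargetTagHash` identifies a label combination by joining the selected label names and values with "/" separators and hashing the result. A stdlib sketch of the same pattern, with `hash/fnv` standing in for the third-party murmur3 dependency (an assumption made purely to keep the example self-contained):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// targetTagHash builds "/k/v" segments for each requested label key and
// hashes the concatenation, so the same key/value combination always
// yields the same hash.
func targetTagHash(labels map[string]string, target []string) uint64 {
	h := fnv.New64a()
	for _, k := range target {
		h.Write([]byte("/" + k + "/" + labels[k]))
	}
	return h.Sum64()
}

func main() {
	a := targetTagHash(map[string]string{"job": "node", "instance": "10.0.0.1"}, []string{"job", "instance"})
	b := targetTagHash(map[string]string{"job": "node", "instance": "10.0.0.2"}, []string{"job", "instance"})
	fmt.Println(a != b) // different label values hash differently, barring collisions
}
```

Including both the key and the value in each segment avoids ambiguity between, say, `{a: "b/c"}` and `{a: "b", c: ""}` when the pieces are concatenated.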
@@ -104,56 +104,6 @@ var I18N = `
	"builtin metric already exists":"内置指标已存在",
	"AlertRule already exists":"告警规则已存在",
	"This functionality has not been enabled. Please contact the system administrator to activate it.":"此功能尚未启用。请联系系统管理员启用"
	},
	"ja_JP": {
		"Username or password invalid": "ユーザー名またはパスワードが無効です",
		"incorrect verification code": "認証コードが正しくありません",
		"roles empty": "役割を空にすることはできません",
		"Username already exists": "このユーザー名は既に存在します。別のユーザー名を使用してください",
		"failed to count user-groups": "データの検証に失敗しました。もう一度お試しください",
		"UserGroup already exists": "グループ名は既に存在します。別の名前を使用してください",
		"members empty": "メンバーを空にすることはできません",
		"At least one team have rw permission": "少なくとも1つのチームに読み書き権限が必要です",
		"Failed to create BusiGroup(%s)": "[%s]の作成に失敗しました。もう一度お試しください",
		"business group id invalid": "ビジネスグループIDが正しくありません",
		"idents empty": "監視対象を空にすることはできません",
		"invalid tag(%s)": "タグ[%s]が無効です",
		"invalid tagkey(%s): cannot contains . ": "タグキー[%s]にドット(.)を含めることはできません",
		"invalid tagkey(%s): cannot contains _ ": "タグキー[%s]にアンダースコア(_)を含めることはできません",
		"invalid tagkey(%s)": "タグキー[%s]が無効です",
		"duplicate tagkey(%s)": "タグキー(%s)が重複しています",
		"name is empty": "名前を空にすることはできません",
		"Ident duplicate": "ダッシュボードの一意の識別子が既に存在します",
		"No such dashboard": "ダッシュボードが存在しません",
		"Name has invalid characters": "名前に無効な文字が含まれています",
		"Name is blank": "名前を空白にすることはできません",
		"forbidden": "権限がありません",
		"builtin alerts is empty, file: %s": "ビルトインアラートテンプレートが空です %s",
		"input json is empty": "提出内容を空にすることはできません",
		"fields empty": "選択フィールドを空にすることはできません",
		"No such AlertRule": "そのようなアラートルールはありません",
		"GroupId(%d) invalid": "ビジネスグループIDが無効です",
		"No such recording rule": "そのような記録ルールはありません",
		"tags is blank": "タグを空白にすることはできません",
		"oops... etime(%d) <= btime(%d)": "開始時間は終了時間より大きくすることはできません",
		"group_id invalid": "ビジネスグループが無効です",
		"No such AlertMute": "そのようなアラートミュートルールはありません",
		"rule_id and tags are both blank": "アラートルールとタグを同時に空にすることはできません",
		"rule is blank": "ルールを空にすることはできません",
		"rule invalid": "ルールが無効です。正しいかどうか確認してください",
		"unsupported field: %s": "フィールド %s はサポートされていません",
		"arg(batch) should be nonnegative": "batchは負の数にできません",
		"arg(tolerance) should be nonnegative": "toleranceは負の数にできません",
		"arg(timeout) should be nonnegative": "timeoutは負の数にできません",
		"arg(timeout) longer than five days": "timeoutは5日を超えることはできません",
		"arg(title) is required": "titleは必須項目です",
		"created task.id is zero": "作成されたタスクIDがゼロです",
		"invalid ibex address: %s": "ibex %s のアドレスが無効です",
		"url path invalid": "URLパスが無効です",
		"no such server": "そのようなインスタンスはありません",
		"admin role can not be modified": "管理者ロールは変更できません",
		"builtin payload already exists": "ビルトインテンプレートは既に存在します",
		"This functionality has not been enabled. Please contact the system administrator to activate it.": "この機能はまだ有効になっていません。システム管理者に連絡して有効にしてください"
	}
}
`
@@ -208,7 +208,7 @@ func (s *SsoClient) exchangeUser(code string) (*CallbackOutput, error) {
	if err != nil {
		return nil, fmt.Errorf("failed to exchange token: %s", err)
	}
	userInfo, err := s.getUserInfo(s.Config.ClientID, s.UserInfoAddr, oauth2Token.AccessToken, s.TranTokenMethod)
	userInfo, err := s.getUserInfo(s.UserInfoAddr, oauth2Token.AccessToken, s.TranTokenMethod)
	if err != nil {
		logger.Errorf("failed to get user info: %s", err)
		return nil, fmt.Errorf("failed to get user info: %s", err)
@@ -223,10 +223,10 @@ func (s *SsoClient) exchangeUser(code string) (*CallbackOutput, error) {
	}, nil
}

func (s *SsoClient) getUserInfo(ClientId, UserInfoAddr, accessToken string, TranTokenMethod string) ([]byte, error) {
func (s *SsoClient) getUserInfo(UserInfoAddr, accessToken string, TranTokenMethod string) ([]byte, error) {
	var req *http.Request
	if TranTokenMethod == "formdata" {
		body := bytes.NewBuffer([]byte("access_token=" + accessToken + "&client_id=" + ClientId))
		body := bytes.NewBuffer([]byte("access_token=" + accessToken))
		r, err := http.NewRequest("POST", UserInfoAddr, body)
		if err != nil {
			return nil, err
@@ -234,8 +234,7 @@ func (s *SsoClient) getUserInfo(ClientId, UserInfoAddr, accessToken string, Tran
		r.Header.Add("Content-Type", "application/x-www-form-urlencoded")
		req = r
	} else if TranTokenMethod == "querystring" {

		r, err := http.NewRequest("GET", UserInfoAddr+"?access_token="+accessToken+"&client_id="+ClientId, nil)
		r, err := http.NewRequest("GET", UserInfoAddr+"?access_token="+accessToken, nil)
		if err != nil {
			return nil, err
		}
@@ -247,7 +246,6 @@ func (s *SsoClient) getUserInfo(ClientId, UserInfoAddr, accessToken string, Tran
			return nil, err
		}
		r.Header.Add("Authorization", "Bearer "+accessToken)
		r.Header.Add("client_id", ClientId)
		req = r
	}

@@ -264,7 +262,6 @@ func (s *SsoClient) getUserInfo(ClientId, UserInfoAddr, accessToken string, Tran

	body, err := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	logger.Debugf("getUserInfo req:%+v resp: %+v", req, string(body))
	return body, err
}
@@ -8,21 +8,12 @@ import (
	"github.com/toolkits/pkg/logger"
)

var defaultFuncMap = map[string]interface{}{
	"between": between,
}

func MathCalc(s string, data map[string]interface{}) (float64, error) {
	m := make(map[string]interface{})
func MathCalc(s string, data map[string]float64) (float64, error) {
	m := make(map[string]float64)
	for k, v := range data {
		m[cleanStr(k)] = v
	}

	for k, v := range defaultFuncMap {
		m[k] = v
	}

	// The expression engine requires consistent types; otherwise compilation fails here
	program, err := expr.Compile(cleanStr(s), expr.Env(m))
	if err != nil {
		return 0, err
@@ -41,14 +32,12 @@ func MathCalc(s string, data map[string]interface{}) (float64, error) {
	} else {
		return 0, nil
	}
	} else if result, ok := output.(int); ok {
		return float64(result), nil
	} else {
		return 0, nil
	}
}

func Calc(s string, data map[string]interface{}) bool {
func Calc(s string, data map[string]float64) bool {
	v, err := MathCalc(s, data)
	if err != nil {
		logger.Errorf("Calc exp:%s data:%v error: %v", s, data, err)
@@ -68,32 +57,3 @@ func replaceDollarSigns(s string) string {
	re := regexp.MustCompile(`\$([A-Z])\.`)
	return re.ReplaceAllString(s, "${1}_")
}

// Custom expr function.
// between reports whether target lies between arr[0] and arr[1].
func between(target float64, arr []interface{}) bool {
	if len(arr) != 2 {
		return false
	}

	var min, max float64
	switch arr[0].(type) {
	case float64:
		min = arr[0].(float64)
	case int:
		min = float64(arr[0].(int))
	default:
		return false
	}

	switch arr[1].(type) {
	case float64:
		max = arr[1].(float64)
	case int:
		max = float64(arr[1].(int))
	default:
		return false
	}

	return target >= min && target <= max
}
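The `replaceDollarSigns` helper above rewrites query references like `$A.err_count` into plain identifiers (`A_err_count`) before the expression is compiled, since `$` and `.` are not valid in expr-lang identifiers. The regex itself comes straight from the diff; here it is as a runnable stdlib demonstration:

```go
package main

import (
	"fmt"
	"regexp"
)

// replaceDollarSigns turns "$X." prefixes (X being a single uppercase
// letter) into "X_", producing identifiers the expression engine accepts.
func replaceDollarSigns(s string) string {
	re := regexp.MustCompile(`\$([A-Z])\.`)
	return re.ReplaceAllString(s, "${1}_")
}

func main() {
	fmt.Println(replaceDollarSigns("$A.err_count > 0 && $B.err_count <= 5"))
	// A_err_count > 0 && B_err_count <= 5
}
```

Note the pattern only matches a single uppercase letter between `$` and `.`, so multi-letter references like `$AB.x` would pass through unchanged.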
@@ -8,140 +8,140 @@ func TestMathCalc(t *testing.T) {
	tests := []struct {
		name     string
		expr     string
		data     map[string]interface{}
		data     map[string]float64
		expected float64
		wantErr  bool
	}{
		{
			name:     "Add and Subtract",
			expr:     "一个 + $.B - $.C",
			data:     map[string]interface{}{"一个": 1, "$.B": 2, "$.C": 3},
			data:     map[string]float64{"一个": 1, "$.B": 2, "$.C": 3},
			expected: 0,
			wantErr:  false,
		},
		{
			name:     "Multiply and Divide",
			expr:     "($A.err_count >0&& $A.err_count <=3)||($B.err_count>0 && $B.err_count <=5)",
			data:     map[string]interface{}{"$A.err_count": 4, "$B.err_count": 2},
			data:     map[string]float64{"$A.err_count": 4, "$B.err_count": 2},
			expected: 1,
			wantErr:  false,
		},
		{
			name:     "Subtract and Add",
			expr:     "$.C - $.D + $.A",
			data:     map[string]interface{}{"$.A": 5, "$.C": 3, "$.D": 2},
			data:     map[string]float64{"$.A": 5, "$.C": 3, "$.D": 2},
			expected: 6,
			wantErr:  false,
		},
		{
			name:     "Divide and Multiply",
			expr:     "$.B / $.C * $.D",
			data:     map[string]interface{}{"$.B": 6, "$.C": 2, "$.D": 3},
			data:     map[string]float64{"$.B": 6, "$.C": 2, "$.D": 3},
			expected: 9,
			wantErr:  false,
		},
		{
			name:     "Divide and Multiply",
			expr:     "$.B / $.C * $.D",
			data:     map[string]interface{}{"$.B": 6, "$.C": 2, "$.D": 3},
			data:     map[string]float64{"$.B": 6, "$.C": 2, "$.D": 3},
			expected: 9,
			wantErr:  false,
		},
		{
			name:     "Multiply and Add",
			expr:     "$.A * $.B + $.C",
			data:     map[string]interface{}{"$.A": 2, "$.B": 3, "$.C": 4},
			data:     map[string]float64{"$.A": 2, "$.B": 3, "$.C": 4},
			expected: 10,
			wantErr:  false,
		},
		{
			name:     "Subtract and Divide",
			expr:     "$.D - $.A / $.B",
			data:     map[string]interface{}{"$.D": 10, "$.A": 4, "$.B": 2},
			data:     map[string]float64{"$.D": 10, "$.A": 4, "$.B": 2},
			expected: 8,
			wantErr:  false,
		},
		{
			name:     "Add, Subtract and Subtract",
			expr:     "$.C + $.D - $.A",
			data:     map[string]interface{}{"$.C": 3, "$.D": 4, "$.A": 5},
			data:     map[string]float64{"$.C": 3, "$.D": 4, "$.A": 5},
			expected: 2,
			wantErr:  false,
		},
		{
			name:     "Multiply and Subtract",
			expr:     "$.B * $.A - $.D",
			data:     map[string]interface{}{"$.B": 2, "$.A": 3, "$.D": 4},
			data:     map[string]float64{"$.B": 2, "$.A": 3, "$.D": 4},
			expected: 2,
			wantErr:  false,
		},
		{
			name:     "Divide and Add",
			expr:     "$.A / $.B + $.C",
			data:     map[string]interface{}{"$.A": 4, "$.B": 2, "$.C": 3},
			data:     map[string]float64{"$.A": 4, "$.B": 2, "$.C": 3},
			expected: 5,
			wantErr:  false,
		},
		{
			name:     "Add and Multiply",
			expr:     "$.D + $.A * $.B",
			data:     map[string]interface{}{"$.D": 1, "$.A": 2, "$.B": 3},
			data:     map[string]float64{"$.D": 1, "$.A": 2, "$.B": 3},
			expected: 7,
			wantErr:  false,
		},
		{
			name:     "Divide and Add with Parentheses",
			expr:     "($A / $B) + ($C * $D)",
			data:     map[string]interface{}{"$A": 4, "$B": 2, "$C": 1, "$D": 3},
			data:     map[string]float64{"$A": 4, "$B": 2, "$C": 1, "$D": 3},
			expected: 5.0,
			wantErr:  false,
		},
		{
			name:     "Divide with Parentheses",
			expr:     "($.A - $.B) / ($.C + $.D)",
			data:     map[string]interface{}{"$.A": 6, "$.B": 2, "$.C": 3, "$.D": 1},
			data:     map[string]float64{"$.A": 6, "$.B": 2, "$.C": 3, "$.D": 1},
			expected: 1.0,
			wantErr:  false,
		},
		{
			name:     "Add and Multiply with Parentheses",
			expr:     "($.A + $.B) * ($.C - $.D)",
			data:     map[string]interface{}{"$.A": 8, "$.B": 2, "$.C": 4, "$.D": 2},
			data:     map[string]float64{"$.A": 8, "$.B": 2, "$.C": 4, "$.D": 2},
			expected: 20,
			wantErr:  false,
		},
		{
			name: "Divide and Multiply with Parentheses",
|
||||
expr: "($.A * $.B) / ($.C - $.D)",
|
||||
data: map[string]interface{}{"$.A": 8, "$.B": 2, "$.C": 4, "$.D": 2},
|
||||
data: map[string]float64{"$.A": 8, "$.B": 2, "$.C": 4, "$.D": 2},
|
||||
expected: 8,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Add and Divide with Parentheses",
|
||||
expr: "$.A + ($.B * $.C) / $.D",
|
||||
data: map[string]interface{}{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 4},
|
||||
data: map[string]float64{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 4},
|
||||
expected: 2.5,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Subtract and Multiply with Parentheses",
|
||||
expr: "($.A + $.B) - ($.C * $.D)",
|
||||
data: map[string]interface{}{"$.A": 5, "$.B": 2, "$.C": 3, "$.D": 1},
|
||||
data: map[string]float64{"$.A": 5, "$.B": 2, "$.C": 3, "$.D": 1},
|
||||
expected: 4,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Multiply and Divide with Parentheses",
|
||||
expr: "$.A / ($.B - $.C) * $.D",
|
||||
data: map[string]interface{}{"$.A": 4, "$.B": 3, "$.C": 2, "$.D": 5},
|
||||
data: map[string]float64{"$.A": 4, "$.B": 3, "$.C": 2, "$.D": 5},
|
||||
expected: 20.0,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Multiply and Divide with Parentheses 2",
|
||||
expr: "($.A - $.B) * ($.C / $.D)",
|
||||
data: map[string]interface{}{"$.A": 3, "$.B": 1, "$.C": 2, "$.D": 4},
|
||||
data: map[string]float64{"$.A": 3, "$.B": 1, "$.C": 2, "$.D": 4},
|
||||
expected: 1.0,
|
||||
wantErr: false,
|
||||
},
|
||||
@@ -149,80 +149,73 @@ func TestMathCalc(t *testing.T) {
	{
		name:     "Complex expression",
		expr:     "$.A/$.B*$.D",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 4},
+		data:     map[string]float64{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 4},
		expected: 2,
		wantErr:  false,
	},
	{
		name:     "Complex expression",
		expr:     "$.A/$.B*$.C",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 2, "$.C": 2},
+		data:     map[string]float64{"$.A": 2, "$.B": 2, "$.C": 2},
		expected: 2,
		wantErr:  false,
	},
	{
		name:     "Complex expression",
		expr:     "$.A/($.B*$.C)",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 2, "$.C": 2},
+		data:     map[string]float64{"$.A": 2, "$.B": 2, "$.C": 2},
		expected: 0.5,
		wantErr:  false,
	},
	{
		name:     "Addition",
		expr:     "$.A + $.B",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 3},
+		data:     map[string]float64{"$.A": 2, "$.B": 3},
		expected: 5,
		wantErr:  false,
	},
	{
		name:     "Subtraction",
		expr:     "$.A - $.B",
-		data:     map[string]interface{}{"$.A": 5, "$.B": 3},
+		data:     map[string]float64{"$.A": 5, "$.B": 3},
		expected: 2,
		wantErr:  false,
	},
	{
		name:     "Multiplication",
		expr:     "$.A * $.B",
-		data:     map[string]interface{}{"$.A": 4, "$.B": 3},
+		data:     map[string]float64{"$.A": 4, "$.B": 3},
		expected: 12,
		wantErr:  false,
	},
	{
		name:     "Division",
		expr:     "$.A / $.B",
-		data:     map[string]interface{}{"$.A": 10, "$.B": 2},
+		data:     map[string]float64{"$.A": 10, "$.B": 2},
		expected: 5,
		wantErr:  false,
	},
	{
		name:     "Mixed operations",
		expr:     "($.A + $.B) * ($.C - $.D)",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 2, "$.C": 5, "$.D": 3},
+		data:     map[string]float64{"$.A": 1, "$.B": 2, "$.C": 5, "$.D": 3},
		expected: 6, // Corrected from 9 to 6
		wantErr:  false,
	},
	{
		name:     "Division by zero",
		expr:     "( $D/$A >= 0.1 || $D/$B <= 0.5 ) && $C >= 1000",
		data:     map[string]float64{"$A": 428382, "$B": 250218, "$C": 305578, "$D": 325028},
		expected: 1,
		wantErr:  true,
	},
	{
		name:     "Parentheses",
		expr:     "($.A + $.B) / ($.C - $.D)",
-		data:     map[string]interface{}{"$.A": 6, "$.B": 4, "$.C": 10, "$.D": 2},
+		data:     map[string]float64{"$.A": 6, "$.B": 4, "$.C": 10, "$.D": 2},
		expected: 1.25, // Corrected from 2.5 to 1.25
		wantErr:  false,
	},
	{
		name:     "Add and Multiply with Parentheses for float64 and int",
		expr:     "($.A + $.B) * ($.C - $.D)",
		data:     map[string]interface{}{"$.A": 8.0, "$.B": 2.0, "$.C": 4.0, "$.D": 2},
		expected: 20,
		wantErr:  false,
	},
	{
		name:     "Divide and Multiply with Parentheses for float64 and int",
		expr:     "($.A * $.B) / ($.C - $.D)",
		data:     map[string]interface{}{"$.A": 8, "$.B": 2, "$.C": 4.0, "$.D": 2},
		expected: 8,
		wantErr:  false,
	},
}
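The arithmetic cases above all hinge on standard operator precedence: `*` and `/` bind tighter than `+` and `-`, and parentheses override both (so `$.A + ($.B * $.C) / $.D` with A=1, B=2, C=3, D=4 yields 2.5). A minimal recursive-descent sketch of such an evaluator, assuming `$`-prefixed tokens resolve through the data map; `MathCalc` here is a hypothetical stand-in, not the project's implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// tokenize splits an arithmetic expression into operators, parens,
// numbers, and $-style variable names.
func tokenize(expr string) []string {
	var toks []string
	var cur strings.Builder
	flush := func() {
		if cur.Len() > 0 {
			toks = append(toks, cur.String())
			cur.Reset()
		}
	}
	for _, r := range expr {
		switch r {
		case ' ':
			flush()
		case '+', '-', '*', '/', '(', ')':
			flush()
			toks = append(toks, string(r))
		default:
			cur.WriteRune(r)
		}
	}
	flush()
	return toks
}

type parser struct {
	toks []string
	pos  int
	vars map[string]float64
}

func (p *parser) peek() string {
	if p.pos < len(p.toks) {
		return p.toks[p.pos]
	}
	return ""
}

func (p *parser) next() string { t := p.peek(); p.pos++; return t }

// expr := term (('+'|'-') term)*  -- lowest precedence level
func (p *parser) expr() float64 {
	v := p.term()
	for p.peek() == "+" || p.peek() == "-" {
		if p.next() == "+" {
			v += p.term()
		} else {
			v -= p.term()
		}
	}
	return v
}

// term := factor (('*'|'/') factor)*  -- binds tighter than +/-
func (p *parser) term() float64 {
	v := p.factor()
	for p.peek() == "*" || p.peek() == "/" {
		if p.next() == "*" {
			v *= p.factor()
		} else {
			v /= p.factor()
		}
	}
	return v
}

// factor := '(' expr ')' | number | variable
func (p *parser) factor() float64 {
	t := p.next()
	if t == "(" {
		v := p.expr()
		p.next() // consume ")"
		return v
	}
	if n, err := strconv.ParseFloat(t, 64); err == nil {
		return n
	}
	return p.vars[t] // unknown names default to 0
}

// MathCalc evaluates expr against data; hypothetical stand-in for the
// function exercised by the table above. No error handling or
// comparison operators, just the arithmetic core.
func MathCalc(expr string, data map[string]float64) float64 {
	p := &parser{toks: tokenize(expr), vars: data}
	return p.expr()
}

func main() {
	data := map[string]float64{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 4}
	fmt.Println(MathCalc("$.A + ($.B * $.C) / $.D", data)) // 2.5
}
```

The two-level `expr`/`term` split is what encodes precedence: by the time `expr` sees a `+`, each operand has already absorbed any adjacent `*`/`/` chain.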

for _, tc := range tests {
@@ -255,269 +248,172 @@ func TestCalc(t *testing.T) {
tests := []struct {
	name     string
	expr     string
-	data     map[string]interface{}
+	data     map[string]float64
	expected bool
}{
	{
		name:     "Greater than - true",
		expr:     "$.A > $.B",
-		data:     map[string]interface{}{"$.A": 5, "$.B": 3},
+		data:     map[string]float64{"$.A": 5, "$.B": 3},
		expected: true,
	},
	{
		name:     "Multiply and Subtract with Parentheses",
		expr:     "$A.yesterday_rate > 0.1 && $A.last_week_rate>0.1 or ($A.今天 >300 || $A.昨天>300 || $A.上周今天 > 300)",
-		data:     map[string]interface{}{"$A.yesterday_rate": 0.1, "$A.last_week_rate": 2, "$A.今天": 200.4, "$A.昨天": 200.4, "$A.上周今天": 200.4},
+		data:     map[string]float64{"$A.yesterday_rate": 0.1, "$A.last_week_rate": 2, "$A.今天": 200.4, "$A.昨天": 200.4, "$A.上周今天": 200.4},
		expected: false,
	},
	{
		name:     "Count Greater Than Zero with Code",
		expr:     "$A.count > 0",
-		data:     map[string]interface{}{"$A.count": 197, "$A.code": 30000},
+		data:     map[string]float64{"$A.count": 197, "$A.code": 30000},
		expected: true,
	},
	{
		name:     "Today, Yesterday, and Lastweek Rate Comparison",
		expr:     "$A.todayRate<0.3 && $A.yesterdayRate<0.3 && $A.lastweekRate<0.3",
-		data:     map[string]interface{}{"$A.todayRate": 1.1, "$A.yesterdayRate": 0.8, "$A.lastweekRate": 1.2},
+		data:     map[string]float64{"$A.todayRate": 1.1, "$A.yesterdayRate": 0.8, "$A.lastweekRate": 1.2},
		expected: false,
	},
	{
		name:     "Today, Yesterday, and Lastweek Rate Low Threshold",
		expr:     "$A.todayRate<0.1 && $A.yesterdayRate<0.1 && $A.lastweekRate<0.1",
-		data:     map[string]interface{}{"$A.todayRate": 0.9, "$A.yesterdayRate": 0.8, "$A.lastweekRate": 0.9},
+		data:     map[string]float64{"$A.todayRate": 0.9, "$A.yesterdayRate": 0.8, "$A.lastweekRate": 0.9},
		expected: false,
	},
	{
		name:     "Agent Specific Today, Yesterday, and Lastweek Rate Comparison",
		expr:     "$A.agent == 11 && $A.todayRate<0.3 && $A.yesterdayRate<0.3 && $A.lastweekRate<0.3",
-		data:     map[string]interface{}{"$A.agent": 11, "$A.todayRate": 0.9, "$A.yesterdayRate": 0.9, "$A.lastweekRate": 1},
+		data:     map[string]float64{"$A.agent": 11, "$A.todayRate": 0.9, "$A.yesterdayRate": 0.9, "$A.lastweekRate": 1},
		expected: false,
	},
	{
		name:     "Today, Yesterday, and Lastweek Rate Below 0.1 - Case 1",
		expr:     "$A<0.1 && $A.yesterdayRate<0.1 && $A.lastweekRate<0.1",
-		data:     map[string]interface{}{"$A": 0.8, "$A.yesterdayRate": 0.9, "$A.lastweekRate": 0.9},
+		data:     map[string]float64{"$A": 0.8, "$A.yesterdayRate": 0.9, "$A.lastweekRate": 0.9},
		expected: false,
	},
	{
		name:     "Today, Yesterday, and Lastweek Rate Below 0.1 - Case 2",
		expr:     "$A.today_rate<0.1 && $A.yesterday_rate<0.1 && $A.lastweek_rate<0.1",
-		data:     map[string]interface{}{"$A.today_rate": 0.9, "$A.yesterday_rate": 0.9, "$A.lastweek_rate": 0.9},
+		data:     map[string]float64{"$A.today_rate": 0.9, "$A.yesterday_rate": 0.9, "$A.lastweek_rate": 0.9},
		expected: false,
	},
	{
		name:     "Today, Yesterday, and Lastweek Rate Below 0.1 - Case 3",
		expr:     "$B.today_rate<0.1 && $A.yesterday_rate<0.1 && $A.lastweek_rate<0.1",
-		data:     map[string]interface{}{"$B.today_rate": 0.5, "$A.yesterday_rate": 0.9, "$A.lastweek_rate": 0.8},
+		data:     map[string]float64{"$B.today_rate": 0.5, "$A.yesterday_rate": 0.9, "$A.lastweek_rate": 0.8},
		expected: false,
	},
	{
		name:     "Yesterday and Byesterday Rates Logical Conditions - Case 1",
		expr:     "($A.yesterday_rate > 2 && $A.byesterday_rate > 2) or ($A.yesterday_rate <= 0.7 && $A.byesterday_rate <= 0.7)",
-		data:     map[string]interface{}{"$A.yesterday_rate": 3, "$A.byesterday_rate": 3},
+		data:     map[string]float64{"$A.yesterday_rate": 3, "$A.byesterday_rate": 3},
		expected: true,
	},
	{
		name:     "Yesterday and Byesterday Rates Higher Thresholds - Case 1",
		expr:     "($A.yesterday_rate > 1.5 && $A.byesterday_rate > 1.5) or ($A.yesterday_rate <= 0.8 && $A.byesterday_rate <= 0.8)",
-		data:     map[string]interface{}{"$A.yesterday_rate": 1.08, "$A.byesterday_rate": 1.02},
+		data:     map[string]float64{"$A.yesterday_rate": 1.08, "$A.byesterday_rate": 1.02},
		expected: false,
	},
	{
		name:     "Greater than - false",
		expr:     "($A.yesterday_rate > 1.0 && $A.byesterday_rate > 1.0 ) or ($A.yesterday_rate <= 0.9 && $A.byesterday_rate <= 0.9)",
-		data:     map[string]interface{}{"$A.byesterday_rate": 0.33, "$A.yesterday_rate": 2},
+		data:     map[string]float64{"$A.byesterday_rate": 0.33, "$A.yesterday_rate": 2},
		expected: false,
	},
	{
		name:     "Less than - true",
		expr:     "$A.count > 100 or $A.count2 > -3",
-		data:     map[string]interface{}{"$A.count": 5, "$A.count2": -1, "$.D": 2},
+		data:     map[string]float64{"$A.count": 5, "$A.count2": -1, "$.D": 2},
		expected: true,
	},
	{
		name:     "Less than - false",
		expr:     "$.A < $.B/$.B*4",
-		data:     map[string]interface{}{"$.A": 5, "$.B": 3},
+		data:     map[string]float64{"$.A": 5, "$.B": 3},
		expected: false,
	},
	{
		name:     "Greater than or equal - true",
		expr:     "$.A >= $.B",
-		data:     map[string]interface{}{"$.A": 3, "$.B": 3},
+		data:     map[string]float64{"$.A": 3, "$.B": 3},
		expected: true,
	},
	{
		name:     "Less than or equal - true",
		expr:     "$.A <= $.B",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 2},
+		data:     map[string]float64{"$.A": 2, "$.B": 2},
		expected: true,
	},
	{
		name:     "Not equal - true",
		expr:     "$.A != $.B",
-		data:     map[string]interface{}{"$.A": 3, "$.B": 2},
+		data:     map[string]float64{"$.A": 3, "$.B": 2},
		expected: true,
	},
	{
		name:     "Not equal - false",
		expr:     "$.A != $.B",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 2},
+		data:     map[string]float64{"$.A": 2, "$.B": 2},
		expected: false,
	},
	{
		name:     "Addition resulting in true",
		expr:     "$.A + $.B > $.C",
-		data:     map[string]interface{}{"$.A": 3, "$.B": 2, "$.C": 4},
+		data:     map[string]float64{"$.A": 3, "$.B": 2, "$.C": 4},
		expected: true,
	},
	{
		name:     "Subtraction resulting in false",
		expr:     "$.A - $.B < $.C",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 3, "$.C": 1},
+		data:     map[string]float64{"$.A": 1, "$.B": 3, "$.C": 1},
		expected: true,
	},
	{
		name:     "Multiplication resulting in true",
		expr:     "$.A * $.B > $.C",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 3, "$.C": 5},
+		data:     map[string]float64{"$.A": 2, "$.B": 3, "$.C": 5},
		expected: true,
	},
	{
		name:     "Division resulting in false",
		expr:     "$.A / $.B*$.C < $.C",
-		data:     map[string]interface{}{"$.A": 4, "$.B": 2, "$.C": 2},
+		data:     map[string]float64{"$.A": 4, "$.B": 2, "$.C": 2},
		expected: false,
	},
	{
		name:     "Addition with parentheses resulting in true",
		expr:     "($.A + $.B) > $.C && $.A >0",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 4, "$.C": 4},
+		data:     map[string]float64{"$.A": 1, "$.B": 4, "$.C": 4},
		expected: true,
	},
	{
		name:     "Addition with parentheses resulting in true",
		expr:     "($.A + $.B) > $.C || $.A < 0",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 4, "$.C": 4},
+		data:     map[string]float64{"$.A": 1, "$.B": 4, "$.C": 4},
		expected: true,
	},
	{
		name:     "Complex expression with parentheses resulting in false",
		expr:     "($.A + $.B) * $.C < $.D",
-		data:     map[string]interface{}{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 10},
+		data:     map[string]float64{"$.A": 1, "$.B": 2, "$.C": 3, "$.D": 10},
		expected: true,
	},
	{
		name:     "Nested parentheses resulting in true",
		expr:     "($.A + ($.B - $.C)) * $.D > $.E",
-		data:     map[string]interface{}{"$.A": 2, "$.B": 5, "$.C": 2, "$.D": 2, "$.E": 8},
+		data:     map[string]float64{"$.A": 2, "$.B": 5, "$.C": 2, "$.D": 2, "$.E": 8},
		expected: true,
	},
	{
		name:     "Division with parentheses resulting in false",
		expr:     " ( true || false ) && true",
-		data:     map[string]interface{}{"$A": 673601, "$A.": 673601, "$B": 250218, "$C": 456513, "$C.": 456513, "$D": 456513, "$D.": 456513},
+		data:     map[string]float64{"$A": 673601, "$A.": 673601, "$B": 250218, "$C": 456513, "$C.": 456513, "$D": 456513, "$D.": 456513},
		expected: true,
	},
	// $A:673601.5 $A.:673601.5 $B:361520 $B.:361520 $C:456513 $C.:456513 $D:422634 $D.:422634]

	{
		name:     "Greater than or equal for string - true",
		expr:     "$.A >= $.B",
		data:     map[string]interface{}{"$.A": "123", "$.B": "123"},
		expected: true,
	},
	{
		name:     "Less than or equal - true",
		expr:     "$.A <= $.B",
		data:     map[string]interface{}{"$.A": "abc", "$.B": "abc"},
		expected: true,
	},
	{
		name:     "Not equal - true",
		expr:     "$.A != $.B",
		data:     map[string]interface{}{"$.A": "abcde", "$.B": "abcdf"},
		expected: true,
	},
	{
		name:     "Not equal - false",
		expr:     "$.A != $.B",
		data:     map[string]interface{}{"$.A": "!@#$qwer1234", "$.B": "!@#$qwer1234"},
		expected: false,
	},
	{
		name:     "In operation for string resulting in false",
		expr:     `$.A in ["admin", "moderator"]`,
		data:     map[string]interface{}{"$.A": "admin1"},
		expected: false,
	},
	{
		name:     "In operation for string resulting in true",
		expr:     `$.A in ["admin", "moderator"]`,
		data:     map[string]interface{}{"$.A": "admin"},
		expected: true,
	},
	{
		name:     "In operation for int resulting in false",
		expr:     `$.A not in [1, 2, 3]`,
		data:     map[string]interface{}{"$.A": 2},
		expected: false,
	},
	{
		name:     "In operation for int resulting in true",
		expr:     `$.A not in [1, 2, 3]`,
		data:     map[string]interface{}{"$.A": 5},
		expected: true,
	},
	{
		name:     "Contains operation resulting in true",
		expr:     `$.A contains $.B`,
		data:     map[string]interface{}{"$.A": "hello world", "$.B": "world"},
		expected: true,
	},
	{
		name:     "Contains operation resulting in false",
		expr:     `$.A contains $.B`,
		data:     map[string]interface{}{"$.A": "hello world", "$.B": "go"},
		expected: false,
	},
	{
		name:     "Contains operation resulting in false",
		expr:     `$.A not contains $.B`,
		data:     map[string]interface{}{"$.A": "hello world", "$.B": "world"},
		expected: false,
	},
	{
		name:     "Contains operation resulting in true",
		expr:     `$.A not contains $.B`,
		data:     map[string]interface{}{"$.A": "hello world", "$.B": "go"},
		expected: true,
	},
	{
		name:     "regex operation resulting in true",
		expr:     `$.A matches $.B`,
		data:     map[string]interface{}{"$.A": "123", "$.B": "^[0-9]+$"},
		expected: true,
	},
	{
		name:     "regex operation resulting in false",
		expr:     `$.A matches $.B`,
		data:     map[string]interface{}{"$.A": "abc", "$.B": "^[0-9]+$"},
		expected: false,
	},
	{
		name:     "between function resulting in true",
		expr:     `between($.A, [100,200])`,
		data:     map[string]interface{}{"$.A": 155.0},
		expected: true,
	},
	{
		name:     "between function resulting in false",
		expr:     `not between($.A, [100.3,200.3])`,
		data:     map[string]interface{}{"$.A": 155.1},
		expected: false,
	},
}
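The string operators exercised at the end of this table (`contains`, `not contains`, `matches`) behave like thin wrappers over the Go standard library: substring checks map onto `strings.Contains` and `matches` onto `regexp`. A sketch of plausible mappings; `applyStringOp` is a hypothetical helper, not the project's actual code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// applyStringOp shows how the string operators used in the cases above
// could map onto the stdlib. Hypothetical helper for illustration.
func applyStringOp(op, a, b string) (bool, error) {
	switch op {
	case "contains":
		return strings.Contains(a, b), nil
	case "not contains":
		return !strings.Contains(a, b), nil
	case "matches":
		// b is the pattern; MatchString also reports pattern errors.
		return regexp.MatchString(b, a)
	default:
		return false, fmt.Errorf("unsupported operator %q", op)
	}
}

func main() {
	ok, err := applyStringOp("matches", "123", "^[0-9]+$")
	fmt.Println(ok, err) // true <nil>
}
```

Note that `matches` is the only variant that can fail at evaluation time (an invalid pattern), which is why it returns the error from `regexp.MatchString` directly.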

for _, tc := range tests {

@@ -48,12 +48,6 @@ func New(httpConfig httpx.Config, pushgw pconf.Pushgw, aconf aconf.Alert, tc *me
}

func (rt *Router) Config(r *gin.Engine) {
	service := r.Group("/v1/n9e")
	if len(rt.HTTP.APIForService.BasicAuth) > 0 {
		service.Use(gin.BasicAuth(rt.HTTP.APIForService.BasicAuth))
	}
	service.POST("/target-update", rt.targetUpdate)

	if !rt.HTTP.APIForAgent.Enable {
		return
	}
@@ -75,6 +69,7 @@ func (rt *Router) Config(r *gin.Engine) {
	r.POST("/opentsdb/put", auth, rt.openTSDBPut)
	r.POST("/openfalcon/push", auth, rt.falconPush)
	r.POST("/prometheus/v1/write", auth, rt.remoteWrite)
	r.POST("/v1/n9e/target-update", auth, rt.targetUpdate)
	r.POST("/v1/n9e/edge/heartbeat", auth, rt.heartbeat)

	if len(rt.Ctx.CenterApi.Addrs) > 0 {
@@ -85,6 +80,7 @@ func (rt *Router) Config(r *gin.Engine) {
	r.POST("/opentsdb/put", rt.openTSDBPut)
	r.POST("/openfalcon/push", rt.falconPush)
	r.POST("/prometheus/v1/write", rt.remoteWrite)
	r.POST("/v1/n9e/target-update", rt.targetUpdate)
	r.POST("/v1/n9e/edge/heartbeat", rt.heartbeat)

	if len(rt.Ctx.CenterApi.Addrs) > 0 {

@@ -3,7 +3,6 @@ package router
import (
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
	"time"

@@ -18,18 +17,17 @@ import (
// heartbeat Forward heartbeat request to the center.
func (rt *Router) heartbeat(c *gin.Context) {
	gid := ginx.QueryStr(c, "gid", "")
	overwriteGids := ginx.QueryBool(c, "overwrite_gids", false)
	req, err := HandleHeartbeat(c, rt.Aconf.Heartbeat.EngineName, rt.MetaSet)
	if err != nil {
		logger.Warningf("req:%v heartbeat failed to handle heartbeat err:%v", req, err)
		ginx.Dangerous(err)
	}
-	api := "/v1/n9e/center/heartbeat"
+	api := "/v1/n9e/heartbeat"
	if rt.HeartbeartApi != "" {
		api = rt.HeartbeartApi
	}

-	ret, err := poster.PostByUrlsWithResp[map[string]interface{}](rt.Ctx, fmt.Sprintf("%s?gid=%s&overwrite_gids=%t", api, gid, overwriteGids), req)
+	ret, err := poster.PostByUrlsWithResp[map[string]interface{}](rt.Ctx, api+"?gid="+gid, req)
	ginx.NewRender(c).Data(ret, err)
}
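One variant of the handler above assembles the forwarded URL by hand with `fmt.Sprintf`. When query strings are built manually like this, `net/url` is the safer route, since it escapes parameter values. A small sketch; the helper name and the `overwrite_gids` handling are illustrative, not the project's code:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// buildHeartbeatURL appends gid and overwrite_gids as properly escaped
// query parameters. Illustrative helper mirroring the handler above.
func buildHeartbeatURL(api, gid string, overwriteGids bool) string {
	v := url.Values{}
	v.Set("gid", gid)
	v.Set("overwrite_gids", strconv.FormatBool(overwriteGids))
	return api + "?" + v.Encode() // Encode sorts keys and escapes values
}

func main() {
	fmt.Println(buildHeartbeatURL("/v1/n9e/center/heartbeat", "123", true))
	// /v1/n9e/center/heartbeat?gid=123&overwrite_gids=true
}
```

With `fmt.Sprintf("%s?gid=%s", api, gid)`, a gid containing `&`, `=`, or spaces would silently corrupt the query; `url.Values.Encode` percent-escapes such values.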

@@ -17,6 +17,7 @@ type RedisConfig struct {
	Username string
	Password string
	DB       int
	UseTLS   bool
	tlsx.ClientConfig
	RedisType  string
	MasterName string
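The `RedisType` and `MasterName` fields in this struct suggest support for sentinel-style deployments, where a master name is needed to resolve the primary. A plausible consistency check on such a config; the mode names and the validation rule are assumptions for illustration, not the project's actual behavior:

```go
package main

import (
	"errors"
	"fmt"
)

// RedisConfig mirrors the struct above (fields abridged; TLS config omitted).
type RedisConfig struct {
	Username   string
	Password   string
	DB         int
	UseTLS     bool
	RedisType  string // assumed values: "standalone", "sentinel", "cluster"
	MasterName string // only meaningful in sentinel mode
}

// Validate sketches one plausible rule: sentinel mode requires a
// master name so the client can resolve the primary. Hypothetical.
func (c RedisConfig) Validate() error {
	if c.RedisType == "sentinel" && c.MasterName == "" {
		return errors.New("MasterName is required when RedisType is sentinel")
	}
	return nil
}

func main() {
	cfg := RedisConfig{RedisType: "sentinel"}
	fmt.Println(cfg.Validate())
}
```

Validating this at startup turns a misconfiguration into an immediate, descriptive error instead of a connection failure later.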