Mirror of https://github.com/ccfos/nightingale.git, synced 2026-03-03 14:38:55 +00:00.

Compare commits: `release-17...fix-exec-s` (63 commits).
Only the commit SHA1s survive from the comparison table; the 63 commits, in the order listed:

93ff325f72, 84ee14d21e, c9cf1cfdd2, 9d1c01107f, 7ea31b5c6d, e8e1c67cc8, 8079bcd288, 33b178ce82, 28c9cd7b43, b771e8a3e8, 4945e98200, a938ea3e56, 25c339025b, bb0ee35275, 0fc54ad173, 1f95e2df94, d2969f34ef, d9a34959dc, bc6ff7f4ba, 514913a97a, affc610b7b, a098d5d39c, 05c3f1e0e4, d5740164f2, 8c2383c410, 9af024fb99, 12f3cc21e1, 0b3bb54eb4, da813e2b0c, 50fa2499b7, 2c5ae5b3a9, 522932aeb4, 35ac0ddea5, 26fa750309, 1eba607aeb, 6aadd159af, b6ad87523e, ea5b6845de, 5ba5096da2, 85786d985d, cff211364a, 0190b2b432, d8081129f1, 66d4d0c494, d936d57863, d819691b78, 6f0b415821, f482efd9ce, b39d5a742e, 59c3d62c6b, 624ae125d5, b9c822b220, c13baf3a9d, bc46ff1912, 2f7c76c275, 1edf305952, c026a6d2b2, 1853e89f7c, a41a00fba3, ceb9a1d7ff, 0b5223acdb, 4b63c6b4b1, edd024306a
README.md: 105 lines changed.

@@ -3,7 +3,7 @@ In the header block, the tagline `<b>开源告警管理专家</b>` was replaced with `<b>Open-Source Alerting Expert</b>`.

@@ -25,94 +25,91 @@ The Chinese body was replaced with its English translation. The new content:

[English](./README.md) | [中文](./README_zh.md)

## 🎯 What is Nightingale

Nightingale is an open-source monitoring project that focuses on alerting. Like Grafana, it integrates with a variety of existing data sources; but while Grafana emphasizes visualization, Nightingale emphasizes the alerting engine and the processing and distribution of alert events.

> The Nightingale project was initially developed and open-sourced by DiDi. On May 11, 2022, it was donated to the Open Source Development Committee of the China Computer Federation (CCF ODC), becoming the first open-source project the CCF ODC accepted after its establishment.



## 💡 How Nightingale Works



Many users already collect metrics and log data on their own. In that case, connect your storage back ends (VictoriaMetrics, ElasticSearch, etc.) to Nightingale as data sources; you can then configure alerting rules and notification rules in Nightingale to generate and dispatch alert events.



Nightingale itself does not collect monitoring data. We recommend [Categraf](https://github.com/flashcatcloud/categraf) as the collector; it integrates seamlessly with Nightingale.

[Categraf](https://github.com/flashcatcloud/categraf) collects monitoring data from operating systems, network devices, middleware, and databases, and pushes it to Nightingale via the `Prometheus Remote Write` protocol. Nightingale forwards the data to a time-series database (such as Prometheus or VictoriaMetrics) and provides alerting and visualization on top of it.

For edge data centers with a poor network link to the central Nightingale server, Nightingale offers an edge deployment mode for the alerting engine to improve alerting availability: even if the edge is cut off from the center, alerting keeps working.



> In the diagram above, Data Center A has a good link to the central data center, so the central Nightingale process serves as its alerting engine. Data Center B has a poor link, so `n9e-edge` is deployed in Data Center B as its alerting engine, evaluating alerts against Data Center B's data sources.

## 🔕 Alert Noise Reduction, Escalation, and Collaboration

Nightingale focuses on being an alerting engine: it generates alert events and dispatches them flexibly according to rules, with built-in support for 20 notification media (phone call, SMS, email, DingTalk, Feishu, WeCom, Slack, etc.).

If you have more advanced requirements, such as:

- consolidating events from multiple monitoring systems into one platform for unified noise reduction, response handling, and data analysis;
- supporting on-call schedules and an on-call culture, with alert acknowledgment, escalation (to avoid missed alerts), and collaborative handling;

then Nightingale is not the right fit. You need an on-call product such as [PagerDuty](https://www.pagerduty.com/) or [FlashDuty](https://flashcat.cloud/product/flashcat-duty/) (easy to use, with a free tier).

## 🗨️ Communication Channels

- **Report Bugs:** It is highly recommended to submit issues via the [Nightingale GitHub Issue tracker](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&projects=&template=bug_report.yml).
- **Documentation:** For more information, we recommend browsing the [Nightingale Documentation Site](https://n9e.github.io/).

## 🔑 Key Features



- Nightingale supports alerting rules, mute rules, subscription rules, and notification rules. It natively supports 20 kinds of notification media and allows message templates to be customized.
- It supports event pipelines: alert events can be run through pipeline processing, which makes automated integration with in-house systems easy, for example appending metadata to alert events or relabeling them.
- It introduces business groups and a permission system so that the various rules can be managed by category.
- Built-in alert rules ship for many databases and middleware and can be imported directly; Prometheus alerting rules can be imported as well.
- It supports alert self-healing: after an alert fires, a script is triggered automatically to run predefined logic, such as cleaning up disk space or capturing the current system state.



- Nightingale archives historical alert events and supports multi-dimensional queries and statistics.
- Flexible aggregation and grouping give a clear view of how alert events are distributed across the company.



- Nightingale ships with built-in metric descriptions, dashboards, and alerting rules for common operating systems, middleware, and databases; these are community-contributed, so quality varies.
- It ingests data directly via multiple protocols (Remote Write, OpenTSDB, Datadog, Falcon), so it can integrate with many kinds of agents.
- It supports data sources such as Prometheus, ElasticSearch, Loki, ClickHouse, MySQL, and Postgres, and can alert on data from any of them.
- Nightingale can easily embed internal enterprise systems (e.g. Grafana or a CMDB), and menu visibility for these embedded systems is configurable.



- Nightingale provides dashboard functionality with common chart types and ships with pre-built dashboards; the image above is a screenshot of one of them.
- If you are already comfortable with Grafana, keep using Grafana for visualization; it has deeper expertise in that area.
- For machine-level monitoring data collected by Categraf, use Nightingale's built-in dashboards: Categraf's metric naming follows Telegraf's convention, which differs from Node Exporter's.
- Because Nightingale has business groups (machines can belong to different groups), you may want a dashboard to show only machines in the current business group; Nightingale's dashboards can therefore be linked with business groups for interactive filtering.

## 🌟 Stargazers over time

[](https://star-history.com/#ccfos/nightingale&Date)

## 🔥 Users



## 🤝 Community Co-Building

- ❇️ Please read the [Nightingale Open Source Project and Community Governance Draft](./doc/community-governance.md). We sincerely welcome every user, developer, company, and organization to use Nightingale, actively report bugs, submit feature requests, share best practices, and help build a professional and active open-source community.
- ❤️ Nightingale Contributors

<a href="https://github.com/ccfos/nightingale/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>

## 📜 License

- [Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
README_en.md: deleted (113 lines). The removed content:

@@ -1,113 +0,0 @@

<p align="center">
  <a href="https://github.com/ccfos/nightingale">
    <img src="doc/img/Nightingale_L_V.png" alt="nightingale - cloud native monitoring" width="100" /></a>
</p>
<p align="center">
  <b>Open-source Alert Management Expert, an Integrated Observability Platform</b>
</p>

<p align="center">
  <a href="https://flashcat.cloud/docs/">
    <img alt="Docs" src="https://img.shields.io/badge/docs-get%20started-brightgreen"/></a>
  <a href="https://hub.docker.com/u/flashcatcloud">
    <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
  <a href="https://github.com/ccfos/nightingale/graphs/contributors">
    <img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
  <img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
  <br/><img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
  <img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
  <img alt="GitHub latest release" src="https://img.shields.io/github/v/release/ccfos/nightingale"/>
  <img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
  <a href="https://n9e-talk.slack.com/">
    <img alt="GitHub contributors" src="https://img.shields.io/badge/join%20slack-%23n9e-brightgreen.svg"/></a>
</p>

[English](./README_en.md) | [中文](./README.md)

## What is Nightingale

Nightingale is an open-source project focused on alerting. Similar to Grafana's data source integration approach, Nightingale also connects with various existing data sources. However, while Grafana focuses on visualization, Nightingale focuses on alerting engines.

Originally developed and open-sourced by Didi, Nightingale was donated to the China Computer Federation Open Source Development Committee (CCF ODC) on May 11, 2022, becoming the first open-source project accepted by the CCF ODC after its establishment.

## Quick Start

- 👉 [Documentation](https://flashcat.cloud/docs/) | [Download](https://flashcat.cloud/download/nightingale/)
- ❤️ [Report a Bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=&projects=&template=question.yml)
- ℹ️ For faster access, the above documentation and download sites are hosted on [FlashcatCloud](https://flashcat.cloud).

## Features

- **Integration with Multiple Time-Series Databases:** Supports integration with various time-series databases such as Prometheus, VictoriaMetrics, Thanos, Mimir, M3DB, and TDengine, enabling unified alert management.
- **Advanced Alerting Capabilities:** Comes with built-in support for multiple alerting rules, extensible to common notification channels. It also supports alert suppression, silencing, subscription, self-healing, and alert event management.
- **High-Performance Visualization Engine:** Offers various chart styles with numerous built-in dashboard templates and the ability to import Grafana templates. Ready to use with a business-friendly open-source license.
- **Support for Common Collectors:** Compatible with [Categraf](https://flashcat.cloud/product/categraf), Telegraf, Grafana-agent, Datadog-agent, and various exporters as collectors; there's no data that can't be monitored.
- **Seamless Integration with [Flashduty](https://flashcat.cloud/product/flashcat-duty/):** Enables alert aggregation, acknowledgment, escalation, scheduling, and IM integration, ensuring no alerts are missed, reducing unnecessary interruptions, and enhancing efficient collaboration.

## Screenshots

You can switch languages and themes in the top right corner. We now support English, Simplified Chinese, and Traditional Chinese.



### Instant Query

Similar to the built-in query analysis page in Prometheus, Nightingale offers an ad-hoc query feature with UI enhancements. It also provides built-in PromQL metrics, allowing users unfamiliar with PromQL to quickly perform queries.



### Metric View

Alternatively, you can use the Metric View to access data. With this feature, Instant Query becomes less necessary, as it caters more to advanced users. Regular users can easily perform queries using the Metric View.



### Built-in Dashboards

Nightingale includes commonly used dashboards that can be imported and used directly. You can also import Grafana dashboards, although compatibility is limited to basic Grafana charts. If you're accustomed to Grafana, it's recommended to continue using it for visualization, with Nightingale serving as an alerting engine.



### Built-in Alert Rules

In addition to the built-in dashboards, Nightingale also comes with numerous alert rules that are ready to use out of the box.



## Architecture

In most community scenarios, Nightingale is primarily used as an alert engine, integrating with multiple time-series databases to unify alert rule management. Grafana remains the preferred tool for visualization. As an alert engine, the product architecture of Nightingale is as follows:



For certain edge data centers with poor network connectivity to the central Nightingale server, we offer a distributed deployment mode for the alert engine. In this mode, even if the network is disconnected, the alerting functionality remains unaffected.



## Communication Channels

- **Report Bugs:** It is highly recommended to submit issues via the [Nightingale GitHub Issue tracker](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=kind%2Fbug&projects=&template=bug_report.yml).
- **Documentation:** For more information, we recommend thoroughly browsing the [Nightingale Documentation Site](https://flashcat.cloud/docs/content/flashcat-monitor/nightingale-v7/introduction/).

## Stargazers over time

[](https://star-history.com/#ccfos/nightingale&Date)

## Community Co-Building

- ❇️ Please read the [Nightingale Open Source Project and Community Governance Draft](./doc/community-governance.md). We sincerely welcome every user, developer, company, and organization to use Nightingale, actively report bugs, submit feature requests, share best practices, and help build a professional and active open-source community.
- ❤️ Nightingale Contributors

<a href="https://github.com/ccfos/nightingale/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>

## License

- [Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
README_zh.md: new file (120 lines).

@@ -0,0 +1,120 @@

<p align="center">
  <a href="https://github.com/ccfos/nightingale">
    <img src="doc/img/Nightingale_L_V.png" alt="nightingale - cloud native monitoring" width="100" /></a>
</p>
<p align="center">
  <b>Open-Source Alerting Expert</b>
</p>

<p align="center">
  <a href="https://flashcat.cloud/docs/">
    <img alt="Docs" src="https://img.shields.io/badge/docs-get%20started-brightgreen"/></a>
  <a href="https://hub.docker.com/u/flashcatcloud">
    <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/flashcatcloud/nightingale"/></a>
  <a href="https://github.com/ccfos/nightingale/graphs/contributors">
    <img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/ccfos/nightingale"/></a>
  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ccfos/nightingale">
  <img alt="GitHub forks" src="https://img.shields.io/github/forks/ccfos/nightingale">
  <br/><img alt="GitHub Repo issues" src="https://img.shields.io/github/issues/ccfos/nightingale">
  <img alt="GitHub Repo issues closed" src="https://img.shields.io/github/issues-closed/ccfos/nightingale">
  <img alt="GitHub latest release" src="https://img.shields.io/github/v/release/ccfos/nightingale"/>
  <img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue"/>
  <a href="https://n9e-talk.slack.com/">
    <img alt="GitHub contributors" src="https://img.shields.io/badge/join%20slack-%23n9e-brightgreen.svg"/></a>
</p>

[English](./README.md) | [中文](./README_zh.md)

## What is Nightingale

Nightingale is an open-source monitoring project that focuses on alerting. Like Grafana, it integrates with a variety of existing data sources; but while Grafana emphasizes visualization, Nightingale emphasizes the alerting engine and the processing and distribution of alert events.

> The Nightingale project was initially developed and open-sourced by DiDi. On May 11, 2022, it was donated to the Open Source Development Committee of the China Computer Federation (CCF ODC), becoming the first open-source project the CCF ODC accepted after its establishment.



## How Nightingale Works

Many users already collect metrics and log data on their own. In that case, connect your storage back ends (VictoriaMetrics, ElasticSearch, etc.) to Nightingale as data sources; you can then configure alerting rules and notification rules in Nightingale to generate and dispatch alert events.



Nightingale itself does not collect monitoring data. We recommend [Categraf](https://github.com/flashcatcloud/categraf) as the collector; it integrates seamlessly with Nightingale.

[Categraf](https://github.com/flashcatcloud/categraf) collects monitoring data from operating systems, network devices, middleware, and databases, and pushes it to Nightingale via the Remote Write protocol. Nightingale forwards the data to a time-series database (such as Prometheus or VictoriaMetrics) and provides alerting and visualization on top of it.

For edge data centers with a poor network link to the central Nightingale server, Nightingale provides an edge deployment mode for the alerting engine to improve alerting availability: even if the edge is cut off from the center, alerting keeps working.



> In the diagram above, Data Center A has a good link to the central data center, so the central Nightingale process serves as its alerting engine. Data Center B has a poor link, so `n9e-edge` is deployed in Data Center B as its alerting engine, evaluating alerts against Data Center B's data sources.

## Alert Noise Reduction, Escalation, and Collaboration

Nightingale focuses on being an alerting engine: it generates alert events and dispatches them flexibly according to rules, with built-in support for 20 notification media (phone call, SMS, email, DingTalk, Feishu, WeCom, Slack, etc.).

If you have more advanced requirements, such as:

- consolidating events from multiple monitoring systems into one platform for unified noise reduction, response handling, and data analysis;
- supporting on-call schedules and an on-call culture, with alert acknowledgment, escalation (to avoid missed alerts), and collaborative handling;

then Nightingale is not the right fit. We recommend an on-call product such as [FlashDuty](https://flashcat.cloud/product/flashcat-duty/), which is easy to use and has a free tier.

## Resources & Communication Channels

- 📚 The [Nightingale intro slides](https://mp.weixin.qq.com/s/Mkwx_46xrltSq8NLqAIYow) are helpful for understanding the key features (the slide link is at the end of the post)
- 👉 [Documentation](https://flashcat.cloud/docs/), hosted on [FlashcatCloud](https://flashcat.cloud) for faster access
- ❤️ [Report a Bug](https://github.com/ccfos/nightingale/issues/new?assignees=&labels=&projects=&template=question.yml); include a clear problem description, reproduction steps, and screenshots to get an answer more easily
- 💡 The front end and back end are separate code bases; the front-end repository is [https://github.com/n9e/fe](https://github.com/n9e/fe)
- 🎯 Follow [this WeChat official account](https://gitlink.org.cn/UlricQin) for more Nightingale news and knowledge
- 🌟 Add me on WeChat: `picobyte` (friend verification is off) to be pulled into the WeChat group; use the note `夜莺互助群`. If you already run Nightingale in production, contact me to join the senior monitoring users group

## Key Features



- Nightingale supports alerting rules, mute rules, subscription rules, and notification rules, with built-in support for 20 kinds of notification media and customizable message templates
- Event pipelines let alert events be run through pipeline processing for automated integration with in-house systems, e.g. appending metadata to events or relabeling them
- Business groups and a permission system let the various rules be managed by category
- Built-in alert rules ship for many databases and middleware and can be imported directly; Prometheus alerting rules can be imported as well
- Alert self-healing is supported: after an alert fires, a script is triggered automatically to run predefined logic, such as cleaning up disk space or capturing the current system state



- Nightingale archives historical alert events and supports multi-dimensional queries and statistics
- Flexible aggregation and grouping give a clear view of how alert events are distributed across the company



- Nightingale ships with metric descriptions, dashboards, and alerting rules for common operating systems, middleware, and databases; these are community-contributed, so quality varies
- It ingests data directly via Remote Write, OpenTSDB, Datadog, Falcon, and other protocols, so it can integrate with many kinds of agents
- It supports data sources such as Prometheus, ElasticSearch, Loki, and TDEngine, and can alert on data from any of them
- Nightingale can easily embed internal enterprise systems, such as Grafana or a CMDB, and menu visibility for these embedded systems is configurable



- Nightingale provides dashboard functionality with common chart types and ships with some dashboards; the image above is a screenshot of one of them
- If you are already comfortable with Grafana, keep using Grafana for visualization; it has deeper expertise in that area
- For machine-level monitoring data collected by Categraf, use Nightingale's built-in dashboards: Categraf's metric naming follows Telegraf's convention, which differs from Node Exporter's
- Because Nightingale has business groups (machines can belong to different groups), you may want a dashboard to show only machines in the current business group, so Nightingale's dashboards can be linked with business groups

## Stargazers over time

[](https://star-history.com/#ccfos/nightingale&Date)

## Trusted by Many Enterprises



## Community Co-Building

- ❇️ Please read the [Nightingale Open Source Project and Community Governance Draft](./doc/community-governance.md). We sincerely welcome every user, developer, company, and organization to use Nightingale, actively report bugs, submit feature requests, share best practices, and help build a professional and active open-source community.
- ❤️ Nightingale Contributors

<a href="https://github.com/ccfos/nightingale/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ccfos/nightingale" />
</a>

## License

- [Apache License V2.0](https://github.com/didi/nightingale/blob/main/LICENSE)
@@ -82,6 +82,10 @@ func NewDispatch(alertRuleCache *memsto.AlertRuleCacheType, userCache *memsto.Us
 	}

+	pipeline.Init()
+
+	// Set the notification record callback function
+	notifyChannelCache.SetNotifyRecordFunc(sender.NotifyRecord)

 	return notify
 }
@@ -184,13 +188,15 @@ func (e *Dispatch) HandleEventWithNotifyRule(eventOrigin *models.AlertCurEvent)
 	}

 	for _, processor := range processors {
+		var res string
+		var err error
 		logger.Infof("before processor notify_id: %d, event:%+v, processor:%+v", notifyRuleId, eventCopy, processor)
-		eventCopy = processor.Process(e.ctx, eventCopy)
-		logger.Infof("after processor notify_id: %d, event:%+v, processor:%+v", notifyRuleId, eventCopy, processor)
+		eventCopy, res, err = processor.Process(e.ctx, eventCopy)
 		if eventCopy == nil {
-			logger.Warningf("notify_id: %d, event:%+v, processor:%+v, event is nil", notifyRuleId, eventCopy, processor)
+			logger.Warningf("after processor notify_id: %d, event:%+v, processor:%+v, event is nil", notifyRuleId, eventCopy, processor)
 			break
 		}
+		logger.Infof("after processor notify_id: %d, event:%+v, processor:%+v, res:%v, err:%v", notifyRuleId, eventCopy, processor, res, err)
 	}

 	if eventCopy == nil {
@@ -200,9 +206,12 @@ func (e *Dispatch) HandleEventWithNotifyRule(eventOrigin *models.AlertCurEvent)

 	// notify
 	for i := range notifyRule.NotifyConfigs {
-		if !NotifyRuleApplicable(&notifyRule.NotifyConfigs[i], eventCopy) {
+		err := NotifyRuleMatchCheck(&notifyRule.NotifyConfigs[i], eventCopy)
+		if err != nil {
+			logger.Errorf("notify_id: %d, event:%+v, channel_id:%d, template_id: %d, notify_config:%+v, err:%v", notifyRuleId, eventCopy, notifyRule.NotifyConfigs[i].ChannelID, notifyRule.NotifyConfigs[i].TemplateID, notifyRule.NotifyConfigs[i], err)
 			continue
 		}

 		notifyChannel := e.notifyChannelCache.Get(notifyRule.NotifyConfigs[i].ChannelID)
 		messageTemplate := e.messageTemplateCache.Get(notifyRule.NotifyConfigs[i].TemplateID)
 		if notifyChannel == nil {
@@ -265,7 +274,7 @@ func pipelineApplicable(pipeline *models.EventPipeline, event *models.AlertCurEv
 	return tagMatch && attributesMatch
 }

-func NotifyRuleApplicable(notifyConfig *models.NotifyConfig, event *models.AlertCurEvent) bool {
+func NotifyRuleMatchCheck(notifyConfig *models.NotifyConfig, event *models.AlertCurEvent) error {
 	tm := time.Unix(event.TriggerTime, 0)
 	triggerTime := tm.Format("15:04")
 	triggerWeek := int(tm.Weekday())
@@ -317,6 +326,10 @@ func NotifyRuleApplicable(notifyConfig *models.NotifyConfig, event *models.Alert
 		}
 	}

+	if !timeMatch {
+		return fmt.Errorf("event time not match time filter")
+	}
+
 	severityMatch := false
 	for i := range notifyConfig.Severities {
 		if notifyConfig.Severities[i] == event.Severity {
@@ -324,6 +337,10 @@ func NotifyRuleApplicable(notifyConfig *models.NotifyConfig, event *models.Alert
 		}
 	}

+	if !severityMatch {
+		return fmt.Errorf("event severity not match severity filter")
+	}
+
 	tagMatch := true
 	if len(notifyConfig.LabelKeys) > 0 {
 		for i := range notifyConfig.LabelKeys {
@@ -335,23 +352,32 @@ func NotifyRuleApplicable(notifyConfig *models.NotifyConfig, event *models.Alert
 		tagFilters, err := models.ParseTagFilter(notifyConfig.LabelKeys)
 		if err != nil {
 			logger.Errorf("notify send failed to parse tag filter: %v event:%+v notify_config:%+v", err, event, notifyConfig)
-			return false
+			return fmt.Errorf("failed to parse tag filter: %v", err)
 		}
 		tagMatch = common.MatchTags(event.TagsMap, tagFilters)
 	}

+	if !tagMatch {
+		return fmt.Errorf("event tag not match tag filter")
+	}
+
 	attributesMatch := true
 	if len(notifyConfig.Attributes) > 0 {
 		tagFilters, err := models.ParseTagFilter(notifyConfig.Attributes)
 		if err != nil {
 			logger.Errorf("notify send failed to parse tag filter: %v event:%+v notify_config:%+v err:%v", tagFilters, event, notifyConfig, err)
-			return false
+			return fmt.Errorf("failed to parse tag filter: %v", err)
 		}

 		attributesMatch = common.MatchTags(event.JsonTagsAndValue(), tagFilters)
 	}

+	if !attributesMatch {
+		return fmt.Errorf("event attributes not match attributes filter")
+	}
+
-	logger.Infof("notify send timeMatch:%v severityMatch:%v tagMatch:%v attributesMatch:%v event:%+v notify_config:%+v", timeMatch, severityMatch, tagMatch, attributesMatch, event, notifyConfig)
-	return timeMatch && severityMatch && tagMatch && attributesMatch
+	return nil
 }

 func GetNotifyConfigParams(notifyConfig *models.NotifyConfig, contactKey string, userCache *memsto.UserCacheType, userGroupCache *memsto.UserGroupCacheType) ([]string, []int64, map[string]string) {
@@ -447,41 +473,40 @@ func (e *Dispatch) sendV2(events []*models.AlertCurEvent, notifyRuleId int64, no
 	}

 	for i := range flashDutyChannelIDs {
+		start := time.Now()
 		respBody, err := notifyChannel.SendFlashDuty(events, flashDutyChannelIDs[i], e.notifyChannelCache.GetHttpClient(notifyChannel.ID))
+		respBody = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), respBody)
 		logger.Infof("notify_id: %d, channel_name: %v, event:%+v, IntegrationUrl: %v dutychannel_id: %v, respBody: %v, err: %v", notifyRuleId, notifyChannel.Name, events[0], notifyChannel.RequestConfig.FlashDutyRequestConfig.IntegrationUrl, flashDutyChannelIDs[i], respBody, err)
 		sender.NotifyRecord(e.ctx, events, notifyRuleId, notifyChannel.Name, strconv.FormatInt(flashDutyChannelIDs[i], 10), respBody, err)
 	}
 	return

 	case "http":
-		if e.notifyChannelCache.HttpConcurrencyAdd(notifyChannel.ID) {
-			defer e.notifyChannelCache.HttpConcurrencyDone(notifyChannel.ID)
-		}
-		if notifyChannel.RequestConfig == nil {
-			logger.Warningf("notify_id: %d, channel_name: %v, event:%+v, request config not found", notifyRuleId, notifyChannel.Name, events[0])
-		}
-		if notifyChannel.RequestConfig.HTTPRequestConfig == nil {
-			logger.Warningf("notify_id: %d, channel_name: %v, event:%+v, http request config not found", notifyRuleId, notifyChannel.Name, events[0])
-		}
-		if NeedBatchContacts(notifyChannel.RequestConfig.HTTPRequestConfig) || len(sendtos) == 0 {
-			resp, err := notifyChannel.SendHTTP(events, tplContent, customParams, sendtos, e.notifyChannelCache.GetHttpClient(notifyChannel.ID))
-			logger.Infof("notify_id: %d, channel_name: %v, event:%+v, tplContent:%s, customParams:%v, userInfo:%+v, respBody: %v, err: %v", notifyRuleId, notifyChannel.Name, events[0], tplContent, customParams, sendtos, resp, err)
-			sender.NotifyRecord(e.ctx, events, notifyRuleId, notifyChannel.Name, getSendTarget(customParams, sendtos), resp, err)
-		} else {
-			for i := range sendtos {
-				resp, err := notifyChannel.SendHTTP(events, tplContent, customParams, []string{sendtos[i]}, e.notifyChannelCache.GetHttpClient(notifyChannel.ID))
-				logger.Infof("notify_id: %d, channel_name: %v, event:%+v, tplContent:%s, customParams:%v, userInfo:%+v, respBody: %v, err: %v", notifyRuleId, notifyChannel.Name, events[0], tplContent, customParams, sendtos[i], resp, err)
-				sender.NotifyRecord(e.ctx, events, notifyRuleId, notifyChannel.Name, getSendTarget(customParams, []string{sendtos[i]}), resp, err)
-			}
-		}
+		// Handle http notifications through a queue
+		// Create the notification task
+		task := &memsto.NotifyTask{
+			Events:        events,
+			NotifyRuleId:  notifyRuleId,
+			NotifyChannel: notifyChannel,
+			TplContent:    tplContent,
+			CustomParams:  customParams,
+			Sendtos:       sendtos,
+		}
+
+		// Enqueue the task
+		success := e.notifyChannelCache.EnqueueNotifyTask(task)
+		if !success {
+			logger.Errorf("failed to enqueue notify task for channel %d, notify_id: %d", notifyChannel.ID, notifyRuleId)
+			// If enqueueing fails, record an error notification
+			sender.NotifyRecord(e.ctx, events, notifyRuleId, notifyChannel.Name, getSendTarget(customParams, sendtos), "", errors.New("failed to enqueue notify task, queue is full"))
+		}

 	case "smtp":
 		notifyChannel.SendEmail(notifyRuleId, events, tplContent, sendtos, e.notifyChannelCache.GetSmtpClient(notifyChannel.ID))

 	case "script":
+		start := time.Now()
 		target, res, err := notifyChannel.SendScript(events, tplContent, customParams, sendtos)
+		res = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res)
 		logger.Infof("notify_id: %d, channel_name: %v, event:%+v, tplContent:%s, customParams:%v, target:%s, res:%s, err:%v", notifyRuleId, notifyChannel.Name, events[0], tplContent, customParams, target, res, err)
 		sender.NotifyRecord(e.ctx, events, notifyRuleId, notifyChannel.Name, target, res, err)
 	default:
@@ -93,7 +93,7 @@ func (s *Scheduler) syncAlertRules() {
}

ruleType := rule.GetRuleType()
if rule.IsPrometheusRule() || rule.IsLokiRule() || rule.IsTdengineRule() || rule.IsClickHouseRule() || rule.IsElasticSearch() {
if rule.IsPrometheusRule() || rule.IsInnerRule() {
datasourceIds := s.datasourceCache.GetIDsByDsCateAndQueries(rule.Cate, rule.DatasourceQueries)
for _, dsId := range datasourceIds {
if !naming.DatasourceHashRing.IsHit(strconv.FormatInt(dsId, 10), fmt.Sprintf("%d", rule.Id), s.aconf.Heartbeat.Endpoint) {
@@ -144,14 +144,24 @@ func (arw *AlertRuleWorker) Start() {
}

func (arw *AlertRuleWorker) Eval() {
logger.Infof("eval:%s started", arw.Key())
begin := time.Now()
var message string

defer func() {
if len(message) == 0 {
logger.Infof("rule_eval:%s finished, duration:%v", arw.Key(), time.Since(begin))
} else {
logger.Infof("rule_eval:%s finished, duration:%v, message:%s", arw.Key(), time.Since(begin), message)
}
}()

if arw.Processor.PromEvalInterval == 0 {
arw.Processor.PromEvalInterval = getPromEvalInterval(arw.Processor.ScheduleEntry.Schedule)
}

cachedRule := arw.Rule
if cachedRule == nil {
// logger.Errorf("rule_eval:%s Rule not found", arw.Key())
message = "rule not found"
return
}
arw.Processor.Stats.CounterRuleEval.WithLabelValues().Inc()
@@ -177,11 +187,12 @@ func (arw *AlertRuleWorker) Eval() {

if err != nil {
logger.Errorf("rule_eval:%s get anomaly point err:%s", arw.Key(), err.Error())
message = "failed to get anomaly points"
return
}

if arw.Processor == nil {
logger.Warningf("rule_eval:%s Processor is nil", arw.Key())
message = "processor is nil"
return
}

@@ -223,7 +234,7 @@ func (arw *AlertRuleWorker) Eval() {
}

func (arw *AlertRuleWorker) Stop() {
logger.Infof("rule_eval %s stopped", arw.Key())
logger.Infof("rule_eval:%s stopped", arw.Key())
close(arw.Quit)
c := arw.Scheduler.Stop()
<-c.Done()
@@ -9,6 +9,7 @@ import (
"github.com/ccfos/nightingale/v6/memsto"
"github.com/ccfos/nightingale/v6/models"

"github.com/pkg/errors"
"github.com/toolkits/pkg/logger"
)

@@ -135,7 +136,8 @@ func EventMuteStrategy(event *models.AlertCurEvent, alertMuteCache *memsto.Alert
}

for i := 0; i < len(mutes); i++ {
if MatchMute(event, mutes[i]) {
matched, _ := MatchMute(event, mutes[i])
if matched {
return true, mutes[i].Id
}
}
@@ -144,9 +146,9 @@ func EventMuteStrategy(event *models.AlertCurEvent, alertMuteCache *memsto.Alert
}

// MatchMute uses the optional clock argument as the evaluation time if provided; otherwise it falls back to the event's TriggerTime
func MatchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int64) bool {
func MatchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int64) (bool, error) {
if mute.Disabled == 1 {
return false
return false, errors.New("mute is disabled")
}

// if the mute is not global, check the matched datasource ids
@@ -158,13 +160,13 @@ func MatchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int

// check whether event.DatasourceId is contained in idm
if _, has := idm[event.DatasourceId]; !has {
return false
return false, errors.New("datasource id not match")
}
}

if mute.MuteTimeType == models.TimeRange {
if !mute.IsWithinTimeRange(event.TriggerTime) {
return false
return false, errors.New("event trigger time not within mute time range")
}
} else if mute.MuteTimeType == models.Periodic {
ts := event.TriggerTime
@@ -173,11 +175,11 @@ func MatchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int
}

if !mute.IsWithinPeriodicMute(ts) {
return false
return false, errors.New("event trigger time not within periodic mute range")
}
} else {
logger.Warningf("mute time type invalid, %d", mute.MuteTimeType)
return false
return false, errors.New("mute time type invalid")
}

var matchSeverity bool
@@ -193,12 +195,14 @@ func MatchMute(event *models.AlertCurEvent, mute *models.AlertMute, clock ...int
}

if !matchSeverity {
return false
return false, errors.New("event severity not match mute severity")
}

if mute.ITags == nil || len(mute.ITags) == 0 {
return true
return true, nil
}

return common.MatchTags(event.TagsMap, mute.ITags)
if !common.MatchTags(event.TagsMap, mute.ITags) {
return false, errors.New("event tags not match mute tags")
}
return true, nil
}
@@ -1,6 +1,7 @@
package pipeline

import (
_ "github.com/ccfos/nightingale/v6/alert/pipeline/processor/aisummary"
_ "github.com/ccfos/nightingale/v6/alert/pipeline/processor/callback"
_ "github.com/ccfos/nightingale/v6/alert/pipeline/processor/eventdrop"
_ "github.com/ccfos/nightingale/v6/alert/pipeline/processor/eventupdate"

alert/pipeline/processor/aisummary/ai_summary.go (new file, 198 lines)
@@ -0,0 +1,198 @@
package aisummary

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
	"text/template"
	"time"

	"github.com/ccfos/nightingale/v6/alert/pipeline/processor/callback"
	"github.com/ccfos/nightingale/v6/alert/pipeline/processor/common"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/ccfos/nightingale/v6/pkg/tplx"
)

const (
	HTTP_STATUS_SUCCESS_MAX = 299
)

// AISummaryConfig is the processor's configuration struct
type AISummaryConfig struct {
	callback.HTTPConfig
	ModelName      string                 `json:"model_name"`
	APIKey         string                 `json:"api_key"`
	PromptTemplate string                 `json:"prompt_template"`
	CustomParams   map[string]interface{} `json:"custom_params"`
}

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatCompletionResponse struct {
	Choices []struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

func init() {
	models.RegisterProcessor("ai_summary", &AISummaryConfig{})
}

func (c *AISummaryConfig) Init(settings interface{}) (models.Processor, error) {
	result, err := common.InitProcessor[*AISummaryConfig](settings)
	return result, err
}

func (c *AISummaryConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) (*models.AlertCurEvent, string, error) {
	if c.Client == nil {
		if err := c.initHTTPClient(); err != nil {
			return event, "", fmt.Errorf("failed to initialize HTTP client: %v processor: %v", err, c)
		}
	}

	// prepare the alert event info
	eventInfo, err := c.prepareEventInfo(event)
	if err != nil {
		return event, "", fmt.Errorf("failed to prepare event info: %v processor: %v", err, c)
	}

	// call the AI model to generate a summary
	summary, err := c.generateAISummary(eventInfo)
	if err != nil {
		return event, "", fmt.Errorf("failed to generate AI summary: %v processor: %v", err, c)
	}

	// add the summary to the annotations field
	if event.AnnotationsJSON == nil {
		event.AnnotationsJSON = make(map[string]string)
	}
	event.AnnotationsJSON["ai_summary"] = summary

	// update the Annotations field
	b, err := json.Marshal(event.AnnotationsJSON)
	if err != nil {
		return event, "", fmt.Errorf("failed to marshal annotations: %v processor: %v", err, c)
	}
	event.Annotations = string(b)

	return event, "", nil
}

func (c *AISummaryConfig) initHTTPClient() error {
	transport := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: c.SkipSSLVerify},
	}

	if c.Proxy != "" {
		proxyURL, err := url.Parse(c.Proxy)
		if err != nil {
			return fmt.Errorf("failed to parse proxy url: %v", err)
		}
		transport.Proxy = http.ProxyURL(proxyURL)
	}

	c.Client = &http.Client{
		Timeout:   time.Duration(c.Timeout) * time.Millisecond,
		Transport: transport,
	}
	return nil
}

func (c *AISummaryConfig) prepareEventInfo(event *models.AlertCurEvent) (string, error) {
	var defs = []string{
		"{{$event := .}}",
	}

	text := strings.Join(append(defs, c.PromptTemplate), "")
	t, err := template.New("prompt").Funcs(template.FuncMap(tplx.TemplateFuncMap)).Parse(text)
	if err != nil {
		return "", fmt.Errorf("failed to parse prompt template: %v", err)
	}

	var body bytes.Buffer
	err = t.Execute(&body, event)
	if err != nil {
		return "", fmt.Errorf("failed to execute prompt template: %v", err)
	}

	return body.String(), nil
}

func (c *AISummaryConfig) generateAISummary(eventInfo string) (string, error) {
	// build the base request parameters
	reqParams := map[string]interface{}{
		"model": c.ModelName,
		"messages": []Message{
			{
				Role:    "user",
				Content: eventInfo,
			},
		},
	}

	// merge custom parameters
	for k, v := range c.CustomParams {
		reqParams[k] = v
	}

	// serialize the request body
	jsonData, err := json.Marshal(reqParams)
	if err != nil {
		return "", fmt.Errorf("failed to marshal request body: %v", err)
	}

	// create the HTTP request
	req, err := http.NewRequest("POST", c.URL, bytes.NewBuffer(jsonData))
	if err != nil {
		return "", fmt.Errorf("failed to create request: %v", err)
	}

	// set request headers
	req.Header.Set("Authorization", "Bearer "+c.APIKey)
	req.Header.Set("Content-Type", "application/json")
	for k, v := range c.Headers {
		req.Header.Set(k, v)
	}

	// send the request
	resp, err := c.Client.Do(req)
	if err != nil {
		return "", fmt.Errorf("failed to send request: %v", err)
	}
	defer resp.Body.Close()

	// check the response status code
	if resp.StatusCode > HTTP_STATUS_SUCCESS_MAX {
		body, _ := io.ReadAll(resp.Body)
		return "", fmt.Errorf("unexpected status code: %d, body: %s", resp.StatusCode, string(body))
	}

	// read the response
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("failed to read response body: %v", err)
	}

	// parse the response
	var chatResp ChatCompletionResponse
	if err := json.Unmarshal(body, &chatResp); err != nil {
		return "", fmt.Errorf("failed to unmarshal response: %v", err)
	}

	if len(chatResp.Choices) == 0 {
		return "", fmt.Errorf("no response from AI model")
	}

	return chatResp.Choices[0].Message.Content, nil
}
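Given the JSON tags on `AISummaryConfig` (and the embedded `HTTPConfig`'s `url` and `timeout` fields), a settings payload for this processor might look like the sketch below. The URL, model name, and prompt are placeholders, not values the code prescribes:

```json
{
  "url": "https://api.example.com/v1/chat/completions",
  "timeout": 30000,
  "model_name": "example-model",
  "api_key": "YOUR_API_KEY",
  "prompt_template": "Summarize this alert: rule {{$event.RuleName}}, severity {{$event.Severity}}",
  "custom_params": {
    "temperature": 0.7,
    "max_tokens": 2000
  }
}
```

Note that `prepareEventInfo` prepends `{{$event := .}}`, so the template can refer to the event as `$event`; any key in `custom_params` overrides the base request parameters, since it is merged after `model` and `messages`.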
alert/pipeline/processor/aisummary/ai_summary_test.go (new file, 69 lines)
@@ -0,0 +1,69 @@
package aisummary

import (
	"testing"

	"github.com/ccfos/nightingale/v6/alert/pipeline/processor/callback"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
	"github.com/stretchr/testify/assert"
)

func TestAISummaryConfig_Process(t *testing.T) {
	// build a test config
	config := &AISummaryConfig{
		HTTPConfig: callback.HTTPConfig{
			URL:           "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions",
			Timeout:       30000,
			SkipSSLVerify: true,
			Headers: map[string]string{
				"Content-Type": "application/json",
			},
		},
		ModelName:      "gemini-2.0-flash",
		APIKey:         "*",
		PromptTemplate: "Alert rule: {{$event.RuleName}}\nSeverity: {{$event.Severity}}",
		CustomParams: map[string]interface{}{
			"temperature": 0.7,
			"max_tokens":  2000,
			"top_p":       0.9,
		},
	}

	// build a test event
	event := &models.AlertCurEvent{
		RuleName: "Test Rule",
		Severity: 1,
		TagsMap: map[string]string{
			"host": "test-host",
		},
		AnnotationsJSON: map[string]string{
			"description": "Test alert",
		},
	}

	// test template rendering
	eventInfo, err := config.prepareEventInfo(event)
	assert.NoError(t, err)
	assert.Contains(t, eventInfo, "Test Rule")
	assert.Contains(t, eventInfo, "1")

	// test config initialization
	processor, err := config.Init(config)
	assert.NoError(t, err)
	assert.NotNil(t, processor)

	// test the Process function
	result, _, err := processor.Process(&ctx.Context{}, event)
	assert.NoError(t, err)
	assert.NotNil(t, result)
	assert.NotEmpty(t, result.AnnotationsJSON["ai_summary"])

	// show the result
	t.Log("\n=== result ===")
	t.Logf("alert rule: %s", result.RuleName)
	t.Logf("severity: %d", result.Severity)
	t.Logf("tags: %v", result.TagsMap)
	t.Logf("original annotation: %v", result.AnnotationsJSON["description"])
	t.Logf("AI summary: %s", result.AnnotationsJSON["ai_summary"])
}
@@ -3,6 +3,7 @@ package callback
import (
"crypto/tls"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
@@ -19,7 +20,7 @@ type HTTPConfig struct {
URL string `json:"url"`
Method string `json:"method,omitempty"`
Body string `json:"body,omitempty"`
Headers map[string]string `json:"headers"`
Headers map[string]string `json:"header"`
AuthUsername string `json:"auth_username"`
AuthPassword string `json:"auth_password"`
Timeout int `json:"timeout"` // unit: ms
@@ -42,7 +43,7 @@ func (c *CallbackConfig) Init(settings interface{}) (models.Processor, error) {
return result, err
}

func (c *CallbackConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) *models.AlertCurEvent {
func (c *CallbackConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) (*models.AlertCurEvent, string, error) {
if c.Client == nil {
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: c.SkipSSLVerify},
@@ -51,7 +52,7 @@ func (c *CallbackConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent)
if c.Proxy != "" {
proxyURL, err := url.Parse(c.Proxy)
if err != nil {
logger.Errorf("failed to parse proxy url: %v", err)
return event, "", fmt.Errorf("failed to parse proxy url: %v processor: %v", err, c)
} else {
transport.Proxy = http.ProxyURL(proxyURL)
}
@@ -71,14 +72,12 @@ func (c *CallbackConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent)

body, err := json.Marshal(event)
if err != nil {
logger.Errorf("failed to marshal event: %v", err)
return event
return event, "", fmt.Errorf("failed to marshal event: %v processor: %v", err, c)
}

req, err := http.NewRequest("POST", c.URL, strings.NewReader(string(body)))
if err != nil {
logger.Errorf("failed to create request: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("failed to create request: %v processor: %v", err, c)
}

for k, v := range headers {
@@ -91,16 +90,14 @@ func (c *CallbackConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent)

resp, err := c.Client.Do(req)
if err != nil {
logger.Errorf("failed to send request: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("failed to send request: %v processor: %v", err, c)
}

b, err := io.ReadAll(resp.Body)
if err != nil {
logger.Errorf("failed to read response body: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("failed to read response body: %v processor: %v", err, c)
}

logger.Infof("response body: %s", string(b))
return event
logger.Debugf("callback processor response body: %s", string(b))
return event, "callback success", nil
}
@@ -2,6 +2,7 @@ package eventdrop

import (
"bytes"
"fmt"
"strings"
texttemplate "text/template"

@@ -25,7 +26,7 @@ func (c *EventDropConfig) Init(settings interface{}) (models.Processor, error) {
return result, err
}

func (c *EventDropConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) *models.AlertCurEvent {
func (c *EventDropConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) (*models.AlertCurEvent, string, error) {
// background: this processor enables more flexible event-filtering logic
// it can be used when tag filtering and attribute filtering are not enough
// if the template evaluates to true, the event is dropped
@@ -40,22 +41,20 @@ func (c *EventDropConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent)

tpl, err := texttemplate.New("eventdrop").Funcs(tplx.TemplateFuncMap).Parse(text)
if err != nil {
logger.Errorf("processor failed to parse template: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("processor failed to parse template: %v processor: %v", err, c)
}

var body bytes.Buffer
if err = tpl.Execute(&body, event); err != nil {
logger.Errorf("processor failed to execute template: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("processor failed to execute template: %v processor: %v", err, c)
}

result := strings.TrimSpace(body.String())
logger.Infof("processor eventdrop result: %v", result)
if result == "true" {
logger.Infof("processor eventdrop drop event: %v", event)
return nil
return nil, "drop event success", nil
}

return event
return event, "drop event failed", nil
}
@@ -3,6 +3,7 @@ package eventupdate
import (
"crypto/tls"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
@@ -30,7 +31,7 @@ func (c *EventUpdateConfig) Init(settings interface{}) (models.Processor, error)
return result, err
}

func (c *EventUpdateConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) *models.AlertCurEvent {
func (c *EventUpdateConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) (*models.AlertCurEvent, string, error) {
if c.Client == nil {
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: c.SkipSSLVerify},
@@ -39,7 +40,7 @@ func (c *EventUpdateConfig) Process(ctx *ctx.Context, event *models.AlertCurEven
if c.Proxy != "" {
proxyURL, err := url.Parse(c.Proxy)
if err != nil {
logger.Errorf("failed to parse proxy url: %v", err)
return event, "", fmt.Errorf("failed to parse proxy url: %v processor: %v", err, c)
} else {
transport.Proxy = http.ProxyURL(proxyURL)
}
@@ -59,14 +60,12 @@ func (c *EventUpdateConfig) Process(ctx *ctx.Context, event *models.AlertCurEven

body, err := json.Marshal(event)
if err != nil {
logger.Errorf("failed to marshal event: %v", err)
return event
return event, "", fmt.Errorf("failed to marshal event: %v processor: %v", err, c)
}

req, err := http.NewRequest("POST", c.URL, strings.NewReader(string(body)))
if err != nil {
logger.Errorf("failed to create request: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("failed to create request: %v processor: %v", err, c)
}

for k, v := range headers {
@@ -79,17 +78,19 @@ func (c *EventUpdateConfig) Process(ctx *ctx.Context, event *models.AlertCurEven

resp, err := c.Client.Do(req)
if err != nil {
logger.Errorf("failed to send request: %v event: %v", err, event)
return event
return event, "", fmt.Errorf("failed to send request: %v processor: %v", err, c)
}

b, err := io.ReadAll(resp.Body)
if err != nil {
logger.Errorf("failed to read response body: %v event: %v", err, event)
return event
return nil, "", fmt.Errorf("failed to read response body: %v processor: %v", err, c)
}
logger.Infof("response body: %s", string(b))
logger.Debugf("event update processor response body: %s", string(b))

json.Unmarshal(b, &event)
return event
err = json.Unmarshal(b, &event)
if err != nil {
return event, "", fmt.Errorf("failed to unmarshal response body: %v processor: %v", err, c)
}

return event, "", nil
}
@@ -42,7 +42,7 @@ func (r *RelabelConfig) Init(settings interface{}) (models.Processor, error) {
return result, err
}

func (r *RelabelConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) *models.AlertCurEvent {
func (r *RelabelConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) (*models.AlertCurEvent, string, error) {
sourceLabels := make([]model.LabelName, len(r.SourceLabels))
for i := range r.SourceLabels {
sourceLabels[i] = model.LabelName(strings.ReplaceAll(r.SourceLabels[i], ".", REPLACE_DOT))
@@ -64,7 +64,7 @@ func (r *RelabelConfig) Process(ctx *ctx.Context, event *models.AlertCurEvent) *
}

EventRelabel(event, relabelConfigs)
return event
return event, "", nil
}

func EventRelabel(event *models.AlertCurEvent, relabelConfigs []*pconf.RelabelConfig) {
@@ -280,7 +280,7 @@ func Relabel(rule *models.AlertRule, event *models.AlertCurEvent) {

// need to keep the original label
event.OriginalTags = event.Tags
event.OriginalTagsJSON = make([]string, len(event.TagsJSON))
event.OriginalTagsJSON = event.TagsJSON

if len(rule.EventRelabelConfig) == 0 {
return
@@ -468,16 +468,18 @@ func (p *Processor) fireEvent(event *models.AlertCurEvent) {
return
}

logger.Debugf("rule_eval:%s event:%+v fire", p.Key(), event)
message := "unknown"
defer func() {
logger.Infof("rule_eval:%s event-hash-%s %s", p.Key(), event.Hash, message)
}()

if fired, has := p.fires.Get(event.Hash); has {
p.fires.UpdateLastEvalTime(event.Hash, event.LastEvalTime)
event.FirstTriggerTime = fired.FirstTriggerTime
p.HandleFireEventHook(event)

if cachedRule.NotifyRepeatStep == 0 {
logger.Debugf("rule_eval:%s event:%+v repeat is zero nothing to do", p.Key(), event)
// repeat notification is not wanted, so just return, nothing to do
// do not need to send alert again
message = "stalled, rule.notify_repeat_step is 0, no need to repeat notify"
return
}

@@ -486,21 +488,26 @@ func (p *Processor) fireEvent(event *models.AlertCurEvent) {
if cachedRule.NotifyMaxNumber == 0 {
// a max send count of 0 means no limit on the number of notifications, keep sending
event.NotifyCurNumber = fired.NotifyCurNumber + 1
message = fmt.Sprintf("fired, notify_repeat_step_matched(%d >= %d + %d * 60) notify_max_number_ignore(#%d / %d)", event.LastEvalTime, fired.LastSentTime, cachedRule.NotifyRepeatStep, event.NotifyCurNumber, cachedRule.NotifyMaxNumber)
p.pushEventToQueue(event)
} else {
// with a max send count limit, check how many notifications were already sent and whether the limit is reached
if fired.NotifyCurNumber >= cachedRule.NotifyMaxNumber {
logger.Debugf("rule_eval:%s event:%+v reach max number", p.Key(), event)
message = fmt.Sprintf("stalled, notify_repeat_step_matched(%d >= %d + %d * 60) notify_max_number_not_matched(#%d / %d)", event.LastEvalTime, fired.LastSentTime, cachedRule.NotifyRepeatStep, fired.NotifyCurNumber, cachedRule.NotifyMaxNumber)
return
} else {
event.NotifyCurNumber = fired.NotifyCurNumber + 1
message = fmt.Sprintf("fired, notify_repeat_step_matched(%d >= %d + %d * 60) notify_max_number_matched(#%d / %d)", event.LastEvalTime, fired.LastSentTime, cachedRule.NotifyRepeatStep, event.NotifyCurNumber, cachedRule.NotifyMaxNumber)
p.pushEventToQueue(event)
}
}
} else {
message = fmt.Sprintf("stalled, notify_repeat_step_not_matched(%d < %d + %d * 60)", event.LastEvalTime, fired.LastSentTime, cachedRule.NotifyRepeatStep)
}
} else {
event.NotifyCurNumber = 1
event.FirstTriggerTime = event.TriggerTime
message = fmt.Sprintf("fired, first_trigger_time: %d", event.FirstTriggerTime)
p.HandleFireEventHook(event)
p.pushEventToQueue(event)
}
@@ -578,7 +585,9 @@ func (p *Processor) fillTags(anomalyPoint models.AnomalyPoint) {
}

// handle rule tags
for _, tag := range p.rule.AppendTagsJSON {
tags := p.rule.AppendTagsJSON
tags = append(tags, "rulename="+p.rule.Name)
for _, tag := range tags {
arr := strings.SplitN(tag, "=", 2)

var defs = []string{
@@ -604,8 +613,6 @@ func (p *Processor) fillTags(anomalyPoint models.AnomalyPoint) {

tagsMap[arr[0]] = body.String()
}

tagsMap["rulename"] = p.rule.Name
p.tagsMap = tagsMap

// handle tagsArr
@@ -1,6 +1,7 @@
package sender

import (
"fmt"
"html/template"
"net/url"
"strings"
@@ -134,7 +135,9 @@ func (c *DefaultCallBacker) CallBack(ctx CallBackContext) {

func doSendAndRecord(ctx *ctx.Context, url, token string, body interface{}, channel string,
stats *astats.Stats, events []*models.AlertCurEvent) {
start := time.Now()
res, err := doSend(url, body, channel, stats)
res = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res)
NotifyRecord(ctx, events, 0, channel, token, res, err)
}
@@ -166,7 +169,9 @@ func NotifyRecord(ctx *ctx.Context, evts []*models.AlertCurEvent, notifyRuleID i
func doSend(url string, body interface{}, channel string, stats *astats.Stats) (string, error) {
stats.AlertNotifyTotal.WithLabelValues(channel).Inc()

start := time.Now()
res, code, err := poster.PostJSON(url, time.Second*5, body, 3)
res = []byte(fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res))
if err != nil {
logger.Errorf("%s_sender: result=fail url=%s code=%d error=%v req:%v response=%s", channel, url, code, err, body, string(res))
stats.AlertNotifyErrorTotal.WithLabelValues(channel).Inc()
@@ -79,6 +79,7 @@ func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models
cmd.Stdout = &buf
cmd.Stderr = &buf

start := time.Now()
err := startCmd(cmd)
if err != nil {
logger.Errorf("event_script_notify_fail: run cmd err: %v", err)
@@ -88,6 +89,7 @@ func alertingCallScript(ctx *ctx.Context, stdinBytes []byte, notifyScript models
err, isTimeout := sys.WrapTimeout(cmd, time.Duration(config.Timeout)*time.Second)

res := buf.String()
res = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res)

// truncate output that exceeds the length limit
if len(res) > 512 {
@@ -99,7 +99,9 @@ func SingleSendWebhooks(ctx *ctx.Context, webhooks map[string]*models.Webhook, e
for _, conf := range webhooks {
retryCount := 0
for retryCount < 3 {
start := time.Now()
needRetry, res, err := sendWebhook(conf, event, stats)
res = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res)
NotifyRecord(ctx, []*models.AlertCurEvent{event}, 0, "webhook", conf.Url, res, err)
if !needRetry {
break
@@ -169,7 +171,9 @@ func StartConsumer(ctx *ctx.Context, queue *WebhookQueue, popSize int, webhook *

retryCount := 0
for retryCount < webhook.RetryCount {
start := time.Now()
needRetry, res, err := sendWebhook(webhook, events, stats)
res = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), res)
go NotifyRecord(ctx, events, 0, "webhook", webhook.Url, res, err)
if !needRetry {
break
@@ -31,4 +31,28 @@ var Plugins = []Plugin{
Type: "ck",
TypeName: "ClickHouse",
},
{
Id: 6,
Category: "timeseries",
Type: "mysql",
TypeName: "MySQL",
},
{
Id: 7,
Category: "timeseries",
Type: "pgsql",
TypeName: "PostgreSQL",
},
{
Id: 8,
Category: "logging",
Type: "doris",
TypeName: "Doris",
},
{
Id: 9,
Category: "logging",
Type: "opensearch",
TypeName: "OpenSearch",
},
}
@@ -3,11 +3,15 @@ package integration
import (
    "encoding/json"
    "path"
    "sort"
    "strings"
    "time"

    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/ctx"

    "github.com/pkg/errors"
    "github.com/toolkits/pkg/container/set"
    "github.com/toolkits/pkg/file"
    "github.com/toolkits/pkg/logger"
    "github.com/toolkits/pkg/runner"
@@ -15,7 +19,18 @@ import (

const SYSTEM = "system"

var BuiltinPayloadInFile *BuiltinPayloadInFileType

type BuiltinPayloadInFileType struct {
    Data      map[uint64]map[string]map[string][]*models.BuiltinPayload // map[component_id]map[type]map[cate][]*models.BuiltinPayload
    IndexData map[int64]*models.BuiltinPayload                          // map[uuid]payload

    BuiltinMetrics map[string]*models.BuiltinMetric
}

func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
    BuiltinPayloadInFile = NewBuiltinPayloadInFileType()

    err := models.InitBuiltinPayloads(ctx)
    if err != nil {
        logger.Warning("init old builtinPayloads fail ", err)
@@ -146,11 +161,10 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
    }

    newAlerts := []models.AlertRule{}
    writeAlertFileFlag := false
    for _, alert := range alerts {
        if alert.UUID == 0 {
            writeAlertFileFlag = true
            alert.UUID = time.Now().UnixNano()
            time.Sleep(time.Microsecond)
            alert.UUID = time.Now().UnixMicro()
        }

        newAlerts = append(newAlerts, alert)

@@ -169,47 +183,13 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
            Tags:      alert.AppendTags,
            Content:   string(content),
            UUID:      alert.UUID,
            ID:        alert.UUID,
            CreatedBy: SYSTEM,
            UpdatedBy: SYSTEM,
        }
        BuiltinPayloadInFile.addBuiltinPayload(&builtinAlert)

        old, err := models.BuiltinPayloadGet(ctx, "uuid = ?", alert.UUID)
        if err != nil {
            logger.Warning("get builtin alert fail ", builtinAlert, err)
            continue
        }

        if old == nil {
            err := builtinAlert.Add(ctx, SYSTEM)
            if err != nil {
                logger.Warning("add builtin alert fail ", builtinAlert, err)
            }
            continue
        }

        if old.UpdatedBy == SYSTEM {
            old.ComponentID = component.ID
            old.Content = string(content)
            old.Name = alert.Name
            old.Tags = alert.AppendTags
            err = models.DB(ctx).Model(old).Select("*").Updates(old).Error
            if err != nil {
                logger.Warningf("update builtin alert:%+v fail %v", builtinAlert, err)
            }
        }
    }

    if writeAlertFileFlag {
        bs, err = json.MarshalIndent(newAlerts, "", " ")
        if err != nil {
            logger.Warning("marshal builtin alerts fail ", newAlerts, err)
            continue
        }

        _, err = file.WriteBytes(fp, bs)
        if err != nil {
            logger.Warning("write builtin alerts file fail ", f, err)
        }
    }

    }
}

@@ -261,32 +241,11 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
            Tags:      dashboard.Tags,
            Content:   string(content),
            UUID:      dashboard.UUID,
            ID:        dashboard.UUID,
            CreatedBy: SYSTEM,
            UpdatedBy: SYSTEM,
        }

        old, err := models.BuiltinPayloadGet(ctx, "uuid = ?", dashboard.UUID)
        if err != nil {
            logger.Warning("get builtin alert fail ", builtinDashboard, err)
            continue
        }

        if old == nil {
            err := builtinDashboard.Add(ctx, SYSTEM)
            if err != nil {
                logger.Warning("add builtin alert fail ", builtinDashboard, err)
            }
            continue
        }

        if old.UpdatedBy == SYSTEM {
            old.ComponentID = component.ID
            old.Content = string(content)
            old.Name = dashboard.Name
            old.Tags = dashboard.Tags
            err = models.DB(ctx).Model(old).Select("*").Updates(old).Error
            if err != nil {
                logger.Warningf("update builtin alert:%+v fail %v", builtinDashboard, err)
            }
        }
        BuiltinPayloadInFile.addBuiltinPayload(&builtinDashboard)
    }
} else if err != nil {
    logger.Warningf("read builtin component dash dir fail %s %v", component.Ident, err)
@@ -304,64 +263,23 @@ func Init(ctx *ctx.Context, builtinIntegrationsDir string) {
    }

    metrics := []models.BuiltinMetric{}
    newMetrics := []models.BuiltinMetric{}
    err = json.Unmarshal(bs, &metrics)
    if err != nil {
        logger.Warning("parse builtin component metrics file fail", f, err)
        continue
    }

    writeMetricFileFlag := false
    for _, metric := range metrics {
        if metric.UUID == 0 {
            writeMetricFileFlag = true
            metric.UUID = time.Now().UnixNano()
            time.Sleep(time.Microsecond)
            metric.UUID = time.Now().UnixMicro()
        }
        newMetrics = append(newMetrics, metric)
        metric.ID = metric.UUID
        metric.CreatedBy = SYSTEM
        metric.UpdatedBy = SYSTEM

        old, err := models.BuiltinMetricGet(ctx, "uuid = ?", metric.UUID)
        if err != nil {
            logger.Warning("get builtin metrics fail ", metric, err)
            continue
        }

        if old == nil {
            err := metric.Add(ctx, SYSTEM)
            if err != nil {
                logger.Warning("add builtin metrics fail ", metric, err)
            }
            continue
        }

        if old.UpdatedBy == SYSTEM {
            old.Collector = metric.Collector
            old.Typ = metric.Typ
            old.Name = metric.Name
            old.Unit = metric.Unit
            old.Note = metric.Note
            old.Lang = metric.Lang
            old.Expression = metric.Expression

            err = models.DB(ctx).Model(old).Select("*").Updates(old).Error
            if err != nil {
                logger.Warningf("update builtin metric:%+v fail %v", metric, err)
            }
        }
        BuiltinPayloadInFile.BuiltinMetrics[metric.Expression] = &metric
    }

    if writeMetricFileFlag {
        bs, err = json.MarshalIndent(newMetrics, "", " ")
        if err != nil {
            logger.Warning("marshal builtin metrics fail ", newMetrics, err)
            continue
        }

        _, err = file.WriteBytes(fp, bs)
        if err != nil {
            logger.Warning("write builtin metrics file fail ", f, err)
        }
    }

    }
} else if err != nil {
    logger.Warningf("read builtin component metrics dir fail %s %v", component.Ident, err)
@@ -387,3 +305,321 @@ type BuiltinBoard struct {
    Hide int   `json:"hide"` // 0: false, 1: true
    UUID int64 `json:"uuid"`
}

func NewBuiltinPayloadInFileType() *BuiltinPayloadInFileType {
    return &BuiltinPayloadInFileType{
        Data:           make(map[uint64]map[string]map[string][]*models.BuiltinPayload),
        IndexData:      make(map[int64]*models.BuiltinPayload),
        BuiltinMetrics: make(map[string]*models.BuiltinMetric),
    }
}

func (b *BuiltinPayloadInFileType) addBuiltinPayload(bp *models.BuiltinPayload) {
    if _, exists := b.Data[bp.ComponentID]; !exists {
        b.Data[bp.ComponentID] = make(map[string]map[string][]*models.BuiltinPayload)
    }
    bpInType := b.Data[bp.ComponentID]
    if _, exists := bpInType[bp.Type]; !exists {
        bpInType[bp.Type] = make(map[string][]*models.BuiltinPayload)
    }
    bpInCate := bpInType[bp.Type]
    if _, exists := bpInCate[bp.Cate]; !exists {
        bpInCate[bp.Cate] = make([]*models.BuiltinPayload, 0)
    }
    bpInCate[bp.Cate] = append(bpInCate[bp.Cate], bp)

    b.IndexData[bp.UUID] = bp
}

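The lazy two-level map initialization used by `addBuiltinPayload` can be sketched in isolation. This is a minimal, hypothetical standalone program: the `add` helper and plain strings stand in for the real `models.BuiltinPayload` values.

```go
package main

import "fmt"

// add inserts name into a map[type]map[cate][]string index,
// creating each inner map lazily, the same way addBuiltinPayload does.
func add(data map[string]map[string][]string, typ, cate, name string) {
    if _, ok := data[typ]; !ok {
        data[typ] = make(map[string][]string)
    }
    data[typ][cate] = append(data[typ][cate], name)
}

func main() {
    data := make(map[string]map[string][]string)
    add(data, "alert", "linux", "CPU usage high")
    add(data, "alert", "linux", "Memory usage high")
    add(data, "dash", "mysql", "MySQL overview")
    fmt.Println(len(data["alert"]["linux"])) // 2
    fmt.Println(len(data["dash"]["mysql"]))  // 1
}
```

Note that appending to `data[typ][cate]` works even before the slice exists, since `append` on a nil slice allocates; only the intermediate maps must be created explicitly.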
func (b *BuiltinPayloadInFileType) GetBuiltinPayload(typ, cate, query string, componentId uint64) ([]*models.BuiltinPayload, error) {
    var result []*models.BuiltinPayload
    source := b.Data[componentId]

    if source == nil {
        return nil, nil
    }

    typeMap, exists := source[typ]
    if !exists {
        return nil, nil
    }

    if cate != "" {
        payloads, exists := typeMap[cate]
        if !exists {
            return nil, nil
        }
        result = append(result, filterByQuery(payloads, query)...)
    } else {
        for _, payloads := range typeMap {
            result = append(result, filterByQuery(payloads, query)...)
        }
    }

    if len(result) > 0 {
        sort.Slice(result, func(i, j int) bool {
            return result[i].Name < result[j].Name
        })
    }

    return result, nil
}

func (b *BuiltinPayloadInFileType) GetBuiltinPayloadCates(typ string, componentId uint64) ([]string, error) {
    var result []string
    source := b.Data[componentId]
    if source == nil {
        return result, nil
    }

    typeData := source[typ]
    if typeData == nil {
        return result, nil
    }
    for cate := range typeData {
        result = append(result, cate)
    }

    sort.Strings(result)
    return result, nil
}

func filterByQuery(payloads []*models.BuiltinPayload, query string) []*models.BuiltinPayload {
    if query == "" {
        return payloads
    }

    var filtered []*models.BuiltinPayload
    for _, p := range payloads {
        if strings.Contains(p.Name, query) || strings.Contains(p.Tags, query) {
            filtered = append(filtered, p)
        }
    }
    return filtered
}

func (b *BuiltinPayloadInFileType) BuiltinMetricGets(metricsInDB []*models.BuiltinMetric, lang, collector, typ, query, unit string, limit, offset int) ([]*models.BuiltinMetric, int, error) {
    var filteredMetrics []*models.BuiltinMetric
    expressionSet := set.NewStringSet()
    builtinMetricsByDB := convertBuiltinMetricByDB(metricsInDB)
    builtinMetricsMap := make(map[string]*models.BuiltinMetric)

    for expression, metric := range builtinMetricsByDB {
        builtinMetricsMap[expression] = metric
    }

    for expression, metric := range b.BuiltinMetrics {
        builtinMetricsMap[expression] = metric
    }

    for _, metric := range builtinMetricsMap {
        if !applyFilter(metric, collector, typ, query, unit) {
            continue
        }

        // Skip if the expression is already in the db cache.
        // NOTE: Ignore duplicate expressions. In particular, users may have created
        // duplicate metrics on older versions; those must override the matching
        // metrics loaded from file.
        if expressionSet.Exists(metric.Expression) {
            continue
        }

        // Add the expression to the set.
        expressionSet.Add(metric.Expression)

        // Apply language
        trans, err := getTranslationWithLanguage(metric, lang)
        if err != nil {
            logger.Errorf("Error getting translation for metric %s: %v", metric.Name, err)
            continue // Skip if translation not found
        }
        metric.Name = trans.Name
        metric.Note = trans.Note

        filteredMetrics = append(filteredMetrics, metric)
    }

    // Sort metrics
    sort.Slice(filteredMetrics, func(i, j int) bool {
        if filteredMetrics[i].Collector != filteredMetrics[j].Collector {
            return filteredMetrics[i].Collector < filteredMetrics[j].Collector
        }
        if filteredMetrics[i].Typ != filteredMetrics[j].Typ {
            return filteredMetrics[i].Typ < filteredMetrics[j].Typ
        }
        return filteredMetrics[i].Expression < filteredMetrics[j].Expression
    })

    totalCount := len(filteredMetrics)

    // Validate parameters
    if offset < 0 {
        offset = 0
    }
    if limit < 0 {
        limit = 0
    }

    // Handle edge cases
    if offset >= totalCount || limit == 0 {
        return []*models.BuiltinMetric{}, totalCount, nil
    }

    // Apply pagination
    end := offset + limit
    if end > totalCount {
        end = totalCount
    }

    return filteredMetrics[offset:end], totalCount, nil
}

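The offset/limit clamping at the end of `BuiltinMetricGets` can be exercised on its own. A minimal sketch, where `paginate` is a hypothetical helper mirroring the handling above:

```go
package main

import "fmt"

// paginate mirrors the pagination handling above: negative values are
// normalized to zero, an out-of-range offset (or a zero limit) yields an
// empty page, and the end index is capped at the total count.
func paginate(total, offset, limit int) (start, end int) {
    if offset < 0 {
        offset = 0
    }
    if limit < 0 {
        limit = 0
    }
    if offset >= total || limit == 0 {
        return 0, 0
    }
    end = offset + limit
    if end > total {
        end = total
    }
    return offset, end
}

func main() {
    fmt.Println(paginate(10, 8, 5))  // 8 10: last partial page
    fmt.Println(paginate(10, 12, 5)) // 0 0: offset past the end
    fmt.Println(paginate(10, -3, 4)) // 0 4: negative offset normalized
}
```

Returning the already-computed `totalCount` alongside the empty page lets the caller render correct pager controls even when the requested page is out of range.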
func (b *BuiltinPayloadInFileType) BuiltinMetricTypes(lang, collector, query string) []string {
    typeSet := set.NewStringSet()
    for _, metric := range b.BuiltinMetrics {
        if !applyFilter(metric, collector, "", query, "") {
            continue
        }

        typeSet.Add(metric.Typ)
    }

    return typeSet.ToSlice()
}

func (b *BuiltinPayloadInFileType) BuiltinMetricCollectors(lang, typ, query string) []string {
    collectorSet := set.NewStringSet()
    for _, metric := range b.BuiltinMetrics {
        if !applyFilter(metric, "", typ, query, "") {
            continue
        }

        collectorSet.Add(metric.Collector)
    }
    return collectorSet.ToSlice()
}

func applyFilter(metric *models.BuiltinMetric, collector, typ, query, unit string) bool {
    if collector != "" && collector != metric.Collector {
        return false
    }

    if typ != "" && typ != metric.Typ {
        return false
    }

    if unit != "" && !containsUnit(unit, metric.Unit) {
        return false
    }

    if query != "" && !applyQueryFilter(metric, query) {
        return false
    }

    return true
}

func containsUnit(unit, metricUnit string) bool {
    us := strings.Split(unit, ",")
    for _, u := range us {
        if u == metricUnit {
            return true
        }
    }
    return false
}

func applyQueryFilter(metric *models.BuiltinMetric, query string) bool {
    qs := strings.Split(query, " ")
    for _, q := range qs {
        if strings.HasPrefix(q, "-") {
            q = strings.TrimPrefix(q, "-")
            if strings.Contains(metric.Name, q) || strings.Contains(metric.Note, q) || strings.Contains(metric.Expression, q) {
                return false
            }
        } else {
            if !strings.Contains(metric.Name, q) && !strings.Contains(metric.Note, q) && !strings.Contains(metric.Expression, q) {
                return false
            }
        }
    }
    return true
}

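The query semantics of `applyQueryFilter` (space-separated terms; a `-` prefix excludes) can be demonstrated with a simplified sketch. `matches` is a hypothetical helper operating on a flat list of fields instead of the `Name`/`Note`/`Expression` triple:

```go
package main

import (
    "fmt"
    "strings"
)

// matches mirrors the applyQueryFilter semantics in simplified form:
// the query is split on spaces; a term prefixed with "-" must NOT appear
// in any field, while every other term must appear in at least one field.
func matches(fields []string, query string) bool {
    for _, q := range strings.Split(query, " ") {
        neg := strings.HasPrefix(q, "-")
        q = strings.TrimPrefix(q, "-")
        found := false
        for _, f := range fields {
            if strings.Contains(f, q) {
                found = true
                break
            }
        }
        if neg == found {
            return false // excluded term present, or required term missing
        }
    }
    return true
}

func main() {
    fields := []string{"cpu_usage_idle", "CPU idle percent"}
    fmt.Println(matches(fields, "cpu idle"))  // true
    fmt.Println(matches(fields, "cpu -idle")) // false
}
```

All terms are combined with AND, so `"cpu -idle"` reads as "contains cpu AND does not contain idle".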
func getTranslationWithLanguage(bm *models.BuiltinMetric, lang string) (*models.Translation, error) {
    var defaultTranslation *models.Translation
    for _, t := range bm.Translation {
        if t.Lang == lang {
            return &t, nil
        }

        if t.Lang == "en_US" {
            defaultTranslation = &t
        }
    }

    if defaultTranslation != nil {
        return defaultTranslation, nil
    }

    return nil, errors.Errorf("translation not found for metric %s", bm.Name)
}

func convertBuiltinMetricByDB(metricsInDB []*models.BuiltinMetric) map[string]*models.BuiltinMetric {
    builtinMetricsByDB := make(map[string]*models.BuiltinMetric)
    builtinMetricsByDBList := make(map[string][]*models.BuiltinMetric)

    for _, metric := range metricsInDB {
        builtinMetrics, ok := builtinMetricsByDBList[metric.Expression]
        if !ok {
            builtinMetrics = []*models.BuiltinMetric{}
        }

        builtinMetrics = append(builtinMetrics, metric)
        builtinMetricsByDBList[metric.Expression] = builtinMetrics
    }

    for expression, builtinMetrics := range builtinMetricsByDBList {
        if len(builtinMetrics) == 0 {
            continue
        }

        // NOTE: For compatibility with metrics that users already created on older
        // versions, and to converge edits onto a single record, we use the record
        // with the same expression and the smallest id as the primary metric.
        sort.Slice(builtinMetrics, func(i, j int) bool {
            return builtinMetrics[i].ID < builtinMetrics[j].ID
        })

        currentBuiltinMetric := builtinMetrics[0]
        // The user has no custom translation, so we can merge the defaults
        if len(currentBuiltinMetric.Translation) == 0 {
            translationMap := make(map[string]models.Translation)
            for _, bm := range builtinMetrics {
                for _, t := range getDefaultTranslation(bm) {
                    translationMap[t.Lang] = t
                }
            }
            currentBuiltinMetric.Translation = make([]models.Translation, 0, len(translationMap))
            for _, t := range translationMap {
                currentBuiltinMetric.Translation = append(currentBuiltinMetric.Translation, t)
            }
        }

        builtinMetricsByDB[expression] = currentBuiltinMetric
    }

    return builtinMetricsByDB
}

func getDefaultTranslation(bm *models.BuiltinMetric) []models.Translation {
    if len(bm.Translation) != 0 {
        return bm.Translation
    }

    return []models.Translation{{
        Lang: bm.Lang,
        Name: bm.Name,
        Note: bm.Note,
    }}
}

@@ -177,6 +177,7 @@ func (rt *Router) Config(r *gin.Engine) {
    pages := r.Group(pagesPrefix)
    {

        pages.DELETE("/datasource/series", rt.auth(), rt.admin(), rt.deleteDatasourceSeries)
        if rt.Center.AnonymousAccess.PromQuerier {
            pages.Any("/proxy/:id/*url", rt.dsProxy)
            pages.POST("/query-range-batch", rt.promBatchQueryRange)

@@ -231,6 +232,11 @@ func (rt *Router) Config(r *gin.Engine) {
            pages.POST("/log-query", rt.QueryLog)
        }

        // OpenSearch-specific endpoints
        pages.POST("/os-indices", rt.QueryOSIndices)
        pages.POST("/os-variable", rt.QueryOSVariable)
        pages.POST("/os-fields", rt.QueryOSFields)

        pages.GET("/sql-template", rt.QuerySqlTemplate)
        pages.POST("/auth/login", rt.jwtMock(), rt.loginPost)
        pages.POST("/auth/logout", rt.jwtMock(), rt.auth(), rt.user(), rt.logoutPost)

@@ -254,6 +260,7 @@ func (rt *Router) Config(r *gin.Engine) {

        pages.GET("/notify-channels", rt.notifyChannelsGets)
        pages.GET("/contact-keys", rt.contactKeysGets)
        pages.GET("/install-date", rt.installDateGet)

        pages.GET("/self/perms", rt.auth(), rt.user(), rt.permsGets)
        pages.GET("/self/profile", rt.auth(), rt.user(), rt.selfProfileGet)

@@ -372,6 +379,8 @@ func (rt *Router) Config(r *gin.Engine) {
        pages.POST("/relabel-test", rt.auth(), rt.user(), rt.relabelTest)
        pages.POST("/busi-group/:id/alert-rules/clone", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.bgrw(), rt.cloneToMachine)
        pages.POST("/busi-groups/alert-rules/clones", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.batchAlertRuleClone)
        pages.POST("/busi-group/alert-rules/notify-tryrun", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.alertRuleNotifyTryRun)
        pages.POST("/busi-group/alert-rules/enable-tryrun", rt.auth(), rt.user(), rt.perm("/alert-rules/add"), rt.alertRuleEnableTryRun)

        pages.GET("/busi-groups/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGetsByGids)
        pages.GET("/busi-group/:id/recording-rules", rt.auth(), rt.user(), rt.perm("/recording-rules"), rt.recordingRuleGets)

@@ -397,22 +406,18 @@ func (rt *Router) Config(r *gin.Engine) {
        pages.POST("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/add"), rt.bgrw(), rt.alertSubscribeAdd)
        pages.PUT("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/put"), rt.bgrw(), rt.alertSubscribePut)
        pages.DELETE("/busi-group/:id/alert-subscribes", rt.auth(), rt.user(), rt.perm("/alert-subscribes/del"), rt.bgrw(), rt.alertSubscribeDel)
        pages.POST("/alert-subscribe/alert-subscribes-tryrun", rt.auth(), rt.user(), rt.perm("/alert-subscribes/add"), rt.alertSubscribeTryRun)

        if rt.Center.AnonymousAccess.AlertDetail {
            pages.GET("/alert-cur-event/:eid", rt.alertCurEventGet)
            pages.GET("/alert-his-event/:eid", rt.alertHisEventGet)
            pages.GET("/event-notify-records/:eid", rt.notificationRecordList)
        } else {
            pages.GET("/alert-cur-event/:eid", rt.auth(), rt.user(), rt.alertCurEventGet)
            pages.GET("/alert-his-event/:eid", rt.auth(), rt.user(), rt.alertHisEventGet)
            pages.GET("/event-notify-records/:eid", rt.auth(), rt.user(), rt.notificationRecordList)
        }
        pages.GET("/alert-cur-event/:eid", rt.alertCurEventGet)
        pages.GET("/alert-his-event/:eid", rt.alertHisEventGet)
        pages.GET("/event-notify-records/:eid", rt.notificationRecordList)

        // card logic
        pages.GET("/alert-cur-events/list", rt.auth(), rt.user(), rt.alertCurEventsList)
        pages.GET("/alert-cur-events/card", rt.auth(), rt.user(), rt.alertCurEventsCard)
        pages.POST("/alert-cur-events/card/details", rt.auth(), rt.alertCurEventsCardDetails)
        pages.GET("/alert-his-events/list", rt.auth(), rt.user(), rt.alertHisEventsList)
        pages.DELETE("/alert-his-events", rt.auth(), rt.admin(), rt.alertHisEventsDelete)
        pages.DELETE("/alert-cur-events", rt.auth(), rt.user(), rt.perm("/alert-cur-events/del"), rt.alertCurEventDel)
        pages.GET("/alert-cur-events/stats", rt.auth(), rt.alertCurEventsStatistics)

@@ -444,7 +449,7 @@ func (rt *Router) Config(r *gin.Engine) {
        pages.POST("/datasource/status/update", rt.auth(), rt.admin(), rt.datasourceUpdataStatus)
        pages.DELETE("/datasource/", rt.auth(), rt.admin(), rt.datasourceDel)

        pages.GET("/roles", rt.auth(), rt.user(), rt.perm("/roles"), rt.roleGets)
        pages.GET("/roles", rt.auth(), rt.user(), rt.roleGets)
        pages.POST("/roles", rt.auth(), rt.user(), rt.perm("/roles/add"), rt.roleAdd)
        pages.PUT("/roles", rt.auth(), rt.user(), rt.perm("/roles/put"), rt.rolePut)
        pages.DELETE("/role/:id", rt.auth(), rt.user(), rt.perm("/roles/del"), rt.roleDel)

@@ -518,10 +523,9 @@ func (rt *Router) Config(r *gin.Engine) {
        pages.GET("/builtin-payloads", rt.auth(), rt.user(), rt.builtinPayloadsGets)
        pages.GET("/builtin-payloads/cates", rt.auth(), rt.user(), rt.builtinPayloadcatesGet)
        pages.POST("/builtin-payloads", rt.auth(), rt.user(), rt.perm("/components/add"), rt.builtinPayloadsAdd)
        pages.GET("/builtin-payload/:id", rt.auth(), rt.user(), rt.perm("/components"), rt.builtinPayloadGet)
        pages.PUT("/builtin-payloads", rt.auth(), rt.user(), rt.perm("/components/put"), rt.builtinPayloadsPut)
        pages.DELETE("/builtin-payloads", rt.auth(), rt.user(), rt.perm("/components/del"), rt.builtinPayloadsDel)
        pages.GET("/builtin-payload", rt.auth(), rt.user(), rt.builtinPayloadsGetByUUIDOrID)
        pages.GET("/builtin-payload", rt.auth(), rt.user(), rt.builtinPayloadsGetByUUID)

        pages.POST("/message-templates", rt.auth(), rt.user(), rt.perm("/notification-templates/add"), rt.messageTemplatesAdd)
        pages.DELETE("/message-templates", rt.auth(), rt.user(), rt.perm("/notification-templates/del"), rt.messageTemplatesDel)

@@ -233,6 +233,14 @@ func (rt *Router) checkCurEventBusiGroupRWPermission(c *gin.Context, ids []int64
func (rt *Router) alertCurEventGet(c *gin.Context) {
    eid := ginx.UrlParamInt64(c, "eid")
    event, err := GetCurEventDetail(rt.Ctx, eid)

    hasPermission := HasPermission(rt.Ctx, c, "event", fmt.Sprintf("%d", eid), rt.Center.AnonymousAccess.AlertDetail)
    if !hasPermission {
        rt.auth()(c)
        rt.user()(c)
        rt.bgroCheck(c, event.GroupId)
    }

    ginx.NewRender(c).Data(event, err)
}

@@ -2,6 +2,7 @@ package router

import (
    "fmt"
    "net/http"
    "strings"
    "time"

@@ -10,6 +11,7 @@ import (

    "github.com/gin-gonic/gin"
    "github.com/toolkits/pkg/ginx"
    "github.com/toolkits/pkg/logger"
    "golang.org/x/exp/slices"
)

@@ -78,16 +80,56 @@ func (rt *Router) alertHisEventsList(c *gin.Context) {
    }, nil)
}

type alertHisEventsDeleteForm struct {
    Severities []int `json:"severities"`
    Timestamp  int64 `json:"timestamp" binding:"required"`
}

func (rt *Router) alertHisEventsDelete(c *gin.Context) {
    var f alertHisEventsDeleteForm
    ginx.BindJSON(c, &f)
    // validate
    if f.Timestamp == 0 {
        ginx.Bomb(http.StatusBadRequest, "timestamp parameter is required")
        return
    }

    user := c.MustGet("user").(*models.User)

    // start the background cleanup task
    go func() {
        limit := 100
        for {
            n, err := models.AlertHisEventBatchDelete(rt.Ctx, f.Timestamp, f.Severities, limit)
            if err != nil {
                logger.Errorf("Failed to delete alert history events: operator=%s, timestamp=%d, severities=%v, error=%v",
                    user.Username, f.Timestamp, f.Severities, err)
                break
            }
            logger.Debugf("Successfully deleted alert history events: operator=%s, timestamp=%d, severities=%v, deleted=%d",
                user.Username, f.Timestamp, f.Severities, n)
            if n < int64(limit) {
                break // everything has been deleted
            }

            time.Sleep(100 * time.Millisecond) // avoid holding table locks for too long
        }
    }()
    ginx.NewRender(c).Message("Alert history events deletion started")
}

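The background goroutine above follows a common batched-delete pattern: delete in fixed-size chunks, pause between rounds, and stop once a round removes fewer rows than the chunk size. A minimal standalone sketch, where `deleteBatch` is a hypothetical stand-in for `models.AlertHisEventBatchDelete`:

```go
package main

import "fmt"

// deleteBatch removes up to limit items from rows and reports how many
// it deleted, standing in for a chunked SQL "DELETE ... LIMIT n".
func deleteBatch(rows *[]int, limit int) int {
    n := limit
    if len(*rows) < n {
        n = len(*rows)
    }
    *rows = (*rows)[n:]
    return n
}

func main() {
    rows := make([]int, 250)
    limit, rounds := 100, 0
    for {
        n := deleteBatch(&rows, limit)
        rounds++
        if n < limit { // a short batch means nothing is left
            break
        }
    }
    fmt.Println(rounds, len(rows)) // 3 0
}
```

Stopping on a short batch avoids an extra round-trip for the empty final query, at the cost of one extra round when the row count is an exact multiple of the limit.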
func (rt *Router) alertHisEventGet(c *gin.Context) {
    eid := ginx.UrlParamInt64(c, "eid")
    event, err := models.AlertHisEventGetById(rt.Ctx, eid)
    ginx.Dangerous(err)

    if event == nil {
        ginx.Bomb(404, "No such alert event")
    }

    if !rt.Center.AnonymousAccess.AlertDetail && rt.Center.EventHistoryGroupView {
        hasPermission := HasPermission(rt.Ctx, c, "event", fmt.Sprintf("%d", eid), rt.Center.AnonymousAccess.AlertDetail)
        if !hasPermission {
            rt.auth()(c)
            rt.user()(c)
            rt.bgroCheck(c, event.GroupId)
        }

@@ -11,6 +11,7 @@ import (

    "gopkg.in/yaml.v2"

    "github.com/ccfos/nightingale/v6/alert/mute"
    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/strx"
    "github.com/ccfos/nightingale/v6/pushgw/pconf"

@@ -18,6 +19,7 @@ import (

    "github.com/gin-gonic/gin"
    "github.com/jinzhu/copier"
    "github.com/pkg/errors"
    "github.com/prometheus/prometheus/prompb"
    "github.com/toolkits/pkg/ginx"
    "github.com/toolkits/pkg/i18n"
@@ -157,6 +159,120 @@ func (rt *Router) alertRuleAddByFE(c *gin.Context) {
    ginx.NewRender(c).Data(reterr, nil)
}

type AlertRuleTryRunForm struct {
    EventId         int64            `json:"event_id" binding:"required"`
    AlertRuleConfig models.AlertRule `json:"config" binding:"required"`
}

func (rt *Router) alertRuleNotifyTryRun(c *gin.Context) {
    // check notify channels of old version
    var f AlertRuleTryRunForm
    ginx.BindJSON(c, &f)

    hisEvent, err := models.AlertHisEventGetById(rt.Ctx, f.EventId)
    ginx.Dangerous(err)

    if hisEvent == nil {
        ginx.Bomb(http.StatusNotFound, "event not found")
    }

    curEvent := *hisEvent.ToCur()
    curEvent.SetTagsMap()

    if f.AlertRuleConfig.NotifyVersion == 1 {
        for _, id := range f.AlertRuleConfig.NotifyRuleIds {
            notifyRule, err := models.GetNotifyRule(rt.Ctx, id)
            ginx.Dangerous(err)
            for _, notifyConfig := range notifyRule.NotifyConfigs {
                _, err = SendNotifyChannelMessage(rt.Ctx, rt.UserCache, rt.UserGroupCache, notifyConfig, []*models.AlertCurEvent{&curEvent})
                ginx.Dangerous(err)
            }
        }

        ginx.NewRender(c).Data("notification test ok", nil)
        return
    }

    if len(f.AlertRuleConfig.NotifyChannelsJSON) == 0 {
        ginx.Bomb(http.StatusOK, "no notify channels selected")
    }

    if len(f.AlertRuleConfig.NotifyGroupsJSON) == 0 {
        ginx.Bomb(http.StatusOK, "no notify groups selected")
    }

    ancs := make([]string, 0, len(curEvent.NotifyChannelsJSON))
    ugids := f.AlertRuleConfig.NotifyGroupsJSON
    ngids := make([]int64, 0)
    for i := 0; i < len(ugids); i++ {
        if gid, err := strconv.ParseInt(ugids[i], 10, 64); err == nil {
            ngids = append(ngids, gid)
        }
    }
    userGroups := rt.UserGroupCache.GetByUserGroupIds(ngids)
    uids := make([]int64, 0)
    for i := range userGroups {
        uids = append(uids, userGroups[i].UserIds...)
    }
    users := rt.UserCache.GetByUserIds(uids)
    for _, NotifyChannels := range curEvent.NotifyChannelsJSON {
        flag := true
        // ignore non-default channels
        switch NotifyChannels {
        case models.Dingtalk, models.Wecom, models.Feishu, models.Mm,
            models.Telegram, models.Email, models.FeishuCard:
            // do nothing
        default:
            continue
        }
        // default channels
        for ui := range users {
            if _, b := users[ui].ExtractToken(NotifyChannels); b {
                flag = false
                break
            }
        }
        if flag {
            ancs = append(ancs, NotifyChannels)
        }
    }
    if len(ancs) > 0 {
        ginx.Dangerous(errors.New(fmt.Sprintf("All users are missing notify channel configurations. Please check for missing tokens (each channel should be configured with at least one user). %v", ancs)))
    }

    ginx.NewRender(c).Data("notification test ok", nil)
}

func (rt *Router) alertRuleEnableTryRun(c *gin.Context) {
    // check whether the rule would take effect for the given event
    var f AlertRuleTryRunForm
    ginx.BindJSON(c, &f)

    hisEvent, err := models.AlertHisEventGetById(rt.Ctx, f.EventId)
    ginx.Dangerous(err)

    if hisEvent == nil {
        ginx.Bomb(http.StatusNotFound, "event not found")
    }

    curEvent := *hisEvent.ToCur()
    curEvent.SetTagsMap()

    if f.AlertRuleConfig.Disabled == 1 {
        ginx.Bomb(http.StatusOK, "rule is disabled")
    }

    if mute.TimeSpanMuteStrategy(&f.AlertRuleConfig, &curEvent) {
        ginx.Bomb(http.StatusOK, "event is not match for period of time")
    }

    if mute.BgNotMatchMuteStrategy(&f.AlertRuleConfig, &curEvent, rt.TargetCache) {
        ginx.Bomb(http.StatusOK, "event target busi group not match rule busi group")
    }

    ginx.NewRender(c).Data("event is effective", nil)
}

func (rt *Router) alertRuleAddByImport(c *gin.Context) {
    username := c.MustGet("username").(string)

@@ -2,13 +2,17 @@ package router

import (
    "net/http"
    "strconv"
    "strings"
    "time"

    "github.com/ccfos/nightingale/v6/alert/common"
    "github.com/ccfos/nightingale/v6/models"
    "github.com/ccfos/nightingale/v6/pkg/strx"

    "github.com/gin-gonic/gin"
    "github.com/toolkits/pkg/ginx"
    "github.com/toolkits/pkg/i18n"
)

// Return all, front-end search and paging

@@ -104,6 +108,148 @@ func (rt *Router) alertSubscribeAdd(c *gin.Context) {
    ginx.NewRender(c).Message(f.Add(rt.Ctx))
}

type SubscribeTryRunForm struct {
    EventId         int64                 `json:"event_id" binding:"required"`
    SubscribeConfig models.AlertSubscribe `json:"config" binding:"required"`
}

func (rt *Router) alertSubscribeTryRun(c *gin.Context) {
    var f SubscribeTryRunForm
    ginx.BindJSON(c, &f)
    ginx.Dangerous(f.SubscribeConfig.Verify())

    hisEvent, err := models.AlertHisEventGetById(rt.Ctx, f.EventId)
    ginx.Dangerous(err)

    if hisEvent == nil {
        ginx.Bomb(http.StatusNotFound, "event not found")
    }

    curEvent := *hisEvent.ToCur()
    curEvent.SetTagsMap()

    lang := c.GetHeader("X-Language")

    // check the match conditions first
    if !f.SubscribeConfig.MatchCluster(curEvent.DatasourceId) {
        ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "event datasource not match"))
    }

    if len(f.SubscribeConfig.RuleIds) != 0 {
        match := false
        for _, rid := range f.SubscribeConfig.RuleIds {
            if rid == curEvent.RuleId {
                match = true
                break
            }
        }
        if !match {
            ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "event rule id not match"))
        }
    }

    // match tags
    f.SubscribeConfig.Parse()
    if !common.MatchTags(curEvent.TagsMap, f.SubscribeConfig.ITags) {
        ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "event tags not match"))
    }

    // match business group name
    if !common.MatchGroupsName(curEvent.GroupName, f.SubscribeConfig.IBusiGroups) {
        ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "event group name not match"))
    }

    // check severity match
    if len(f.SubscribeConfig.SeveritiesJson) != 0 {
        match := false
        for _, s := range f.SubscribeConfig.SeveritiesJson {
            if s == curEvent.Severity || s == 0 {
                match = true
                break
            }
        }
        if !match {
            ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "event severity not match"))
        }
    }

    // new-style notify rules
    if f.SubscribeConfig.NotifyVersion == 1 {
        if len(f.SubscribeConfig.NotifyRuleIds) == 0 {
            ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "no notify rules selected"))
        }

        for _, id := range f.SubscribeConfig.NotifyRuleIds {
            notifyRule, err := models.GetNotifyRule(rt.Ctx, id)
            if err != nil {
                ginx.Bomb(http.StatusNotFound, i18n.Sprintf(lang, "subscribe notify rule not found: %v", err))
            }

            for _, notifyConfig := range notifyRule.NotifyConfigs {
                _, err = SendNotifyChannelMessage(rt.Ctx, rt.UserCache, rt.UserGroupCache, notifyConfig, []*models.AlertCurEvent{&curEvent})
                if err != nil {
                    ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "notify rule send error: %v", err))
                }
            }
        }

        ginx.NewRender(c).Data(i18n.Sprintf(lang, "event match subscribe and notification test ok"), nil)
        return
    }

    // old-style notification channels
f.SubscribeConfig.ModifyEvent(&curEvent)
|
||||
if len(curEvent.NotifyChannelsJSON) == 0 {
|
||||
ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "no notify channels selected"))
|
||||
}
|
||||
|
||||
if len(curEvent.NotifyGroupsJSON) == 0 {
|
||||
ginx.Bomb(http.StatusOK, i18n.Sprintf(lang, "no notify groups selected"))
|
||||
}
|
||||
|
||||
ancs := make([]string, 0, len(curEvent.NotifyChannelsJSON))
|
||||
ugids := strings.Fields(f.SubscribeConfig.UserGroupIds)
|
||||
ngids := make([]int64, 0)
|
||||
for i := 0; i < len(ugids); i++ {
|
||||
if gid, err := strconv.ParseInt(ugids[i], 10, 64); err == nil {
|
||||
ngids = append(ngids, gid)
|
||||
}
|
||||
}
|
||||
|
||||
userGroups := rt.UserGroupCache.GetByUserGroupIds(ngids)
|
||||
uids := make([]int64, 0)
|
||||
for i := range userGroups {
|
||||
uids = append(uids, userGroups[i].UserIds...)
|
||||
}
|
||||
users := rt.UserCache.GetByUserIds(uids)
|
||||
for _, NotifyChannels := range curEvent.NotifyChannelsJSON {
|
||||
flag := true
|
||||
// ignore non-default channels
|
||||
switch NotifyChannels {
|
||||
case models.Dingtalk, models.Wecom, models.Feishu, models.Mm,
|
||||
models.Telegram, models.Email, models.FeishuCard:
|
||||
// do nothing
|
||||
default:
|
||||
continue
|
||||
}
|
||||
// default channels
|
||||
for ui := range users {
|
||||
if _, b := users[ui].ExtractToken(NotifyChannels); b {
|
||||
flag = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if flag {
|
||||
ancs = append(ancs, NotifyChannels)
|
||||
}
|
||||
}
|
||||
if len(ancs) > 0 {
|
||||
ginx.Bomb(http.StatusBadRequest, i18n.Sprintf(lang, "all users missing notify channel configurations: %v", ancs))
|
||||
}
|
||||
|
||||
ginx.NewRender(c).Data(i18n.Sprintf(lang, "event match subscribe and notify settings ok"), nil)
|
||||
}
|
||||
|
||||
func (rt *Router) alertSubscribePut(c *gin.Context) {
|
||||
var fs []models.AlertSubscribe
|
||||
ginx.BindJSON(c, &fs)
|
||||
|
||||
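The legacy notify path above turns the space-separated `UserGroupIds` string into `[]int64` with `strings.Fields` plus `strconv.ParseInt`, silently skipping malformed tokens instead of reporting them. A standalone sketch of that parsing step (the helper name is ours, not from the source):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseGroupIds mirrors the handler's parsing of a space-separated
// id string: malformed tokens are skipped rather than reported.
func parseGroupIds(s string) []int64 {
	ids := make([]int64, 0)
	for _, field := range strings.Fields(s) {
		if id, err := strconv.ParseInt(field, 10, 64); err == nil {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(parseGroupIds("3 17 oops 42")) // prints [3 17 42]
}
```

The trade-off of this tolerant style is that a typo in the stored id string shrinks the recipient set without any error surfacing to the caller.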
@@ -2,8 +2,10 @@ package router

 import (
 	"net/http"
+	"sort"
 	"time"

+	"github.com/ccfos/nightingale/v6/center/integration"
 	"github.com/ccfos/nightingale/v6/models"

 	"github.com/gin-gonic/gin"
@@ -29,7 +31,7 @@ func (rt *Router) builtinMetricsAdd(c *gin.Context) {
 	reterr := make(map[string]string)
 	for i := 0; i < count; i++ {
 		lst[i].Lang = lang
-		lst[i].UUID = time.Now().UnixNano()
+		lst[i].UUID = time.Now().UnixMicro()
 		if err := lst[i].Add(rt.Ctx, username); err != nil {
 			reterr[lst[i].Name] = i18n.Sprintf(c.GetHeader("X-Language"), err.Error())
 		}
@@ -48,11 +50,12 @@ func (rt *Router) builtinMetricsGets(c *gin.Context) {
 		lang = "zh_CN"
 	}

-	bm, err := models.BuiltinMetricGets(rt.Ctx, lang, collector, typ, query, unit, limit, ginx.Offset(c, limit))
+	bmInDB, err := models.BuiltinMetricGets(rt.Ctx, "", collector, typ, query, unit, limit, ginx.Offset(c, limit))
 	ginx.Dangerous(err)

-	total, err := models.BuiltinMetricCount(rt.Ctx, lang, collector, typ, query, unit)
+	bm, total, err := integration.BuiltinPayloadInFile.BuiltinMetricGets(bmInDB, lang, collector, typ, query, unit, limit, ginx.Offset(c, limit))
 	ginx.Dangerous(err)

 	ginx.NewRender(c).Data(gin.H{
 		"list":  bm,
 		"total": total,
@@ -100,8 +103,26 @@ func (rt *Router) builtinMetricsTypes(c *gin.Context) {
 	query := ginx.QueryStr(c, "query", "")
 	lang := c.GetHeader("X-Language")

-	metricTypeList, err := models.BuiltinMetricTypes(rt.Ctx, lang, collector, query)
-	ginx.NewRender(c).Data(metricTypeList, err)
+	metricTypeListInDB, err := models.BuiltinMetricTypes(rt.Ctx, lang, collector, query)
+	ginx.Dangerous(err)
+
+	metricTypeListInFile := integration.BuiltinPayloadInFile.BuiltinMetricTypes(lang, collector, query)
+
+	typeMap := make(map[string]struct{})
+	for _, metricType := range metricTypeListInDB {
+		typeMap[metricType] = struct{}{}
+	}
+	for _, metricType := range metricTypeListInFile {
+		typeMap[metricType] = struct{}{}
+	}
+
+	metricTypeList := make([]string, 0, len(typeMap))
+	for metricType := range typeMap {
+		metricTypeList = append(metricTypeList, metricType)
+	}
+	sort.Strings(metricTypeList)
+
+	ginx.NewRender(c).Data(metricTypeList, nil)
 }

 func (rt *Router) builtinMetricsCollectors(c *gin.Context) {
@@ -109,5 +130,24 @@ func (rt *Router) builtinMetricsCollectors(c *gin.Context) {
 	query := ginx.QueryStr(c, "query", "")
 	lang := c.GetHeader("X-Language")

-	ginx.NewRender(c).Data(models.BuiltinMetricCollectors(rt.Ctx, lang, typ, query))
+	collectorListInDB, err := models.BuiltinMetricCollectors(rt.Ctx, lang, typ, query)
+	ginx.Dangerous(err)
+
+	collectorListInFile := integration.BuiltinPayloadInFile.BuiltinMetricCollectors(lang, typ, query)
+
+	collectorMap := make(map[string]struct{})
+	for _, collector := range collectorListInDB {
+		collectorMap[collector] = struct{}{}
+	}
+	for _, collector := range collectorListInFile {
+		collectorMap[collector] = struct{}{}
+	}
+
+	collectorList := make([]string, 0, len(collectorMap))
+	for collector := range collectorMap {
+		collectorList = append(collectorList, collector)
+	}
+	sort.Strings(collectorList)
+
+	ginx.NewRender(c).Data(collectorList, nil)
 }

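Both handlers above merge the database list with the file-based builtin list through a `map[string]struct{}` set and then sort, so each value appears once and the response order is stable. The pattern in isolation (the helper name is ours):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeDedupSorted unions any number of string slices,
// dropping duplicates and returning a sorted result.
func mergeDedupSorted(lists ...[]string) []string {
	set := make(map[string]struct{})
	for _, list := range lists {
		for _, s := range list {
			set[s] = struct{}{}
		}
	}
	out := make([]string, 0, len(set))
	for s := range set {
		out = append(out, s)
	}
	sort.Strings(out)
	return out
}

func main() {
	inDB := []string{"cpu", "mem"}
	inFile := []string{"mem", "disk"}
	fmt.Println(mergeDedupSorted(inDB, inFile)) // prints [cpu disk mem]
}
```

The explicit `sort.Strings` matters because Go map iteration order is randomized; without it the endpoint would return a different ordering on every call.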
@@ -7,6 +7,7 @@ import (
 	"time"

 	"github.com/BurntSushi/toml"
+	"github.com/ccfos/nightingale/v6/center/integration"
 	"github.com/ccfos/nightingale/v6/models"
 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
@@ -192,13 +193,26 @@ func (rt *Router) builtinPayloadsAdd(c *gin.Context) {

 func (rt *Router) builtinPayloadsGets(c *gin.Context) {
 	typ := ginx.QueryStr(c, "type", "")
 	if typ == "" {
 		ginx.Bomb(http.StatusBadRequest, "type is required")
 		return
 	}
 	ComponentID := ginx.QueryInt64(c, "component_id", 0)

 	cate := ginx.QueryStr(c, "cate", "")
 	query := ginx.QueryStr(c, "query", "")

 	lst, err := models.BuiltinPayloadGets(rt.Ctx, uint64(ComponentID), typ, cate, query)
-	ginx.NewRender(c).Data(lst, err)
+	ginx.Dangerous(err)
+
+	lstInFile, err := integration.BuiltinPayloadInFile.GetBuiltinPayload(typ, cate, query, uint64(ComponentID))
+	ginx.Dangerous(err)
+
+	if len(lstInFile) > 0 {
+		lst = append(lst, lstInFile...)
+	}
+
+	ginx.NewRender(c).Data(lst, nil)
 }

 func (rt *Router) builtinPayloadcatesGet(c *gin.Context) {
@@ -206,21 +220,31 @@ func (rt *Router) builtinPayloadcatesGet(c *gin.Context) {
 	ComponentID := ginx.QueryInt64(c, "component_id", 0)

 	cates, err := models.BuiltinPayloadCates(rt.Ctx, typ, uint64(ComponentID))
-	ginx.NewRender(c).Data(cates, err)
-}
+	ginx.Dangerous(err)

-func (rt *Router) builtinPayloadGet(c *gin.Context) {
-	id := ginx.UrlParamInt64(c, "id")
+	catesInFile, err := integration.BuiltinPayloadInFile.GetBuiltinPayloadCates(typ, uint64(ComponentID))
+	ginx.Dangerous(err)

-	bp, err := models.BuiltinPayloadGet(rt.Ctx, "id = ?", id)
-	if err != nil {
-		ginx.Bomb(http.StatusInternalServerError, err.Error())
-	}
-	if bp == nil {
-		ginx.Bomb(http.StatusNotFound, "builtin payload not found")
+	// use a map to dedup
+	cateMap := make(map[string]bool)
+
+	// add the categories from the database
+	for _, cate := range cates {
+		cateMap[cate] = true
 	}

-	ginx.NewRender(c).Data(bp, nil)
+	// add the categories from the files
+	for _, cate := range catesInFile {
+		cateMap[cate] = true
+	}
+
+	// convert the deduped result back to a slice
+	result := make([]string, 0, len(cateMap))
+	for cate := range cateMap {
+		result = append(result, cate)
+	}
+
+	ginx.NewRender(c).Data(result, nil)
 }

 func (rt *Router) builtinPayloadsPut(c *gin.Context) {
@@ -273,14 +297,15 @@ func (rt *Router) builtinPayloadsDel(c *gin.Context) {
 	ginx.NewRender(c).Message(models.BuiltinPayloadDels(rt.Ctx, req.Ids))
 }

-func (rt *Router) builtinPayloadsGetByUUIDOrID(c *gin.Context) {
-	uuid := ginx.QueryInt64(c, "uuid", 0)
-	// uuid takes precedence
-	if uuid != 0 {
-		ginx.NewRender(c).Data(models.BuiltinPayloadGet(rt.Ctx, "uuid = ?", uuid))
-		return
-	}
+func (rt *Router) builtinPayloadsGetByUUID(c *gin.Context) {
+	uuid := ginx.QueryInt64(c, "uuid")

-	id := ginx.QueryInt64(c, "id", 0)
-	ginx.NewRender(c).Data(models.BuiltinPayloadGet(rt.Ctx, "id = ?", id))
+	bp, err := models.BuiltinPayloadGet(rt.Ctx, "uuid = ?", uuid)
+	ginx.Dangerous(err)
+
+	if bp != nil {
+		ginx.NewRender(c).Data(bp, nil)
+	} else {
+		ginx.NewRender(c).Data(integration.BuiltinPayloadInFile.IndexData[uuid], nil)
+	}
 }

@@ -2,12 +2,14 @@ package router

 import (
 	"crypto/tls"
+	"encoding/json"
 	"fmt"
 	"io"
 	"net/http"
 	"net/url"
 	"strings"

+	"github.com/ccfos/nightingale/v6/datasource/opensearch"
 	"github.com/ccfos/nightingale/v6/models"

 	"github.com/gin-gonic/gin"
@@ -108,6 +110,48 @@ func (rt *Router) datasourceUpsert(c *gin.Context) {
 		}
 	}

+	for k, v := range req.SettingsJson {
+		if strings.Contains(k, "cluster_name") {
+			req.ClusterName = v.(string)
+			break
+		}
+	}
+
+	if req.PluginType == models.OPENSEARCH {
+		b, err := json.Marshal(req.SettingsJson)
+		if err != nil {
+			logger.Warningf("marshal settings fail: %v", err)
+			return
+		}
+
+		var os opensearch.OpenSearch
+		err = json.Unmarshal(b, &os)
+		if err != nil {
+			logger.Warningf("unmarshal settings fail: %v", err)
+			return
+		}
+
+		if len(os.Nodes) == 0 {
+			logger.Warningf("nodes empty, %+v", req)
+			return
+		}
+
+		req.HTTPJson = models.HTTP{
+			Timeout: os.Timeout,
+			Url:     os.Nodes[0],
+			Headers: os.Headers,
+			TLS: models.TLS{
+				SkipTlsVerify: os.TLS.SkipTlsVerify,
+			},
+		}
+
+		req.AuthJson = models.Auth{
+			BasicAuth:         os.Basic.Enable,
+			BasicAuthUser:     os.Basic.Username,
+			BasicAuthPassword: os.Basic.Password,
+		}
+	}

 	if req.Id == 0 {
 		req.CreatedBy = username
 		req.Status = "enabled"
@@ -148,11 +192,12 @@ func DatasourceCheck(ds models.Datasource) error {
 		},
 	}

+	ds.HTTPJson.Url = strings.TrimRight(ds.HTTPJson.Url, "/")
 	var fullURL string
 	req, err := ds.HTTPJson.NewReq(&fullURL)
 	if err != nil {
 		logger.Errorf("Error creating request: %v", err)
-		return fmt.Errorf("request urls:%v failed", ds.HTTPJson.GetUrls())
+		return fmt.Errorf("request urls:%v failed: %v", ds.HTTPJson.GetUrls(), err)
 	}

 	if ds.PluginType == models.PROMETHEUS {
@@ -168,14 +213,14 @@ func DatasourceCheck(ds models.Datasource) error {
 		req, err = http.NewRequest("GET", fullURL, nil)
 		if err != nil {
 			logger.Errorf("Error creating request: %v", err)
-			return fmt.Errorf("request url:%s failed", fullURL)
+			return fmt.Errorf("request url:%s failed: %v", fullURL, err)
 		}
 	} else if ds.PluginType == models.TDENGINE {
 		fullURL = fmt.Sprintf("%s/rest/sql", ds.HTTPJson.Url)
 		req, err = http.NewRequest("POST", fullURL, strings.NewReader("show databases"))
 		if err != nil {
 			logger.Errorf("Error creating request: %v", err)
-			return fmt.Errorf("request url:%s failed", fullURL)
+			return fmt.Errorf("request url:%s failed: %v", fullURL, err)
 		}
 	}

@@ -187,7 +232,7 @@ func DatasourceCheck(ds models.Datasource) error {
 		req, err = http.NewRequest("GET", fullURL, nil)
 		if err != nil {
 			logger.Errorf("Error creating request: %v", err)
-			return fmt.Errorf("request url:%s failed", fullURL)
+			return fmt.Errorf("request url:%s failed: %v", fullURL, err)
 		}
 	}

@@ -202,7 +247,7 @@ func DatasourceCheck(ds models.Datasource) error {
 	resp, err := client.Do(req)
 	if err != nil {
 		logger.Errorf("Error making request: %v\n", err)
-		return fmt.Errorf("request url:%s failed", fullURL)
+		return fmt.Errorf("request url:%s failed: %v", fullURL, err)
 	}
 	defer resp.Body.Close()

@@ -8,7 +8,6 @@ import (

 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
-	"github.com/toolkits/pkg/logger"
 )

 // get the event pipeline list
@@ -143,15 +142,27 @@ func (rt *Router) tryRunEventPipeline(c *gin.Context) {
 	for _, p := range f.PipelineConfig.ProcessorConfigs {
 		processor, err := models.GetProcessorByType(p.Typ, p.Config)
 		if err != nil {
-			ginx.Bomb(http.StatusBadRequest, "processor %+v type not found", p)
+			ginx.Bomb(http.StatusBadRequest, "get processor: %+v err: %+v", p, err)
 		}
-		event = processor.Process(rt.Ctx, event)
+		event, _, err = processor.Process(rt.Ctx, event)
+		if err != nil {
+			ginx.Bomb(http.StatusBadRequest, "processor: %+v err: %+v", p, err)
+		}

 		if event == nil {
-			ginx.Bomb(http.StatusBadRequest, "event is nil")
+			ginx.NewRender(c).Data(map[string]interface{}{
+				"event":  event,
+				"result": "event is dropped",
+			}, nil)
+			return
 		}
 	}

-	ginx.NewRender(c).Data(event, nil)
+	m := map[string]interface{}{
+		"event":  event,
+		"result": "",
+	}
+	ginx.NewRender(c).Data(m, nil)
 }

 // test an event processor
@@ -170,15 +181,17 @@ func (rt *Router) tryRunEventProcessor(c *gin.Context) {

 	processor, err := models.GetProcessorByType(f.ProcessorConfig.Typ, f.ProcessorConfig.Config)
 	if err != nil {
-		ginx.Bomb(http.StatusBadRequest, "processor type not found")
+		ginx.Bomb(200, "get processor err: %+v", err)
 	}
-	event = processor.Process(rt.Ctx, event)
-	logger.Infof("processor %+v result: %+v", f.ProcessorConfig, event)
-	if event == nil {
-		ginx.Bomb(http.StatusBadRequest, "event is nil")
+	event, res, err := processor.Process(rt.Ctx, event)
+	if err != nil {
+		ginx.Bomb(200, "processor err: %+v", err)
 	}

-	ginx.NewRender(c).Data(event, nil)
+	ginx.NewRender(c).Data(map[string]interface{}{
+		"event":  event,
+		"result": res,
+	}, nil)
 }

 func (rt *Router) tryRunEventProcessorByNotifyRule(c *gin.Context) {
@@ -210,11 +223,19 @@ func (rt *Router) tryRunEventProcessorByNotifyRule(c *gin.Context) {
 	for _, p := range pl.ProcessorConfigs {
 		processor, err := models.GetProcessorByType(p.Typ, p.Config)
 		if err != nil {
-			ginx.Bomb(http.StatusBadRequest, "processor %+v type not found", p)
+			ginx.Bomb(http.StatusBadRequest, "get processor: %+v err: %+v", p, err)
 		}

+		event, _, err := processor.Process(rt.Ctx, event)
+		if err != nil {
+			ginx.Bomb(http.StatusBadRequest, "processor: %+v err: %+v", p, err)
+		}
-		event = processor.Process(rt.Ctx, event)
 		if event == nil {
-			ginx.Bomb(http.StatusBadRequest, "event is nil")
+			ginx.NewRender(c).Data(map[string]interface{}{
+				"event":  event,
+				"result": "event is dropped",
+			}, nil)
+			return
 		}
 	}
 }

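The rewritten try-run handlers treat a processor as a step that may transform, annotate, or drop an event: the new `Process` signature returns the (possibly nil) event, a result string, and an error, and a nil event short-circuits the chain as "dropped". A minimal standalone sketch of that contract (the types and the sample processor here are ours, not Nightingale's):

```go
package main

import "fmt"

// Event is a stand-in for models.AlertCurEvent.
type Event struct{ Severity int }

// Processor mirrors the new Process contract: the transformed event
// (nil means dropped), a result string, and an error.
type Processor func(*Event) (*Event, string, error)

// runPipeline applies processors in order, stopping on the first
// error or as soon as an event is dropped.
func runPipeline(e *Event, ps []Processor) (*Event, string, error) {
	for _, p := range ps {
		var res string
		var err error
		e, res, err = p(e)
		if err != nil {
			return nil, res, err
		}
		if e == nil {
			return nil, "event is dropped", nil
		}
	}
	return e, "", nil
}

func main() {
	dropLowSeverity := func(e *Event) (*Event, string, error) {
		if e.Severity > 2 {
			return nil, "", nil // drop informational events
		}
		return e, "", nil
	}
	_, res, _ := runPipeline(&Event{Severity: 3}, []Processor{dropLowSeverity})
	fmt.Println(res) // prints: event is dropped
}
```

Returning the drop reason in the response body, instead of bombing with a 400 as the old code did, lets the UI show a dropped event as a successful dry run rather than as a failure.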
@@ -173,3 +173,38 @@ func Username(c *gin.Context) string {
 	}
 	return username
 }

+func HasPermission(ctx *ctx.Context, c *gin.Context, sourceType, sourceId string, isAnonymousAccess bool) bool {
+	if sourceType == "event" && isAnonymousAccess {
+		return true
+	}
+
+	// try to read the __token parameter from the request
+	token := ginx.QueryStr(c, "__token", "")
+
+	// if a __token parameter is present, validate it
+	if token != "" {
+		return ValidateSourceToken(ctx, sourceType, sourceId, token)
+	}
+
+	return false
+}
+
+func ValidateSourceToken(ctx *ctx.Context, sourceType, sourceId, token string) bool {
+	if token == "" {
+		return false
+	}
+
+	// look up the source token record by source type, source id and token
+	sourceToken, err := models.GetSourceTokenBySource(ctx, sourceType, sourceId, token)
+	if err != nil {
+		return false
+	}
+
+	// check whether the token has expired
+	if sourceToken.IsExpired() {
+		return false
+	}
+
+	return true
+}

@@ -12,7 +12,9 @@ import (
 	"github.com/ccfos/nightingale/v6/pkg/slice"
 	"github.com/ccfos/nightingale/v6/pkg/strx"
+	"github.com/ccfos/nightingale/v6/pkg/tplx"

 	"github.com/gin-gonic/gin"
+	"github.com/google/uuid"
 	"github.com/toolkits/pkg/ginx"
 )

@@ -30,6 +32,9 @@ func (rt *Router) messageTemplatesAdd(c *gin.Context) {
 	ginx.Dangerous(err)
 	now := time.Now().Unix()
 	for _, tpl := range lst {
+		// generate a unique identifier; it can never be changed afterwards, and the front end does not need to pass it
+		tpl.Ident = uuid.New().String()
+
 		ginx.Dangerous(tpl.Verify())
 		if !isAdmin && !slice.HaveIntersection(gids, tpl.UserGroupIds) {
 			ginx.Bomb(http.StatusForbidden, "forbidden")

@@ -1,7 +1,6 @@
 package router

 import (
-	"math"
 	"net/http"
 	"strings"
 	"time"
@@ -13,6 +12,7 @@ import (

 	"github.com/gin-gonic/gin"
 	"github.com/toolkits/pkg/ginx"
+	"github.com/toolkits/pkg/i18n"
 )

 // Return all, front-end search and paging
@@ -71,14 +71,15 @@ func (rt *Router) alertMuteAdd(c *gin.Context) {
 }

 type MuteTestForm struct {
-	EventId   int64            `json:"event_id" binding:"required"`
-	AlertMute models.AlertMute `json:"mute_config" binding:"required"`
+	EventId       int64            `json:"event_id" binding:"required"`
+	AlertMute     models.AlertMute `json:"config" binding:"required"`
+	PassTimeCheck bool             `json:"pass_time_check"`
 }

 func (rt *Router) alertMuteTryRun(c *gin.Context) {

 	var f MuteTestForm
 	ginx.BindJSON(c, &f)
 	ginx.Dangerous(f.AlertMute.Verify())

 	hisEvent, err := models.AlertHisEventGetById(rt.Ctx, f.EventId)
 	ginx.Dangerous(err)
@@ -90,18 +91,30 @@ func (rt *Router) alertMuteTryRun(c *gin.Context) {
 	curEvent := *hisEvent.ToCur()
 	curEvent.SetTagsMap()

-	// bypass the time-range check: widen the range to everything (0 to the int64 maximum) and only validate the other match conditions (tags, strategy type, etc.)
-	f.AlertMute.MuteTimeType = models.TimeRange
-	f.AlertMute.Btime = 0             // smallest possible value (the Unix epoch)
-	f.AlertMute.Etime = math.MaxInt64 // largest possible value (the int64 upper bound)
+	if f.PassTimeCheck {
+		f.AlertMute.MuteTimeType = models.Periodic
+		f.AlertMute.PeriodicMutesJson = []models.PeriodicMute{
+			{
+				EnableDaysOfWeek: "0 1 2 3 4 5 6",
+				EnableStime:      "00:00",
+				EnableEtime:      "00:00",
+			},
+		}
+	}

-	if !mute.MatchMute(&curEvent, &f.AlertMute) {
-		ginx.NewRender(c).Data("not match", nil)
+	match, err := mute.MatchMute(&curEvent, &f.AlertMute)
+	if err != nil {
+		// translate the error message via i18n
+		translatedErr := i18n.Sprintf(c.GetHeader("X-Language"), err.Error())
+		ginx.Bomb(http.StatusBadRequest, translatedErr)
+	}
+
+	if !match {
+		ginx.NewRender(c).Data("event not match mute", nil)
 		return
 	}

-	ginx.NewRender(c).Data("mute test match", nil)
+	ginx.NewRender(c).Data("event match mute", nil)
 }

 // Preview events (alert_cur_event) that match the mute strategy based on the following criteria:

@@ -6,11 +6,12 @@ import (
 	"time"

 	"github.com/ccfos/nightingale/v6/alert/dispatch"
+	"github.com/ccfos/nightingale/v6/memsto"
 	"github.com/ccfos/nightingale/v6/models"
+	"github.com/ccfos/nightingale/v6/pkg/ctx"
 	"github.com/ccfos/nightingale/v6/pkg/slice"

 	"github.com/gin-gonic/gin"
-	"github.com/pkg/errors"
 	"github.com/toolkits/pkg/ginx"
 	"github.com/toolkits/pkg/logger"
 )
@@ -152,100 +153,114 @@ func (rt *Router) notifyTest(c *gin.Context) {
 	for _, he := range hisEvents {
 		event := he.ToCur()
 		event.SetTagsMap()
-		if dispatch.NotifyRuleApplicable(&f.NotifyConfig, event) {
-			events = append(events, event)
+		if err := dispatch.NotifyRuleMatchCheck(&f.NotifyConfig, event); err != nil {
+			ginx.Bomb(http.StatusBadRequest, err.Error())
 		}
+
+		events = append(events, event)
 	}

-	if len(events) == 0 {
-		ginx.Bomb(http.StatusBadRequest, "not events applicable")
+	resp, err := SendNotifyChannelMessage(rt.Ctx, rt.UserCache, rt.UserGroupCache, f.NotifyConfig, events)
+	ginx.NewRender(c).Data(resp, err)
+}
+
+func SendNotifyChannelMessage(ctx *ctx.Context, userCache *memsto.UserCacheType, userGroup *memsto.UserGroupCacheType, notifyConfig models.NotifyConfig, events []*models.AlertCurEvent) (string, error) {
+	notifyChannels, err := models.NotifyChannelGets(ctx, notifyConfig.ChannelID, "", "", -1)
+	if err != nil {
+		return "", fmt.Errorf("failed to get notify channels: %v", err)
 	}

-	notifyChannels, err := models.NotifyChannelGets(rt.Ctx, f.NotifyConfig.ChannelID, "", "", -1)
-	ginx.Dangerous(err)
 	if len(notifyChannels) == 0 {
-		ginx.Bomb(http.StatusBadRequest, "notify channel not found")
+		return "", fmt.Errorf("notify channel not found")
 	}

 	notifyChannel := notifyChannels[0]

 	if !notifyChannel.Enable {
-		ginx.Bomb(http.StatusBadRequest, "notify channel not enabled, please enable it first")
+		return "", fmt.Errorf("notify channel not enabled, please enable it first")
 	}

 	tplContent := make(map[string]interface{})
-	if notifyChannel.RequestType != "flashtudy" {
-		messageTemplates, err := models.MessageTemplateGets(rt.Ctx, f.NotifyConfig.TemplateID, "", "")
-		ginx.Dangerous(err)
+	if notifyChannel.RequestType != "flashduty" {
+		messageTemplates, err := models.MessageTemplateGets(ctx, notifyConfig.TemplateID, "", "")
+		if err != nil {
+			return "", fmt.Errorf("failed to get message templates: %v", err)
+		}

 		if len(messageTemplates) == 0 {
-			ginx.Bomb(http.StatusBadRequest, "message template not found")
+			return "", fmt.Errorf("message template not found")
 		}
 		tplContent = messageTemplates[0].RenderEvent(events)
 	}

 	var contactKey string
 	if notifyChannel.ParamConfig != nil && notifyChannel.ParamConfig.UserInfo != nil {
 		contactKey = notifyChannel.ParamConfig.UserInfo.ContactKey
 	}

-	sendtos, flashDutyChannelIDs, customParams := dispatch.GetNotifyConfigParams(&f.NotifyConfig, contactKey, rt.UserCache, rt.UserGroupCache)
+	sendtos, flashDutyChannelIDs, customParams := dispatch.GetNotifyConfigParams(&notifyConfig, contactKey, userCache, userGroup)

 	var resp string
 	switch notifyChannel.RequestType {
 	case "flashduty":
 		client, err := models.GetHTTPClient(notifyChannel)
-		ginx.Dangerous(err)
+		if err != nil {
+			return "", fmt.Errorf("failed to get http client: %v", err)
+		}

 		for i := range flashDutyChannelIDs {
 			resp, err = notifyChannel.SendFlashDuty(events, flashDutyChannelIDs[i], client)
 			if err != nil {
-				break
+				return "", fmt.Errorf("failed to send flashduty notify: %v", err)
 			}
 		}
 		logger.Infof("channel_name: %v, event:%+v, tplContent:%s, customParams:%v, respBody: %v, err: %v", notifyChannel.Name, events[0], tplContent, customParams, resp, err)
-		ginx.NewRender(c).Data(resp, err)
+		return resp, nil
 	case "http":
 		client, err := models.GetHTTPClient(notifyChannel)
-		ginx.Dangerous(err)
+		if err != nil {
+			return "", fmt.Errorf("failed to get http client: %v", err)
+		}

 		if notifyChannel.RequestConfig == nil {
-			ginx.Bomb(http.StatusBadRequest, "request config not found")
+			return "", fmt.Errorf("request config is nil")
 		}

 		if notifyChannel.RequestConfig.HTTPRequestConfig == nil {
-			ginx.Bomb(http.StatusBadRequest, "http request config not found")
+			return "", fmt.Errorf("http request config is nil")
 		}

 		if dispatch.NeedBatchContacts(notifyChannel.RequestConfig.HTTPRequestConfig) || len(sendtos) == 0 {
 			resp, err = notifyChannel.SendHTTP(events, tplContent, customParams, sendtos, client)
 			logger.Infof("channel_name: %v, event:%+v, sendtos:%+v, tplContent:%s, customParams:%v, respBody: %v, err: %v", notifyChannel.Name, events[0], sendtos, tplContent, customParams, resp, err)
 			if err != nil {
 				logger.Errorf("failed to send http notify: %v", err)
+				return "", fmt.Errorf("failed to send http notify: %v", err)
 			}
-			ginx.NewRender(c).Data(resp, err)
+			return resp, nil
 		} else {
 			for i := range sendtos {
 				resp, err = notifyChannel.SendHTTP(events, tplContent, customParams, []string{sendtos[i]}, client)
 				logger.Infof("channel_name: %v, event:%+v, tplContent:%s, customParams:%v, sendto:%+v, respBody: %v, err: %v", notifyChannel.Name, events[0], tplContent, customParams, sendtos[i], resp, err)
 				if err != nil {
 					logger.Errorf("failed to send http notify: %v", err)
-					ginx.NewRender(c).Message(err)
-					return
+					return "", fmt.Errorf("failed to send http notify: %v", err)
 				}
 			}
-			ginx.NewRender(c).Message(err)
+			return resp, nil
 		}

 	case "smtp":
+		if len(sendtos) == 0 {
+			return "", fmt.Errorf("no valid email address in the user and team")
+		}
 		err := notifyChannel.SendEmailNow(events, tplContent, sendtos)
-		ginx.NewRender(c).Message(err)
+		if err != nil {
+			return "", fmt.Errorf("failed to send email notify: %v", err)
+		}
+		return resp, nil
 	case "script":
 		resp, _, err := notifyChannel.SendScript(events, tplContent, customParams, sendtos)
 		logger.Infof("channel_name: %v, event:%+v, tplContent:%s, customParams:%v, respBody: %v, err: %v", notifyChannel.Name, events[0], tplContent, customParams, resp, err)
-		ginx.NewRender(c).Data(resp, err)
+		return resp, err
 	default:
 		logger.Errorf("unsupported request type: %v", notifyChannel.RequestType)
-		ginx.NewRender(c).Message(errors.New("unsupported request type"))
+		return "", fmt.Errorf("unsupported request type")
 	}
 }

center/router/router_opensearch.go (new file, 58 lines)
@@ -0,0 +1,58 @@
+package router
+
+import (
+	"github.com/ccfos/nightingale/v6/datasource/opensearch"
+	"github.com/ccfos/nightingale/v6/dscache"
+
+	"github.com/gin-gonic/gin"
+	"github.com/toolkits/pkg/ginx"
+	"github.com/toolkits/pkg/logger"
+)
+
+func (rt *Router) QueryOSIndices(c *gin.Context) {
+	var f IndexReq
+	ginx.BindJSON(c, &f)
+
+	plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
+	if !exists {
+		logger.Warningf("cluster:%d not exists", f.DatasourceId)
+		ginx.Bomb(200, "cluster not exists")
+	}
+
+	indices, err := plug.(*opensearch.OpenSearch).QueryIndices()
+	ginx.Dangerous(err)
+
+	ginx.NewRender(c).Data(indices, nil)
+}
+
+func (rt *Router) QueryOSFields(c *gin.Context) {
+	var f IndexReq
+	ginx.BindJSON(c, &f)
+
+	plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
+	if !exists {
+		logger.Warningf("cluster:%d not exists", f.DatasourceId)
+		ginx.Bomb(200, "cluster not exists")
+	}
+
+	fields, err := plug.(*opensearch.OpenSearch).QueryFields([]string{f.Index})
+	ginx.Dangerous(err)
+
+	ginx.NewRender(c).Data(fields, nil)
+}
+
+func (rt *Router) QueryOSVariable(c *gin.Context) {
+	var f FieldValueReq
+	ginx.BindJSON(c, &f)
+
+	plug, exists := dscache.DsCache.Get(f.Cate, f.DatasourceId)
+	if !exists {
+		logger.Warningf("cluster:%d not exists", f.DatasourceId)
+		ginx.Bomb(200, "cluster not exists")
+	}
+
+	fields, err := plug.(*opensearch.OpenSearch).QueryFieldValue([]string{f.Index}, f.Query.Field, f.Query.Query)
+	ginx.Dangerous(err)
+
+	ginx.NewRender(c).Data(fields, nil)
+}
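Each handler in the new file fetches the datasource plugin from `dscache.DsCache` by category and id, then type-asserts it to `*opensearch.OpenSearch` before querying. A generic sketch of that cache-plus-assertion pattern (the cache and plugin types here are simplified stand-ins, not Nightingale's):

```go
package main

import "fmt"

// Plugin is a stand-in for the datasource plugin interface.
type Plugin interface{ Name() string }

type OpenSearch struct{}

func (o *OpenSearch) Name() string { return "opensearch" }

// cacheKey combines category and datasource id,
// mirroring DsCache.Get(cate, id).
type cacheKey struct {
	cate string
	id   int64
}

var cache = map[cacheKey]Plugin{}

func get(cate string, id int64) (Plugin, bool) {
	p, ok := cache[cacheKey{cate, id}]
	return p, ok
}

func main() {
	cache[cacheKey{"opensearch", 1}] = &OpenSearch{}

	plug, exists := get("opensearch", 1)
	if !exists {
		fmt.Println("cluster not exists")
		return
	}
	// the comma-ok form avoids a panic if the cached plugin
	// is not actually an *OpenSearch
	os, ok := plug.(*OpenSearch)
	fmt.Println(ok, os.Name()) // prints true opensearch
}
```

The handlers in the file use the single-value assertion `plug.(*opensearch.OpenSearch)`, which panics on a type mismatch; that is safe only because the cache is keyed by `f.Cate`, so an entry under the opensearch category is expected to hold that concrete type.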
@@ -7,16 +7,20 @@ import (
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httputil"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ccfos/nightingale/v6/pkg/poster"
|
||||
pkgprom "github.com/ccfos/nightingale/v6/pkg/prom"
|
||||
"github.com/ccfos/nightingale/v6/prom"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/prometheus/common/model"
|
||||
"github.com/toolkits/pkg/ginx"
|
||||
"github.com/toolkits/pkg/logger"
|
||||
"github.com/toolkits/pkg/net/httplib"
|
||||
)
|
||||
|
||||
type QueryFormItem struct {
|
||||
@@ -235,3 +239,94 @@ func transportPut(dsid, updatedat int64, tran http.RoundTripper) {
	updatedAts[dsid] = updatedat
	transportsLock.Unlock()
}

const (
	DatasourceTypePrometheus      = "Prometheus"
	DatasourceTypeVictoriaMetrics = "VictoriaMetrics"
)

type deleteDatasourceSeriesForm struct {
	DatasourceID int64    `json:"datasource_id"`
	Match        []string `json:"match"`
	Start        string   `json:"start"`
	End          string   `json:"end"`
}

func (rt *Router) deleteDatasourceSeries(c *gin.Context) {
	var ddsf deleteDatasourceSeriesForm
	ginx.BindJSON(c, &ddsf)
	ds := rt.DatasourceCache.GetById(ddsf.DatasourceID)

	if ds == nil {
		ginx.Bomb(http.StatusBadRequest, "no such datasource")
		return
	}

	// Get the datasource type; only Prometheus and VictoriaMetrics are supported for now
	datasourceType, ok := ds.SettingsJson["prometheus.tsdb_type"]
	if !ok {
		ginx.Bomb(http.StatusBadRequest, "datasource type not found, please check your datasource settings")
		return
	}

	target, err := ds.HTTPJson.ParseUrl()
	if err != nil {
		ginx.Bomb(http.StatusInternalServerError, "invalid urls: %s", ds.HTTPJson.GetUrls())
		return
	}

	timeout := time.Duration(ds.HTTPJson.DialTimeout) * time.Millisecond
	matchQuerys := make([]string, 0)
	for _, match := range ddsf.Match {
		matchQuerys = append(matchQuerys, fmt.Sprintf("match[]=%s", match))
	}
	matchQuery := strings.Join(matchQuerys, "&")

	switch datasourceType {
	case DatasourceTypePrometheus:
		// The Prometheus delete API requires the POST method
		// https://prometheus.io/docs/prometheus/latest/querying/api/#delete-series
		url := fmt.Sprintf("http://%s/api/v1/admin/tsdb/delete_series?%s&start=%s&end=%s", target.Host, matchQuery, ddsf.Start, ddsf.End)
		go func() {
			resp, _, err := poster.PostJSON(url, timeout, nil)
			if err != nil {
				logger.Errorf("delete series error datasource_id: %d, datasource_name: %s, match: %s, start: %s, end: %s, err: %v",
					ddsf.DatasourceID, ds.Name, ddsf.Match, ddsf.Start, ddsf.End, err)
				return
			}
			logger.Infof("delete datasource series datasource_id: %d, datasource_name: %s, match: %s, start: %s, end: %s, respBody: %s",
				ddsf.DatasourceID, ds.Name, ddsf.Match, ddsf.Start, ddsf.End, string(resp))
		}()
	case DatasourceTypeVictoriaMetrics:
		// The VictoriaMetrics delete API does not support deleting specific time ranges.
		// Refer: https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-delete-time-series
		var url string
		// Check whether VictoriaMetrics is a single node or a cluster:
		// cluster read URLs follow the /select/<accountID>/prometheus pattern
		re := regexp.MustCompile(`/select/(\d+)/prometheus`)
		matches := re.FindStringSubmatch(ds.HTTPJson.Url)
		if len(matches) > 0 && matches[1] != "" {
			accountID, err := strconv.Atoi(matches[1])
			if err != nil {
				ginx.Bomb(http.StatusInternalServerError, "invalid accountID: %s", matches[1])
			}
			url = fmt.Sprintf("http://%s/delete/%d/prometheus/api/v1/admin/tsdb/delete_series?%s", target.Host, accountID, matchQuery)
		} else {
			url = fmt.Sprintf("http://%s/api/v1/admin/tsdb/delete_series?%s", target.Host, matchQuery)
		}
		go func() {
			resp, err := httplib.Get(url).SetTimeout(timeout).Response()
			if err != nil {
				logger.Errorf("delete series failed | datasource_id: %d, datasource_name: %s, match: %s, start: %s, end: %s, err: %v",
					ddsf.DatasourceID, ds.Name, ddsf.Match, ddsf.Start, ddsf.End, err)
				return
			}
			logger.Infof("delete series request finished | datasource_id: %d, datasource_name: %s, match: %s, start: %s, end: %s, respBody: %s",
				ddsf.DatasourceID, ds.Name, ddsf.Match, ddsf.Start, ddsf.End, resp.Body)
		}()
	default:
		ginx.Bomb(http.StatusBadRequest, "not support delete series yet")
	}

	ginx.NewRender(c).Data(nil, nil)
}
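The cluster-detection branch above hinges entirely on the `/select/<accountID>/prometheus` URL pattern. A minimal, self-contained sketch of that extraction (the `extractAccountID` helper name is ours, not part of the patch):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// extractAccountID mirrors the logic in deleteDatasourceSeries:
// a VictoriaMetrics cluster read URL contains /select/<accountID>/prometheus,
// while a single-node URL does not.
func extractAccountID(rawURL string) (int, bool) {
	re := regexp.MustCompile(`/select/(\d+)/prometheus`)
	m := re.FindStringSubmatch(rawURL)
	if len(m) < 2 || m[1] == "" {
		return 0, false
	}
	id, err := strconv.Atoi(m[1])
	if err != nil {
		return 0, false
	}
	return id, true
}

func main() {
	fmt.Println(extractAccountID("http://vm:8481/select/42/prometheus")) // cluster URL
	fmt.Println(extractAccountID("http://vm:8428"))                      // single-node URL
}
```

On a positive match the handler rewrites the delete URL to the cluster form `/delete/<accountID>/prometheus/...`; otherwise it falls back to the single-node path.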
@@ -235,3 +235,20 @@ func (rt *Router) userDel(c *gin.Context) {

	ginx.NewRender(c).Message(target.Del(rt.Ctx))
}

func (rt *Router) installDateGet(c *gin.Context) {
	rootUser, err := models.UserGetByUsername(rt.Ctx, "root")
	if err != nil {
		logger.Errorf("get root user failed: %v", err)
		ginx.NewRender(c).Data(0, nil)
		return
	}

	if rootUser == nil {
		logger.Errorf("root user not found")
		ginx.NewRender(c).Data(0, nil)
		return
	}

	ginx.NewRender(c).Data(rootUser.CreateAt, nil)
}
@@ -14,6 +14,13 @@ func decryptConfig(config *ConfigType, cryptoKey string) error {

	config.DB.DSN = decryptDsn

	decryptRedisPwd, err := secu.DealWithDecrypt(config.Redis.Password, cryptoKey)
	if err != nil {
		return fmt.Errorf("failed to decrypt the redis password: %s", err)
	}

	config.Redis.Password = decryptRedisPwd

	for k := range config.HTTP.APIForService.BasicAuth {
		decryptPwd, err := secu.DealWithDecrypt(config.HTTP.APIForService.BasicAuth[k], cryptoKey)
		if err != nil {
@@ -53,6 +53,20 @@ func init() {
		PluginType:     "ck",
		PluginTypeName: "ClickHouse",
	}

	DatasourceTypes[5] = DatasourceType{
		Id:             5,
		Category:       "timeseries",
		PluginType:     "mysql",
		PluginTypeName: "MySQL",
	}

	DatasourceTypes[6] = DatasourceType{
		Id:             6,
		Category:       "timeseries",
		PluginType:     "pgsql",
		PluginTypeName: "PostgreSQL",
	}
}

type NewDatasrouceFn func(settings map[string]interface{}) (Datasource, error)
199 datasource/doris/doris.go Normal file
@@ -0,0 +1,199 @@
package doris

import (
	"context"
	"fmt"
	"strings"

	"github.com/ccfos/nightingale/v6/datasource"
	"github.com/ccfos/nightingale/v6/dskit/doris"
	"github.com/ccfos/nightingale/v6/dskit/types"
	"github.com/ccfos/nightingale/v6/models"

	"github.com/mitchellh/mapstructure"
	"github.com/toolkits/pkg/logger"
)

const (
	DorisType = "doris"
)

func init() {
	datasource.RegisterDatasource(DorisType, new(Doris))
}

type Doris struct {
	doris.Doris `json:",inline" mapstructure:",squash"`
}

type QueryParam struct {
	Ref      string          `json:"ref" mapstructure:"ref"`
	Database string          `json:"database" mapstructure:"database"`
	Table    string          `json:"table" mapstructure:"table"`
	SQL      string          `json:"sql" mapstructure:"sql"`
	Keys     datasource.Keys `json:"keys" mapstructure:"keys"`
}

func (d *Doris) InitClient() error {
	if len(d.Addr) == 0 {
		return fmt.Errorf("doris addr not found, please check datasource config")
	}
	if _, err := d.NewConn(context.TODO(), ""); err != nil {
		return err
	}
	return nil
}

func (d *Doris) Init(settings map[string]interface{}) (datasource.Datasource, error) {
	newest := new(Doris)
	err := mapstructure.Decode(settings, newest)
	return newest, err
}

func (d *Doris) Validate(ctx context.Context) error {
	if len(strings.TrimSpace(d.Addr)) == 0 {
		return fmt.Errorf("doris addr is invalid, please check datasource setting")
	}

	if len(strings.TrimSpace(d.User)) == 0 {
		return fmt.Errorf("doris user is invalid, please check datasource setting")
	}

	return nil
}

// Equal compares whether two objects are the same, used for caching
func (d *Doris) Equal(p datasource.Datasource) bool {
	newest, ok := p.(*Doris)
	if !ok {
		logger.Errorf("unexpected plugin type, expected is doris")
		return false
	}

	if d.Addr != newest.Addr {
		return false
	}

	if d.User != newest.User {
		return false
	}

	if d.Password != newest.Password {
		return false
	}

	if d.EnableWrite != newest.EnableWrite {
		return false
	}

	if d.FeAddr != newest.FeAddr {
		return false
	}

	if d.MaxQueryRows != newest.MaxQueryRows {
		return false
	}

	if d.Timeout != newest.Timeout {
		return false
	}

	if d.MaxIdleConns != newest.MaxIdleConns {
		return false
	}

	if d.MaxOpenConns != newest.MaxOpenConns {
		return false
	}

	if d.ConnMaxLifetime != newest.ConnMaxLifetime {
		return false
	}

	if d.ClusterName != newest.ClusterName {
		return false
	}

	return true
}

func (d *Doris) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return nil, nil
}

func (d *Doris) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return nil, nil
}

func (d *Doris) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
	return nil, nil
}

func (d *Doris) QueryData(ctx context.Context, query interface{}) ([]models.DataResp, error) {
	dorisQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, dorisQueryParam); err != nil {
		return nil, err
	}

	if dorisQueryParam.Keys.ValueKey == "" {
		return nil, fmt.Errorf("valueKey is required")
	}

	items, err := d.QueryTimeseries(context.TODO(), &doris.QueryParam{
		Database: dorisQueryParam.Database,
		Sql:      dorisQueryParam.SQL,
		Keys: types.Keys{
			ValueKey: dorisQueryParam.Keys.ValueKey,
			LabelKey: dorisQueryParam.Keys.LabelKey,
			TimeKey:  dorisQueryParam.Keys.TimeKey,
		},
	})
	if err != nil {
		logger.Warningf("query:%+v get data err:%v", dorisQueryParam, err)
		return []models.DataResp{}, err
	}

	// parse resp to time series data
	data := make([]models.DataResp, 0)
	for i := range items {
		data = append(data, models.DataResp{
			Ref:    dorisQueryParam.Ref,
			Metric: items[i].Metric,
			Values: items[i].Values,
		})
	}

	logger.Infof("req:%+v keys:%+v \n data:%v", dorisQueryParam, dorisQueryParam.Keys, data)

	return data, nil
}

func (d *Doris) QueryLog(ctx context.Context, query interface{}) ([]interface{}, int64, error) {
	dorisQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, dorisQueryParam); err != nil {
		return nil, 0, err
	}

	items, err := d.QueryLogs(ctx, &doris.QueryParam{
		Database: dorisQueryParam.Database,
		Sql:      dorisQueryParam.SQL,
	})
	if err != nil {
		logger.Warningf("query:%+v get data err:%v", dorisQueryParam, err)
		return []interface{}{}, 0, err
	}
	logs := make([]interface{}, 0)
	for i := range items {
		logs = append(logs, items[i])
	}

	return logs, 0, nil
}

func (d *Doris) DescribeTable(ctx context.Context, query interface{}) ([]*types.ColumnProperty, error) {
	dorisQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, dorisQueryParam); err != nil {
		return nil, err
	}
	return d.DescTable(ctx, dorisQueryParam.Database, dorisQueryParam.Table)
}
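QueryData above hands `Keys{ValueKey, LabelKey, TimeKey}` to dskit's `QueryTimeseries`, which pivots flat SQL rows into labeled series. A hypothetical sketch of that pivot idea (the `Row`/`Point` types and `pivot` helper are illustrative only, not dskit's actual API):

```go
package main

import "fmt"

// Row is a flat SQL result row keyed by column name.
type Row map[string]interface{}

// Point is one sample of a series.
type Point struct {
	Ts    int64
	Value float64
}

// pivot groups rows into series using the configured value/label/time
// column names, mirroring the role of Keys in the datasource plugins.
func pivot(rows []Row, valueKey, labelKey, timeKey string) map[string][]Point {
	out := make(map[string][]Point)
	for _, r := range rows {
		label, _ := r[labelKey].(string)
		value, _ := r[valueKey].(float64)
		ts, _ := r[timeKey].(int64)
		out[label] = append(out[label], Point{Ts: ts, Value: value})
	}
	return out
}

func main() {
	rows := []Row{
		{"host": "a", "cpu": 0.3, "ts": int64(1)},
		{"host": "a", "cpu": 0.5, "ts": int64(2)},
		{"host": "b", "cpu": 0.1, "ts": int64(1)},
	}
	series := pivot(rows, "cpu", "host", "ts")
	fmt.Println(len(series), len(series["a"]))
}
```

This is also why `valueKey` is validated as required before the query runs: without it there is no column to read samples from.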
227 datasource/mysql/mysql.go Normal file
@@ -0,0 +1,227 @@
package mysql

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/datasource"
	"github.com/ccfos/nightingale/v6/dskit/mysql"
	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/macros"

	"github.com/mitchellh/mapstructure"
	"github.com/toolkits/pkg/logger"
)

const (
	MySQLType = "mysql"
)

func init() {
	datasource.RegisterDatasource(MySQLType, new(MySQL))
}

type MySQL struct {
	mysql.MySQL `json:",inline" mapstructure:",squash"`
}

type QueryParam struct {
	Ref      string          `json:"ref" mapstructure:"ref"`
	Database string          `json:"database" mapstructure:"database"`
	Table    string          `json:"table" mapstructure:"table"`
	SQL      string          `json:"sql" mapstructure:"sql"`
	Keys     datasource.Keys `json:"keys" mapstructure:"keys"`
	From     int64           `json:"from" mapstructure:"from"`
	To       int64           `json:"to" mapstructure:"to"`
}

func (m *MySQL) InitClient() error {
	if len(m.Shards) == 0 {
		return fmt.Errorf("mysql addr not found, please check datasource config")
	}
	if _, err := m.NewConn(context.TODO(), ""); err != nil {
		return err
	}
	return nil
}

func (m *MySQL) Init(settings map[string]interface{}) (datasource.Datasource, error) {
	newest := new(MySQL)
	err := mapstructure.Decode(settings, newest)
	return newest, err
}

func (m *MySQL) Validate(ctx context.Context) error {
	if len(m.Shards) == 0 || len(strings.TrimSpace(m.Shards[0].Addr)) == 0 {
		return fmt.Errorf("mysql addr is invalid, please check datasource setting")
	}

	if len(strings.TrimSpace(m.Shards[0].User)) == 0 {
		return fmt.Errorf("mysql user is invalid, please check datasource setting")
	}

	return nil
}

// Equal compares whether two objects are the same, used for caching.
// Only the first shard is compared.
func (m *MySQL) Equal(p datasource.Datasource) bool {
	newest, ok := p.(*MySQL)
	if !ok {
		logger.Errorf("unexpected plugin type, expected is mysql")
		return false
	}

	if len(m.Shards) == 0 || len(newest.Shards) == 0 {
		return false
	}

	oldShard := m.Shards[0]
	newShard := newest.Shards[0]

	if oldShard.Addr != newShard.Addr {
		return false
	}

	if oldShard.User != newShard.User {
		return false
	}

	if oldShard.Password != newShard.Password {
		return false
	}

	if oldShard.MaxQueryRows != newShard.MaxQueryRows {
		return false
	}

	if oldShard.Timeout != newShard.Timeout {
		return false
	}

	if oldShard.MaxIdleConns != newShard.MaxIdleConns {
		return false
	}

	if oldShard.MaxOpenConns != newShard.MaxOpenConns {
		return false
	}

	if oldShard.ConnMaxLifetime != newShard.ConnMaxLifetime {
		return false
	}

	return true
}

func (m *MySQL) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return nil, nil
}

func (m *MySQL) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return nil, nil
}

func (m *MySQL) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
	return nil, nil
}

func (m *MySQL) QueryData(ctx context.Context, query interface{}) ([]models.DataResp, error) {
	mysqlQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, mysqlQueryParam); err != nil {
		return nil, err
	}

	if strings.Contains(mysqlQueryParam.SQL, "$__") {
		var err error
		mysqlQueryParam.SQL, err = macros.Macro(mysqlQueryParam.SQL, mysqlQueryParam.From, mysqlQueryParam.To)
		if err != nil {
			return nil, err
		}
	}

	if mysqlQueryParam.Keys.ValueKey == "" {
		return nil, fmt.Errorf("valueKey is required")
	}

	timeout := m.Shards[0].Timeout
	if timeout == 0 {
		timeout = 60
	}

	timeoutCtx, cancel := context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
	defer cancel()

	items, err := m.QueryTimeseries(timeoutCtx, &sqlbase.QueryParam{
		Sql: mysqlQueryParam.SQL,
		Keys: types.Keys{
			ValueKey: mysqlQueryParam.Keys.ValueKey,
			LabelKey: mysqlQueryParam.Keys.LabelKey,
			TimeKey:  mysqlQueryParam.Keys.TimeKey,
		},
	})
	if err != nil {
		logger.Warningf("query:%+v get data err:%v", mysqlQueryParam, err)
		return []models.DataResp{}, err
	}
	data := make([]models.DataResp, 0)
	for i := range items {
		data = append(data, models.DataResp{
			Ref:    mysqlQueryParam.Ref,
			Metric: items[i].Metric,
			Values: items[i].Values,
		})
	}

	return data, nil
}

func (m *MySQL) QueryLog(ctx context.Context, query interface{}) ([]interface{}, int64, error) {
	mysqlQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, mysqlQueryParam); err != nil {
		return nil, 0, err
	}

	if strings.Contains(mysqlQueryParam.SQL, "$__") {
		var err error
		mysqlQueryParam.SQL, err = macros.Macro(mysqlQueryParam.SQL, mysqlQueryParam.From, mysqlQueryParam.To)
		if err != nil {
			return nil, 0, err
		}
	}

	timeout := m.Shards[0].Timeout
	if timeout == 0 {
		timeout = 60
	}

	timeoutCtx, cancel := context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
	defer cancel()

	items, err := m.Query(timeoutCtx, &sqlbase.QueryParam{
		Sql: mysqlQueryParam.SQL,
	})
	if err != nil {
		logger.Warningf("query:%+v get data err:%v", mysqlQueryParam, err)
		return []interface{}{}, 0, err
	}
	logs := make([]interface{}, 0)
	for i := range items {
		logs = append(logs, items[i])
	}

	return logs, 0, nil
}

func (m *MySQL) DescribeTable(ctx context.Context, query interface{}) ([]*types.ColumnProperty, error) {
	mysqlQueryParam := new(QueryParam)
	if err := mapstructure.Decode(query, mysqlQueryParam); err != nil {
		return nil, err
	}
	return m.DescTable(ctx, mysqlQueryParam.Database, mysqlQueryParam.Table)
}
399 datasource/opensearch/opensearch.go Normal file
@@ -0,0 +1,399 @@
package opensearch

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"reflect"
	"regexp"
	"sort"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/datasource"
	"github.com/ccfos/nightingale/v6/datasource/commons/eslike"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/tlsx"

	"github.com/mitchellh/mapstructure"
	"github.com/olivere/elastic/v7"
	oscliv2 "github.com/opensearch-project/opensearch-go/v2"
	osapiv2 "github.com/opensearch-project/opensearch-go/v2/opensearchapi"
)

const (
	OpenSearchType = "opensearch"
)

type OpenSearch struct {
	Addr        string            `json:"os.addr" mapstructure:"os.addr"`
	Nodes       []string          `json:"os.nodes" mapstructure:"os.nodes"`
	Timeout     int64             `json:"os.timeout" mapstructure:"os.timeout"` // millis
	Basic       BasicAuth         `json:"os.basic" mapstructure:"os.basic"`
	TLS         TLS               `json:"os.tls" mapstructure:"os.tls"`
	Version     string            `json:"os.version" mapstructure:"os.version"`
	Headers     map[string]string `json:"os.headers" mapstructure:"os.headers"`
	MinInterval int               `json:"os.min_interval" mapstructure:"os.min_interval"` // seconds
	MaxShard    int               `json:"os.max_shard" mapstructure:"os.max_shard"`
	ClusterName string            `json:"os.cluster_name" mapstructure:"os.cluster_name"`
	Client      *oscliv2.Client   `json:"os.client" mapstructure:"os.client"`
}

type TLS struct {
	SkipTlsVerify bool `json:"os.tls.skip_tls_verify" mapstructure:"os.tls.skip_tls_verify"`
}

type BasicAuth struct {
	Enable   bool   `json:"os.auth.enable" mapstructure:"os.auth.enable"`
	Username string `json:"os.user" mapstructure:"os.user"`
	Password string `json:"os.password" mapstructure:"os.password"`
}

func init() {
	datasource.RegisterDatasource(OpenSearchType, new(OpenSearch))
}

func (os *OpenSearch) Init(settings map[string]interface{}) (datasource.Datasource, error) {
	newest := new(OpenSearch)
	err := mapstructure.Decode(settings, newest)
	return newest, err
}

func (os *OpenSearch) InitClient() error {
	transport := &http.Transport{
		Proxy: http.ProxyFromEnvironment,
		DialContext: (&net.Dialer{
			Timeout: time.Duration(os.Timeout) * time.Millisecond,
		}).DialContext,
		ResponseHeaderTimeout: time.Duration(os.Timeout) * time.Millisecond,
	}

	if len(os.Nodes) > 0 {
		os.Addr = os.Nodes[0]
	}

	if strings.Contains(os.Addr, "https") {
		tlsConfig := tlsx.ClientConfig{
			InsecureSkipVerify: os.TLS.SkipTlsVerify,
			UseTLS:             true,
		}
		cfg, err := tlsConfig.TLSConfig()
		if err != nil {
			return err
		}
		transport.TLSClientConfig = cfg
	}

	headers := http.Header{}
	for k, v := range os.Headers {
		headers[k] = []string{v}
	}

	options := oscliv2.Config{
		Addresses: os.Nodes,
		Transport: transport,
		Header:    headers,
	}

	if os.Basic.Enable && os.Basic.Username != "" {
		options.Username = os.Basic.Username
		options.Password = os.Basic.Password
	}

	var err error
	os.Client, err = oscliv2.NewClient(options)

	return err
}

func (os *OpenSearch) Equal(other datasource.Datasource) bool {
	newest, ok := other.(*OpenSearch)
	if !ok {
		return false
	}

	sort.Strings(os.Nodes)
	sort.Strings(newest.Nodes)

	if strings.Join(os.Nodes, ",") != strings.Join(newest.Nodes, ",") {
		return false
	}

	if os.Basic.Username != newest.Basic.Username {
		return false
	}

	if os.Basic.Password != newest.Basic.Password {
		return false
	}

	if os.TLS.SkipTlsVerify != newest.TLS.SkipTlsVerify {
		return false
	}

	if os.Timeout != newest.Timeout {
		return false
	}

	if !reflect.DeepEqual(os.Headers, newest.Headers) {
		return false
	}

	return true
}

func (os *OpenSearch) Validate(ctx context.Context) (err error) {
	if len(os.Nodes) == 0 {
		return fmt.Errorf("need a valid addr")
	}

	for _, addr := range os.Nodes {
		_, err = url.Parse(addr)
		if err != nil {
			return fmt.Errorf("parse addr error: %v", err)
		}
	}

	if os.Basic.Enable && (len(os.Basic.Username) == 0 || len(os.Basic.Password) == 0) {
		return fmt.Errorf("need a valid user, password")
	}

	if os.MaxShard == 0 {
		os.MaxShard = 5
	}

	if os.MinInterval < 10 {
		os.MinInterval = 10
	}

	if os.Timeout == 0 {
		os.Timeout = 6000
	}

	if !strings.HasPrefix(os.Version, "2") {
		return fmt.Errorf("version must be 2.0+")
	}

	return nil
}

func (os *OpenSearch) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return eslike.MakeLogQuery(ctx, query, eventTags, start, end)
}

func (os *OpenSearch) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
	return eslike.MakeTSQuery(ctx, query, eventTags, start, end)
}

func search(ctx context.Context, indices []string, source interface{}, timeout int, cli *oscliv2.Client) (*elastic.SearchResult, error) {
	var body *bytes.Buffer = nil
	if source != nil {
		body = new(bytes.Buffer)
		err := json.NewEncoder(body).Encode(source)
		if err != nil {
			return nil, err
		}
	}

	req := osapiv2.SearchRequest{
		Index: indices,
		Body:  body,
	}

	if timeout > 0 {
		req.Timeout = time.Second * time.Duration(timeout)
	}

	resp, err := req.Do(ctx, cli)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return nil, fmt.Errorf("opensearch response not 2xx, resp is %v", resp)
	}

	bs, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	result := new(elastic.SearchResult)
	err = json.Unmarshal(bs, &result)
	if err != nil {
		return nil, err
	}

	return result, nil
}

func (os *OpenSearch) QueryData(ctx context.Context, queryParam interface{}) ([]models.DataResp, error) {
	searchFn := func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error) {
		return search(ctx, indices, source, timeout, os.Client)
	}

	return eslike.QueryData(ctx, queryParam, os.Timeout, os.Version, searchFn)
}

func (os *OpenSearch) QueryIndices() ([]string, error) {
	cir := osapiv2.CatIndicesRequest{
		Format: "json",
	}

	rsp, err := cir.Do(context.Background(), os.Client)
	if err != nil {
		return nil, err
	}
	defer rsp.Body.Close()

	bs, err := io.ReadAll(rsp.Body)
	if err != nil {
		return nil, err
	}

	resp := make([]struct {
		Index string `json:"index"`
	}, 0)

	err = json.Unmarshal(bs, &resp)
	if err != nil {
		return nil, err
	}

	var ret []string
	for _, k := range resp {
		ret = append(ret, k.Index)
	}

	return ret, nil
}

func (os *OpenSearch) QueryFields(indices []string) ([]string, error) {
	var fields []string
	mappingRequest := osapiv2.IndicesGetMappingRequest{
		Index: indices,
	}

	resp, err := mappingRequest.Do(context.Background(), os.Client)
	if err != nil {
		return fields, err
	}
	defer resp.Body.Close()
	bs, err := io.ReadAll(resp.Body)
	if err != nil {
		return fields, err
	}

	result := map[string]interface{}{}

	err = json.Unmarshal(bs, &result)
	if err != nil {
		return fields, err
	}

	idx := ""
	if len(indices) > 0 {
		idx = indices[0]
	}

	mappingIndex := ""
	// idx may itself be a pattern; a failed compile just leaves indexReg nil
	indexReg, _ := regexp.Compile(idx)
	for key, value := range result {
		mappings, ok := value.(map[string]interface{})
		if !ok {
			continue
		}
		if len(mappings) == 0 {
			continue
		}
		if key == idx || strings.Contains(key, idx) ||
			(indexReg != nil && indexReg.MatchString(key)) {
			mappingIndex = key
			break
		}
	}

	if len(mappingIndex) == 0 {
		return fields, nil
	}

	fields = propertyMappingRange(result[mappingIndex], 1)

	sort.Strings(fields)
	return fields, nil
}

func propertyMappingRange(v interface{}, depth int) (fields []string) {
	mapping, ok := v.(map[string]interface{})
	if !ok {
		return
	}
	if len(mapping) == 0 {
		return
	}
	for key, value := range mapping {
		valueMap, ok := value.(map[string]interface{})
		if !ok {
			continue
		}
		if prop, found := valueMap["properties"]; found {
			subFields := propertyMappingRange(prop, depth+1)
			for i := range subFields {
				if depth == 1 {
					fields = append(fields, subFields[i])
				} else {
					fields = append(fields, key+"."+subFields[i])
				}
			}
		} else if typ, found := valueMap["type"]; found {
			if t, ok := typ.(string); ok && eslike.HitFilter(t) {
				continue
			}
			fields = append(fields, key)
		}
	}
	return
}
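propertyMappingRange walks a GET-mapping response recursively, joining nested `properties` with dots while skipping the prefix at depth 1 (the wrapper level of the matched index). A simplified, runnable version of the same walk (the `eslike.HitFilter` type filter is omitted, and output is sorted for determinism):

```go
package main

import (
	"fmt"
	"sort"
)

// flattenProperties emits dotted field paths from a nested mapping,
// skipping the key prefix at depth 1, like propertyMappingRange above.
func flattenProperties(v interface{}, depth int) (fields []string) {
	m, ok := v.(map[string]interface{})
	if !ok {
		return
	}
	for key, value := range m {
		vm, ok := value.(map[string]interface{})
		if !ok {
			continue
		}
		if prop, found := vm["properties"]; found {
			for _, sub := range flattenProperties(prop, depth+1) {
				if depth == 1 {
					fields = append(fields, sub)
				} else {
					fields = append(fields, key+"."+sub)
				}
			}
		} else if _, found := vm["type"]; found {
			fields = append(fields, key)
		}
	}
	sort.Strings(fields)
	return
}

func main() {
	mapping := map[string]interface{}{
		"mappings": map[string]interface{}{
			"properties": map[string]interface{}{
				"host": map[string]interface{}{"type": "keyword"},
				"geo": map[string]interface{}{
					"properties": map[string]interface{}{
						"lat": map[string]interface{}{"type": "float"},
					},
				},
			},
		},
	}
	fmt.Println(flattenProperties(mapping, 1)) // [geo.lat host]
}
```

Depth 1 corresponds to the `{"mappings": ...}` wrapper, which is why its key never appears in the resulting field paths.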
func (os *OpenSearch) QueryLog(ctx context.Context, queryParam interface{}) ([]interface{}, int64, error) {
	searchFn := func(ctx context.Context, indices []string, source interface{}, timeout int, maxShard int) (*elastic.SearchResult, error) {
		return search(ctx, indices, source, timeout, os.Client)
	}

	return eslike.QueryLog(ctx, queryParam, os.Timeout, os.Version, 0, searchFn)
}

func (os *OpenSearch) QueryFieldValue(indexs []string, field string, query string) ([]string, error) {
	var values []string
	source := elastic.NewSearchSource().
		Size(0)

	if query != "" {
		source = source.Query(elastic.NewBoolQuery().Must(elastic.NewQueryStringQuery(query)))
	}
	source = source.Aggregation("distinct", elastic.NewTermsAggregation().Field(field).Size(10000))

	result, err := search(context.Background(), indexs, source, 0, os.Client)
	if err != nil {
		return values, err
	}

	agg, found := result.Aggregations.Terms("distinct")
	if !found {
		return values, nil
	}

	for _, bucket := range agg.Buckets {
		if s, ok := bucket.Key.(string); ok {
			values = append(values, s)
		}
	}

	return values, nil
}

func (os *OpenSearch) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
	return nil, nil
}
346 datasource/postgresql/postgresql.go Normal file
@@ -0,0 +1,346 @@
package postgresql

import (
	"context"
	"fmt"
	"regexp"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/datasource"
	"github.com/ccfos/nightingale/v6/pkg/macros"

	"github.com/ccfos/nightingale/v6/dskit/postgres"
	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/mitchellh/mapstructure"
	"github.com/toolkits/pkg/logger"
)

const (
	PostgreSQLType = "pgsql"
)

var (
	regx = "(?i)from\\s+([a-zA-Z0-9_]+)\\.([a-zA-Z0-9_]+)\\.([a-zA-Z0-9_]+)"
)

func init() {
	datasource.RegisterDatasource(PostgreSQLType, new(PostgreSQL))
}

type PostgreSQL struct {
	Shards []*postgres.PostgreSQL `json:"pgsql.shards" mapstructure:"pgsql.shards"`
}

type QueryParam struct {
	Ref      string          `json:"ref" mapstructure:"ref"`
	Database string          `json:"database" mapstructure:"database"`
	Table    string          `json:"table" mapstructure:"table"`
	SQL      string          `json:"sql" mapstructure:"sql"`
	Keys     datasource.Keys `json:"keys" mapstructure:"keys"`
	From     int64           `json:"from" mapstructure:"from"`
	To       int64           `json:"to" mapstructure:"to"`
}

func (p *PostgreSQL) InitClient() error {
	if len(p.Shards) == 0 {
		return fmt.Errorf("postgresql addr not found, please check datasource config")
	}
	for _, shard := range p.Shards {
		if db, err := shard.NewConn(context.TODO(), "postgres"); err != nil {
			sqlbase.CloseDB(db)
			return err
		}
	}
	return nil
}

func (p *PostgreSQL) Init(settings map[string]interface{}) (datasource.Datasource, error) {
	newest := new(PostgreSQL)
	err := mapstructure.Decode(settings, newest)
	return newest, err
}

func (p *PostgreSQL) Validate(ctx context.Context) error {
	if len(p.Shards) == 0 || len(strings.TrimSpace(p.Shards[0].Addr)) == 0 {
		return fmt.Errorf("postgresql addr is invalid, please check datasource setting")
	}

	if len(strings.TrimSpace(p.Shards[0].User)) == 0 {
		return fmt.Errorf("postgresql user is invalid, please check datasource setting")
	}

	return nil
}

// Equal compares whether two objects are the same, used for caching.
// Only the first shard is compared.
func (p *PostgreSQL) Equal(d datasource.Datasource) bool {
	newest, ok := d.(*PostgreSQL)
	if !ok {
		logger.Errorf("unexpected plugin type, expected is postgresql")
		return false
	}

	if len(p.Shards) == 0 || len(newest.Shards) == 0 {
		return false
	}

	oldShard := p.Shards[0]
	newShard := newest.Shards[0]

	if oldShard.Addr != newShard.Addr {
		return false
	}

	if oldShard.User != newShard.User {
		return false
	}

	if oldShard.Password != newShard.Password {
|
||||
return false
|
||||
}
|
||||
|
||||
if oldShard.MaxQueryRows != newShard.MaxQueryRows {
|
||||
return false
|
||||
}
|
||||
|
||||
if oldShard.Timeout != newShard.Timeout {
|
||||
return false
|
||||
}
|
||||
|
||||
if oldShard.MaxIdleConns != newShard.MaxIdleConns {
|
||||
return false
|
||||
}
|
||||
|
||||
if oldShard.MaxOpenConns != newShard.MaxOpenConns {
|
||||
return false
|
||||
}
|
||||
|
||||
if oldShard.ConnMaxLifetime != newShard.ConnMaxLifetime {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) ShowDatabases(ctx context.Context) ([]string, error) {
|
||||
return p.Shards[0].ShowDatabases(ctx, "")
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) ShowTables(ctx context.Context, database string) ([]string, error) {
|
||||
p.Shards[0].DB = database
|
||||
rets, err := p.Shards[0].ShowTables(ctx, "")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tables := make([]string, 0, len(rets))
|
||||
for scheme, tabs := range rets {
|
||||
for _, tab := range tabs {
|
||||
tables = append(tables, scheme+"."+tab)
|
||||
}
|
||||
}
|
||||
return tables, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) MakeLogQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) MakeTSQuery(ctx context.Context, query interface{}, eventTags []string, start, end int64) (interface{}, error) {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) QueryMapData(ctx context.Context, query interface{}) ([]map[string]string, error) {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) QueryData(ctx context.Context, query interface{}) ([]models.DataResp, error) {
|
||||
postgresqlQueryParam := new(QueryParam)
|
||||
if err := mapstructure.Decode(query, postgresqlQueryParam); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if strings.Contains(postgresqlQueryParam.SQL, "$__") {
|
||||
var err error
|
||||
postgresqlQueryParam.SQL, err = macros.Macro(postgresqlQueryParam.SQL, postgresqlQueryParam.From, postgresqlQueryParam.To)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if postgresqlQueryParam.Database != "" {
|
||||
p.Shards[0].DB = postgresqlQueryParam.Database
|
||||
} else {
|
||||
db, err := parseDBName(postgresqlQueryParam.SQL)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p.Shards[0].DB = db
|
||||
}
|
||||
|
||||
timeout := p.Shards[0].Timeout
|
||||
if timeout == 0 {
|
||||
timeout = 60
|
||||
}
|
||||
timeoutCtx, cancel := context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
|
||||
defer cancel()
|
||||
|
||||
items, err := p.Shards[0].QueryTimeseries(timeoutCtx, &sqlbase.QueryParam{
|
||||
Sql: postgresqlQueryParam.SQL,
|
||||
Keys: types.Keys{
|
||||
ValueKey: postgresqlQueryParam.Keys.ValueKey,
|
||||
LabelKey: postgresqlQueryParam.Keys.LabelKey,
|
||||
TimeKey: postgresqlQueryParam.Keys.TimeKey,
|
||||
},
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
logger.Warningf("query:%+v get data err:%v", postgresqlQueryParam, err)
|
||||
return []models.DataResp{}, err
|
||||
}
|
||||
data := make([]models.DataResp, 0)
|
||||
for i := range items {
|
||||
data = append(data, models.DataResp{
|
||||
Ref: postgresqlQueryParam.Ref,
|
||||
Metric: items[i].Metric,
|
||||
Values: items[i].Values,
|
||||
})
|
||||
}
|
||||
|
||||
// parse resp to time series data
|
||||
logger.Infof("req:%+v keys:%+v \n data:%v", postgresqlQueryParam, postgresqlQueryParam.Keys, data)
|
||||
|
||||
return data, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) QueryLog(ctx context.Context, query interface{}) ([]interface{}, int64, error) {
|
||||
postgresqlQueryParam := new(QueryParam)
|
||||
if err := mapstructure.Decode(query, postgresqlQueryParam); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
if postgresqlQueryParam.Database != "" {
|
||||
p.Shards[0].DB = postgresqlQueryParam.Database
|
||||
} else {
|
||||
db, err := parseDBName(postgresqlQueryParam.SQL)
|
||||
if err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
p.Shards[0].DB = db
|
||||
}
|
||||
|
||||
if strings.Contains(postgresqlQueryParam.SQL, "$__") {
|
||||
var err error
|
||||
postgresqlQueryParam.SQL, err = macros.Macro(postgresqlQueryParam.SQL, postgresqlQueryParam.From, postgresqlQueryParam.To)
|
||||
if err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
}
|
||||
|
||||
timeout := p.Shards[0].Timeout
|
||||
if timeout == 0 {
|
||||
timeout = 60
|
||||
}
|
||||
timeoutCtx, cancel := context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
|
||||
defer cancel()
|
||||
items, err := p.Shards[0].Query(timeoutCtx, &sqlbase.QueryParam{
|
||||
Sql: postgresqlQueryParam.SQL,
|
||||
})
|
||||
if err != nil {
|
||||
logger.Warningf("query:%+v get data err:%v", postgresqlQueryParam, err)
|
||||
return []interface{}{}, 0, err
|
||||
}
|
||||
logs := make([]interface{}, 0)
|
||||
for i := range items {
|
||||
logs = append(logs, items[i])
|
||||
}
|
||||
|
||||
return logs, 0, nil
|
||||
}
|
||||
|
||||
func (p *PostgreSQL) DescribeTable(ctx context.Context, query interface{}) ([]*types.ColumnProperty, error) {
|
||||
postgresqlQueryParam := new(QueryParam)
|
||||
if err := mapstructure.Decode(query, postgresqlQueryParam); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p.Shards[0].DB = postgresqlQueryParam.Database
|
||||
pairs := strings.Split(postgresqlQueryParam.Table, ".") // format: scheme.table_name
|
||||
scheme := ""
|
||||
table := postgresqlQueryParam.Table
|
||||
if len(pairs) == 2 {
|
||||
scheme = pairs[0]
|
||||
table = pairs[1]
|
||||
}
|
||||
return p.Shards[0].DescTable(ctx, scheme, table)
|
||||
}
|
||||
|
||||
func parseDBName(sql string) (db string, err error) {
|
||||
re := regexp.MustCompile(regx)
|
||||
matches := re.FindStringSubmatch(sql)
|
||||
if len(matches) != 4 {
|
||||
return "", fmt.Errorf("no valid table name in format database.schema.table found")
|
||||
}
|
||||
return matches[1], nil
|
||||
}
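The `parseDBName` helper depends on `regx` matching the first fully-qualified `database.schema.table` reference in the SQL. A minimal standalone sketch of that extraction (the regex is copied from the file; the sample SQL is made up):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the file's regx: case-insensitive FROM database.schema.table
var regx = `(?i)from\s+([a-zA-Z0-9_]+)\.([a-zA-Z0-9_]+)\.([a-zA-Z0-9_]+)`

// parseDBName returns the first capture group (the database name),
// mirroring the helper above.
func parseDBName(sql string) (string, error) {
	matches := regexp.MustCompile(regx).FindStringSubmatch(sql)
	if len(matches) != 4 {
		return "", fmt.Errorf("no valid table name in format database.schema.table found")
	}
	return matches[1], nil
}

func main() {
	db, err := parseDBName("SELECT ts, cnt FROM metrics.public.http_requests")
	fmt.Println(db, err) // metrics <nil>
}
```

Note that only simple identifiers (`[a-zA-Z0-9_]+`) match, so quoted or hyphenated names would fall through to the error path.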
func extractColumns(sql string) ([]string, error) {
	// Lowercase the SQL to simplify matching
	sql = strings.ToLower(sql)

	// Match the content between SELECT and FROM
	re := regexp.MustCompile(`select\s+(.*?)\s+from`)
	matches := re.FindStringSubmatch(sql)

	if len(matches) < 2 {
		return nil, fmt.Errorf("no columns found or invalid SQL syntax")
	}

	// Extract the column list
	columnsString := matches[1]

	// Split into individual columns
	columns := splitColumns(columnsString)

	// Trim whitespace around each column name
	for i, col := range columns {
		columns[i] = strings.TrimSpace(col)
	}

	return columns, nil
}

func splitColumns(columnsString string) []string {
	var columns []string
	var currentColumn strings.Builder
	parenthesesCount := 0
	inQuotes := false

	for _, char := range columnsString {
		switch char {
		case '(':
			parenthesesCount++
			currentColumn.WriteRune(char)
		case ')':
			parenthesesCount--
			currentColumn.WriteRune(char)
		case '\'', '"':
			inQuotes = !inQuotes
			currentColumn.WriteRune(char)
		case ',':
			if parenthesesCount == 0 && !inQuotes {
				columns = append(columns, currentColumn.String())
				currentColumn.Reset()
			} else {
				currentColumn.WriteRune(char)
			}
		default:
			currentColumn.WriteRune(char)
		}
	}

	if currentColumn.Len() > 0 {
		columns = append(columns, currentColumn.String())
	}

	return columns
}
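The splitter only breaks on commas at parenthesis depth zero and outside quotes, so function calls and quoted literals survive intact. A compact, self-contained sketch of the same technique (input is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// splitCols splits a column list on commas, but only at parenthesis
// depth zero and outside quotes, mirroring splitColumns above.
func splitCols(s string) []string {
	var out []string
	var cur strings.Builder
	depth, inQuotes := 0, false
	for _, ch := range s {
		switch ch {
		case '(':
			depth++
			cur.WriteRune(ch)
		case ')':
			depth--
			cur.WriteRune(ch)
		case '\'', '"':
			inQuotes = !inQuotes
			cur.WriteRune(ch)
		case ',':
			if depth == 0 && !inQuotes {
				out = append(out, strings.TrimSpace(cur.String()))
				cur.Reset()
			} else {
				cur.WriteRune(ch)
			}
		default:
			cur.WriteRune(ch)
		}
	}
	if cur.Len() > 0 {
		out = append(out, strings.TrimSpace(cur.String()))
	}
	return out
}

func main() {
	fmt.Println(splitCols("a, coalesce(b, 'x,y'), c")) // [a coalesce(b, 'x,y') c]
}
```

One caveat of this approach (also present in `splitColumns`): single and double quotes share one `inQuotes` flag, so an input mixing both quote styles can desynchronize the state.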
@@ -204,6 +204,7 @@ CREATE TABLE board (
    public smallint not null default 0 ,
    built_in smallint not null default 0 ,
    hide smallint not null default 0 ,
    public_cate bigint NOT NULL DEFAULT 0,
    create_at bigint not null default 0,
    create_by varchar(64) not null default '',
    update_at bigint not null default 0,
@@ -217,6 +218,7 @@ COMMENT ON COLUMN board.tags IS 'split by space';
COMMENT ON COLUMN board.public IS '0:false 1:true';
COMMENT ON COLUMN board.built_in IS '0:false 1:true';
COMMENT ON COLUMN board.hide IS '0:false 1:true';
COMMENT ON COLUMN board.public_cate IS '0 anonymous 1 login 2 busi';

-- for dashboard new version
@@ -429,43 +431,31 @@ CREATE TABLE target (
    ident varchar(191) not null,
    note varchar(255) not null default '',
    tags varchar(512) not null default '',
    host_tags text,
    host_ip varchar(15) default '',
    agent_version varchar(255) default '',
    engine_name varchar(255) default '',
    os varchar(31) default '',
    update_at bigint not null default 0,
    PRIMARY KEY (id),
    UNIQUE (ident)
);

CREATE INDEX ON target (group_id);
CREATE INDEX idx_host_ip ON target (host_ip);
CREATE INDEX idx_agent_version ON target (agent_version);
CREATE INDEX idx_engine_name ON target (engine_name);
CREATE INDEX idx_os ON target (os);

COMMENT ON COLUMN target.group_id IS 'busi group id';
COMMENT ON COLUMN target.ident IS 'target id';
COMMENT ON COLUMN target.note IS 'append to alert event as field';
COMMENT ON COLUMN target.tags IS 'append to series data as tags, split by space, append external space at suffix';
COMMENT ON COLUMN target.host_tags IS 'global labels set in conf file';
COMMENT ON COLUMN target.host_ip IS 'IPv4 string';
COMMENT ON COLUMN target.agent_version IS 'agent version';
COMMENT ON COLUMN target.engine_name IS 'engine_name';
-- case1: target_idents; case2: target_tags
-- CREATE TABLE collect_rule (
--     id bigserial,
--     group_id bigint not null default 0 comment 'busi group id',
--     cluster varchar(128) not null,
--     target_idents varchar(512) not null default '' comment 'ident list, split by space',
--     target_tags varchar(512) not null default '' comment 'filter targets by tags, split by space',
--     name varchar(191) not null default '',
--     note varchar(255) not null default '',
--     step int not null,
--     type varchar(64) not null comment 'e.g. port proc log plugin',
--     data text not null,
--     append_tags varchar(255) not null default '' comment 'split by space: e.g. mod=n9e dept=cloud',
--     create_at bigint not null default 0,
--     create_by varchar(64) not null default '',
--     update_at bigint not null default 0,
--     update_by varchar(64) not null default '',
--     PRIMARY KEY (id),
--     KEY (group_id, type, name)
-- ) ;
COMMENT ON COLUMN target.os IS 'os type';

CREATE TABLE metric_view (
    id bigserial,
@@ -734,6 +724,7 @@ CREATE TABLE datasource
(
    id serial,
    name varchar(191) not null default '',
    identifier varchar(255) not null default '',
    description varchar(255) not null default '',
    category varchar(255) not null default '',
    plugin_id int not null default 0,
@@ -751,8 +742,8 @@ CREATE TABLE datasource
    updated_by varchar(64) not null default '',
    UNIQUE (name),
    PRIMARY KEY (id)
) ;

) ;

CREATE TABLE builtin_cate (
    id bigserial,
    name varchar(191) not null,
@@ -795,10 +786,12 @@ CREATE TABLE es_index_pattern (
    create_by varchar(64) default '',
    update_at bigint default '0',
    update_by varchar(64) default '',
    note varchar(4096) not null default '',
    PRIMARY KEY (id),
    UNIQUE (datasource_id, name)
) ;
COMMENT ON COLUMN es_index_pattern.datasource_id IS 'datasource id';
COMMENT ON COLUMN es_index_pattern.note IS 'description of metric in Chinese';

CREATE TABLE builtin_metrics (
    id bigserial,
@@ -813,6 +806,7 @@ CREATE TABLE builtin_metrics (
    created_by varchar(191) NOT NULL DEFAULT '',
    updated_at bigint NOT NULL DEFAULT 0,
    updated_by varchar(191) NOT NULL DEFAULT '',
    uuid BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (id),
    UNIQUE (lang, collector, typ, name)
);
@@ -834,6 +828,7 @@ COMMENT ON COLUMN builtin_metrics.created_at IS 'create time';
COMMENT ON COLUMN builtin_metrics.created_by IS 'creator';
COMMENT ON COLUMN builtin_metrics.updated_at IS 'update time';
COMMENT ON COLUMN builtin_metrics.updated_by IS 'updater';
COMMENT ON COLUMN builtin_metrics.uuid IS 'unique identifier';

CREATE TABLE metric_filter (
    id BIGSERIAL PRIMARY KEY,
@@ -916,3 +911,115 @@ CREATE TABLE source_token (
);

CREATE INDEX idx_source_token_type_id_token ON source_token (source_type, source_id, token);

CREATE TABLE notification_record (
    id BIGSERIAL PRIMARY KEY,
    notify_rule_id BIGINT NOT NULL DEFAULT 0,
    event_id bigint NOT NULL,
    sub_id bigint DEFAULT NULL,
    channel varchar(255) NOT NULL,
    status bigint DEFAULT NULL,
    target varchar(1024) NOT NULL,
    details varchar(2048) DEFAULT '',
    created_at bigint NOT NULL
);

CREATE INDEX idx_evt ON notification_record (event_id);

COMMENT ON COLUMN notification_record.event_id IS 'event history id';
COMMENT ON COLUMN notification_record.sub_id IS 'subscribed rule id';
COMMENT ON COLUMN notification_record.channel IS 'notification channel name';
COMMENT ON COLUMN notification_record.status IS 'notification status';
COMMENT ON COLUMN notification_record.target IS 'notification target';
COMMENT ON COLUMN notification_record.details IS 'notification other info';
COMMENT ON COLUMN notification_record.created_at IS 'create time';

CREATE TABLE target_busi_group (
    id BIGSERIAL PRIMARY KEY,
    target_ident varchar(191) NOT NULL,
    group_id bigint NOT NULL,
    update_at bigint NOT NULL
);

CREATE UNIQUE INDEX idx_target_group ON target_busi_group (target_ident, group_id);

CREATE TABLE user_token (
    id BIGSERIAL PRIMARY KEY,
    username varchar(255) NOT NULL DEFAULT '',
    token_name varchar(255) NOT NULL DEFAULT '',
    token varchar(255) NOT NULL DEFAULT '',
    create_at bigint NOT NULL DEFAULT 0,
    last_used bigint NOT NULL DEFAULT 0
);

CREATE TABLE notify_rule (
    id bigserial PRIMARY KEY,
    name varchar(255) NOT NULL,
    description text,
    enable smallint NOT NULL DEFAULT 0,
    user_group_ids varchar(255) NOT NULL DEFAULT '',
    notify_configs text,
    pipeline_configs text,
    create_at bigint NOT NULL DEFAULT 0,
    create_by varchar(64) NOT NULL DEFAULT '',
    update_at bigint NOT NULL DEFAULT 0,
    update_by varchar(64) NOT NULL DEFAULT ''
);

CREATE TABLE notify_channel (
    id bigserial PRIMARY KEY,
    name varchar(255) NOT NULL,
    ident varchar(255) NOT NULL,
    description text,
    enable smallint NOT NULL DEFAULT 0,
    param_config text,
    request_type varchar(50) NOT NULL,
    request_config text,
    weight int NOT NULL DEFAULT 0,
    create_at bigint NOT NULL DEFAULT 0,
    create_by varchar(64) NOT NULL DEFAULT '',
    update_at bigint NOT NULL DEFAULT 0,
    update_by varchar(64) NOT NULL DEFAULT ''
);

CREATE TABLE message_template (
    id bigserial PRIMARY KEY,
    name varchar(64) NOT NULL,
    ident varchar(64) NOT NULL,
    content text,
    user_group_ids varchar(64),
    notify_channel_ident varchar(64) NOT NULL DEFAULT '',
    private int NOT NULL DEFAULT 0,
    weight int NOT NULL DEFAULT 0,
    create_at bigint NOT NULL DEFAULT 0,
    create_by varchar(64) NOT NULL DEFAULT '',
    update_at bigint NOT NULL DEFAULT 0,
    update_by varchar(64) NOT NULL DEFAULT ''
);

CREATE TABLE event_pipeline (
    id bigserial PRIMARY KEY,
    name varchar(128) NOT NULL,
    team_ids text,
    description varchar(255) NOT NULL DEFAULT '',
    filter_enable smallint NOT NULL DEFAULT 0,
    label_filters text,
    attribute_filters text,
    processors text,
    create_at bigint NOT NULL DEFAULT 0,
    create_by varchar(64) NOT NULL DEFAULT '',
    update_at bigint NOT NULL DEFAULT 0,
    update_by varchar(64) NOT NULL DEFAULT ''
);

CREATE TABLE embedded_product (
    id bigserial PRIMARY KEY,
    name varchar(255) DEFAULT NULL,
    url varchar(255) DEFAULT NULL,
    is_private boolean DEFAULT NULL,
    team_ids varchar(255),
    create_at bigint NOT NULL DEFAULT 0,
    create_by varchar(64) NOT NULL DEFAULT '',
    update_at bigint NOT NULL DEFAULT 0,
    update_by varchar(64) NOT NULL DEFAULT ''
);
@@ -723,7 +723,6 @@ CREATE TABLE `builtin_metrics` (
    `updated_by` varchar(191) NOT NULL DEFAULT '' COMMENT '''updater''',
    `uuid` bigint NOT NULL DEFAULT 0 COMMENT '''uuid''',
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_collector_typ_name` (`lang`,`collector`, `typ`, `name`),
    INDEX `idx_uuid` (`uuid`),
    INDEX `idx_collector` (`collector`),
    INDEX `idx_typ` (`typ`),

@@ -13,7 +13,6 @@ CREATE TABLE `builtin_metrics` (
    `updated_at` bigint NOT NULL DEFAULT 0 COMMENT 'update time',
    `updated_by` varchar(191) NOT NULL DEFAULT '' COMMENT 'updater',
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_collector_typ_name` (`lang`,`collector`, `typ`, `name`),
    INDEX `idx_collector` (`collector`),
    INDEX `idx_typ` (`typ`),
    INDEX `idx_name` (`name`),
@@ -246,7 +245,21 @@ CREATE TABLE `event_pipeline` (
    PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

/* v8.0.0-next */
/* v8.0.0 2025-05-15 */
CREATE TABLE `embedded_product` (
    `id` bigint unsigned NOT NULL AUTO_INCREMENT,
    `name` varchar(255) DEFAULT NULL,
    `url` varchar(255) DEFAULT NULL,
    `is_private` boolean DEFAULT NULL,
    `team_ids` varchar(255),
    `create_at` bigint not null default 0,
    `create_by` varchar(64) not null default '',
    `update_at` bigint not null default 0,
    `update_by` varchar(64) not null default '',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

/* v8.0.0 2025-05-29 */
CREATE TABLE `source_token` (
    `id` bigint unsigned NOT NULL AUTO_INCREMENT,
    `source_type` varchar(64) NOT NULL DEFAULT '' COMMENT 'source type',
@@ -259,6 +272,15 @@ CREATE TABLE `source_token` (
    KEY `idx_source_type_id_token` (`source_type`, `source_id`, `token`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

/* Add translation column for builtin metrics */
ALTER TABLE `builtin_metrics` ADD COLUMN `translation` TEXT COMMENT 'translation of metric' AFTER `lang`;

/* v8.0.0-beta.12 2025-06-03 */
ALTER TABLE `alert_his_event` ADD COLUMN `notify_rule_ids` text COMMENT 'notify rule ids';
ALTER TABLE `alert_cur_event` ADD COLUMN `notify_rule_ids` text COMMENT 'notify rule ids';

/* v8.0.0-beta.13 */
-- Drop the idx_collector_typ_name unique index on the builtin_metrics table
DROP INDEX IF EXISTS `idx_collector_typ_name` ON `builtin_metrics`;

@@ -656,7 +656,6 @@ CREATE TABLE `builtin_metrics` (
    `uuid integer` not null default 0
);

CREATE UNIQUE INDEX idx_collector_typ_name ON builtin_metrics (lang, collector, typ, name);
CREATE INDEX idx_collector ON builtin_metrics (collector);
CREATE INDEX idx_typ ON builtin_metrics (typ);
CREATE INDEX idx_builtinmetric_name ON builtin_metrics (name);
@@ -8,7 +8,11 @@ import (

	"github.com/ccfos/nightingale/v6/datasource"
	_ "github.com/ccfos/nightingale/v6/datasource/ck"
	_ "github.com/ccfos/nightingale/v6/datasource/doris"
	"github.com/ccfos/nightingale/v6/datasource/es"
	_ "github.com/ccfos/nightingale/v6/datasource/mysql"
	_ "github.com/ccfos/nightingale/v6/datasource/opensearch"
	_ "github.com/ccfos/nightingale/v6/datasource/postgresql"
	"github.com/ccfos/nightingale/v6/dskit/tdengine"
	"github.com/ccfos/nightingale/v6/models"
	"github.com/ccfos/nightingale/v6/pkg/ctx"
@@ -80,8 +84,6 @@ func getDatasourcesFromDBLoop(ctx *ctx.Context, fromAPI bool) {

	if item.PluginType == "elasticsearch" {
		esN9eToDatasourceInfo(&ds, item)
	} else if item.PluginType == "opensearch" {
		osN9eToDatasourceInfo(&ds, item)
	} else if item.PluginType == "tdengine" {
		tdN9eToDatasourceInfo(&ds, item)
	} else {
@@ -142,24 +144,6 @@ func esN9eToDatasourceInfo(ds *datasource.DatasourceInfo, item models.Datasource
	ds.Settings["es.enable_write"] = item.SettingsJson["enable_write"]
}

// for opensearch
func osN9eToDatasourceInfo(ds *datasource.DatasourceInfo, item models.Datasource) {
	ds.Settings = make(map[string]interface{})
	ds.Settings["os.nodes"] = []string{item.HTTPJson.Url}
	ds.Settings["os.timeout"] = item.HTTPJson.Timeout
	ds.Settings["os.basic"] = es.BasicAuth{
		Username: item.AuthJson.BasicAuthUser,
		Password: item.AuthJson.BasicAuthPassword,
	}
	ds.Settings["os.tls"] = es.TLS{
		SkipTlsVerify: item.HTTPJson.TLS.SkipTlsVerify,
	}
	ds.Settings["os.version"] = item.SettingsJson["version"]
	ds.Settings["os.headers"] = item.HTTPJson.Headers
	ds.Settings["os.min_interval"] = item.SettingsJson["min_interval"]
	ds.Settings["os.max_shard"] = item.SettingsJson["max_shard"]
}

func PutDatasources(items []datasource.DatasourceInfo) {
	ids := make([]int64, 0)
	for _, item := range items {
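The blank (`_`) imports added in the hunk above work because each datasource package registers itself in its own `init()`, the same way the postgresql file in this diff calls `datasource.RegisterDatasource(PostgreSQLType, new(PostgreSQL))`. A minimal sketch of that self-registration pattern (the registry names here are illustrative, not the project's actual API):

```go
package main

import "fmt"

// registry maps a datasource type name to a factory, filled by init() calls.
var registry = map[string]func() string{}

// Register is what each plugin package would call from its init().
func Register(name string, factory func() string) {
	registry[name] = factory
}

// In the real project this init() lives inside the plugin package, so a
// blank import of that package is enough to make it register itself.
func init() {
	Register("pgsql", func() string { return "postgresql datasource" })
}

func main() {
	if f, ok := registry["pgsql"]; ok {
		fmt.Println(f()) // postgresql datasource
	}
}
```

This is why removing a blank import silently disables a plugin: nothing else references the package, so its `init()` never runs.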
dskit/doris/doris.go (new file, 543 lines)
@@ -0,0 +1,543 @@
|
||||
package doris
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode"
|
||||
|
||||
"github.com/ccfos/nightingale/v6/dskit/pool"
|
||||
"github.com/ccfos/nightingale/v6/dskit/types"
|
||||
|
||||
_ "github.com/go-sql-driver/mysql" // MySQL driver
|
||||
"github.com/mitchellh/mapstructure"
|
||||
)
|
||||
|
||||
// Doris struct to hold connection details and the connection object
|
||||
type Doris struct {
|
||||
Addr string `json:"doris.addr" mapstructure:"doris.addr"` // be node
|
||||
FeAddr string `json:"doris.fe_addr" mapstructure:"doris.fe_addr"` // fe node
|
||||
User string `json:"doris.user" mapstructure:"doris.user"` //
|
||||
Password string `json:"doris.password" mapstructure:"doris.password"` //
|
||||
Timeout int `json:"doris.timeout" mapstructure:"doris.timeout"`
|
||||
MaxIdleConns int `json:"doris.max_idle_conns" mapstructure:"doris.max_idle_conns"`
|
||||
MaxOpenConns int `json:"doris.max_open_conns" mapstructure:"doris.max_open_conns"`
|
||||
ConnMaxLifetime int `json:"doris.conn_max_lifetime" mapstructure:"doris.conn_max_lifetime"`
|
||||
MaxQueryRows int `json:"doris.max_query_rows" mapstructure:"doris.max_query_rows"`
|
||||
ClusterName string `json:"doris.cluster_name" mapstructure:"doris.cluster_name"`
|
||||
EnableWrite bool `json:"doris.enable_write" mapstructure:"doris.enable_write"`
|
||||
}
|
||||
|
||||
// NewDorisWithSettings initializes a new Doris instance with the given settings
|
||||
func NewDorisWithSettings(ctx context.Context, settings interface{}) (*Doris, error) {
|
||||
newest := new(Doris)
|
||||
settingsMap := map[string]interface{}{}
|
||||
if reflect.TypeOf(settings).Kind() == reflect.String {
|
||||
if err := json.Unmarshal([]byte(settings.(string)), &settingsMap); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
var assert bool
|
||||
settingsMap, assert = settings.(map[string]interface{})
|
||||
if !assert {
|
||||
return nil, errors.New("settings type invalid")
|
||||
}
|
||||
}
|
||||
if err := mapstructure.Decode(settingsMap, newest); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newest, nil
|
||||
}
|
||||
|
||||
// NewConn establishes a new connection to Doris
|
||||
func (d *Doris) NewConn(ctx context.Context, database string) (*sql.DB, error) {
|
||||
if len(d.Addr) == 0 {
|
||||
return nil, errors.New("empty fe-node addr")
|
||||
}
|
||||
|
||||
// Set default values similar to postgres implementation
|
||||
if d.Timeout == 0 {
|
||||
d.Timeout = 60
|
||||
}
|
||||
if d.MaxIdleConns == 0 {
|
||||
d.MaxIdleConns = 10
|
||||
}
|
||||
if d.MaxOpenConns == 0 {
|
||||
d.MaxOpenConns = 100
|
||||
}
|
||||
if d.ConnMaxLifetime == 0 {
|
||||
d.ConnMaxLifetime = 14400
|
||||
}
|
||||
if d.MaxQueryRows == 0 {
|
||||
d.MaxQueryRows = 500
|
||||
}
|
||||
|
||||
var keys []string
|
||||
keys = append(keys, d.Addr)
|
||||
keys = append(keys, d.Password, d.User)
|
||||
if len(database) > 0 {
|
||||
keys = append(keys, database)
|
||||
}
|
||||
cachedkey := strings.Join(keys, ":")
|
||||
// cache conn with database
|
||||
conn, ok := pool.PoolClient.Load(cachedkey)
|
||||
if ok {
|
||||
return conn.(*sql.DB), nil
|
||||
}
|
||||
var db *sql.DB
|
||||
var err error
|
||||
defer func() {
|
||||
if db != nil && err == nil {
|
||||
pool.PoolClient.Store(cachedkey, db)
|
||||
}
|
||||
}()
|
||||
|
||||
// Simplified connection logic for Doris using MySQL driver
|
||||
dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8", d.User, d.Password, d.Addr, database)
|
||||
db, err = sql.Open("mysql", dsn)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Set connection pool configuration
|
||||
db.SetMaxIdleConns(d.MaxIdleConns)
|
||||
db.SetMaxOpenConns(d.MaxOpenConns)
|
||||
db.SetConnMaxLifetime(time.Duration(d.ConnMaxLifetime) * time.Second)
|
||||
|
||||
return db, nil
|
||||
}
|
||||
|
||||
// createTimeoutContext creates a context with timeout based on Doris configuration
|
||||
func (d *Doris) createTimeoutContext(ctx context.Context) (context.Context, context.CancelFunc) {
|
||||
timeout := d.Timeout
|
||||
if timeout == 0 {
|
||||
timeout = 60
|
||||
}
|
||||
return context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
|
||||
}
|
||||
|
||||
// ShowDatabases lists all databases in Doris
|
||||
func (d *Doris) ShowDatabases(ctx context.Context) ([]string, error) {
|
||||
timeoutCtx, cancel := d.createTimeoutContext(ctx)
|
||||
defer cancel()
|
||||
|
||||
db, err := d.NewConn(timeoutCtx, "")
|
||||
if err != nil {
|
||||
return []string{}, err
|
||||
}
|
||||
|
||||
rows, err := db.QueryContext(timeoutCtx, "SHOW DATABASES")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
var databases []string
|
||||
for rows.Next() {
|
||||
var dbName string
|
||||
if err := rows.Scan(&dbName); err != nil {
|
||||
continue
|
||||
}
|
||||
databases = append(databases, dbName)
|
||||
}
|
||||
return databases, nil
|
||||
}
|
||||
|
||||
// ShowResources lists all resources with type resourceType in Doris
|
||||
func (d *Doris) ShowResources(ctx context.Context, resourceType string) ([]string, error) {
|
||||
timeoutCtx, cancel := d.createTimeoutContext(ctx)
|
||||
defer cancel()
|
||||
|
||||
db, err := d.NewConn(timeoutCtx, "")
|
||||
if err != nil {
|
||||
return []string{}, err
|
||||
}
|
||||
|
||||
// 使用 SHOW RESOURCES 命令
|
||||
query := fmt.Sprintf("SHOW RESOURCES WHERE RESOURCETYPE = '%s'", resourceType)
|
||||
rows, err := db.QueryContext(timeoutCtx, query)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to execute query: %w", err)
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
distinctName := make(map[string]struct{})
|
||||
|
||||
// 获取列信息
|
||||
columns, err := rows.Columns()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get columns: %w", err)
|
||||
}
|
||||
|
||||
// 准备接收数据的变量
|
||||
values := make([]interface{}, len(columns))
|
||||
valuePtrs := make([]interface{}, len(columns))
|
||||
for i := range values {
|
||||
valuePtrs[i] = &values[i]
|
||||
}
|
||||
|
||||
// 遍历结果集
|
||||
for rows.Next() {
|
||||
err := rows.Scan(valuePtrs...)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error scanning row: %w", err)
|
||||
}
|
||||
// 提取资源名称并添加到 map 中(自动去重)
|
||||
if name, ok := values[0].([]byte); ok {
|
||||
distinctName[string(name)] = struct{}{}
|
||||
} else if nameStr, ok := values[0].(string); ok {
|
||||
distinctName[nameStr] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
if err := rows.Err(); err != nil {
|
||||
return nil, fmt.Errorf("error iterating rows: %w", err)
|
||||
}
|
||||
|
||||
// 将 map 转换为切片
|
||||
var resources []string
|
||||
for name := range distinctName {
|
||||
resources = append(resources, name)
|
||||
}
|
||||
|
||||
return resources, nil
|
||||
}

// ShowTables lists all tables in a given database
func (d *Doris) ShowTables(ctx context.Context, database string) ([]string, error) {
	timeoutCtx, cancel := d.createTimeoutContext(ctx)
	defer cancel()

	db, err := d.NewConn(timeoutCtx, database)
	if err != nil {
		return nil, err
	}

	query := fmt.Sprintf("SHOW TABLES IN %s", database)
	rows, err := db.QueryContext(timeoutCtx, query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var tables []string
	for rows.Next() {
		var tableName string
		if err := rows.Scan(&tableName); err != nil {
			continue
		}
		tables = append(tables, tableName)
	}
	return tables, nil
}

// DescTable describes the schema of a specified table in Doris
func (d *Doris) DescTable(ctx context.Context, database, table string) ([]*types.ColumnProperty, error) {
	timeoutCtx, cancel := d.createTimeoutContext(ctx)
	defer cancel()

	db, err := d.NewConn(timeoutCtx, database)
	if err != nil {
		return nil, err
	}

	query := fmt.Sprintf("DESCRIBE %s.%s", database, table)
	rows, err := db.QueryContext(timeoutCtx, query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	// log reporting needs the column type converted into an internal type
	// TODO: handle composite types (Array/JSON/Tuple/Nested) and any other missing types
	convertDorisType := func(origin string) (string, bool) {
		lower := strings.ToLower(origin)
		switch lower {
		case "double":
			return types.LogExtractValueTypeFloat, true

		case "datetime", "date":
			return types.LogExtractValueTypeDate, false

		case "text":
			return types.LogExtractValueTypeText, true

		default:
			if strings.Contains(lower, "int") {
				return types.LogExtractValueTypeLong, true
			}
			// all date-like types are uniformly treated as .date
			if strings.HasPrefix(lower, "date") {
				return types.LogExtractValueTypeDate, false
			}
			if strings.HasPrefix(lower, "varchar") || strings.HasPrefix(lower, "char") {
				return types.LogExtractValueTypeText, true
			}
			if strings.HasPrefix(lower, "decimal") {
				return types.LogExtractValueTypeFloat, true
			}
		}

		return origin, false
	}

	var columns []*types.ColumnProperty
	for rows.Next() {
		var (
			field        string
			typ          string
			null         string
			key          string
			defaultValue sql.NullString
			extra        string
		)
		if err := rows.Scan(&field, &typ, &null, &key, &defaultValue, &extra); err != nil {
			continue
		}
		type2, indexable := convertDorisType(typ)
		columns = append(columns, &types.ColumnProperty{
			Field:     field,
			Type:      typ, // raw Doris type; Type2 carries the converted internal type
			Type2:     type2,
			Indexable: indexable,
		})
	}
	return columns, nil
}
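The type mapping inside DescTable can be exercised on its own. The sketch below mirrors the convertDorisType closure; the plain strings "float", "long", "date", and "text" are stand-ins for the types.LogExtractValueType* constants, whose exact values are an assumption here.

```go
package main

import (
	"fmt"
	"strings"
)

// dorisTypeToExtract maps a Doris column type to an internal extract type
// plus an "indexable" flag, following the same case order as convertDorisType.
func dorisTypeToExtract(origin string) (string, bool) {
	lower := strings.ToLower(origin)
	switch {
	case lower == "double":
		return "float", true
	case lower == "datetime" || lower == "date":
		return "date", false
	case lower == "text":
		return "text", true
	case strings.Contains(lower, "int"):
		return "long", true
	case strings.HasPrefix(lower, "date"): // date-like types all become .date
		return "date", false
	case strings.HasPrefix(lower, "varchar") || strings.HasPrefix(lower, "char"):
		return "text", true
	case strings.HasPrefix(lower, "decimal"):
		return "float", true
	}
	// unknown types pass through unchanged and are not indexable
	return origin, false
}

func main() {
	fmt.Println(dorisTypeToExtract("BIGINT"))       // long true
	fmt.Println(dorisTypeToExtract("varchar(255)")) // text true
}
```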

// SelectRows selects rows from a specified table in Doris based on a given query with MaxQueryRows check
func (d *Doris) SelectRows(ctx context.Context, database, table, query string) ([]map[string]interface{}, error) {
	sql := fmt.Sprintf("SELECT * FROM %s.%s", database, table)
	if query != "" {
		sql += " " + query
	}

	// check the number of rows the query would return
	err := d.CheckMaxQueryRows(ctx, database, sql)
	if err != nil {
		return nil, err
	}

	return d.ExecQuery(ctx, database, sql)
}

// ExecQuery executes a given SQL query in Doris and returns the results
func (d *Doris) ExecQuery(ctx context.Context, database string, sql string) ([]map[string]interface{}, error) {
	timeoutCtx, cancel := d.createTimeoutContext(ctx)
	defer cancel()

	db, err := d.NewConn(timeoutCtx, database)
	if err != nil {
		return nil, err
	}

	rows, err := db.QueryContext(timeoutCtx, sql)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	columns, err := rows.Columns()
	if err != nil {
		return nil, err
	}

	var results []map[string]interface{}

	for rows.Next() {
		columnValues := make([]interface{}, len(columns))
		columnPointers := make([]interface{}, len(columns))
		for i := range columnValues {
			columnPointers[i] = &columnValues[i]
		}

		if err := rows.Scan(columnPointers...); err != nil {
			continue
		}

		rowMap := make(map[string]interface{})
		for i, colName := range columns {
			val := columnValues[i]
			if bytes, ok := val.([]byte); ok {
				rowMap[colName] = string(bytes)
			} else {
				rowMap[colName] = val
			}
		}
		results = append(results, rowMap)
	}
	return results, nil
}

// ExecContext executes a given SQL statement in Doris; it returns only an error, not result rows
func (d *Doris) ExecContext(ctx context.Context, database string, sql string) error {
	timeoutCtx, cancel := d.createTimeoutContext(ctx)
	defer cancel()

	db, err := d.NewConn(timeoutCtx, database)
	if err != nil {
		return err
	}

	_, err = db.ExecContext(timeoutCtx, sql)
	return err
}

// ExecBatchSQL executes multiple SQL statements
func (d *Doris) ExecBatchSQL(ctx context.Context, database string, sqlBatch string) error {
	// split the batch into individual statements
	sqlStatements := SplitSQLStatements(sqlBatch)

	// execute the statements one by one
	for _, ql := range sqlStatements {
		// skip empty statements
		ql = strings.TrimSpace(ql)
		if ql == "" {
			continue
		}

		// check whether this is a CREATE DATABASE statement
		isCreateDB := strings.HasPrefix(strings.ToUpper(ql), "CREATE DATABASE")
		// strings.HasPrefix(strings.ToUpper(ql), "CREATE SCHEMA") // CREATE SCHEMA is not supported yet

		// for CREATE DATABASE statements, connect with an empty database name
		currentDB := database
		if isCreateDB {
			currentDB = ""
		}

		// execute the single statement; ExecContext already handles the timeout internally
		err := d.ExecContext(ctx, currentDB, ql)
		if err != nil {
			return fmt.Errorf("exec sql failed, sql:%s, err:%w", ql, err)
		}
	}

	return nil
}

// SplitSQLStatements splits a batch of SQL statements into individual statements
func SplitSQLStatements(sqlBatch string) []string {
	var statements []string
	var currentStatement strings.Builder

	// state flags
	var (
		inString           bool // inside a string literal
		inComment          bool // inside a single-line comment
		inMultilineComment bool // inside a multi-line comment
		escaped            bool // previous character was an escape character
	)

	for i := 0; i < len(sqlBatch); i++ {
		char := sqlBatch[i]
		currentStatement.WriteByte(char)

		// handle escape characters
		if inString && char == '\\' {
			escaped = !escaped
			continue
		}

		// handle string literals
		if char == '\'' && !inComment && !inMultilineComment {
			if !escaped {
				inString = !inString
			}
			escaped = false
			continue
		}

		// handle the start of a single-line comment
		if !inString && !inMultilineComment && !inComment && char == '-' && i+1 < len(sqlBatch) && sqlBatch[i+1] == '-' {
			inComment = true
			currentStatement.WriteByte(sqlBatch[i+1]) // write the second '-'
			i++
			continue
		}

		// handle the start of a multi-line comment
		if !inString && !inComment && char == '/' && i+1 < len(sqlBatch) && sqlBatch[i+1] == '*' {
			inMultilineComment = true
			currentStatement.WriteByte(sqlBatch[i+1]) // write the '*'
			i++
			continue
		}

		// handle the end of a multi-line comment
		if inMultilineComment && char == '*' && i+1 < len(sqlBatch) && sqlBatch[i+1] == '/' {
			inMultilineComment = false
			currentStatement.WriteByte(sqlBatch[i+1]) // write the '/'
			i++
			continue
		}

		// a newline ends a single-line comment
		if inComment && (char == '\n' || char == '\r') {
			inComment = false
		}

		// split at statement boundaries
		if char == ';' && !inString && !inMultilineComment && !inComment {
			// collect a trailing single-line comment after the semicolon, if any
			for j := i + 1; j < len(sqlBatch); j++ {
				nextChar := sqlBatch[j]

				// check whether a comment starts here
				if nextChar == '-' && j+1 < len(sqlBatch) && sqlBatch[j+1] == '-' {
					// found a comment; append it to the current statement
					currentStatement.WriteByte(nextChar)      // append '-'
					currentStatement.WriteByte(sqlBatch[j+1]) // append the second '-'
					j++

					// read until the end of the line
					for k := j + 1; k < len(sqlBatch); k++ {
						commentChar := sqlBatch[k]
						currentStatement.WriteByte(commentChar)
						j = k

						if commentChar == '\n' || commentChar == '\r' {
							break
						}
					}
					i = j
					break
				} else if !isWhitespace(nextChar) {
					// neither a comment nor whitespace: stop collecting
					break
				} else {
					// whitespace: append it to the current statement
					currentStatement.WriteByte(nextChar)
					i = j
				}
			}

			statements = append(statements, strings.TrimSpace(currentStatement.String()))
			currentStatement.Reset()
			continue
		}

		escaped = false
	}

	// handle a possible final statement without a trailing semicolon
	lastStatement := strings.TrimSpace(currentStatement.String())
	if lastStatement != "" {
		statements = append(statements, lastStatement)
	}

	return statements
}

// isWhitespace reports whether the byte is a whitespace character
func isWhitespace(c byte) bool {
	return unicode.IsSpace(rune(c))
}
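The core of the splitter is the state machine that keeps semicolons inside string literals from ending a statement. A minimal self-contained sketch of just that idea (strings only, no comment handling, names illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSimple splits a SQL batch on semicolons while respecting
// single-quoted string literals. It is a simplified sketch of
// SplitSQLStatements, which additionally tracks comments and escapes.
func splitSimple(batch string) []string {
	var out []string
	var cur strings.Builder
	inString := false
	for i := 0; i < len(batch); i++ {
		c := batch[i]
		if c == '\'' {
			inString = !inString // toggle string state on each quote
		}
		if c == ';' && !inString {
			// statement boundary: flush the accumulated statement
			if s := strings.TrimSpace(cur.String()); s != "" {
				out = append(out, s)
			}
			cur.Reset()
			continue
		}
		cur.WriteByte(c)
	}
	// final statement without a trailing semicolon
	if s := strings.TrimSpace(cur.String()); s != "" {
		out = append(out, s)
	}
	return out
}

func main() {
	stmts := splitSimple("SELECT 1; SELECT 'a;b'; SELECT 2")
	fmt.Println(len(stmts)) // 3
	for _, s := range stmts {
		fmt.Println(s)
	}
}
```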

dskit/doris/logs.go (new file, 36 lines)
@@ -0,0 +1,36 @@

package doris

import (
	"context"
	"sort"
)

// log-related operations
const (
	TimeseriesAggregationTimestamp = "__ts__"
)

// TODO: needs testing; verify whether MAP/ARRAY/STRUCT/JSON types can be handled
func (d *Doris) QueryLogs(ctx context.Context, query *QueryParam) ([]map[string]interface{}, error) {
	// equivalent to Query()
	return d.Query(ctx, query)
}

// QueryHistogram is essentially a time series query that takes the first series.
// The SQL is built by the caller, so no further parsing or truncation is done here.
func (d *Doris) QueryHistogram(ctx context.Context, query *QueryParam) ([][]float64, error) {
	values, err := d.QueryTimeseries(ctx, query)
	if err != nil {
		return nil, err
	}
	if len(values) > 0 && len(values[0].Values) > 0 {
		items := values[0].Values
		sort.Slice(items, func(i, j int) bool {
			if len(items[i]) > 0 && len(items[j]) > 0 {
				return items[i][0] < items[j][0]
			}
			return false
		})
		return items, nil
	}
	return [][]float64{}, nil
}
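The ordering QueryHistogram applies can be shown on a plain [][]float64 of (timestamp, value) pairs; the helper name below is illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

// sortByTimestamp orders (timestamp, value) pairs by their first element,
// the same sort.Slice comparison QueryHistogram applies to the first series.
func sortByTimestamp(items [][]float64) [][]float64 {
	sort.Slice(items, func(i, j int) bool {
		if len(items[i]) > 0 && len(items[j]) > 0 {
			return items[i][0] < items[j][0]
		}
		return false // empty pairs keep their relative position
	})
	return items
}

func main() {
	items := [][]float64{{30, 1}, {10, 2}, {20, 3}}
	fmt.Println(sortByTimestamp(items)) // [[10 2] [20 3] [30 1]]
}
```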

dskit/doris/template.md (new file, 126 lines)
@@ -0,0 +1,126 @@

## SQL variables

| Field | Meaning | Scenario |
| ---- | ---- | ---- |
|database|database name|n/a|
|table|table name||
|time_field|timestamp field||
|query|query condition|raw logs|
|from|start time||
|to|end time||
|aggregation|aggregation function|time series chart|
|field|aggregated field|time series chart|
|limit|pagination parameter|raw logs|
|offset|pagination parameter|raw logs|
|interval|histogram time granularity|histogram|

## Raw logs
### Histogram

```
# How to compute the interval value
max := 60 // at most 60 bars
interval := ($to-$from) / max
interval = interval - interval%10
if interval <= 0 {
	interval = 60
}
```
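The interval rule above can be sketched as a runnable Go function (the function name is illustrative):

```go
package main

import "fmt"

// computeInterval sizes histogram buckets: at most 60 bars across the
// time range, rounded down to a multiple of 10 seconds, with a
// 60-second floor.
func computeInterval(from, to int64) int64 {
	const maxBars = 60
	interval := (to - from) / maxBars
	interval -= interval % 10
	if interval <= 0 {
		interval = 60
	}
	return interval
}

func main() {
	fmt.Println(computeInterval(0, 3600))  // 60
	fmt.Println(computeInterval(0, 86400)) // 1440
	fmt.Println(computeInterval(0, 300))   // 60 (floor kicks in)
}
```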

```
SELECT count() as cnt,
    FLOOR(UNIX_TIMESTAMP($time_field) / $interval) * $interval AS __ts__
FROM $table
WHERE $time_field BETWEEN FROM_UNIXTIME($from) AND FROM_UNIXTIME($to)
GROUP BY __ts__;
```

```
{
    "database":"$database",
    "sql":"$sql",
    "keys": {
        "valueKey":"cnt",
        "timeKey":"__ts__"
    }
}
```

### Raw log lines

```
SELECT * from $table
WHERE $time_field BETWEEN FROM_UNIXTIME($from) AND FROM_UNIXTIME($to)
ORDER by $time_field
LIMIT $limit OFFSET $offset;
```

```
{
    "database":"$database",
    "sql":"$sql"
}
```

## Time series chart

### Log line count

```
SELECT COUNT() AS cnt, DATE_FORMAT(date, '%Y-%m-%d %H:%i:00') AS __ts__
FROM nginx_access_log
WHERE $time_field BETWEEN FROM_UNIXTIME($from) AND FROM_UNIXTIME($to)
GROUP BY __ts__
```

```
{
    "database":"$database",
    "sql":"$sql",
    "keys": {
        "valueKey":"cnt",
        "timeKey":"__ts__"
    }
}
```

### max/min/avg/sum

```
SELECT $aggregation($field) AS series, DATE_FORMAT(date, '%Y-%m-%d %H:%i:00') AS __ts__
FROM nginx_access_log
WHERE $time_field BETWEEN FROM_UNIXTIME($from) AND FROM_UNIXTIME($to)
GROUP BY __ts__
```

```
{
    "database":"$database",
    "sql":"$sql",
    "keys": {
        "valueKey":"series",
        "timeKey":"__ts__"
    }
}
```

### Percentiles

```
SELECT percentile($field, 0.95) AS series, DATE_FORMAT(date, '%Y-%m-%d %H:%i:00') AS __ts__
FROM nginx_access_log
WHERE $time_field BETWEEN FROM_UNIXTIME($from) AND FROM_UNIXTIME($to)
GROUP BY __ts__
```

```
{
    "database":"$database",
    "sql":"$sql",
    "keys": {
        "valueKey":"series",
        "timeKey":"__ts__"
    }
}
```

dskit/doris/timeseries.go (new file, 108 lines)
@@ -0,0 +1,108 @@

package doris

import (
	"context"
	"fmt"
	"strings"

	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"
)

const (
	TimeFieldFormatEpochMilli  = "epoch_millis"
	TimeFieldFormatEpochSecond = "epoch_second"
	TimeFieldFormatDateTime    = "datetime"
)

// No SQL is assembled here; the user's input is trusted as-is
type QueryParam struct {
	Database string     `json:"database"`
	Sql      string     `json:"sql"`
	Keys     types.Keys `json:"keys" mapstructure:"keys"`
}

var (
	DorisBannedOp = map[string]struct{}{
		"CREATE":   {},
		"INSERT":   {},
		"ALTER":    {},
		"REVOKE":   {},
		"DROP":     {},
		"RENAME":   {},
		"ATTACH":   {},
		"DETACH":   {},
		"OPTIMIZE": {},
		"TRUNCATE": {},
		"SET":      {},
	}
)
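The banned-op screen is a token check, not a parser: it flags any whitespace-separated keyword, so a banned verb inside a string literal is also rejected, while a keyword glued to punctuation can slip through. A self-contained sketch of the idea (names illustrative, using strings.Fields rather than a plain space split):

```go
package main

import (
	"fmt"
	"strings"
)

// banned mirrors the spirit of DorisBannedOp: write/DDL verbs the
// read-only query path rejects before execution.
var banned = map[string]struct{}{
	"CREATE": {}, "INSERT": {}, "ALTER": {}, "DROP": {}, "TRUNCATE": {},
}

// validateReadOnly returns an error if any token of the SQL matches a
// banned keyword, case-insensitively.
func validateReadOnly(sql string) error {
	for _, tok := range strings.Fields(strings.ToUpper(sql)) {
		if _, ok := banned[tok]; ok {
			return fmt.Errorf("operation %s is forbidden, only reads are allowed", tok)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateReadOnly("SELECT * FROM t") == nil) // true
	fmt.Println(validateReadOnly("drop table t") != nil)    // true
}
```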

// Query executes a given SQL query in Doris and returns the results with MaxQueryRows check
func (d *Doris) Query(ctx context.Context, query *QueryParam) ([]map[string]interface{}, error) {
	// validate the SQL and reject write requests
	sqlItem := strings.Split(strings.ToUpper(query.Sql), " ")
	for _, item := range sqlItem {
		if _, ok := DorisBannedOp[item]; ok {
			return nil, fmt.Errorf("operation %s is forbidden, only reads are allowed, please check your sql", item)
		}
	}

	// check the number of rows the query would return
	err := d.CheckMaxQueryRows(ctx, query.Database, query.Sql)
	if err != nil {
		return nil, err
	}

	rows, err := d.ExecQuery(ctx, query.Database, query.Sql)
	if err != nil {
		return nil, err
	}
	return rows, nil
}

// QueryTimeseries executes a time series data query using the given parameters with MaxQueryRows check
func (d *Doris) QueryTimeseries(ctx context.Context, query *QueryParam) ([]types.MetricValues, error) {
	// Query already performs the MaxQueryRows check internally
	rows, err := d.Query(ctx, query)
	if err != nil {
		return nil, err
	}

	return sqlbase.FormatMetricValues(query.Keys, rows), nil
}

// CheckMaxQueryRows checks if the query result exceeds the maximum allowed rows
func (d *Doris) CheckMaxQueryRows(ctx context.Context, database, sql string) error {
	timeoutCtx, cancel := d.createTimeoutContext(ctx)
	defer cancel()

	cleanedSQL := strings.ReplaceAll(sql, ";", "")
	checkQuery := fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", cleanedSQL)

	// run the count query
	results, err := d.ExecQuery(timeoutCtx, database, checkQuery)
	if err != nil {
		return err
	}

	if len(results) > 0 {
		if count, exists := results[0]["count"]; exists {
			v, err := sqlbase.ParseFloat64Value(count)
			if err != nil {
				return err
			}

			maxQueryRows := d.MaxQueryRows
			if maxQueryRows == 0 {
				maxQueryRows = 500
			}

			if v > float64(maxQueryRows) {
				return fmt.Errorf("query result rows count %d exceeds the maximum limit %d", int(v), maxQueryRows)
			}
		}
	}

	return nil
}
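The row-count guard rewrites the user query before running it: semicolons are stripped so the statement stays valid as a subquery, then it is wrapped in a COUNT(*). The string transformation alone can be sketched and checked in isolation (helper name illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildRowCountCheck wraps a user query in a COUNT(*) subquery, the same
// shape CheckMaxQueryRows sends to the database before the real query.
func buildRowCountCheck(sql string) string {
	cleaned := strings.ReplaceAll(sql, ";", "") // a trailing ';' would break the subquery
	return fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", cleaned)
}

func main() {
	fmt.Println(buildRowCountCheck("SELECT * FROM logs WHERE level = 'ERROR';"))
}
```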

dskit/mysql/mysql.go (new file, 172 lines)
@@ -0,0 +1,172 @@

// @Author: Ciusyan 5/10/24

package mysql

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/dskit/pool"
	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
	"github.com/mitchellh/mapstructure"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
)

type MySQL struct {
	Shards []Shard `json:"mysql.shards" mapstructure:"mysql.shards"`
}

type Shard struct {
	Addr            string `json:"mysql.addr" mapstructure:"mysql.addr"`
	DB              string `json:"mysql.db" mapstructure:"mysql.db"`
	User            string `json:"mysql.user" mapstructure:"mysql.user"`
	Password        string `json:"mysql.password" mapstructure:"mysql.password"`
	Timeout         int    `json:"mysql.timeout" mapstructure:"mysql.timeout"`
	MaxIdleConns    int    `json:"mysql.max_idle_conns" mapstructure:"mysql.max_idle_conns"`
	MaxOpenConns    int    `json:"mysql.max_open_conns" mapstructure:"mysql.max_open_conns"`
	ConnMaxLifetime int    `json:"mysql.conn_max_lifetime" mapstructure:"mysql.conn_max_lifetime"`
	MaxQueryRows    int    `json:"mysql.max_query_rows" mapstructure:"mysql.max_query_rows"`
}

// NewMySQLWithSettings initializes a new MySQL instance with the given settings
func NewMySQLWithSettings(ctx context.Context, settings interface{}) (*MySQL, error) {
	newest := new(MySQL)
	settingsMap := map[string]interface{}{}

	switch s := settings.(type) {
	case string:
		if err := json.Unmarshal([]byte(s), &settingsMap); err != nil {
			return nil, err
		}
	case map[string]interface{}:
		settingsMap = s
	default:
		return nil, errors.New("unsupported settings type")
	}

	if err := mapstructure.Decode(settingsMap, newest); err != nil {
		return nil, err
	}

	return newest, nil
}

// NewConn establishes a new connection to MySQL
func (m *MySQL) NewConn(ctx context.Context, database string) (*gorm.DB, error) {
	if len(m.Shards) == 0 {
		return nil, errors.New("empty mysql shards")
	}

	shard := m.Shards[0]

	if shard.Timeout == 0 {
		shard.Timeout = 300
	}
	if shard.MaxIdleConns == 0 {
		shard.MaxIdleConns = 10
	}
	if shard.MaxOpenConns == 0 {
		shard.MaxOpenConns = 100
	}
	if shard.ConnMaxLifetime == 0 {
		shard.ConnMaxLifetime = 300
	}
	if shard.MaxQueryRows == 0 {
		shard.MaxQueryRows = 100
	}

	if len(shard.Addr) == 0 {
		return nil, errors.New("empty addr")
	}

	var keys []string
	var err error
	keys = append(keys, shard.Addr, shard.Password, shard.User)
	if len(database) > 0 {
		keys = append(keys, database)
	}
	cachedKey := strings.Join(keys, ":")
	// cache conn with database
	conn, ok := pool.PoolClient.Load(cachedKey)
	if ok {
		return conn.(*gorm.DB), nil
	}
	var db *gorm.DB
	defer func() {
		if db != nil && err == nil {
			pool.PoolClient.Store(cachedKey, db)
		}
	}()

	dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8&parseTime=True", shard.User, shard.Password, shard.Addr, database)

	db, err = sqlbase.NewDB(
		ctx,
		mysql.Open(dsn),
		shard.MaxIdleConns,
		shard.MaxOpenConns,
		time.Duration(shard.ConnMaxLifetime)*time.Second,
	)
	return db, err
}

func (m *MySQL) ShowDatabases(ctx context.Context) ([]string, error) {
	db, err := m.NewConn(ctx, "")
	if err != nil {
		return nil, err
	}

	return sqlbase.ShowDatabases(ctx, db, "SHOW DATABASES")
}

func (m *MySQL) ShowTables(ctx context.Context, database string) ([]string, error) {
	db, err := m.NewConn(ctx, database)
	if err != nil {
		return nil, err
	}

	return sqlbase.ShowTables(ctx, db, "SHOW TABLES")
}

func (m *MySQL) DescTable(ctx context.Context, database, table string) ([]*types.ColumnProperty, error) {
	db, err := m.NewConn(ctx, database)
	if err != nil {
		return nil, err
	}

	query := fmt.Sprintf("DESCRIBE %s", table)
	return sqlbase.DescTable(ctx, db, query)
}

func (m *MySQL) SelectRows(ctx context.Context, database, table, query string) ([]map[string]interface{}, error) {
	db, err := m.NewConn(ctx, database)
	if err != nil {
		return nil, err
	}

	return sqlbase.SelectRows(ctx, db, table, query)
}

func (m *MySQL) ExecQuery(ctx context.Context, database string, sql string) ([]map[string]interface{}, error) {
	db, err := m.NewConn(ctx, database)
	if err != nil {
		return nil, err
	}

	return sqlbase.ExecQuery(ctx, db, sql)
}

dskit/mysql/mysql_test.go (new file, 129 lines)
@@ -0,0 +1,129 @@

// @Author: Ciusyan 5/11/24

package mysql

import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"
)

// Note: the settings keys use the snake_case names from the Shard mapstructure
// tags (mysql.max_idle_conns etc.); camelCase keys would be silently ignored.

func TestNewMySQLWithSettings(t *testing.T) {
	tests := []struct {
		name     string
		settings interface{}
		wantErr  bool
	}{
		{
			name:     "valid string settings",
			settings: `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`,
			wantErr:  false,
		},
		{
			name:     "invalid settings type",
			settings: 12345,
			wantErr:  true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := NewMySQLWithSettings(context.Background(), tt.settings)
			if (err != nil) != tt.wantErr {
				t.Errorf("NewMySQLWithSettings() error = %v, wantErr %v", err, tt.wantErr)
			}
			t.Log(got)
		})
	}
}

func TestNewConn(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	tests := []struct {
		name     string
		database string
		wantErr  bool
	}{
		{
			name:     "valid connection",
			database: "db1",
			wantErr:  false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			_, err := mysql.NewConn(ctx, tt.database)
			if (err != nil) != tt.wantErr {
				t.Errorf("NewConn() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
		})
	}
}

func TestShowDatabases(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	databases, err := mysql.ShowDatabases(ctx)
	require.NoError(t, err)
	t.Log(databases)
}

func TestShowTables(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	tables, err := mysql.ShowTables(ctx, "db1")
	require.NoError(t, err)
	t.Log(tables)
}

func TestDescTable(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	descTable, err := mysql.DescTable(ctx, "db1", "students")
	require.NoError(t, err)
	for _, desc := range descTable {
		t.Logf("%+v", *desc)
	}
}

func TestExecQuery(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	rows, err := mysql.ExecQuery(ctx, "db1", "SELECT * FROM students WHERE id = 10008")
	require.NoError(t, err)
	for _, row := range rows {
		t.Log(row)
	}
}

func TestSelectRows(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	rows, err := mysql.SelectRows(ctx, "db1", "students", "id > 10008")
	require.NoError(t, err)
	for _, row := range rows {
		t.Log(row)
	}
}

dskit/mysql/timeseries.go (new file, 74 lines)
@@ -0,0 +1,74 @@

package mysql

import (
	"context"
	"fmt"
	"strings"

	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"

	"gorm.io/gorm"
)

// Query executes a given SQL query in MySQL and returns the results
func (m *MySQL) Query(ctx context.Context, query *sqlbase.QueryParam) ([]map[string]interface{}, error) {
	db, err := m.NewConn(ctx, "")
	if err != nil {
		return nil, err
	}

	err = m.CheckMaxQueryRows(db, ctx, query)
	if err != nil {
		return nil, err
	}

	return sqlbase.Query(ctx, db, query)
}

// QueryTimeseries executes a time series data query using the given parameters
func (m *MySQL) QueryTimeseries(ctx context.Context, query *sqlbase.QueryParam) ([]types.MetricValues, error) {
	db, err := m.NewConn(ctx, "")
	if err != nil {
		return nil, err
	}

	err = m.CheckMaxQueryRows(db, ctx, query)
	if err != nil {
		return nil, err
	}

	return sqlbase.QueryTimeseries(ctx, db, query)
}

// CheckMaxQueryRows wraps the query in a COUNT(*) subquery and rejects it
// if the result would exceed the shard's MaxQueryRows limit
func (m *MySQL) CheckMaxQueryRows(db *gorm.DB, ctx context.Context, query *sqlbase.QueryParam) error {
	sql := strings.ReplaceAll(query.Sql, ";", "")
	checkQuery := &sqlbase.QueryParam{
		Sql: fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", sql),
	}

	res, err := sqlbase.Query(ctx, db, checkQuery)
	if err != nil {
		return err
	}

	if len(res) > 0 {
		if count, exists := res[0]["count"]; exists {
			v, err := sqlbase.ParseFloat64Value(count)
			if err != nil {
				return err
			}

			maxQueryRows := m.Shards[0].MaxQueryRows
			if maxQueryRows == 0 {
				maxQueryRows = 500
			}

			if v > float64(maxQueryRows) {
				return fmt.Errorf("query result rows count %d exceeds the maximum limit %d", int(v), maxQueryRows)
			}
		}
	}

	return nil
}

dskit/mysql/timeseries_test.go (new file, 62 lines)
@@ -0,0 +1,62 @@

// @Author: Ciusyan 5/11/24

package mysql

import (
	"context"
	"testing"

	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"

	"github.com/stretchr/testify/require"
)

func TestQuery(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	param := &sqlbase.QueryParam{
		Sql: "SELECT * FROM students WHERE id > 10900",
		Keys: types.Keys{
			ValueKey:   "",
			LabelKey:   "",
			TimeKey:    "",
			TimeFormat: "",
		},
	}

	rows, err := mysql.Query(ctx, param)
	require.NoError(t, err)
	for _, row := range rows {
		t.Log(row)
	}
}

func TestQueryTimeseries(t *testing.T) {
	ctx := context.Background()
	settings := `{"mysql.addr":"localhost:3306","mysql.user":"root","mysql.password":"root","mysql.max_idle_conns":5,"mysql.max_open_conns":10,"mysql.conn_max_lifetime":30}`
	mysql, err := NewMySQLWithSettings(ctx, settings)
	require.NoError(t, err)

	// Prepare a test query parameter
	param := &sqlbase.QueryParam{
		Sql: "SELECT id, grade, student_name, a_grade, update_time FROM students WHERE grade > 20000", // Modify SQL query to select specific columns
		Keys: types.Keys{
			ValueKey:   "grade a_grade",                       // Set the value key to the column name containing the metric value
			LabelKey:   "id student_name",                     // Set the label key to the column name containing the metric label
			TimeKey:    "update_time",                         // Set the time key to the column name containing the timestamp
			TimeFormat: "2006-01-02 15:04:05 +0000 UTC",       // Provide the time format according to the timestamp column's format
		},
	}

	// Execute the query and retrieve the time series data
	metricValues, err := mysql.QueryTimeseries(ctx, param)
	require.NoError(t, err)

	for _, metric := range metricValues {
		t.Log(metric)
	}
}

dskit/pool/pool.go (new file, 37 lines)
@@ -0,0 +1,37 @@

package pool

import (
	"bytes"
	"sync"
	"time"

	gc "github.com/patrickmn/go-cache"
)

var (
	PoolClient = new(sync.Map)
)

var (
	// default cache instance; do not use this if you want to specify the defaultExpiration
	DefaultCache = gc.New(time.Hour*24, time.Hour)
)

var (
	bytesPool = sync.Pool{
		New: func() interface{} { return new(bytes.Buffer) },
	}
)

func PoolGetBytesBuffer() *bytes.Buffer {
	buf := bytesPool.Get().(*bytes.Buffer)
	buf.Reset()
	return buf
}

func PoolPutBytesBuffer(buf *bytes.Buffer) {
	if buf == nil {
		return
	}
	bytesPool.Put(buf)
}
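The buffer helpers above follow the standard sync.Pool pattern: reset on Get so callers never see stale content, hand back on Put for reuse. A self-contained sketch of the same pattern (names illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool is a sync.Pool of *bytes.Buffer, mirroring bytesPool.
var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

// getBuf returns a buffer that is guaranteed to be empty.
func getBuf() *bytes.Buffer {
	b := bufPool.Get().(*bytes.Buffer)
	b.Reset() // clear any content left by a previous user
	return b
}

// putBuf returns a buffer to the pool; nil is ignored.
func putBuf(b *bytes.Buffer) {
	if b != nil {
		bufPool.Put(b)
	}
}

func main() {
	b := getBuf()
	b.WriteString("hello")
	fmt.Println(b.String()) // hello
	putBuf(b)

	// a buffer obtained after Put starts out empty again
	b2 := getBuf()
	fmt.Println(b2.Len()) // 0
	putBuf(b2)
}
```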

dskit/postgres/postgres.go (new file, 207 lines)
@@ -0,0 +1,207 @@

// @Author: Ciusyan 5/20/24

package postgres

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/dskit/pool"
	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"

	_ "github.com/lib/pq" // PostgreSQL driver
	"github.com/mitchellh/mapstructure"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type PostgreSQL struct {
	Shard `json:",inline" mapstructure:",squash"`
}

type Shard struct {
	Addr            string `json:"pgsql.addr" mapstructure:"pgsql.addr"`
	DB              string `json:"pgsql.db" mapstructure:"pgsql.db"`
	User            string `json:"pgsql.user" mapstructure:"pgsql.user"`
	Password        string `json:"pgsql.password" mapstructure:"pgsql.password"`
	Timeout         int    `json:"pgsql.timeout" mapstructure:"pgsql.timeout"`
	MaxIdleConns    int    `json:"pgsql.max_idle_conns" mapstructure:"pgsql.max_idle_conns"`
	MaxOpenConns    int    `json:"pgsql.max_open_conns" mapstructure:"pgsql.max_open_conns"`
	ConnMaxLifetime int    `json:"pgsql.conn_max_lifetime" mapstructure:"pgsql.conn_max_lifetime"`
	MaxQueryRows    int    `json:"pgsql.max_query_rows" mapstructure:"pgsql.max_query_rows"`
}

// NewPostgreSQLWithSettings initializes a new PostgreSQL instance with the given settings
func NewPostgreSQLWithSettings(ctx context.Context, settings interface{}) (*PostgreSQL, error) {
	newest := new(PostgreSQL)
	settingsMap := map[string]interface{}{}

	switch s := settings.(type) {
	case string:
		if err := json.Unmarshal([]byte(s), &settingsMap); err != nil {
			return nil, err
		}
	case map[string]interface{}:
		settingsMap = s
	case *PostgreSQL:
		return s, nil
	case PostgreSQL:
		return &s, nil
	case Shard:
		newest.Shard = s
		return newest, nil
	case *Shard:
		newest.Shard = *s
		return newest, nil
	default:
		return nil, errors.New("unsupported settings type")
	}

	if err := mapstructure.Decode(settingsMap, newest); err != nil {
		return nil, err
	}

	return newest, nil
}
||||
|
||||
// NewConn establishes a new connection to PostgreSQL
|
||||
func (p *PostgreSQL) NewConn(ctx context.Context, database string) (*gorm.DB, error) {
|
||||
if len(p.DB) == 0 && len(database) == 0 {
|
||||
return nil, errors.New("empty pgsql database") // 兼容阿里实时数仓Holgres, 连接时必须指定db名字
|
||||
}
|
||||
|
||||
if p.Shard.Timeout == 0 {
|
||||
p.Shard.Timeout = 60
|
||||
}
|
||||
|
||||
if p.Shard.MaxIdleConns == 0 {
|
||||
p.Shard.MaxIdleConns = 10
|
||||
}
|
||||
|
||||
if p.Shard.MaxOpenConns == 0 {
|
||||
p.Shard.MaxOpenConns = 100
|
||||
}
|
||||
|
||||
if p.Shard.ConnMaxLifetime == 0 {
|
||||
p.Shard.ConnMaxLifetime = 14400
|
||||
}
|
||||
|
||||
if len(p.Shard.Addr) == 0 {
|
||||
return nil, errors.New("empty fe-node addr")
|
||||
}
|
||||
var keys []string
|
||||
var err error
|
||||
keys = append(keys, p.Shard.Addr)
|
||||
|
||||
keys = append(keys, p.Shard.Password, p.Shard.User)
|
||||
if len(database) > 0 {
|
||||
keys = append(keys, database)
|
||||
}
|
||||
cachedKey := strings.Join(keys, ":")
|
||||
// cache conn with database
|
||||
conn, ok := pool.PoolClient.Load(cachedKey)
|
||||
if ok {
|
||||
return conn.(*gorm.DB), nil
|
||||
}
|
||||
|
||||
var db *gorm.DB
|
||||
defer func() {
|
||||
if db != nil && err == nil {
|
||||
pool.PoolClient.Store(cachedKey, db)
|
||||
}
|
||||
}()
|
||||
|
||||
// Simplified connection logic for PostgreSQL
|
||||
dsn := fmt.Sprintf("postgres://%s:%s@%s/%s?sslmode=disable&TimeZone=Asia/Shanghai", p.Shard.User, p.Shard.Password, p.Shard.Addr, database)
|
||||
db, err = sqlbase.NewDB(
|
||||
ctx,
|
||||
postgres.Open(dsn),
|
||||
p.Shard.MaxIdleConns,
|
||||
p.Shard.MaxOpenConns,
|
||||
time.Duration(p.Shard.ConnMaxLifetime)*time.Second,
|
||||
)
|
||||
|
||||
if err != nil {
|
||||
if db != nil {
|
||||
sqlDB, _ := db.DB()
|
||||
if sqlDB != nil {
|
||||
sqlDB.Close()
|
||||
}
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return db, nil
|
||||
}
|
||||
|
||||
// ShowDatabases lists all databases in PostgreSQL
|
||||
func (p *PostgreSQL) ShowDatabases(ctx context.Context, searchKeyword string) ([]string, error) {
|
||||
db, err := p.NewConn(ctx, "postgres")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sql := fmt.Sprintf("SELECT datname FROM pg_database WHERE datistemplate = false AND datname LIKE %s",
|
||||
"'%"+searchKeyword+"%'")
|
||||
return sqlbase.ShowDatabases(ctx, db, sql)
|
||||
}
|
||||
|
||||
// ShowTables lists all tables in a given database
|
||||
func (p *PostgreSQL) ShowTables(ctx context.Context, searchKeyword string) (map[string][]string, error) {
|
||||
db, err := p.NewConn(ctx, p.DB)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sql := fmt.Sprintf("SELECT schemaname, tablename FROM pg_tables WHERE schemaname !='information_schema' and schemaname !='pg_catalog' and tablename LIKE %s",
|
||||
"'%"+searchKeyword+"%'")
|
||||
rets, err := sqlbase.ExecQuery(ctx, db, sql)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tabs := make(map[string][]string, 3)
|
||||
for _, row := range rets {
|
||||
if val, ok := row["schemaname"].(string); ok {
|
||||
tabs[val] = append(tabs[val], row["tablename"].(string))
|
||||
}
|
||||
}
|
||||
return tabs, nil
|
||||
}
|
||||
|
||||
// DescTable describes the schema of a specified table in PostgreSQL
|
||||
// scheme default: public if not specified
|
||||
func (p *PostgreSQL) DescTable(ctx context.Context, scheme, table string) ([]*types.ColumnProperty, error) {
|
||||
db, err := p.NewConn(ctx, p.DB)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if scheme == "" {
|
||||
scheme = "public"
|
||||
}
|
||||
|
||||
query := fmt.Sprintf("SELECT column_name, data_type, is_nullable, column_default FROM information_schema.columns WHERE table_name = '%s' AND table_schema = '%s'", table, scheme)
|
||||
return sqlbase.DescTable(ctx, db, query)
|
||||
}
|
||||
|
||||
// SelectRows selects rows from a specified table in PostgreSQL based on a given query
|
||||
func (p *PostgreSQL) SelectRows(ctx context.Context, table, where string) ([]map[string]interface{}, error) {
|
||||
db, err := p.NewConn(ctx, p.DB)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return sqlbase.SelectRows(ctx, db, table, where)
|
||||
}
|
||||
|
||||
// ExecQuery executes a SQL query in PostgreSQL
|
||||
func (p *PostgreSQL) ExecQuery(ctx context.Context, sql string) ([]map[string]interface{}, error) {
|
||||
db, err := p.NewConn(ctx, p.DB)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return sqlbase.ExecQuery(ctx, db, sql)
|
||||
}
|
||||
73
dskit/postgres/timeseries.go
Normal file
@@ -0,0 +1,73 @@
package postgres

import (
	"context"
	"fmt"
	"strings"

	"github.com/ccfos/nightingale/v6/dskit/sqlbase"
	"github.com/ccfos/nightingale/v6/dskit/types"
	"gorm.io/gorm"
)

// Query executes a given SQL query in PostgreSQL and returns the results
func (p *PostgreSQL) Query(ctx context.Context, query *sqlbase.QueryParam) ([]map[string]interface{}, error) {
	db, err := p.NewConn(ctx, p.Shard.DB)
	if err != nil {
		return nil, err
	}

	err = p.CheckMaxQueryRows(db, ctx, query)
	if err != nil {
		return nil, err
	}

	return sqlbase.Query(ctx, db, query)
}

// QueryTimeseries executes a time series data query using the given parameters
func (p *PostgreSQL) QueryTimeseries(ctx context.Context, query *sqlbase.QueryParam) ([]types.MetricValues, error) {
	db, err := p.NewConn(ctx, p.Shard.DB)
	if err != nil {
		return nil, err
	}

	err = p.CheckMaxQueryRows(db, ctx, query)
	if err != nil {
		return nil, err
	}

	return sqlbase.QueryTimeseries(ctx, db, query, true)
}

// CheckMaxQueryRows wraps the query in a COUNT(*) subquery and rejects it
// when the result set would exceed the configured row limit (default 500)
func (p *PostgreSQL) CheckMaxQueryRows(db *gorm.DB, ctx context.Context, query *sqlbase.QueryParam) error {
	sql := strings.ReplaceAll(query.Sql, ";", "")
	checkQuery := &sqlbase.QueryParam{
		Sql: fmt.Sprintf("SELECT COUNT(*) as count FROM (%s) AS subquery;", sql),
	}

	res, err := sqlbase.Query(ctx, db, checkQuery)
	if err != nil {
		return err
	}

	if len(res) > 0 {
		if count, exists := res[0]["count"]; exists {
			v, err := sqlbase.ParseFloat64Value(count)
			if err != nil {
				return err
			}

			maxQueryRows := p.Shard.MaxQueryRows
			if maxQueryRows == 0 {
				maxQueryRows = 500
			}

			if v > float64(maxQueryRows) {
				return fmt.Errorf("query result rows count %d exceeds the maximum limit %d", int(v), maxQueryRows)
			}
		}
	}

	return nil
}
@@ -9,9 +9,9 @@ import (
	"strings"
	"time"

	"github.com/ccfos/nightingale/v6/dskit/types"

	"gorm.io/gorm"

	"github.com/ccfos/nightingale/v6/dskit/types"
)

// NewDB creates a new Gorm DB instance based on the provided gorm.Dialector and configures the connection pool
@@ -19,7 +19,7 @@ func NewDB(ctx context.Context, dialector gorm.Dialector, maxIdleConns, maxOpenC
	// Create a new Gorm DB instance
	db, err := gorm.Open(dialector, &gorm.Config{})
	if err != nil {
		return nil, err
		return db, err
	}

	// Configure the connection pool
@@ -35,6 +35,17 @@ func NewDB(ctx context.Context, dialector gorm.Dialector, maxIdleConns, maxOpenC
	return db.WithContext(ctx), sqlDB.Ping()
}

func CloseDB(db *gorm.DB) error {
	if db != nil {
		sqlDb, err := db.DB()
		if err != nil {
			return err
		}
		return sqlDb.Close()
	}
	return nil
}

// ShowTables retrieves a list of all tables in the specified database
func ShowTables(ctx context.Context, db *gorm.DB, query string) ([]string, error) {
	var tables []string
@@ -112,7 +123,7 @@ func DescTable(ctx context.Context, db *gorm.DB, query string) ([]*types.ColumnP
	}

	// Convert the database-specific type to internal type
	type2, indexable := convertDBType(db.Dialector.Name(), typ)
	type2, indexable := ConvertDBType(db.Dialector.Name(), typ)
	columns = append(columns, &types.ColumnProperty{
		Field: field,
		Type:  typ,
@@ -175,7 +186,7 @@ func SelectRows(ctx context.Context, db *gorm.DB, table, query string) ([]map[st
}

// convertDBType converts MySQL or PostgreSQL data types to custom internal types and determines if they are indexable
func convertDBType(dialect, dbType string) (string, bool) {
func ConvertDBType(dialect, dbType string) (string, bool) {
	typ := strings.ToLower(dbType)

	// Common type conversions
@@ -190,7 +201,7 @@ func convertDBType(dialect, dbType string) (string, bool) {
		strings.HasPrefix(typ, "char"), strings.HasPrefix(typ, "tinytext"),
		strings.HasPrefix(typ, "mediumtext"), strings.HasPrefix(typ, "longtext"),
		strings.HasPrefix(typ, "character varying"), strings.HasPrefix(typ, "nvarchar"),
		strings.HasPrefix(typ, "nchar"):
		strings.HasPrefix(typ, "nchar"), strings.HasPrefix(typ, "bpchar"):
		return types.LogExtractValueTypeText, true

	case strings.HasPrefix(typ, "float"), strings.HasPrefix(typ, "double"),
@@ -203,7 +214,7 @@ func convertDBType(dialect, dbType string) (string, bool) {
		strings.HasPrefix(typ, "time"), strings.HasPrefix(typ, "smalldatetime"):
		return types.LogExtractValueTypeDate, false

	case strings.HasPrefix(typ, "boolean"), strings.HasPrefix(typ, "bit"):
	case strings.HasPrefix(typ, "boolean"), strings.HasPrefix(typ, "bit"), strings.HasPrefix(typ, "bool"):
		return types.LogExtractValueTypeBool, false
	}
@@ -7,6 +7,7 @@ import (
	"crypto/md5"
	"encoding/json"
	"fmt"
	"math"
	"reflect"
	"sort"
	"strconv"
@@ -112,7 +113,8 @@ func FormatMetricValues(keys types.Keys, rows []map[string]interface{}, ignoreDe
		metricTs[k] = float64(ts.Unix())
	default:
		// Default to labels for any unrecognized columns
		if !ignore {
		if !ignore && keys.LabelKey == "" {
			// Only treat the remaining columns as labels when LabelKey is empty
			labels[k] = fmt.Sprintf("%v", v)
		}
	}
@@ -120,6 +122,11 @@ func FormatMetricValues(keys types.Keys, rows []map[string]interface{}, ignoreDe

	// Compile and store the metric values
	for metricName, value := range metricValue {
		// NaN cannot be marshaled by json.Marshal(), so the API would return an error
		if math.IsNaN(value) {
			continue
		}

		metrics := make(model.Metric)
		var labelsStr []string
@@ -68,6 +68,9 @@ Enable = false
HeaderUserNameKey = "X-User-Name"
DefaultRoles = ["Standard"]

[HTTP.TokenAuth]
Enable = true

[HTTP.RSA]
# open RSA
OpenRSA = false
5
go.mod
@@ -27,11 +27,14 @@ require (
	github.com/jinzhu/copier v0.4.0
	github.com/json-iterator/go v1.1.12
	github.com/koding/multiconfig v0.0.0-20171124222453-69c27309b2d7
	github.com/lib/pq v1.10.9
	github.com/mailru/easyjson v0.7.7
	github.com/mattn/go-isatty v0.0.19
	github.com/mitchellh/mapstructure v1.5.0
	github.com/mojocn/base64Captcha v1.3.6
	github.com/olivere/elastic/v7 v7.0.32
	github.com/opensearch-project/opensearch-go/v2 v2.3.0
	github.com/patrickmn/go-cache v2.1.0+incompatible
	github.com/pelletier/go-toml/v2 v2.0.8
	github.com/pkg/errors v0.9.1
	github.com/prometheus/client_golang v1.20.5
@@ -120,7 +123,7 @@ require (
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/go-playground/validator/v10 v10.14.0 // indirect
	github.com/go-sql-driver/mysql v1.6.0 // indirect
	github.com/go-sql-driver/mysql v1.6.0
	github.com/goccy/go-json v0.10.2 // indirect
	github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 // indirect
	github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
26
go.sum
@@ -31,8 +31,21 @@ github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de h1:FxWPpzIjnTlhPwqqXc4/vE0f7GvRjuAsbW+HOIe8KnA=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de/go.mod h1:DCaWoUhZrYW9p1lxo/cm8EmUOOzAPSEZNGF2DK1dJgw=
github.com/aws/aws-sdk-go v1.44.263/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go v1.44.302 h1:ST3ko6GrJKn3Xi+nAvxjG3uk/V1pW8KC52WLeIxqqNk=
github.com/aws/aws-sdk-go v1.44.302/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go-v2 v1.18.0/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw=
github.com/aws/aws-sdk-go-v2/config v1.18.25/go.mod h1:dZnYpD5wTW/dQF0rRNLVypB396zWCcPiBIvdvSWHEg4=
github.com/aws/aws-sdk-go-v2/credentials v1.13.24/go.mod h1:jYPYi99wUOPIFi0rhiOvXeSEReVOzBqFNOX5bXYoG2o=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.3/go.mod h1:4Q0UFP0YJf0NrsEuEYHpM9fTSEVnD16Z3uyEF7J9JGM=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.33/go.mod h1:7i0PF1ME/2eUPFcjkVIwq+DOygHEoK92t5cDqNgYbIw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.27/go.mod h1:UrHnn3QV/d0pBZ6QBAEQcqFLf8FAzLmoUfPVIueOvoM=
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.34/go.mod h1:Etz2dj6UHYuw+Xw830KfzCfWGMzqvUTCjUj5b76GVDc=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.27/go.mod h1:EOwBD4J4S5qYszS5/3DpkejfuK+Z5/1uzICfPaZLtqw=
github.com/aws/aws-sdk-go-v2/service/sso v1.12.10/go.mod h1:ouy2P4z6sJN70fR3ka3wD3Ro3KezSxU6eKGQI2+2fjI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.10/go.mod h1:AFvkxc8xfBe8XA+5St5XIHHrQQtkxqrRincx4hmMHOk=
github.com/aws/aws-sdk-go-v2/service/sts v1.19.0/go.mod h1:BgQOMsg8av8jset59jelyPW7NoZcZXLVpDsXunGDrk8=
github.com/aws/smithy-go v1.13.5/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bitly/go-simplejson v0.5.1 h1:xgwPbetQScXt1gh9BmoJ6j9JMr3TElvuIyjR8pgdoow=
@@ -139,6 +152,7 @@ github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -189,6 +203,7 @@ github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
@@ -220,6 +235,8 @@ github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
@@ -246,6 +263,10 @@ github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olivere/elastic/v7 v7.0.32 h1:R7CXvbu8Eq+WlsLgxmKVKPox0oOwAE/2T9Si5BnvK6E=
github.com/olivere/elastic/v7 v7.0.32/go.mod h1:c7PVmLe3Fxq77PIfY/bZmxY/TAamBhCzZ8xDOE09a9k=
github.com/opensearch-project/opensearch-go/v2 v2.3.0 h1:nQIEMr+A92CkhHrZgUhcfsrZjibvB3APXf2a1VwCmMQ=
github.com/opensearch-project/opensearch-go/v2 v2.3.0/go.mod h1:8LDr9FCgUTVoT+5ESjc2+iaZuldqE+23Iq0r1XeNue8=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/paulmach/orb v0.11.1 h1:3koVegMC4X/WeiXYz9iswopaTwMem53NzTJuTF20JzU=
github.com/paulmach/orb v0.11.1/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
@@ -388,6 +409,7 @@ golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
@@ -415,6 +437,7 @@ golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -424,6 +447,7 @@ golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
@@ -432,6 +456,7 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
@@ -466,6 +491,7 @@ gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df/go.mod h1:LRQQ+SO6ZHR7tOkp
gopkg.in/square/go-jose.v2 v2.6.0 h1:NGk74WTnPKBNUhNzQX7PYcTLUjoq7mzKk2OKbvwk2iI=
gopkg.in/square/go-jose.v2 v2.6.0/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
File diff suppressed because it is too large
@@ -12,7 +12,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse HTTP 连接数",
"note": "通过HTTP协议连接到ClickHouse服务器的客户端数量。"
},
{
"lang": "en_US",
"name": "ClickHouse HTTP Connections",
"note": "The number of clients connected to the ClickHouse server via the HTTP protocol."
}
]
},
{
"id": 0,
@@ -27,7 +39,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse INSERT查询平均时间",
"note": "插入查询执行的平均时间(微秒)。"
},
{
"lang": "en_US",
"name": "ClickHouse INSERT query average time",
"note": "The average time in microseconds for the insertion query to execute."
}
]
},
{
"id": 0,
@@ -42,7 +66,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse SELECT 查询数",
"note": "执行的选择(SELECT)查询的数量"
},
{
"lang": "en_US",
"name": "ClickHouse SELECT Query Number",
"note": "Number of SELECT queries executed"
}
]
},
{
"id": 0,
@@ -57,7 +93,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse SELECT查询平均时间",
"note": "选择查询执行的平均时间(微秒)。"
},
{
"lang": "en_US",
"name": "ClickHouse SELECT query average time",
"note": "Select the average time (microseconds) for query execution."
}
]
},
{
"id": 0,
@@ -72,7 +120,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse TCP 连接数",
"note": "通过TCP协议连接到ClickHouse服务器的客户端数量。"
},
{
"lang": "en_US",
"name": "ClickHouse TCP Connections",
"note": "The number of clients connected to the ClickHouse server via the TCP protocol."
}
]
},
{
"id": 0,
@@ -87,7 +147,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 临时数据量",
"note": "临时数据部分的数量,这些部分当前正在生成。"
},
{
"lang": "en_US",
"name": "ClickHouse Temporary Data Volume",
"note": "The number of temporary data sections that are currently being generated."
}
]
},
{
"id": 0,
@@ -102,7 +174,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 分布式表连接数",
"note": "发送到分布式表的远程服务器的数据连接数。"
},
{
"lang": "en_US",
"name": "ClickHouse Distributed Table Joins",
"note": "The number of data connections sent to the remote server of the distributed table."
}
]
},
{
"id": 0,
@@ -117,7 +201,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 宽数据量",
"note": "宽数据部分的数量。"
},
{
"lang": "en_US",
"name": "ClickHouse wide data volume",
"note": "Number of wide data sections."
}
]
},
{
"id": 0,
@@ -132,7 +228,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 待插入分布式表文件数",
"note": "等待异步插入到分布式表的文件数量。"
},
{
"lang": "en_US",
"name": "ClickHouse Number of distributed table files to be inserted",
"note": "The number of files waiting to be inserted asynchronously into the distributed table."
}
]
},
{
"id": 0,
@@ -147,7 +255,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 提交前数据量",
"note": "提交前的数据部分数量,这些部分在data_parts列表中,但不用于SELECT查询。"
},
{
"lang": "en_US",
"name": "Data volume before ClickHouse submission",
"note": "The number of data parts before submission, which are in the data _ parts list, but are not used for SELECT queries."
}
]
},
{
"id": 0,
@@ -162,7 +282,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 提交后数据量",
"note": "提交后的数据部分数量,这些部分在data_parts列表中,并且用于SELECT查询。"
},
{
"lang": "en_US",
"name": "Data volume after ClickHouse submission",
"note": "The number of submitted data parts, which are in the data _ parts list and used for SELECT queries."
}
]
},
{
"id": 0,
@@ -177,7 +309,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 插入未压缩",
"note": " 插入操作写入的未压缩字节数。"
},
{
"lang": "en_US",
"name": "ClickHouse Insert Uncompressed",
"note": "The number of uncompressed bytes written by the insert operation."
}
]
},
{
"id": 0,
@@ -192,7 +336,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 插入行数",
"note": ""
},
{
"lang": "en_US",
"name": "Number of ClickHouse inserted rows",
"note": ""
}
]
},
{
"id": 0,
@@ -207,7 +363,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 查询优先级",
"note": "由于优先级设置,被停止并等待的查询数量。\n"
},
{
"lang": "en_US",
"name": "ClickHouse Query Priority",
"note": "The number of queries that were stopped and waiting due to the priority setting. \n"
}
]
},
{
"id": 0,
@@ -222,7 +390,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 查询总数",
"note": "ClickHouse执行的查询总数。"
},
{
"lang": "en_US",
"name": "Total ClickHouse Queries",
"note": "The total number of queries executed by ClickHouse."
}
]
},
{
"id": 0,
@@ -237,7 +417,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 查询总时间",
"note": "查询执行的总时间(微秒)。"
},
{
"lang": "en_US",
"name": "Total ClickHouse query time",
"note": "The total time in microseconds for the query to execute."
}
]
},
{
"id": 0,
@@ -252,7 +444,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 正被删除数据量",
"note": "正在被删除的数据部分数量。"
},
{
"lang": "en_US",
"name": "ClickHouse Amount of Data being Deleted",
"note": "The number of data parts being deleted."
}
]
},
{
"id": 0,
@@ -267,7 +471,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 移动池活动任务数",
"note": "后台移动池中的活动任务数,用于处理数据移动。"
},
{
"lang": "en_US",
"name": "Number of active tasks in ClickHouse mobile pool",
"note": "The number of active tasks in the background move pool, used to handle data moves."
}
]
},
{
"id": 0,
@@ -282,7 +498,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "ClickHouse 紧凑数据量",
"note": "紧凑数据部分的数量。"
},
{
"lang": "en_US",
"name": "ClickHouse Compact Data Volume",
"note": "Number of compact data sections."
}
]
},
{
"id": 0,
@@ -297,7 +525,19 @@
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse 缓冲区活动任务数",
|
||||
"note": "后台缓冲区冲洗调度池中的活动任务数,用于定期缓冲区冲洗。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Number of active tasks in ClickHouse buffer",
|
||||
"note": "The number of active tasks in the background buffer flushing scheduling pool for periodic buffer flushing."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -312,7 +552,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse 跨磁盘量",
|
||||
"note": "移动到另一个磁盘并应在析构函数中删除的数据部分数量。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "ClickHouse cross-disk volume",
|
||||
"note": "The number of portions of data that are moved to another disk and should be deleted in the destructor."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -327,7 +579,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse 过时数据量",
|
||||
"note": " 过时的数据部分数量,这些部分不是活动数据部分,但当前SELECT查询可能使用它们。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "ClickHouse Obsolete Data Volume",
|
||||
"note": "The number of obsolete data parts that are not active data parts, but may be used by the current SELECT query."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -342,7 +606,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse中内存使用情况",
|
||||
"note": "ClickHouse服务器使用的总内存量。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Memory usage in ClickHouse",
|
||||
"note": "The total amount of memory used by the ClickHouse server."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -357,7 +633,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse中数据库数量",
|
||||
"note": "ClickHouse数据库数量"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Number of databases in ClickHouse",
|
||||
"note": "Number of ClickHouse databases"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -372,7 +660,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse中表的数量",
|
||||
"note": "ClickHouse表数量"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Number of tables in ClickHouse",
|
||||
"note": "Number of ClickHouse tables"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -387,7 +687,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse修订",
|
||||
"note": "ClickHouse服务器的修订号,通常是一个用于标识特定构建的数字。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "ClickHouse Revision",
|
||||
"note": "The revision number of the ClickHouse server, usually a number used to identify a specific build."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -402,7 +714,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse服务器运行时间",
|
||||
"note": "ClickHouse服务器自启动以来的运行时间。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "ClickHouse server runtime",
|
||||
"note": "The running time of the ClickHouse server since it started."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -417,6 +741,18 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "ClickHouse版本号",
|
||||
"note": "ClickHouse服务器的版本号,以整数形式表示。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "ClickHouse version number",
|
||||
"note": "Version number of the ClickHouse server, expressed as an integer."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
File diff suppressed because it is too large
@@ -12,7 +12,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health delayed unassigned 的分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Cluster Health delayed unassigned shards",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -27,7 +39,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health Pending task 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Cluster Health Pending tasks quantity",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -42,7 +66,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health relocating 的分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of shards for Cluster Health relocating",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -57,7 +93,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health unassigned 的分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Cluster Health unassigned number of shards",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -72,7 +120,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 健康度状态码",
+        "note": "- 1:Green,绿色状态,表示所有分片都正常\n- 2:Yellow,黄色状态,主分片都正常,从分片有不正常的\n- 3:Red,红色状态,有些主分片不正常"
+      },
+      {
+        "lang": "en_US",
+        "name": "Cluster Health health status code",
+        "note": "-1: Green, Green state, indicating that all shards are normal \n-2: Yellow, Yellow state, the main shard is normal, the slave shard is abnormal \n-3: Red, Red state, some main shards are abnormal"
+      }
+    ]
   },
   {
     "id": 0,
@@ -87,7 +147,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 数据节点数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Cluster Health data nodes",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -102,7 +174,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 正在初始化的分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of shards being initialized by Cluster Health",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -117,7 +201,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 活跃主分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Cluster Health Number of active primary shards",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -132,7 +228,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 活跃分片数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Cluster Health Active Shards",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -147,7 +255,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Cluster Health 节点数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Cluster Health nodes",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -162,7 +282,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Indexing 平均耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Indexing average time consumption",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -177,7 +309,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Merge 平均耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Average time consumed by Merge",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -192,7 +336,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Query 平均耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Query average time consumption",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -207,7 +363,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "每秒 indexing 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "indexing per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -222,7 +390,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "每秒 merge 大小",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "merge size per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -237,7 +417,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "每秒 merge 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of merges per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -252,7 +444,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "每秒删除 doc 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of docs deleted per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -267,7 +471,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "硬盘使用率",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Hard Drive Usage",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -282,7 +498,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "网络流量 - 入向每秒流量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Network traffic-inbound traffic per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -297,7 +525,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "网络流量 - 出向每秒流量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Network traffic-outbound traffic per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -312,7 +552,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 CPU 使用率",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process CPU usage",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -327,7 +579,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 JVM Heap 使用率",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process JVM Heap Usage",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -342,7 +606,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 JVM Heap 区 committed 大小",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process JVM Heap area committed size",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -357,7 +633,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 JVM Non Heap 区 committed 大小",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process JVM Non Heap area committed size",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -372,7 +660,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 JVM Old 内存池 used 大小",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process JVM Old memory pool used size",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -387,7 +687,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程 JVM Young 内存池 used 大小",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process JVM Young memory pool used size",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -402,7 +714,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程新生代每秒 GC 次数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of GCs per second for the new generation of the process",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -417,7 +741,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程新生代每秒 GC 耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process new generation time per second GC",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -432,7 +768,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程老生代每秒 GC 次数",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of GCs per second of process old generation",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -447,6 +795,18 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "进程老生代每秒 GC 耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Process old generation GC time per second",
+        "note": ""
+      }
+    ]
   }
 ]
@@ -12,7 +12,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "HTTP 探测响应码",
+        "note": "如果没有拿到 response,这个指标就没有值了"
+      },
+      {
+        "lang": "en_US",
+        "name": "HTTP probe response code",
+        "note": "If you don't get response, this indicator has no value"
+      }
+    ]
   },
   {
     "id": 0,
@@ -27,7 +39,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "HTTP 探测结果状态码",
+        "note": "0 值表示正常,大于 0 就是异常,各个值的含义如下:\n\n```\nSuccess = 0\nConnectionFailed = 1\nTimeout = 2\nDNSError = 3\nAddressError = 4\nBodyMismatch = 5\nCodeMismatch = 6\n```"
+      },
+      {
+        "lang": "en_US",
+        "name": "HTTP probe result status code",
+        "note": "A value of 0 means normal, and a value greater than 0 means abnormal. The meanings of each value are as follows: \n \n``` \nSuccess = 0 \nConnectionFailed = 1 \nTimeout = 2 \nDNSError = 3 \nAddressError = 4 \nBodyMismatch = 5 \nCodeMismatch = 6 \n```"
+      }
+    ]
   },
   {
     "id": 0,
@@ -42,7 +66,19 @@
     "created_at": 0,
     "created_by": "",
    "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "HTTP 探测耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "HTTP probe time-consuming",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -57,7 +93,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "HTTP 证书过期时间",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "HTTP certificate expiration time",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -72,7 +120,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - DNS 请求耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-DNS request time-consuming",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -87,7 +147,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - TCP建连耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-TCP connection establishment time",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -102,7 +174,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - TLS握手耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-TLS handshake time-consuming",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -117,7 +201,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - 探测结果状态码",
+        "note": "探测结果,0 是正常,其他数字有不同含义\n- 0:成功\n- 1:连接失败\n- 2:监测超时\n- 3:DNS解析失败\n- 4:地址格式错误\n- 5:返回内容不匹配\n- 6:返回码不匹配\n- 其他数字为未知错误"
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-detection result status code",
+        "note": "Detection result, 0 is normal, other numbers have different meanings \n-0: Success \n-1: Connection failed \n-2: Monitoring timeout \n-3: DNS resolution failed \n-4: Address format is wrong \n-5: Return content does not match \n-6: Return code mismatch \n-Other numbers are unknown error"
+      }
+    ]
   },
   {
     "id": 0,
@@ -132,7 +228,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - 整体耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-overall time-consuming",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -147,7 +255,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - 返回状态码",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-Return status code",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -162,6 +282,18 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "拨测 - 首包耗时",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Dial test-first package time-consuming",
+        "note": ""
+      }
+    ]
   }
 ]
integrations/Java/dashboards/jvm_by_opentelementry.json (new file, 1620 lines)
File diff suppressed because it is too large
(image changed: 2.5 KiB before, 2.5 KiB after)
@@ -12,7 +12,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Broker 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Brokers",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -27,7 +39,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Partition 副本不同步的数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of out-of-sync copies of Partition",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -42,7 +66,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "Partition 副本数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Partition copies",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -57,7 +93,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "各个 Topic 每秒消费消息量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Each Topic consumes messages per second",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -72,7 +120,19 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "各个 Topic 每秒生产消息量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Production message volume per second per Topic",
+        "note": ""
+      }
+    ]
   },
   {
     "id": 0,
@@ -87,6 +147,18 @@
     "created_at": 0,
     "created_by": "",
     "updated_at": 0,
-    "updated_by": ""
+    "updated_by": "",
+    "translation": [
+      {
+        "lang": "zh_CN",
+        "name": "各个 Topic 的 Partition 数量",
+        "note": ""
+      },
+      {
+        "lang": "en_US",
+        "name": "Number of Partitions for each Topic",
+        "note": ""
+      }
+    ]
   }
 ]
File diff suppressed because it is too large
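Every hunk in these metric files applies the same mechanical change: the legacy top-level zh_CN `name`/`note` fields are kept, and a `translation` array carrying both locales is appended. A minimal sketch of that transformation (the helper name `add_translation` is illustrative and not part of the repo; the English strings would come from a translator):

```python
import json

def add_translation(entry, en_name, en_note=""):
    """Attach a bilingual translation block to a legacy metric entry.

    The original top-level zh_CN name/note are kept untouched; the
    translation array mirrors them and adds the English variant,
    matching the shape added throughout these diffs.
    """
    entry["translation"] = [
        {"lang": "zh_CN", "name": entry["name"], "note": entry.get("note", "")},
        {"lang": "en_US", "name": en_name, "note": en_note},
    ]
    return entry

metric = {
    "id": 0,
    "name": "ClickHouse 查询总数",
    "note": "ClickHouse执行的查询总数。",
    "lang": "zh_CN",
    "updated_by": "",
}
add_translation(metric, "Total ClickHouse Queries",
                "The total number of queries executed by ClickHouse.")
print(json.dumps(metric, ensure_ascii=False, indent=2))
```

Running such a script over each `metrics.json` and emitting the result with `ensure_ascii=False` reproduces the multi-line JSON blocks seen in the hunks above.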
@@ -1,282 +1,618 @@
|
||||
[
|
||||
{
|
||||
"uuid": 1745893024149445000,
|
||||
"collector": "Pod",
|
||||
"typ": "Kubernetes",
|
||||
"name": "Inode数量",
|
||||
"unit": "",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
|
||||
"lang": "zh_CN",
|
||||
"expression": "sum(container_fs_inodes_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)"
|
||||
"uuid": 1745893024149445000,
|
||||
"collector": "Pod",
|
||||
"typ": "Kubernetes",
|
||||
"name": "Inode数量",
|
||||
"unit": "",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
|
||||
"lang": "zh_CN",
|
||||
"expression": "sum(container_fs_inodes_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "Inode数量",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\","
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Number of Inodes",
|
||||
"note": "Pod's own indicators \nType: pod = ~ \"$pod _ name\","
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"uuid": 1745893024121015300,
|
||||
"collector": "Pod",
|
||||
"typ": "Kubernetes",
|
||||
"name": "不可中断任务数量",
|
||||
"unit": "",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
|
||||
"lang": "zh_CN",
|
||||
"expression": "sum(container_tasks_state{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\", state=\"uninterruptible\"}) by (name)"
|
||||
"uuid": 1745893024121015300,
|
||||
"collector": "Pod",
|
||||
"typ": "Kubernetes",
|
||||
"name": "不可中断任务数量",
|
||||
"unit": "",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
|
||||
"lang": "zh_CN",
|
||||
"expression": "sum(container_tasks_state{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\", state=\"uninterruptible\"}) by (name)",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "不可中断任务数量",
|
||||
"note": "Pod自身指标\n类型: pod=~\"$pod_name\","
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Number of uninterruptible tasks",
|
||||
"note": "Pod's own indicators \nType: pod = ~ \"$pod _ name\","
|
||||
}
|
||||
]
|
||||
},
|
||||
    {
        "uuid": 1745893024130551800,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器cache使用",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "(sum(container_memory_cache{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name))",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器cache使用",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container cache use",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024108569900,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器CPU Limit",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}/container_spec_cpu_period{namespace=\"$namespace\",",
        "lang": "zh_CN",
        "expression": "(sum(container_spec_cpu_quota{namespace=\"$namespace\", pod=~\"$pod_name\"}/container_spec_cpu_period{namespace=\"$namespace\", pod=~\"$pod_name\"}) by (name))",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器CPU Limit",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}/container_spec_cpu_period{namespace=\"$namespace\","
            },
            {
                "lang": "en_US",
                "name": "Container CPU Limit",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}/container_spec_cpu_period{namespace=\"$namespace\","
            }
        ]
    },
    {
        "uuid": 1745893024112672500,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器CPU load 10",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(container_cpu_load_average_10s{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器CPU load 10",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container CPU load 10",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024026246700,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器CPU使用率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])*100) by(name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器CPU使用率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container CPU usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024029544000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器CPU归一化后使用率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])*100) by(name)/((sum(container_spec_cpu_quota{namespace=\"$namespace\", pod=~\"$pod_name\"}/container_spec_cpu_period{namespace=\"$namespace\", pod=~\"$pod_name\"}) by (name)))",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器CPU归一化后使用率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container CPU usage after normalization",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024146207700,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器I/O",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(container_fs_io_current{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器I/O",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container I/O",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024136457000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器RSS内存使用",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "(sum(container_memory_rss{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name))",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器RSS内存使用",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container RSS memory usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024139900200,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器内存 Limit",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(container_spec_memory_limit_bytes{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器内存 Limit",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container Memory Limit",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024032984300,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器内存使用",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "(sum(container_memory_usage_bytes{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name))",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器内存使用",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container memory usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024127585500,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器内存使用率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "((sum(container_memory_usage_bytes{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)) /(sum(container_spec_memory_limit_bytes{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)))*100",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器内存使用率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container memory usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024093620000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器内核态CPU使用率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_cpu_system_seconds_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])*100) by(name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器内核态CPU使用率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container kernel mode CPU usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024102879000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器发生CPU throttle的比率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_cpu_cfs_throttled_periods_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m]))by(name) *100",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器发生CPU throttle的比率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "The rate at which container CPU throttle occurs",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024143177000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器发生OOM次数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(container_oom_events_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器发生OOM次数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Number of OOM occurrences for container",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024083942000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器启动时长(小时)",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum((time()-container_start_time_seconds{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"})) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器启动时长(小时)",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container startup time (hours)",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024152466200,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器已使用的文件系统大小",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(container_fs_usage_bytes{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}) by (name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器已使用的文件系统大小",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "File system size used by the container",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024097849600,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "容器用户态CPU使用率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_cpu_user_seconds_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])*100) by(name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "容器用户态CPU使用率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "Container user mode CPU usage",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024036896800,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "文件系统写入速率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_fs_writes_bytes_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])) by(name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "文件系统写入速率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "File system write rate",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024057722000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "文件系统读取速率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\",",
        "lang": "zh_CN",
        "expression": "sum(rate(container_fs_reads_bytes_total{namespace=\"$namespace\", pod=~\"$pod_name\", image!~\".*pause.*\"}[1m])) by(name)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "文件系统读取速率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\","
            },
            {
                "lang": "en_US",
                "name": "File system read rate",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\","
            }
        ]
    },
    {
        "uuid": 1745893024166898000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络发送丢包数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_transmit_packets_dropped_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络发送丢包数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Number of packets lost by network transmission",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024160266500,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络发送数据包",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_transmit_packets_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络发送数据包",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Network packets sent",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024069935000,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络发送速率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_transmit_bytes_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络发送速率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Network transmission rate",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024163721700,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络发送错误数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_transmit_errors_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络发送错误数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Number of network transmission errors",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024173485600,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络接收丢包数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_receive_packets_dropped_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络接收丢包数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Number of packet losses received by network",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024156389600,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络接收数据包数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_receive_packets_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络接收数据包数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Number of packets received by network",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024075864800,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络接收速率",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_receive_bytes_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络接收速率",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Network reception rate",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    },
    {
        "uuid": 1745893024170233300,
        "collector": "Pod",
        "typ": "Kubernetes",
        "name": "网络接收错误数",
        "unit": "",
        "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))",
        "lang": "zh_CN",
        "expression": "sum(rate(container_network_receive_errors_total{namespace=\"$namespace\", pod=~\"$pod_name\"}[1m])) by(name, interface)",
        "translation": [
            {
                "lang": "zh_CN",
                "name": "网络接收错误数",
                "note": "Pod自身指标\n类型: pod=~\"$pod_name\"}[1m]))"
            },
            {
                "lang": "en_US",
                "name": "Number of network reception errors",
                "note": "Pod's own indicators\nType: pod=~\"$pod_name\"}[1m]))"
            }
        ]
    }
]
]
File diff suppressed because it is too large
@@ -12,7 +12,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU Steal 时间占比(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "CPU Steal time ratio (average of the whole machine)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -27,7 +39,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 内核态时间占比(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "CPU core mode time ratio (average of the whole machine)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -42,7 +66,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 利用率(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "CPU utilization (machine average)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -57,7 +93,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 用户态时间占比(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "CPU user mode time ratio (average of the whole machine)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -72,7 +120,19 @@
         "created_at": 0,
         "created_by": "",
        "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 硬中断时间占比(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "Proportion of CPU hard interrupt time (average of the whole machine)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -87,7 +147,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 空闲率(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "CPU idle rate (overall machine average)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -102,7 +174,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "CPU 软中断时间占比(整机平均)",
+                "note": ""
+            },
+            {
+                "lang": "en_US",
+                "name": "Proportion of CPU soft interrupt time (average of the whole machine)",
+                "note": ""
+            }
+        ]
     },
     {
         "id": 0,
@@ -113,11 +197,23 @@
         "unit": "percent",
         "note": "交换空间使用率。计算原子取自 `/proc/meminfo`。",
         "lang": "zh_CN",
-        "expression": "(node_memory_SwapTotal_bytes - node_memory_SwapFree_bytes)/node_memory_SwapTotal_bytes * 100 and node_memory_SwapTotal_bytes \u003e 0",
+        "expression": "(node_memory_SwapTotal_bytes - node_memory_SwapFree_bytes)/node_memory_SwapTotal_bytes * 100 and node_memory_SwapTotal_bytes > 0",
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "交换空间使用率",
+                "note": "交换空间使用率。计算原子取自 `/proc/meminfo`。"
+            },
+            {
+                "lang": "en_US",
+                "name": "Swap space usage",
+                "note": "Swap space usage. The computational atom is taken from `/proc/meminfo`."
+            }
+        ]
     },
     {
         "id": 0,
@@ -132,7 +228,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "交换空间总量",
+                "note": "交换空间总量。取自 `/proc/meminfo`。"
+            },
+            {
+                "lang": "en_US",
+                "name": "Total swap space",
+                "note": "Total amount of swap space. Taken from `/proc/meminfo`."
+            }
+        ]
     },
     {
         "id": 0,
@@ -147,7 +255,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "交换空间空闲量",
+                "note": "交换空间空闲量。取自 `/proc/meminfo`。"
+            },
+            {
+                "lang": "en_US",
+                "name": "Swap space free amount",
+                "note": "Amount of free swap space. Taken from `/proc/meminfo`."
+            }
+        ]
     },
     {
         "id": 0,
@@ -162,7 +282,19 @@
         "created_at": 0,
         "created_by": "",
         "updated_at": 0,
-        "updated_by": ""
+        "updated_by": "",
+        "translation": [
+            {
+                "lang": "zh_CN",
+                "name": "内存 Buffered 量",
+                "note": "用作缓冲区的内存量。取自 `/proc/meminfo`。"
+            },
+            {
+                "lang": "en_US",
+                "name": "Memory Buffered amount",
+                "note": "The amount of memory used as a buffer. Taken from `/proc/meminfo`."
+            }
+        ]
     },
     {
         "id": 0,
@@ -177,7 +309,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "内存 Cached 量",
|
||||
"note": "用作文件缓存的内存量。取自 `/proc/meminfo`。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Memory Cached amount",
|
||||
"note": "The amount of memory used as file cache. Taken from `/proc/meminfo `."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -192,7 +336,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "内存使用率(基于MemAvailable)",
|
||||
"note": "内存使用率。基于 MemAvailable 计算更准确,但是老版本的 Linux 不支持。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Memory usage (based on MemAvailable)",
|
||||
"note": "Memory usage. Calculation based on MemAvailable is more accurate, but older versions of Linux do not support it."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -207,7 +363,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "内存可用量",
|
||||
"note": "可以立即分配给进程的可用内存量。取自 `/proc/meminfo`。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Memory Availability",
|
||||
"note": "The amount of available memory that can be immediately allocated to a process. Taken from `/proc/meminfo `."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -222,7 +390,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "内存总量",
|
||||
"note": "内存总量。取自 `/proc/meminfo`。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Total memory",
|
||||
"note": "Total amount of memory. Taken from `/proc/meminfo `."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -237,7 +417,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "内存空闲量",
|
||||
"note": "未使用的内存量。取自 `/proc/meminfo`。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Free memory amount",
|
||||
"note": "Amount of unused memory. Taken from `/proc/meminfo `."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -252,7 +444,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "文件句柄 - 已分配占比",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "File handles - allocated ratio",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -267,7 +471,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "文件句柄 - 已分配量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "File handles - allocated count",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -282,7 +498,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "文件句柄 - 总可分配量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "File handles - total allocatable",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -297,7 +525,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘 IO - 时间维度 Utilization",
+    "note": "在时间维度统计硬盘 IO 时间占比,比如该值是 50%,表示有 50% 的时间是在处理 IO,该值 100%,表示一直在处理 IO,但是注意,现代磁盘设备具备并行处理多个 I/O 请求的能力,所以即便该值是 100%,可能硬盘还是可以接收新的处理请求。\n\n比如某人有两只手,最近 1 分钟一直在用单手劳动,从时间维度来看,利用率是 100%,但即便是 100%,再给他更多的活,他也能干,因为他还有一只手可用。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk IO - time-dimension utilization",
+    "note": "The share of time spent handling disk I/O. A value of 50% means the disk was processing I/O half of the time; 100% means it was processing I/O all of the time. Note, however, that modern disk devices can handle multiple I/O requests in parallel, so even at 100% the disk may still be able to accept new requests.\n\nAn analogy: someone with two hands has been working single-handed for the last minute. In the time dimension his utilization is 100%, yet he can still take on more work, because his other hand is free."
+  }
+]
 },
 {
 "id": 0,
@@ -312,7 +552,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘 IO - 每秒写入字节数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk IO - bytes written per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -327,7 +579,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘 IO - 每秒写次数",
+    "note": "每秒写次数"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk IO - writes per second",
+    "note": "Writes per second"
+  }
+]
 },
 {
 "id": 0,
@@ -342,7 +606,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘 IO - 每秒读取字节数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk IO - bytes read per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -357,7 +633,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘 IO - 每秒读次数",
+    "note": "每秒读次数"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk IO - reads per second",
+    "note": "Reads per second"
+  }
+]
 },
 {
 "id": 0,
@@ -372,7 +660,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘使用率",
+    "note": "硬盘空间使用率。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk usage",
+    "note": "Hard disk space usage."
+  }
+]
 },
 {
 "id": 0,
@@ -387,7 +687,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘剩余量",
+    "note": "使用 SI 标准渲染数据,和 df 命令保持一致。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk remaining",
+    "note": "Rendered using SI units, consistent with the df command."
+  }
+]
 },
 {
 "id": 0,
@@ -402,7 +714,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘可用量",
+    "note": "使用 SI 标准渲染数据,和 df 命令保持一致。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk available",
+    "note": "Rendered using SI units, consistent with the df command."
+  }
+]
 },
 {
 "id": 0,
@@ -417,7 +741,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "硬盘总量",
+    "note": "使用 SI 标准渲染数据,和 df 命令保持一致。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Hard disk total",
+    "note": "Rendered using SI units, consistent with the df command."
+  }
+]
 },
 {
 "id": 0,
@@ -432,7 +768,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统 CPU 核数",
+    "note": "CPU 逻辑核的数量。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Number of CPU cores",
+    "note": "Number of logical CPU cores."
+  }
+]
 },
 {
 "id": 0,
@@ -447,7 +795,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载 - 最近 1 分钟",
+    "note": "取自 `/proc/loadavg`。"
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average - last 1 minute",
+    "note": "Taken from `/proc/loadavg`."
+  }
+]
 },
 {
 "id": 0,
@@ -462,7 +822,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载 - 最近 15 分钟",
+    "note": "取自 `/proc/loadavg`。"
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average - last 15 minutes",
+    "note": "Taken from `/proc/loadavg`."
+  }
+]
 },
 {
 "id": 0,
@@ -477,7 +849,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载 - 最近 5 分钟",
+    "note": "取自 `/proc/loadavg`。"
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average - last 5 minutes",
+    "note": "Taken from `/proc/loadavg`."
+  }
+]
 },
 {
 "id": 0,
@@ -492,7 +876,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载(单核) - 最近 1 分钟",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average (single core) - last 1 minute",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -507,7 +903,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载(单核) - 最近 15 分钟",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average (single core) - last 15 minutes",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -522,7 +930,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "系统平均负载(单核) - 最近 5 分钟",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "System load average (single core) - last 5 minutes",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -537,7 +957,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡入方向(接收)每秒丢弃的数据包个数",
+    "note": "原始指标 node_network_receive_drop_total 表示操作系统启动之后各个网卡入方向(接收)丢弃的数据包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC inbound (receive) dropped packets per second",
+    "note": "The original metric node_network_receive_drop_total is the total number of inbound (received) packets dropped by each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -552,7 +984,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡入方向(接收)每秒数据包数",
+    "note": "原始指标 node_network_receive_packets_total 表示操作系统启动之后各个网卡入方向(接收)数据包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC inbound (receive) packets per second",
+    "note": "The original metric node_network_receive_packets_total is the total number of inbound (received) packets on each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -567,7 +1011,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡入方向(接收)每秒错包数",
+    "note": "原始指标 node_network_receive_errs_total 表示操作系统启动之后各个网卡入方向(接收)错包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC inbound (receive) error packets per second",
+    "note": "The original metric node_network_receive_errs_total is the total number of inbound (receive) error packets on each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -582,7 +1038,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡出方向(发送)每秒丢弃的数据包个数",
+    "note": "原始指标 node_network_transmit_drop_total 表示操作系统启动之后各个网卡出方向(发送)丢弃的数据包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC outbound (transmit) dropped packets per second",
+    "note": "The original metric node_network_transmit_drop_total is the total number of outbound (transmitted) packets dropped by each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -597,7 +1065,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡出方向(发送)每秒数据包数",
+    "note": "原始指标 node_network_transmit_packets_total 表示操作系统启动之后各个网卡出方向(发送)数据包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC outbound (transmit) packets per second",
+    "note": "The original metric node_network_transmit_packets_total is the total number of outbound (transmitted) packets on each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -612,7 +1092,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡出方向(发送)每秒错包数",
+    "note": "原始指标 node_network_transmit_errs_total 表示操作系统启动之后各个网卡出方向(发送)错包总数。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC outbound (transmit) error packets per second",
+    "note": "The original metric node_network_transmit_errs_total is the total number of outbound (transmit) error packets on each NIC since the operating system booted."
+  }
+]
 },
 {
 "id": 0,
@@ -627,7 +1119,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡每秒发送的 bit 量",
+    "note": "原始指标 node_network_transmit_bytes_total 表示操作系统启动之后发送的 byte 总量,因为网卡流量习惯使用 bit 作为单位,所以在表达式中做了换算。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC bits transmitted per second",
+    "note": "The original metric node_network_transmit_bytes_total is the total number of bytes transmitted since the operating system booted; because NIC traffic is conventionally measured in bits, the expression converts bytes to bits."
+  }
+]
 },
 {
 "id": 0,
@@ -642,6 +1146,18 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "网卡每秒接收的 bit 量",
+    "note": "原始指标 node_network_receive_bytes_total 表示操作系统启动之后接收的 byte 总量,因为网卡流量习惯使用 bit 作为单位,所以在表达式中做了换算。"
+  },
+  {
+    "lang": "en_US",
+    "name": "NIC bits received per second",
+    "note": "The original metric node_network_receive_bytes_total is the total number of bytes received since the operating system booted; because NIC traffic is conventionally measured in bits, the expression converts bytes to bits."
+  }
+]
 }
 ]
@@ -12,7 +12,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池 data 大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status InnoDB buffer pool data size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -27,7 +39,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池 dirty 大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status InnoDB buffer pool dirty size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -42,7 +66,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池 free 大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status InnoDB buffer pool free size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -57,7 +93,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池 page 使用率",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status InnoDB buffer pool page usage",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -72,7 +120,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池 used 大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status InnoDB buffer pool used size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -87,7 +147,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status InnoDB 缓冲池总大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status total InnoDB buffer pool size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -102,7 +174,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 启动时长",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status uptime",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -117,7 +201,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 当前 running 的 threads 数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status number of running threads",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -132,7 +228,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 当前打开的文件句柄数",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status number of currently open file handles",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -147,7 +255,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 当前连接数",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status current connections",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -162,7 +282,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 最大曾用连接数",
+    "note": "曾经达到过的最大连接数"
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status max used connections",
+    "note": "The maximum number of connections ever reached"
+  }
+]
 },
 {
 "id": 0,
@@ -177,7 +309,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒 Command 数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status commands per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -192,7 +336,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒 query 数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status queries per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -207,7 +363,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒 question 数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status questions per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -222,7 +390,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒 slow query 数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status slow queries per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -237,7 +417,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒事务操作数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status transaction operations per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -252,7 +444,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒写操作数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status write operations per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -267,7 +471,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒发送流量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status traffic sent per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -282,7 +498,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒接收流量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status traffic received per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -297,7 +525,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 每秒读操作数量",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status read operations per second",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -312,7 +552,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 近 3 分钟 abort 的客户端",
+    "note": "原始指标 mysql_global_status_aborted_clients 表示由于客户端未正确关闭连接而终止的连接数,Counter 类型,单调递增。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status clients aborted in the last 3 minutes",
+    "note": "The original metric mysql_global_status_aborted_clients is the number of connections terminated because the client did not close the connection properly; it is a Counter and increases monotonically."
+  }
+]
 },
 {
 "id": 0,
@@ -327,7 +579,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 近 3 分钟 abort 的连接数",
+    "note": "原始指标 mysql_global_status_aborted_connects 表示尝试连接到 MySQL 服务器失败的次数,Counter 类型,单调递增。"
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status connections aborted in the last 3 minutes",
+    "note": "The original metric mysql_global_status_aborted_connects is the number of failed attempts to connect to the MySQL server; it is a Counter and increases monotonically."
+  }
+]
 },
 {
 "id": 0,
@@ -342,7 +606,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Status 近 3 分钟 table lock 等待次数",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Status table lock waits in the last 3 minutes",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -357,7 +633,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Variables InnoDB 缓冲池配置大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Variables InnoDB buffer pool configured size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -372,7 +660,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Variables read_only 开关值",
+    "note": "0 就是 OFF,1 是 ON"
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Variables read_only switch value",
+    "note": "0 is OFF, 1 is ON"
+  }
+]
 },
 {
 "id": 0,
@@ -387,7 +687,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Variables 允许打开的文件句柄数",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Variables open file handle limit",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -402,7 +714,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Variables 最大连接数限制",
+    "note": "允许的最大连接数,默认值是 151,过小了。\n\n- 通过 `SHOW VARIABLES LIKE 'max_connections'` 命令查看当前设置\n- 通过 `SET GLOBAL max_connections = 2048` 重新设置最大连接数\n- 通过修改 MySQL 配置文件,在 `[mysqld]` 下面添加 `max_connections = 2048` 使其重启依旧生效"
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Variables maximum connection limit",
+    "note": "The maximum number of connections allowed. The default value of 151 is too small.\n\n- Check the current setting with `SHOW VARIABLES LIKE 'max_connections'`\n- Reset the maximum with `SET GLOBAL max_connections = 2048`\n- To keep the setting across restarts, add `max_connections = 2048` under `[mysqld]` in the MySQL configuration file"
+  }
+]
 },
 {
 "id": 0,
@@ -417,7 +741,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "Global Variables 查询缓存大小",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "Global Variables query cache size",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -432,7 +768,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "MySQL 实例是否 UP",
+    "note": "1 表示 UP,说明能正常连到 MySQL 采集数据;0 表示无法连通 MySQL 实例,可能是网络问题、认证问题,或者 MySQL 本身就是挂了"
+  },
+  {
+    "lang": "en_US",
+    "name": "Whether the MySQL instance is UP",
+    "note": "1 means UP: the exporter can connect to MySQL and collect data. 0 means the MySQL instance cannot be reached, which may be a network problem, an authentication problem, or MySQL itself being down"
+  }
+]
 },
 {
 "id": 0,
@@ -447,7 +795,19 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "MySQL 指标抓取耗时",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "MySQL metrics scrape duration",
+    "note": ""
+  }
+]
 },
 {
 "id": 0,
@@ -462,6 +822,18 @@
 "created_at": 0,
 "created_by": "",
 "updated_at": 0,
-"updated_by": ""
+"updated_by": "",
+"translation": [
+  {
+    "lang": "zh_CN",
+    "name": "MySQL 版本信息",
+    "note": ""
+  },
+  {
+    "lang": "en_US",
+    "name": "MySQL version information",
+    "note": ""
+  }
+]
 }
 ]
@@ -12,7 +12,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "NET 探测结果状态码",
"note": "0 值表示正常,大于 0 就是异常,各个值的含义如下:\n\n- 0: Success\n- 1: Timeout\n- 2: ConnectionFailed\n- 3: ReadFailed\n- 4: StringMismatch"
},
{
"lang": "en_US",
"name": "NET Probe Result Status Code",
"note": "A value of 0 means normal, and a value greater than 0 means abnormal. The meanings of each value are as follows: \n \n-0: Success \n1: Timeout \n2: ConnectionFailed \n-3: ReadFailed \n4: StringMismatch"
}
]
},
{
"id": 0,
@@ -27,6 +39,18 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "NET 探测耗时",
"note": ""
},
{
"lang": "en_US",
"name": "NET probe time-consuming",
"note": ""
}
]
}
]
@@ -12,7 +12,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 当前空闲连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
},
{
"lang": "en_US",
"name": "Nginx stub _ status Number of current idle connections",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
}
]
},
{
"id": 0,
@@ -27,7 +39,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 正在回写 response 的连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
},
{
"lang": "en_US",
"name": "Nginx stub _ status The number of connections that are writing back response",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
}
]
},
{
"id": 0,
@@ -42,7 +66,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 正在处理的活动连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)\n\nReading + Writing + Waiting 的总和"
},
{
"lang": "en_US",
"name": "Nginx stub _ status Number of active connections being processed",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md) \n \nSum of Reading + Writing + Waiting"
}
]
},
{
"id": 0,
@@ -57,7 +93,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 正在读取 request header 的连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
},
{
"lang": "en_US",
"name": "Nginx stub _ status is reading the number of connections to the request header",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
}
]
},
{
"id": 0,
@@ -72,7 +120,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 每秒 accept 的新连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
},
{
"lang": "en_US",
"name": "Nginx stub _ status New connections accepted per second",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
}
]
},
{
"id": 0,
@@ -87,7 +147,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 每秒 handle 的新连接数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
},
{
"lang": "en_US",
"name": "Nginx stub _ status New connections per second handle",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)"
}
]
},
{
"id": 0,
@@ -102,6 +174,18 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Nginx stub_status 每秒处理的请求数",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md)\n\n如果有 keep-alive 连接的情况,一个连接上会处理多个请求。"
},
{
"lang": "en_US",
"name": "Nginx stub _ status requests processed per second",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/nginx/README.md) \n \nIf there is a keep-alive connection, multiple requests will be processed on one connection."
}
]
}
]
@@ -12,7 +12,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping ttl 时间",
"note": "Time To Live,指的是报文在网络中能够“存活”的限制时间"
},
{
"lang": "en_US",
"name": "Ping ttl time",
"note": "Time To Live refers to the limited time that a packet can \"survive\" in the network"
}
]
},
{
"id": 0,
@@ -27,7 +39,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping 丢包率",
"note": ""
},
{
"lang": "en_US",
"name": "Ping packet loss rate",
"note": ""
}
]
},
{
"id": 0,
@@ -42,7 +66,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping 平均耗时",
"note": ""
},
{
"lang": "en_US",
"name": "Ping average time consumed",
"note": ""
}
]
},
{
"id": 0,
@@ -57,7 +93,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping 探测结果状态码",
"note": "值为 0 就是正常,非 0 值就是异常。如果 Ping 失败,Categraf 日志中理应会有异常日志"
},
{
"lang": "en_US",
"name": "Ping probe result status code",
"note": "A value of 0 is normal, and a non-0 value is abnormal. If the Ping fails, there should be an exception log in the Categraf log"
}
]
},
{
"id": 0,
@@ -72,7 +120,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping 最大耗时",
"note": ""
},
{
"lang": "en_US",
"name": "Ping maximum time consumption",
"note": ""
}
]
},
{
"id": 0,
@@ -87,6 +147,18 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "Ping 最小耗时",
"note": ""
},
{
"lang": "en_US",
"name": "Ping minimum time consumption",
"note": ""
}
]
}
]
@@ -12,7 +12,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 CPU 利用率(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)\n\nCPU 利用率有两个模式,一个是 solaris,一个是 irix,默认是 irix,irix 模式下,CPU 利用率可能会超过 100%,solaris 会考虑 CPU 核数,solaris 模式的 CPU 利用率不会超过 100%。"
},
{
"lang": "en_US",
"name": "Process CPU utilization (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md) \n \nThere are two modes of CPU utilization, one is solaris and the other is irix. The default is irix. In irix mode, the CPU utilization may exceed 100%. solaris will consider the number of CPU cores, and the CPU utilization in solaris mode will not exceed 100%."
}
]
},
{
"id": 0,
@@ -27,7 +39,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 CPU 总利用率(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)\n\nCPU 利用率有两个模式,一个是 solaris,一个是 irix,默认是 irix,irix 模式下,CPU 利用率可能会超过 100%,solaris 会考虑 CPU 核数,solaris 模式的 CPU 利用率不会超过 100%。"
},
{
"lang": "en_US",
"name": "Total process CPU utilization (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md) \n \nThere are two modes of CPU utilization, one is solaris and the other is irix. The default is irix. In irix mode, the CPU utilization may exceed 100%. solaris will consider the number of CPU cores, and the CPU utilization in solaris mode will not exceed 100%."
}
]
},
{
"id": 0,
@@ -42,7 +66,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒写入字节总数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of bytes written per second by process IO (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -57,7 +93,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒写入字节数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Number of bytes written per second by process IO (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -72,7 +120,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒写入次数总数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of process IO writes per second (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -87,7 +147,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒写入次数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process IO writes per second (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -102,7 +174,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒读取字节总数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of bytes read per second by process IO (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -117,7 +201,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒读取字节数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process IO reads bytes per second (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -132,7 +228,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒读取次数总数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of process IO reads per second (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -147,7 +255,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 IO 每秒读取次数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process IO reads per second (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -162,7 +282,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 Memory 利用率(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process Memory utilization (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -177,7 +309,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 Memory 总利用率(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process Memory Total utilization (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -192,7 +336,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 rlimit fd 软限制数量(匹配到的所有进程中的最小值)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process rlimit fd Number of soft limits (minimum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -207,7 +363,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程 rlimit fd 软限制数量(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Process rlimit fd Number of soft limits (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -222,7 +390,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程启动时长(匹配到的所有进程的最小值)",
"note": "启动了多久"
},
{
"lang": "en_US",
"name": "Process start time (minimum of all processes matched to)",
"note": "How long has it started"
}
]
},
{
"id": 0,
@@ -237,7 +417,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程启动时长(单进程)",
"note": "启动了多久"
},
{
"lang": "en_US",
"name": "Process startup time (single process)",
"note": "How long has it started"
}
]
},
{
"id": 0,
@@ -252,7 +444,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程数量(根据匹配条件查到的进程数量)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Number of processes (the number of processes found according to matching conditions)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -267,7 +471,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程文件句柄总打开数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of process file handles open (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -282,7 +498,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程文件句柄打开数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Number of process file handle openings (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -297,7 +525,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程线程总数(匹配到的所有进程加和)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Total number of process threads (sum of all processes matched to)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
},
{
"id": 0,
@@ -312,6 +552,18 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "进程线程数(单进程)",
"note": "[文档](https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
},
{
"lang": "en_US",
"name": "Number of process threads (single process)",
"note": "[Documentation] (https://github.com/flashcatcloud/categraf/blob/main/inputs/procstat/README.md)"
}
]
}
]
@@ -12,7 +12,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 利用率(system)",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container CPU utilization (system)",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -27,7 +39,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 利用率(user)",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container CPU utilization (user)",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -42,7 +66,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 利用率(整体,值不会大于 100)",
|
||||
"note": "只有设置了 limit 的容器才能计算此利用率"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container CPU utilization (overall, the value will not be greater than 100)",
|
||||
"note": "Only containers with limit set can calculate this utilization"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -57,7 +93,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 利用率(整体,值可能大于 100)",
|
||||
"note": "如果是 200% 表示占用了 2 个核"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container CPU utilization (overall, value may be greater than 100)",
|
||||
"note": "If 200%, it means that 2 cores are occupied"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -72,7 +120,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 每秒有多少 period",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "How many periods does the container CPU have per second",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -87,7 +147,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 每秒被 throttle 的 period 量",
|
||||
"note": "如果容器限制了 CPU,而 app 所需算法过多, 会被抑制使用,container_cpu_cfs_throttled_periods_total 统计总共有多少个 period 被抑制了,如果近期发生抑制是需要关注的,一些延迟敏感的 app 受影响尤为明显。出现被抑制的情况,大概率是需要升配了。"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "The amount of periods that the container CPU is throttle per second",
|
||||
"note": "If the container limits the CPU and the app requires too many algorithms, it will be suppressed. container _ CPU _ cfs _ throttled _ periods _ total counts how many periods have been suppressed in total. If suppression occurs recently, it needs attention. Some delay-sensitive apps are particularly affected. If it is suppressed, there is a high probability that it needs to be upgraded."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -102,7 +174,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 CPU 被 throttle 的比例",
|
||||
"note": "这个值大于 0 就要注意"
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "The proportion of container CPU being throttle",
|
||||
"note": "If this value is greater than 0, pay attention"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -117,7 +201,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 filesystem 使用率",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container filesystem usage",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -132,7 +228,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 filesystem 使用量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container filesystem usage",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -147,7 +255,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 filesystem 当前 IO 次数",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container filesystem Current IO times",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -162,7 +282,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 filesystem 总量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container filesystem Total",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -177,7 +309,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 inode free 量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container inode free amount",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -192,7 +336,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 inode total 量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container inode total",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -207,7 +363,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 inode 使用率",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container inode usage",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -222,7 +390,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 IO 每秒写入 byte 量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container IO writes bytes per second",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -237,7 +417,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 IO 每秒读取 byte 量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container IO reads bytes per second",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -252,7 +444,19 @@
|
||||
"created_at": 0,
|
||||
"created_by": "",
|
||||
"updated_at": 0,
|
||||
"updated_by": ""
|
||||
"updated_by": "",
|
||||
"translation": [
|
||||
{
|
||||
"lang": "zh_CN",
|
||||
"name": "容器 memory cache 量",
|
||||
"note": ""
|
||||
},
|
||||
{
|
||||
"lang": "en_US",
|
||||
"name": "Container memory cache amount",
|
||||
"note": ""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 0,
|
||||
@@ -267,7 +471,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用率(Usage)",
"note": "如果有大量文件 IO,有大量 container_memory_cache,container_memory_usage_bytes 和 container_memory_working_set_bytes 的大小会有差异"
},
{
"lang": "en_US",
"name": "Container memory usage rate (Usage)",
"note": "If there is a large amount of file IO, there will be a lot of container_memory_cache, and the sizes of container_memory_usage_bytes and container_memory_working_set_bytes will differ"
}
]
},
{
"id": 0,

@@ -282,7 +498,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用率(Working Set)",
"note": "如果有大量文件 IO,有大量 container_memory_cache,container_memory_usage_bytes 和 container_memory_working_set_bytes 的大小会有差异"
},
{
"lang": "en_US",
"name": "Container memory usage rate (Working Set)",
"note": "If there is a large amount of file IO, there will be a lot of container_memory_cache, and the sizes of container_memory_usage_bytes and container_memory_working_set_bytes will differ"
}
]
},
{
"id": 0,

@@ -297,7 +525,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用量(mapped_file)",
"note": ""
},
{
"lang": "en_US",
"name": "Container memory usage (mapped_file)",
"note": ""
}
]
},
{
"id": 0,

@@ -312,7 +552,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用量(RSS)",
"note": ""
},
{
"lang": "en_US",
"name": "Container memory usage (RSS)",
"note": ""
}
]
},
{
"id": 0,

@@ -327,7 +579,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用量(Swap)",
"note": ""
},
{
"lang": "en_US",
"name": "Container memory usage (Swap)",
"note": ""
}
]
},
{
"id": 0,

@@ -342,7 +606,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用量(Usage)",
"note": "如果有大量文件 IO,有大量 container_memory_cache,container_memory_usage_bytes 和 container_memory_working_set_bytes 的大小会有差异"
},
{
"lang": "en_US",
"name": "Container memory usage (Usage)",
"note": "If there is a large amount of file IO, there will be a lot of container_memory_cache, and the sizes of container_memory_usage_bytes and container_memory_working_set_bytes will differ"
}
]
},
{
"id": 0,
@@ -357,7 +633,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 使用量(Working Set)",
"note": "如果有大量文件 IO,有大量 container_memory_cache,container_memory_usage_bytes 和 container_memory_working_set_bytes 的大小会有差异"
},
{
"lang": "en_US",
"name": "Container memory usage (Working Set)",
"note": "If there is a large amount of file IO, there will be a lot of container_memory_cache, and the sizes of container_memory_usage_bytes and container_memory_working_set_bytes will differ"
}
]
},
{
"id": 0,

@@ -372,7 +660,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 分配失败次数(每秒)",
"note": ""
},
{
"lang": "en_US",
"name": "Container memory allocation failures (per second)",
"note": ""
}
]
},
{
"id": 0,

@@ -387,7 +687,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 memory 限制量",
"note": ""
},
{
"lang": "en_US",
"name": "Container memory limit",
"note": ""
}
]
},
{
"id": 0,

@@ -402,7 +714,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒发送 bit 量",
"note": ""
},
{
"lang": "en_US",
"name": "Container net sends bits per second",
"note": ""
}
]
},
{
"id": 0,

@@ -417,7 +741,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒发送 byte 量",
"note": ""
},
{
"lang": "en_US",
"name": "Container net sends bytes per second",
"note": ""
}
]
},
{
"id": 0,

@@ -432,7 +768,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒发送数据包数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of packets sent per second by container net",
"note": ""
}
]
},
{
"id": 0,
@@ -447,7 +795,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒发送时 drop 包数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of drop packets sent by container net per second",
"note": ""
}
]
},
{
"id": 0,

@@ -462,7 +822,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒发送错包数",
"note": ""
},
{
"lang": "en_US",
"name": "Number of wrong packets sent by container net per second",
"note": ""
}
]
},
{
"id": 0,

@@ -477,7 +849,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒接收 bit 量",
"note": ""
},
{
"lang": "en_US",
"name": "The amount of bits received by the container net per second",
"note": ""
}
]
},
{
"id": 0,

@@ -492,7 +876,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒接收 byte 量",
"note": ""
},
{
"lang": "en_US",
"name": "Container net receives bytes per second",
"note": ""
}
]
},
{
"id": 0,

@@ -507,7 +903,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒接收数据包数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of packets received per second by container net",
"note": ""
}
]
},
{
"id": 0,

@@ -522,7 +930,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒接收时 drop 包数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of drop packets received by container net per second",
"note": ""
}
]
},
{
"id": 0,

@@ -537,7 +957,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器 net 每秒接收错包数",
"note": ""
},
{
"lang": "en_US",
"name": "Number of wrong packets received by container net per second",
"note": ""
}
]
},
{
"id": 0,

@@ -552,7 +984,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器允许运行的最大线程数",
"note": ""
},
{
"lang": "en_US",
"name": "The maximum number of threads the container is allowed to run",
"note": ""
}
]
},
{
"id": 0,
@@ -567,7 +1011,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器内 1 号进程 soft ulimit 值",
"note": "容器内1号进程的软 ulimit 值。如果为-1,则无限制。"
},
{
"lang": "en_US",
"name": "Soft ulimit value of process 1 in the container",
"note": "Soft ulimit value for process 1 inside the container. If -1, there is no limit."
}
]
},
{
"id": 0,

@@ -582,7 +1038,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器已经运行的时间",
"note": ""
},
{
"lang": "en_US",
"name": "How long the container has been running",
"note": ""
}
]
},
{
"id": 0,

@@ -597,7 +1065,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器当前打开套接字数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of currently open sockets in the container",
"note": ""
}
]
},
{
"id": 0,

@@ -612,7 +1092,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器当前打开文件句柄数量",
"note": ""
},
{
"lang": "en_US",
"name": "Number of currently open file handles in the container",
"note": ""
}
]
},
{
"id": 0,

@@ -627,7 +1119,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器当前运行的线程数",
"note": ""
},
{
"lang": "en_US",
"name": "Number of threads currently running in the container",
"note": ""
}
]
},
{
"id": 0,

@@ -642,7 +1146,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器当前运行的进程数",
"note": ""
},
{
"lang": "en_US",
"name": "Number of processes currently running in the container",
"note": ""
}
]
},
{
"id": 0,

@@ -657,7 +1173,19 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器总 GPU 加速卡可用内存量",
"note": ""
},
{
"lang": "en_US",
"name": "Total available GPU accelerator card memory of the container",
"note": ""
}
]
},
{
"id": 0,

@@ -672,6 +1200,18 @@
"created_at": 0,
"created_by": "",
"updated_at": 0,
"updated_by": ""
"updated_by": "",
"translation": [
{
"lang": "zh_CN",
"name": "容器正在使用的 GPU 加速卡内存量",
"note": ""
},
{
"lang": "en_US",
"name": "The amount of GPU accelerator card memory the container is using",
"note": ""
}
]
}
]
@@ -142,7 +142,6 @@ func (amc *AlertMuteCacheType) syncAlertMutes() error {
	ms := time.Since(start).Milliseconds()
	amc.stats.GaugeCronDuration.WithLabelValues("sync_alert_mutes").Set(float64(ms))
	amc.stats.GaugeSyncNumber.WithLabelValues("sync_alert_mutes").Set(float64(len(lst)))
	logger.Infof("timer: sync mutes done, cost: %dms, number: %d", ms, len(lst))
	dumper.PutSyncRecord("alert_mutes", start.Unix(), ms, len(lst), "success")

	return nil

@@ -132,7 +132,6 @@ func (arc *AlertRuleCacheType) syncAlertRules() error {
	ms := time.Since(start).Milliseconds()
	arc.stats.GaugeCronDuration.WithLabelValues("sync_alert_rules").Set(float64(ms))
	arc.stats.GaugeSyncNumber.WithLabelValues("sync_alert_rules").Set(float64(len(m)))
	logger.Infof("timer: sync rules done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("alert_rules", start.Unix(), ms, len(m), "success")

	return nil

@@ -180,7 +180,6 @@ func (c *AlertSubscribeCacheType) syncAlertSubscribes() error {
	ms := time.Since(start).Milliseconds()
	c.stats.GaugeCronDuration.WithLabelValues("sync_alert_subscribes").Set(float64(ms))
	c.stats.GaugeSyncNumber.WithLabelValues("sync_alert_subscribes").Set(float64(len(lst)))
	logger.Infof("timer: sync subscribes done, cost: %dms, number: %d", ms, len(lst))
	dumper.PutSyncRecord("alert_subscribes", start.Unix(), ms, len(lst), "success")

	return nil

@@ -118,8 +118,6 @@ func (c *BusiGroupCacheType) syncBusiGroups() error {
	ms := time.Since(start).Milliseconds()
	c.stats.GaugeCronDuration.WithLabelValues("sync_busi_groups").Set(float64(ms))
	c.stats.GaugeSyncNumber.WithLabelValues("sync_busi_groups").Set(float64(len(m)))

	logger.Infof("timer: sync busi groups done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("busi_groups", start.Unix(), ms, len(m), "success")

	return nil

@@ -86,8 +86,6 @@ func (c *ConfigCache) syncConfigs() error {
	ms := time.Since(start).Milliseconds()
	c.stats.GaugeCronDuration.WithLabelValues("sync_user_variables").Set(float64(ms))
	c.stats.GaugeSyncNumber.WithLabelValues("sync_user_variables").Set(float64(len(decryptMap)))

	logger.Infof("timer: sync user_variables done, cost: %dms, number: %d", ms, len(decryptMap))
	dumper.PutSyncRecord("user_variables", start.Unix(), ms, len(decryptMap), "success")

	return nil

@@ -82,8 +82,6 @@ func (c *CvalCache) syncConfigs() error {
	ms := time.Since(start).Milliseconds()
	c.stats.GaugeCronDuration.WithLabelValues("sync_cvals").Set(float64(ms))
	c.stats.GaugeSyncNumber.WithLabelValues("sync_cvals").Set(float64(len(c.cvals)))

	logger.Infof("timer: sync cvals done, cost: %dms", ms)
	dumper.PutSyncRecord("cvals", start.Unix(), ms, len(c.cvals), "success")

	return nil

@@ -134,8 +134,6 @@ func (d *DatasourceCacheType) syncDatasources() error {
	ms := time.Since(start).Milliseconds()
	d.stats.GaugeCronDuration.WithLabelValues("sync_datasources").Set(float64(ms))
	d.stats.GaugeSyncNumber.WithLabelValues("sync_datasources").Set(float64(len(ds)))

	logger.Infof("timer: sync datasources done, cost: %dms, number: %d", ms, len(ds))
	dumper.PutSyncRecord("datasources", start.Unix(), ms, len(ds), "success")

	return nil

@@ -141,7 +141,7 @@ func (epc *EventProcessorCacheType) syncEventProcessors() error {
	for _, p := range eventPipeline.ProcessorConfigs {
		processor, err := models.GetProcessorByType(p.Typ, p.Config)
		if err != nil {
			logger.Warningf("event_pipeline_id: %d, event:%+v, processor:%+v type not found", eventPipeline.ID, eventPipeline, p)
			logger.Warningf("event_pipeline_id: %d, event:%+v, processor:%+v get processor err: %+v", eventPipeline.ID, eventPipeline, p, err)
			continue
		}

@@ -156,7 +156,6 @@ func (epc *EventProcessorCacheType) syncEventProcessors() error {
	ms := time.Since(start).Milliseconds()
	epc.stats.GaugeCronDuration.WithLabelValues("sync_event_processors").Set(float64(ms))
	epc.stats.GaugeSyncNumber.WithLabelValues("sync_event_processors").Set(float64(len(m)))
	logger.Infof("timer: sync event processors done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("event_processors", start.Unix(), ms, len(m), "success")

	return nil

@@ -132,7 +132,6 @@ func (mtc *MessageTemplateCacheType) syncMessageTemplates() error {
	ms := time.Since(start).Milliseconds()
	mtc.stats.GaugeCronDuration.WithLabelValues("sync_message_templates").Set(float64(ms))
	mtc.stats.GaugeSyncNumber.WithLabelValues("sync_message_templates").Set(float64(len(m)))
	logger.Infof("timer: sync message templates done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("message_templates", start.Unix(), ms, len(m), "success")

	return nil
@@ -2,8 +2,10 @@ package memsto

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"sync"
	"time"

@@ -14,9 +16,23 @@
	"github.com/ccfos/nightingale/v6/pkg/ctx"

	"github.com/pkg/errors"
	"github.com/toolkits/pkg/container/list"
	"github.com/toolkits/pkg/logger"
)

// NotifyTask represents a single notification send task
type NotifyTask struct {
	Events        []*models.AlertCurEvent
	NotifyRuleId  int64
	NotifyChannel *models.NotifyChannelConfig
	TplContent    map[string]interface{}
	CustomParams  map[string]string
	Sendtos       []string
}

// NotifyRecordFunc is the callback type used to record notification results
type NotifyRecordFunc func(ctx *ctx.Context, events []*models.AlertCurEvent, notifyRuleId int64, channelName, target, resp string, err error)

type NotifyChannelCacheType struct {
	statTotal       int64
	statLastUpdated int64

@@ -24,13 +40,18 @@ type NotifyChannelCacheType struct {
	stats *Stats

	sync.RWMutex
	channels map[int64]*models.NotifyChannelConfig // key: channel id

	httpConcurrency map[int64]chan struct{}
	channels      map[int64]*models.NotifyChannelConfig // key: channel id
	channelsQueue map[int64]*list.SafeListLimited

	httpClient map[int64]*http.Client
	smtpCh     map[int64]chan *models.EmailContext
	smtpQuitCh map[int64]chan struct{}

	// queue consumer control
	queueQuitCh map[int64]chan struct{}

	// callback used to record notification results
	notifyRecordFunc NotifyRecordFunc
}

func NewNotifyChannelCache(ctx *ctx.Context, stats *Stats) *NotifyChannelCacheType {

@@ -40,18 +61,20 @@ func NewNotifyChannelCache(ctx *ctx.Context, stats *Stats) *NotifyChannelCacheTy
		ctx:           ctx,
		stats:         stats,
		channels:      make(map[int64]*models.NotifyChannelConfig),
		channelsQueue: make(map[int64]*list.SafeListLimited),
		queueQuitCh:   make(map[int64]chan struct{}),
		httpClient:    make(map[int64]*http.Client),
		smtpCh:        make(map[int64]chan *models.EmailContext),
		smtpQuitCh:    make(map[int64]chan struct{}),
	}

	ncc.SyncNotifyChannels()
	return ncc
}

func (ncc *NotifyChannelCacheType) Reset() {
	ncc.Lock()
	defer ncc.Unlock()

	ncc.statTotal = -1
	ncc.statLastUpdated = -1
	ncc.channels = make(map[int64]*models.NotifyChannelConfig)
// SetNotifyRecordFunc sets the notification record callback
func (ncc *NotifyChannelCacheType) SetNotifyRecordFunc(fn NotifyRecordFunc) {
	ncc.notifyRecordFunc = fn
}

func (ncc *NotifyChannelCacheType) StatChanged(total, lastUpdated int64) bool {
@@ -62,30 +85,257 @@ func (ncc *NotifyChannelCacheType) StatChanged(total, lastUpdated int64) bool {
	return true
}

func (ncc *NotifyChannelCacheType) Set(m map[int64]*models.NotifyChannelConfig, httpConcurrency map[int64]chan struct{}, httpClient map[int64]*http.Client,
	smtpCh map[int64]chan *models.EmailContext, quitCh map[int64]chan struct{}, total, lastUpdated int64) {
func (ncc *NotifyChannelCacheType) Set(m map[int64]*models.NotifyChannelConfig, total, lastUpdated int64) {
	ncc.Lock()
	for _, k := range ncc.httpConcurrency {
		close(k)
	}
	ncc.httpConcurrency = httpConcurrency
	ncc.channels = m
	ncc.httpClient = httpClient
	ncc.smtpCh = smtpCh
	defer ncc.Unlock()

	for i := range ncc.smtpQuitCh {
		close(ncc.smtpQuitCh[i])
	}
	// 1. handle channels that need to be removed
	ncc.removeDeletedChannels(m)

	ncc.smtpQuitCh = quitCh

	ncc.Unlock()
	// 2. handle added and updated channels
	ncc.addOrUpdateChannels(m)

	// only one goroutine used, so no need lock
	ncc.statTotal = total
	ncc.statLastUpdated = lastUpdated
}

// removeDeletedChannels removes channels that no longer exist
func (ncc *NotifyChannelCacheType) removeDeletedChannels(newChannels map[int64]*models.NotifyChannelConfig) {
	for chID := range ncc.channels {
		if _, exists := newChannels[chID]; !exists {
			logger.Infof("removing deleted channel %d", chID)

			// stop the consumer goroutines
			if quitCh, exists := ncc.queueQuitCh[chID]; exists {
				close(quitCh)
				delete(ncc.queueQuitCh, chID)
			}

			// drop the queue
			delete(ncc.channelsQueue, chID)

			// drop the HTTP client
			delete(ncc.httpClient, chID)

			// stop the SMTP sender
			if quitCh, exists := ncc.smtpQuitCh[chID]; exists {
				close(quitCh)
				delete(ncc.smtpQuitCh, chID)
				delete(ncc.smtpCh, chID)
			}

			// drop the channel config
			delete(ncc.channels, chID)
		}
	}
}

// addOrUpdateChannels adds new channels and updates changed ones
func (ncc *NotifyChannelCacheType) addOrUpdateChannels(newChannels map[int64]*models.NotifyChannelConfig) {
	for chID, newChannel := range newChannels {
		oldChannel, exists := ncc.channels[chID]
		if exists {
			if ncc.channelConfigChanged(oldChannel, newChannel) {
				logger.Infof("updating channel %d (new: %t)", chID, !exists)
				ncc.stopChannelResources(chID)
			} else {
				logger.Infof("channel %d config not changed", chID)
				continue
			}
		}

		// update the channel config
		ncc.channels[chID] = newChannel

		// create the resources appropriate for the channel type
		switch newChannel.RequestType {
		case "http", "flashduty":
			// create the HTTP client
			if newChannel.RequestConfig != nil && newChannel.RequestConfig.HTTPRequestConfig != nil {
				cli, err := models.GetHTTPClient(newChannel)
				if err != nil {
					logger.Warningf("failed to create HTTP client for channel %d: %v", chID, err)
				} else {
					if ncc.httpClient == nil {
						ncc.httpClient = make(map[int64]*http.Client)
					}
					ncc.httpClient[chID] = cli
				}
			}

			// for the http type, start the queue and its consumers
			if newChannel.RequestType == "http" {
				ncc.startHttpChannel(chID, newChannel)
			}
		case "smtp":
			// create the SMTP sender
			if newChannel.RequestConfig != nil && newChannel.RequestConfig.SMTPRequestConfig != nil {
				ch := make(chan *models.EmailContext)
				quit := make(chan struct{})
				go ncc.startEmailSender(chID, newChannel.RequestConfig.SMTPRequestConfig, ch, quit)

				if ncc.smtpCh == nil {
					ncc.smtpCh = make(map[int64]chan *models.EmailContext)
				}
				if ncc.smtpQuitCh == nil {
					ncc.smtpQuitCh = make(map[int64]chan struct{})
				}
				ncc.smtpCh[chID] = ch
				ncc.smtpQuitCh[chID] = quit
			}
		}
	}
}

// channelConfigChanged reports whether a channel's config has changed
func (ncc *NotifyChannelCacheType) channelConfigChanged(oldChannel, newChannel *models.NotifyChannelConfig) bool {
	if oldChannel == nil || newChannel == nil {
		return true
	}

	// check updateat
	if oldChannel.UpdateAt != newChannel.UpdateAt {
		return true
	}

	return false
}
// stopChannelResources stops the resources owned by a channel
func (ncc *NotifyChannelCacheType) stopChannelResources(chID int64) {
	// stop the HTTP consumer goroutines
	if quitCh, exists := ncc.queueQuitCh[chID]; exists {
		close(quitCh)
		delete(ncc.queueQuitCh, chID)
		delete(ncc.channelsQueue, chID)
	}

	// stop the SMTP sender
	if quitCh, exists := ncc.smtpQuitCh[chID]; exists {
		close(quitCh)
		delete(ncc.smtpQuitCh, chID)
		delete(ncc.smtpCh, chID)
	}
}

// startHttpChannel starts the queue and consumers for an HTTP channel
func (ncc *NotifyChannelCacheType) startHttpChannel(chID int64, channel *models.NotifyChannelConfig) {
	if channel.RequestConfig == nil || channel.RequestConfig.HTTPRequestConfig == nil {
		logger.Warningf("notify channel %+v http request config not found", channel)
		return
	}

	// create the queue
	queue := list.NewSafeListLimited(100000)
	ncc.channelsQueue[chID] = queue

	// prepare the consumer quit channel
	quitCh := make(chan struct{})
	ncc.queueQuitCh[chID] = quitCh

	// start the configured number of consumer goroutines
	concurrency := channel.RequestConfig.HTTPRequestConfig.Concurrency
	for i := 0; i < concurrency; i++ {
		go ncc.startNotifyConsumer(chID, queue, quitCh)
	}

	logger.Infof("started %d notify consumers for channel %d", concurrency, chID)
}

// startNotifyConsumer runs one notification consumer goroutine
func (ncc *NotifyChannelCacheType) startNotifyConsumer(channelID int64, queue *list.SafeListLimited, quitCh chan struct{}) {
	logger.Infof("starting notify consumer for channel %d", channelID)

	for {
		select {
		case <-quitCh:
			logger.Infof("notify consumer for channel %d stopped", channelID)
			return
		default:
			// pop a task from the queue
			task := queue.PopBack()
			if task == nil {
				// queue is empty, back off for a while
				time.Sleep(100 * time.Millisecond)
				continue
			}

			notifyTask, ok := task.(*NotifyTask)
			if !ok {
				logger.Errorf("invalid task type in queue for channel %d", channelID)
				continue
			}

			// handle the notification task
			ncc.processNotifyTask(notifyTask)
		}
	}
}
// processNotifyTask handles a notification task (http type only)
func (ncc *NotifyChannelCacheType) processNotifyTask(task *NotifyTask) {
	httpClient := ncc.GetHttpClient(task.NotifyChannel.ID)

	// only the http type is handled here; flashduty is still sent directly
	if task.NotifyChannel.RequestType == "http" {
		if len(task.Sendtos) == 0 || ncc.needBatchContacts(task.NotifyChannel.RequestConfig.HTTPRequestConfig) {
			start := time.Now()
			resp, err := task.NotifyChannel.SendHTTP(task.Events, task.TplContent, task.CustomParams, task.Sendtos, httpClient)
			resp = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), resp)
			logger.Infof("notify_id: %d, channel_name: %v, event:%+v, tplContent:%v, customParams:%v, userInfo:%+v, respBody: %v, err: %v",
				task.NotifyRuleId, task.NotifyChannel.Name, task.Events[0], task.TplContent, task.CustomParams, task.Sendtos, resp, err)

			// invoke the notification record callback
			if ncc.notifyRecordFunc != nil {
				ncc.notifyRecordFunc(ncc.ctx, task.Events, task.NotifyRuleId, task.NotifyChannel.Name, ncc.getSendTarget(task.CustomParams, task.Sendtos), resp, err)
			}
		} else {
			for i := range task.Sendtos {
				start := time.Now()
				resp, err := task.NotifyChannel.SendHTTP(task.Events, task.TplContent, task.CustomParams, []string{task.Sendtos[i]}, httpClient)
				resp = fmt.Sprintf("duration: %d ms %s", time.Since(start).Milliseconds(), resp)
				logger.Infof("notify_id: %d, channel_name: %v, event:%+v, tplContent:%v, customParams:%v, userInfo:%+v, respBody: %v, err: %v",
					task.NotifyRuleId, task.NotifyChannel.Name, task.Events[0], task.TplContent, task.CustomParams, task.Sendtos[i], resp, err)

				// invoke the notification record callback
				if ncc.notifyRecordFunc != nil {
					ncc.notifyRecordFunc(ncc.ctx, task.Events, task.NotifyRuleId, task.NotifyChannel.Name, ncc.getSendTarget(task.CustomParams, []string{task.Sendtos[i]}), resp, err)
				}
			}
		}
	}
}
// needBatchContacts reports whether the contacts should be sent in one batch
func (ncc *NotifyChannelCacheType) needBatchContacts(requestConfig *models.HTTPRequestConfig) bool {
	if requestConfig == nil {
		return false
	}
	b, _ := json.Marshal(requestConfig)
	return strings.Contains(string(b), "$sendtos")
}
// getSendTarget builds the send-target string for the notification record
func (ncc *NotifyChannelCacheType) getSendTarget(customParams map[string]string, sendtos []string) string {
	if len(customParams) == 0 {
		return strings.Join(sendtos, ",")
	}

	values := make([]string, 0)
	for _, value := range customParams {
		runes := []rune(value)
		if len(runes) <= 4 {
			values = append(values, value)
		} else {
			maskedValue := string(runes[:len(runes)-4]) + "****"
			values = append(values, maskedValue)
		}
	}

	return strings.Join(values, ",")
}
func (ncc *NotifyChannelCacheType) Get(channelId int64) *models.NotifyChannelConfig {
	ncc.RLock()
	defer ncc.RUnlock()
@@ -117,6 +367,25 @@
	return list
}

// new: enqueue a notification task into the channel's queue
func (ncc *NotifyChannelCacheType) EnqueueNotifyTask(task *NotifyTask) bool {
	ncc.RLock()
	queue := ncc.channelsQueue[task.NotifyChannel.ID]
	ncc.RUnlock()

	if queue == nil {
		logger.Errorf("no queue found for channel %d", task.NotifyChannel.ID)
		return false
	}

	success := queue.PushFront(task)
	if !success {
		logger.Warningf("failed to enqueue notify task for channel %d, queue is full", task.NotifyChannel.ID)
	}

	return success
}

func (ncc *NotifyChannelCacheType) SyncNotifyChannels() {
	err := ncc.syncNotifyChannels()
	if err != nil {
@@ -162,43 +431,12 @@ func (ncc *NotifyChannelCacheType) syncNotifyChannels() error {
|
||||
m[lst[i].ID] = lst[i]
|
||||
}
|
||||
|
||||
httpConcurrency := make(map[int64]chan struct{})
|
||||
httpClient := make(map[int64]*http.Client)
|
||||
smtpCh := make(map[int64]chan *models.EmailContext)
|
||||
quitCh := make(map[int64]chan struct{})
|
||||
|
||||
for i := range lst {
|
||||
// todo 优化变更粒度
|
||||
|
||||
switch lst[i].RequestType {
|
||||
case "http", "flashduty":
|
||||
if lst[i].RequestConfig == nil || lst[i].RequestConfig.HTTPRequestConfig == nil {
|
||||
logger.Warningf("notify channel %+v http request config not found", lst[i])
|
||||
continue
|
||||
}
|
||||
|
||||
cli, _ := models.GetHTTPClient(lst[i])
|
||||
httpClient[lst[i].ID] = cli
|
||||
httpConcurrency[lst[i].ID] = make(chan struct{}, lst[i].RequestConfig.HTTPRequestConfig.Concurrency)
|
||||
for j := 0; j < lst[i].RequestConfig.HTTPRequestConfig.Concurrency; j++ {
|
||||
httpConcurrency[lst[i].ID] <- struct{}{}
|
||||
}
|
||||
case "smtp":
|
||||
ch := make(chan *models.EmailContext)
|
||||
quit := make(chan struct{})
|
||||
go ncc.startEmailSender(lst[i].ID, lst[i].RequestConfig.SMTPRequestConfig, ch, quit)
|
||||
smtpCh[lst[i].ID] = ch
|
||||
quitCh[lst[i].ID] = quit
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
ncc.Set(m, httpConcurrency, httpClient, smtpCh, quitCh, stat.Total, stat.LastUpdated)
|
||||
// 增量更新:只传递通道配置,让增量更新逻辑按需创建资源
|
||||
ncc.Set(m, stat.Total, stat.LastUpdated)
|
||||
|
||||
ms := time.Since(start).Milliseconds()
|
||||
ncc.stats.GaugeCronDuration.WithLabelValues("sync_notify_channels").Set(float64(ms))
|
||||
ncc.stats.GaugeSyncNumber.WithLabelValues("sync_notify_channels").Set(float64(len(m)))
|
||||
logger.Infof("timer: sync notify channels done, cost: %dms, number: %d", ms, len(m))
|
||||
dumper.PutSyncRecord("notify_channels", start.Unix(), ms, len(m), "success")
|
||||
|
||||
return nil
|
||||
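Every `sync*` function in this diff ends with the same bookkeeping: measure elapsed milliseconds since `start`, export gauges, log, and write a dump record. A stripped-down sketch of that timing pattern (the `stats` gauges and `dumper` call are project-specific and replaced here with a plain struct):

```go
package main

import (
	"fmt"
	"time"
)

// syncRecord is a hypothetical stand-in for the gauge and dump-record
// bookkeeping; the real code writes prometheus gauges and a dumper entry.
type syncRecord struct {
	Name       string
	CostMillis int64
	Number     int
}

// timedSync wraps a sync step with the same elapsed-milliseconds
// measurement the sync* functions use.
func timedSync(name string, doSync func() int) syncRecord {
	start := time.Now()
	n := doSync()
	ms := time.Since(start).Milliseconds()
	return syncRecord{Name: name, CostMillis: ms, Number: n}
}

func main() {
	rec := timedSync("notify_channels", func() int {
		time.Sleep(10 * time.Millisecond) // pretend to reload from the DB
		return 3
	})
	fmt.Printf("timer: sync %s done, cost: %dms, number: %d\n", rec.Name, rec.CostMillis, rec.Number)
}
```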
@@ -305,22 +543,3 @@ func (ncc *NotifyChannelCacheType) dialSmtp(quitCh chan struct{}, d *gomail.Dial
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (ncc *NotifyChannelCacheType) HttpConcurrencyAdd(channelId int64) bool {
|
||||
ncc.RLock()
|
||||
defer ncc.RUnlock()
|
||||
if _, ok := ncc.httpConcurrency[channelId]; !ok {
|
||||
return false
|
||||
}
|
||||
_, ok := <-ncc.httpConcurrency[channelId]
|
||||
return ok
|
||||
}
|
||||
|
||||
func (ncc *NotifyChannelCacheType) HttpConcurrencyDone(channelId int64) {
|
||||
ncc.RLock()
|
||||
defer ncc.RUnlock()
|
||||
if _, ok := ncc.httpConcurrency[channelId]; !ok {
|
||||
return
|
||||
}
|
||||
ncc.httpConcurrency[channelId] <- struct{}{}
|
||||
}
|
||||
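`HttpConcurrencyAdd`/`HttpConcurrencyDone` implement a counting semaphore: `syncNotifyChannels` pre-fills a buffered channel with one token per allowed in-flight request, `Add` consumes a token, and `Done` returns it. A condensed sketch of the pattern (note the real `HttpConcurrencyAdd` blocks on the receive, whereas this sketch uses a non-blocking `select` so the example terminates):

```go
package main

import "fmt"

// semaphore mirrors the pre-filled token channel kept per notify channel.
type semaphore chan struct{}

func newSemaphore(n int) semaphore {
	s := make(semaphore, n)
	for i := 0; i < n; i++ {
		s <- struct{}{} // pre-fill with tokens, as syncNotifyChannels does
	}
	return s
}

// tryAcquire is a non-blocking variant of HttpConcurrencyAdd's token receive.
func (s semaphore) tryAcquire() bool {
	select {
	case <-s:
		return true
	default:
		return false
	}
}

// release mirrors HttpConcurrencyDone: put the token back.
func (s semaphore) release() { s <- struct{}{} }

func main() {
	sem := newSemaphore(2)
	fmt.Println(sem.tryAcquire()) // true
	fmt.Println(sem.tryAcquire()) // true
	fmt.Println(sem.tryAcquire()) // false: both tokens in use
	sem.release()
	fmt.Println(sem.tryAcquire()) // true again
}
```

Pre-filling tokens rather than counting acquisitions keeps the zero value of the channel unusable, so a missing channel ID is detected by the map lookup before any channel operation.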

@@ -132,7 +132,6 @@ func (nrc *NotifyRuleCacheType) syncNotifyRules() error {
	ms := time.Since(start).Milliseconds()
	nrc.stats.GaugeCronDuration.WithLabelValues("sync_notify_rules").Set(float64(ms))
	nrc.stats.GaugeSyncNumber.WithLabelValues("sync_notify_rules").Set(float64(len(m)))
	logger.Infof("timer: sync notify rules done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("notify_rules", start.Unix(), ms, len(m), "success")

	return nil

@@ -133,7 +133,6 @@ func (rrc *RecordingRuleCacheType) syncRecordingRules() error {
	ms := time.Since(start).Milliseconds()
	rrc.stats.GaugeCronDuration.WithLabelValues("sync_recording_rules").Set(float64(ms))
	rrc.stats.GaugeSyncNumber.WithLabelValues("sync_recording_rules").Set(float64(len(m)))
	logger.Infof("timer: sync recording rules done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("recording_rules", start.Unix(), ms, len(m), "success")

	return nil

@@ -179,7 +179,6 @@ func (tc *TargetCacheType) syncTargets() error {
	ms := time.Since(start).Milliseconds()
	tc.stats.GaugeCronDuration.WithLabelValues("sync_targets").Set(float64(ms))
	tc.stats.GaugeSyncNumber.WithLabelValues("sync_targets").Set(float64(len(lst)))
	logger.Infof("timer: sync targets done, cost: %dms, number: %d", ms, len(lst))
	dumper.PutSyncRecord("targets", start.Unix(), ms, len(lst), "success")

	return nil

@@ -84,7 +84,6 @@ func (ttc *TaskTplCache) syncTaskTpl() error {
	ttc.Set(m, stat.Total, stat.LastUpdated)

	ms := time.Since(start).Milliseconds()
	logger.Infof("timer: sync task tpls done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("task_tpls", start.Unix(), ms, len(m), "success")

	return nil

@@ -189,8 +189,6 @@ func (uc *UserCacheType) syncUsers() error {
	ms := time.Since(start).Milliseconds()
	uc.stats.GaugeCronDuration.WithLabelValues("sync_users").Set(float64(ms))
	uc.stats.GaugeSyncNumber.WithLabelValues("sync_users").Set(float64(len(m)))

	logger.Infof("timer: sync users done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("users", start.Unix(), ms, len(m), "success")

	return nil

@@ -158,8 +158,6 @@ func (ugc *UserGroupCacheType) syncUserGroups() error {
	ms := time.Since(start).Milliseconds()
	ugc.stats.GaugeCronDuration.WithLabelValues("sync_user_groups").Set(float64(ms))
	ugc.stats.GaugeSyncNumber.WithLabelValues("sync_user_groups").Set(float64(len(m)))

	logger.Infof("timer: sync user groups done, cost: %dms, number: %d", ms, len(m))
	dumper.PutSyncRecord("user_groups", start.Unix(), ms, len(m), "success")

	return nil

@@ -168,8 +168,6 @@ func (utc *UserTokenCacheType) syncUserTokens() error {
	ms := time.Since(start).Milliseconds()
	utc.stats.GaugeCronDuration.WithLabelValues("sync_user_tokens").Set(float64(ms))
	utc.stats.GaugeSyncNumber.WithLabelValues("sync_user_tokens").Set(float64(len(tokenUsers)))

	logger.Infof("timer: sync user tokens done, cost: %dms, number: %d", ms, len(tokenUsers))
	dumper.PutSyncRecord("user_tokens", start.Unix(), ms, len(tokenUsers), "success")

	return nil
@@ -243,6 +243,15 @@ func AlertHisEventGetById(ctx *ctx.Context, id int64) (*AlertHisEvent, error) {
|
||||
return AlertHisEventGet(ctx, "id=?", id)
|
||||
}
|
||||
|
||||
func AlertHisEventBatchDelete(ctx *ctx.Context, timestamp int64, severities []int, limit int) (int64, error) {
|
||||
db := DB(ctx).Where("last_eval_time < ?", timestamp)
|
||||
if len(severities) > 0 {
|
||||
db = db.Where("severity IN (?)", severities)
|
||||
}
|
||||
res := db.Limit(limit).Delete(&AlertHisEvent{})
|
||||
return res.RowsAffected, res.Error
|
||||
}
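`AlertHisEventBatchDelete` builds its query conditionally: the severity filter is only appended when the caller passes severities, and `Limit` keeps each deletion batch bounded. A sketch of the SQL the GORM chain roughly corresponds to (the table and column names here are assumptions inferred from the struct fields):

```go
package main

import (
	"fmt"
	"strings"
)

// buildDeleteSQL sketches the statement the GORM chain above issues;
// table and column names are assumptions, not taken from the real schema.
func buildDeleteSQL(severities []int, limit int) string {
	sql := "DELETE FROM alert_his_event WHERE last_eval_time < ?"
	if len(severities) > 0 {
		// one placeholder per severity value
		placeholders := strings.TrimSuffix(strings.Repeat("?,", len(severities)), ",")
		sql += " AND severity IN (" + placeholders + ")"
	}
	return fmt.Sprintf("%s LIMIT %d", sql, limit)
}

func main() {
	fmt.Println(buildDeleteSQL([]int{1, 2}, 100))
	// DELETE FROM alert_his_event WHERE last_eval_time < ? AND severity IN (?,?) LIMIT 100
}
```

Deleting in bounded batches avoids long row locks on a large history table; the caller can loop until `RowsAffected` drops to zero.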

func (m *AlertHisEvent) UpdateFieldsMap(ctx *ctx.Context, fields map[string]interface{}) error {
	return DB(ctx).Model(m).Updates(fields).Error
}